WorldWideScience

Sample records for variance trends proportional

  1. The influence of mean climate trends and climate variance on beaver survival and recruitment dynamics.

    Science.gov (United States)

    Campbell, Ruairidh D; Nouvellet, Pierre; Newman, Chris; Macdonald, David W; Rosell, Frank

    2012-09-01

Ecologists are increasingly aware of the importance of environmental variability in natural systems. Climate change is affecting both the mean and the variability in weather and, in particular, the effect of changes in variability is poorly understood. Organisms are subject to selection imposed by both the mean and the range of environmental variation experienced by their ancestors. Changes in the variability in a critical environmental factor may therefore have consequences for vital rates and population dynamics. Here, we examine ≥90-year trends in different components of climate (precipitation mean and coefficient of variation (CV); temperature mean, seasonal amplitude and residual variance) and consider the effects of these components on survival and recruitment in a population of Eurasian beavers (n = 242) over 13 recent years. Within climatic data, no trends in precipitation were detected, but trends in all components of temperature were observed, with mean and residual variance increasing and seasonal amplitude decreasing over time. A higher survival rate was linked (in order of influence based on Akaike weights) to lower precipitation CV (kits, juveniles and dominant adults), lower residual variance of temperature (dominant adults) and lower mean precipitation (kits and juveniles). No significant effects were found on the survival of nondominant adults, although the sample size for this category was low. Greater recruitment was linked (in order of influence) to higher seasonal amplitude of temperature, lower mean precipitation, lower residual variance in temperature and higher precipitation CV. Both climate means and variance thus proved significant to population dynamics, although, overall, components describing variance were more influential than those describing mean values. That environmental variation proves significant to a generalist, wide-ranging species, at the slow end of the slow-fast continuum of life histories, has broad implications for

  2. Genetic variances, trends and mode of inheritance for hip and elbow dysplasia in Finnish dog populations

    NARCIS (Netherlands)

    Mäki, K.; Groen, A.F.; Liinamo, A.E.; Ojala, M.

    2002-01-01

    The aims of this study were to assess genetic variances, trends and mode of inheritance for hip and elbow dysplasia in Finnish dog populations. The influence of time-dependent fixed effects in the model when estimating the genetic trends was also studied. Official hip and elbow dysplasia screening

  3. Stochastic Funding of a Defined Contribution Pension Plan with Proportional Administrative Costs and Taxation under Mean-Variance Optimization Approach

    Directory of Open Access Journals (Sweden)

    Charles I Nkeki

    2014-11-01

Full Text Available This paper aims to study a mean-variance portfolio selection problem with stochastic salary, proportional administrative costs and taxation in the accumulation phase of a defined contribution (DC) pension scheme. The fund process is subject to taxation while the contribution of the pension plan member (PPM) is tax exempt. It is assumed that the flow of contributions of a PPM is invested in a market characterized by a cash account and a stock. The optimal portfolio processes and expected wealth for the PPM are established. The efficient and parabolic frontiers of the PPM's portfolios in mean-variance space are obtained. It was found that the capital market line can be attained when the initial fund and the contribution rate are zero. It was also found that the optimal portfolio process involves an inter-temporal hedging term that offsets any shocks to the stochastic salary of the PPM.

  4. Influence of Family Structure on Variance Decomposition

    DEFF Research Database (Denmark)

    Edwards, Stefan McKinnon; Sarup, Pernille Merete; Sørensen, Peter

    Partitioning genetic variance by sets of randomly sampled genes for complex traits in D. melanogaster and B. taurus, has revealed that population structure can affect variance decomposition. In fruit flies, we found that a high likelihood ratio is correlated with a high proportion of explained ge...... capturing pure noise. Therefore it is necessary to use both criteria, high likelihood ratio in favor of a more complex genetic model and proportion of genetic variance explained, to identify biologically important gene groups...

  5. Validation of consistency of Mendelian sampling variance.

    Science.gov (United States)

    Tyrisevä, A-M; Fikse, W F; Mäntysaari, E A; Jakobsen, J; Aamand, G P; Dürr, J; Lidauer, M H

    2018-03-01

    Experiences from international sire evaluation indicate that the multiple-trait across-country evaluation method is sensitive to changes in genetic variance over time. Top bulls from birth year classes with inflated genetic variance will benefit, hampering reliable ranking of bulls. However, none of the methods available today enable countries to validate their national evaluation models for heterogeneity of genetic variance. We describe a new validation method to fill this gap comprising the following steps: estimating within-year genetic variances using Mendelian sampling and its prediction error variance, fitting a weighted linear regression between the estimates and the years under study, identifying possible outliers, and defining a 95% empirical confidence interval for a possible trend in the estimates. We tested the specificity and sensitivity of the proposed validation method with simulated data using a real data structure. Moderate (M) and small (S) size populations were simulated under 3 scenarios: a control with homogeneous variance and 2 scenarios with yearly increases in phenotypic variance of 2 and 10%, respectively. Results showed that the new method was able to estimate genetic variance accurately enough to detect bias in genetic variance. Under the control scenario, the trend in genetic variance was practically zero in setting M. Testing cows with an average birth year class size of more than 43,000 in setting M showed that tolerance values are needed for both the trend and the outlier tests to detect only cases with a practical effect in larger data sets. Regardless of the magnitude (yearly increases in phenotypic variance of 2 or 10%) of the generated trend, it deviated statistically significantly from zero in all data replicates for both cows and bulls in setting M. In setting S with a mean of 27 bulls in a year class, the sampling error and thus the probability of a false-positive result clearly increased. Still, overall estimated genetic
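
The core of the trend test above — a weighted linear regression of yearly genetic variance estimates on birth year, weighted by the inverse of their prediction error variances — can be sketched as follows. The yearly estimates below are synthetic stand-ins, not the paper's data, generated under a 2%-per-year increase scenario:

```python
import numpy as np

rng = np.random.default_rng(6)
years = np.arange(1995, 2015)
true_var = 100.0 * 1.02 ** (years - years[0])    # 2% yearly increase in genetic variance
pev = rng.uniform(1.0, 4.0, years.size)          # prediction error variance of each estimate
est = true_var + rng.normal(0, np.sqrt(pev))     # noisy within-year variance estimates

# Weighted least squares: weights are 1 / PEV
W = np.diag(1.0 / pev)
X = np.column_stack([np.ones_like(years, dtype=float), years - years[0]])
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ est)
se = np.sqrt(np.diag(np.linalg.inv(X.T @ W @ X)))  # standard errors (variances known)

print(beta[1], se[1])   # positive slope well above its standard error: trend detected
```

A confidence interval for the slope, as in the validation method, would then be compared against a tolerance value before declaring heterogeneity of genetic variance.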

  6. Diagnosis of the Ill-condition of the RFM Based on Condition Index and Variance Decomposition Proportion (CIVDP)

    International Nuclear Information System (INIS)

    Qing, Zhou; Weili, Jiao; Tengfei, Long

    2014-01-01

The Rational Function Model (RFM) is a new generalized sensor model. It does not need the physical parameters of sensors to achieve an accuracy comparable to that of rigorous sensor models. At present, the main method for solving RPCs is least squares estimation, but when the number of coefficients is large or the distribution of the control points is uneven, the classical least squares method loses its superiority due to the ill-conditioning of the design matrix. Condition Index and Variance Decomposition Proportion (CIVDP) is a reliable method for diagnosing multicollinearity in the design matrix: it can not only detect multicollinearity but also locate the parameters involved and show the corresponding columns of the design matrix. In this paper, the CIVDP method is used to diagnose the ill-conditioning of the RFM and to find the multicollinearity in the normal matrix.
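
As a sketch of the CIVDP diagnostic: condition indices come from the singular values of the column-scaled design matrix, and variance decomposition proportions from its right singular vectors. The design matrix below is a hypothetical example with two nearly collinear columns; the thresholds (condition index above 30, two or more proportions above 0.5) follow Belsley's convention:

```python
import numpy as np

def civdp(X):
    """Condition indices and variance decomposition proportions (Belsley-style)."""
    Xs = X / np.linalg.norm(X, axis=0)           # scale columns to unit length
    _, s, Vt = np.linalg.svd(Xs, full_matrices=False)
    eta = s.max() / s                            # condition indices, one per singular value
    phi = (Vt.T ** 2) / s ** 2                   # phi[j, k] = v_jk^2 / s_k^2
    pi = phi / phi.sum(axis=1, keepdims=True)    # variance decomposition proportions
    return eta, pi

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = rng.normal(size=200)
x3 = x1 + 1e-6 * rng.normal(size=200)            # nearly collinear with x1
X = np.column_stack([x1, x2, x3])

eta, pi = civdp(X)
k = np.argmax(eta)                               # weakest (smallest) singular value
print(eta[k])                                    # large condition index flags ill-conditioning
print(pi[:, k])                                  # columns 0 and 2 share most of their variance here
```

A high condition index combined with two or more large proportions in the same row locates the collinear parameters, which is exactly the "locate and show the corresponding columns" ability described above.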

  7. Diagnosis of the Ill-condition of the RFM Based on Condition Index and Variance Decomposition Proportion (CIVDP)

    Science.gov (United States)

    Qing, Zhou; Weili, Jiao; Tengfei, Long

    2014-03-01

The Rational Function Model (RFM) is a new generalized sensor model. It does not need the physical parameters of sensors to achieve an accuracy comparable to that of rigorous sensor models. At present, the main method for solving RPCs is least squares estimation, but when the number of coefficients is large or the distribution of the control points is uneven, the classical least squares method loses its superiority due to the ill-conditioning of the design matrix. Condition Index and Variance Decomposition Proportion (CIVDP) is a reliable method for diagnosing multicollinearity in the design matrix: it can not only detect multicollinearity but also locate the parameters involved and show the corresponding columns of the design matrix. In this paper, the CIVDP method is used to diagnose the ill-conditioning of the RFM and to find the multicollinearity in the normal matrix.

  8. Phenotypic variance explained by local ancestry in admixed African Americans.

    Science.gov (United States)

    Shriner, Daniel; Bentley, Amy R; Doumatey, Ayo P; Chen, Guanjie; Zhou, Jie; Adeyemo, Adebowale; Rotimi, Charles N

    2015-01-01

    We surveyed 26 quantitative traits and disease outcomes to understand the proportion of phenotypic variance explained by local ancestry in admixed African Americans. After inferring local ancestry as the number of African-ancestry chromosomes at hundreds of thousands of genotyped loci across all autosomes, we used a linear mixed effects model to estimate the variance explained by local ancestry in two large independent samples of unrelated African Americans. We found that local ancestry at major and polygenic effect genes can explain up to 20 and 8% of phenotypic variance, respectively. These findings provide evidence that most but not all additive genetic variance is explained by genetic markers undifferentiated by ancestry. These results also inform the proportion of health disparities due to genetic risk factors and the magnitude of error in association studies not controlling for local ancestry.

  9. Putative golden proportions as predictors of facial esthetics in adolescents.

    Science.gov (United States)

    Kiekens, Rosemie M A; Kuijpers-Jagtman, Anne Marie; van 't Hof, Martin A; van 't Hof, Bep E; Maltha, Jaap C

    2008-10-01

    In orthodontics, facial esthetics is assumed to be related to golden proportions apparent in the ideal human face. The aim of the study was to analyze the putative relationship between facial esthetics and golden proportions in white adolescents. Seventy-six adult laypeople evaluated sets of photographs of 64 adolescents on a visual analog scale (VAS) from 0 to 100. The facial esthetic value of each subject was calculated as a mean VAS score. Three observers recorded the position of 13 facial landmarks included in 19 putative golden proportions, based on the golden proportions as defined by Ricketts. The proportions and each proportion's deviation from the golden target (1.618) were calculated. This deviation was then related to the VAS scores. Only 4 of the 19 proportions had a significant negative correlation with the VAS scores, indicating that beautiful faces showed less deviation from the golden standard than less beautiful faces. Together, these variables explained only 16% of the variance. Few golden proportions have a significant relationship with facial esthetics in adolescents. The explained variance of these variables is too small to be of clinical importance.
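
A minimal sketch of the correlation analysis described above, with hypothetical deviations and VAS scores (the real study computed proportions from measured landmark positions):

```python
import numpy as np

rng = np.random.default_rng(7)
GOLDEN = 1.618
n = 64                                                # subjects, as in the study design

# hypothetical data: more esthetic faces deviate less from the golden target
deviation = np.abs(rng.normal(0.0, 0.12, n))          # |proportion - 1.618| per subject
vas = 70 - 150 * deviation + rng.normal(0, 8, n)      # mean VAS score per subject (0-100)

r = np.corrcoef(deviation, vas)[0, 1]
print(r, r ** 2)    # negative r: less deviation, higher score; r^2 is explained variance
```

In the study, only 4 of 19 proportions showed such a negative correlation, and together they explained only 16% of the variance.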

  10. Simultaneous Monte Carlo zero-variance estimates of several correlated means

    International Nuclear Information System (INIS)

    Booth, T.E.

    1998-01-01

    Zero-variance biasing procedures are normally associated with estimating a single mean or tally. In particular, a zero-variance solution occurs when every sampling is made proportional to the product of the true probability multiplied by the expected score (importance) subsequent to the sampling; i.e., the zero-variance sampling is importance weighted. Because every tally has a different importance function, a zero-variance biasing for one tally cannot be a zero-variance biasing for another tally (unless the tallies are perfectly correlated). The way to optimize the situation when the required tallies have positive correlation is shown
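
The importance-weighted zero-variance idea can be illustrated with a toy discrete tally: when each outcome is sampled with probability proportional to its true probability times its score, every weighted sample score equals the mean being estimated, so the estimator has zero variance. The three-outcome distribution below is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
p = np.array([0.5, 0.3, 0.2])      # true probabilities
f = np.array([2.0, 5.0, 10.0])     # score (tally contribution) of each outcome
mean = float(p @ f)                # the quantity being estimated (here 4.5)

q = p * f / mean                   # zero-variance biasing: q proportional to p * score
draws = rng.choice(3, size=1000, p=q)
estimates = f[draws] * p[draws] / q[draws]   # weighted scores

print(estimates.mean(), estimates.std())     # every estimate equals the true mean; std is 0
```

Reusing the same biased draws for a different score function would no longer yield constant weighted scores, which is exactly the limitation the abstract points out for multiple, imperfectly correlated tallies.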

  11. Estimating the encounter rate variance in distance sampling

    Science.gov (United States)

    Fewster, R.M.; Buckland, S.T.; Burnham, K.P.; Borchers, D.L.; Jupp, P.E.; Laake, J.L.; Thomas, L.

    2009-01-01

The dominant source of variance in line transect sampling is usually the encounter rate variance. Systematic survey designs are often used to reduce the true variability among different realizations of the design, but estimating the variance is difficult and estimators typically approximate the variance by treating the design as a simple random sample of lines. We explore the properties of different encounter rate variance estimators under random and systematic designs. We show that a design-based variance estimator improves upon the model-based estimator of Buckland et al. (2001, Introduction to Distance Sampling. Oxford: Oxford University Press, p. 79) when transects are positioned at random. However, if populations exhibit strong spatial trends, both estimators can have substantial positive bias under systematic designs. We show that poststratification is effective in reducing this bias. © 2008, The International Biometric Society.
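
A one-dimensional sketch of this effect, with a hypothetical density trend and textbook estimator forms rather than the exact estimators of the paper: under a systematic design over a strong spatial trend, treating the lines as a simple random sample inflates the estimated variance, while a successive-difference (poststratified adjacent pairs) form largely removes the trend component.

```python
import numpy as np

rng = np.random.default_rng(2)
k = 20                                   # number of systematic transects
u = rng.uniform()                        # random start of the systematic grid
x = (np.arange(k) + u) / k               # line positions on [0, 1]
lam = 5 + 200 * x                        # strong linear density trend
counts = rng.poisson(lam)                # encounters per (equal-length) line

# Variance of the mean count, treating the design as a simple random sample:
var_srs = counts.var(ddof=1) / k

# Successive-difference estimator (poststratified adjacent pairs):
d = np.diff(counts)
var_sd = (d ** 2).sum() / (2 * (k - 1)) / k

print(var_srs, var_sd)                   # the SRS form is badly inflated by the trend
```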

  12. Mean-variance Optimal Reinsurance-investment Strategy in Continuous Time

    OpenAIRE

    Daheng Peng; Fang Zhang

    2017-01-01

In this paper, the Lagrange method is used to solve the continuous-time mean-variance reinsurance-investment problem. Proportional reinsurance, multiple risky assets and a risk-free asset are considered jointly in the optimal strategy for insurers. By solving the backward stochastic differential equation for the Lagrange multiplier, we obtain the mean-variance optimal reinsurance-investment strategy and its efficient frontier in explicit form.

  13. Mean-variance Optimal Reinsurance-investment Strategy in Continuous Time

    Directory of Open Access Journals (Sweden)

    Daheng Peng

    2017-10-01

Full Text Available In this paper, the Lagrange method is used to solve the continuous-time mean-variance reinsurance-investment problem. Proportional reinsurance, multiple risky assets and a risk-free asset are considered jointly in the optimal strategy for insurers. By solving the backward stochastic differential equation for the Lagrange multiplier, we obtain the mean-variance optimal reinsurance-investment strategy and its efficient frontier in explicit form.

  14. Mixed model with spatial variance-covariance structure for accommodating of local stationary trend and its influence on multi-environmental crop variety trial assessment

    Energy Technology Data Exchange (ETDEWEB)

    Negash, A. W.; Mwambi, H.; Zewotir, T.; Eweke, G.

    2014-06-01

The most common procedure for analyzing multi-environmental trials is based on the assumption that the residual error variance is homogeneous across all locations considered. However, this may often be unrealistic, and therefore limit the accuracy of variety evaluation or the reliability of variety recommendations. The objectives of this study were to show the advantages of mixed models with spatial variance-covariance structures, and the direct implications of model choice on the inference of varietal performance, ranking and testing, based on two multi-environmental data sets from realistic national trials. A model comparison with a χ²-test for the trials in the two data sets (wheat data set BW00RVTI and barley data set BW01RVII) suggested that the selected spatial variance-covariance structures fitted the data significantly better than the ANOVA model. The forms of the optimally fitted spatial variance-covariance structure, the ranking and the consistency ratio test were not the same from one trial (location) to the other. Linear mixed models with single-stage analysis, including a spatial variance-covariance structure with a group factor of location in the random model, also improved the estimation of genotype effects and their ranking. The model also improved the estimation of varietal performance because of its capacity to handle additional sources of variation, location and genotype-by-location (environment) interaction variation, and to accommodate local stationary trends. (Author)

  15. Complementary responses to mean and variance modulations in the perfect integrate-and-fire model.

    Science.gov (United States)

    Pressley, Joanna; Troyer, Todd W

    2009-07-01

    In the perfect integrate-and-fire model (PIF), the membrane voltage is proportional to the integral of the input current since the time of the previous spike. It has been shown that the firing rate within a noise free ensemble of PIF neurons responds instantaneously to dynamic changes in the input current, whereas in the presence of white noise, model neurons preferentially pass low frequency modulations of the mean current. Here, we prove that when the input variance is perturbed while holding the mean current constant, the PIF responds preferentially to high frequency modulations. Moreover, the linear filters for mean and variance modulations are complementary, adding exactly to one. Since changes in the rate of Poisson distributed inputs lead to proportional changes in the mean and variance, these results imply that an ensemble of PIF neurons transmits a perfect replica of the time-varying input rate for Poisson distributed input. A more general argument shows that this property holds for any signal leading to proportional changes in the mean and variance of the input current.

  16. Mean and variance evolutions of the hot and cold temperatures in Europe

    Energy Technology Data Exchange (ETDEWEB)

    Parey, Sylvie [EDF/R and D, Chatou Cedex (France); Dacunha-Castelle, D. [Universite Paris 11, Laboratoire de Mathematiques, Orsay (France); Hoang, T.T.H. [Universite Paris 11, Laboratoire de Mathematiques, Orsay (France); EDF/R and D, Chatou Cedex (France)

    2010-02-15

In this paper, we examine the trends of temperature series in Europe, for the mean as well as for the variance in hot and cold seasons. To do so, we use as long and homogeneous series as possible, provided by the European Climate Assessment and Dataset project for different locations in Europe, as well as the European ENSEMBLES project gridded dataset and the ERA40 reanalysis. We provide a definition of trends that we keep as intrinsic as possible and apply non-parametric statistical methods to analyse them. The results obtained show a clear link between trends in mean and variance of the whole series of hot or cold temperatures: in general, variance increases when the absolute value of temperature increases, i.e. with increasing summer temperature and decreasing winter temperature. This link is reinforced in locations where winter and summer climate has more variability. In very cold or very warm climates, the variability is lower and the link between the trends is weaker. We performed the same analysis on outputs of six climate models proposed by European teams for the 1961-2000 period (1950-2000 for one model), available through the PCMDI portal for the IPCC fourth assessment climate model simulations. The models generally perform poorly and have difficulties in capturing the relation between the two trends, especially in summer. (orig.)

  17. Replica approach to mean-variance portfolio optimization

    Science.gov (United States)

    Varga-Haszonits, Istvan; Caccioli, Fabio; Kondor, Imre

    2016-12-01

We consider the problem of mean-variance portfolio optimization for a generic covariance matrix subject to the budget constraint and the constraint for the expected return, with the application of the replica method borrowed from the statistical physics of disordered systems. We find that the replica symmetry of the solution does not need to be assumed, but emerges as the unique solution of the optimization problem. We also check the stability of this solution and find that the eigenvalues of the Hessian are positive for r = N/T < 1, and the optimal in-sample variance is found to vanish at the critical point inversely proportional to the divergent estimation error.
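
For reference, the constrained problem itself (minimize w'Σw subject to 1'w = 1 and μ'w = r) has a classical closed-form Lagrange solution. A numerical sketch with a hypothetical covariance matrix, showing the textbook Markowitz solution rather than the paper's replica computation:

```python
import numpy as np

def mean_variance_weights(Sigma, mu, r):
    """Minimum-variance weights for target return r under full investment."""
    inv = np.linalg.inv(Sigma)
    ones = np.ones(len(mu))
    A = ones @ inv @ ones
    B = ones @ inv @ mu
    C = mu @ inv @ mu
    D = A * C - B ** 2
    lam = (C - B * r) / D            # Lagrange multiplier for the budget constraint
    gam = (A * r - B) / D            # Lagrange multiplier for the return constraint
    return lam * (inv @ ones) + gam * (inv @ mu)

Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])
mu = np.array([0.05, 0.08, 0.12])
w = mean_variance_weights(Sigma, mu, 0.09)
print(w, w.sum(), w @ mu)            # weights satisfy both constraints exactly
```

The estimation-error issue studied in the paper arises when Σ is replaced by a sample covariance matrix estimated from T observations of N assets, which becomes singular as r = N/T approaches 1.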

  18. Computer simulation of gain fluctuations in proportional counters

    International Nuclear Information System (INIS)

Demir, Nelgun; Tapan, Ilhan

    2004-01-01

A computer simulation code has been developed in order to examine the fluctuation in gas amplification in wire proportional counters, which are common in detector applications in particle physics experiments. The magnitude of the variance in the gain dominates the statistical portion of the energy resolution. In order to compare simulation and experimental results, the gain and its variation have been calculated numerically for the well-known Aleph Inner Tracking Detector geometry. The results show that the bias voltage has a strong influence on the variance in the gain. The simulation calculations are in good agreement with experimental results. (authors)

  19. The Genealogical Consequences of Fecundity Variance Polymorphism

    Science.gov (United States)

    Taylor, Jesse E.

    2009-01-01

    The genealogical consequences of within-generation fecundity variance polymorphism are studied using coalescent processes structured by genetic backgrounds. I show that these processes have three distinctive features. The first is that the coalescent rates within backgrounds are not jointly proportional to the infinitesimal variance, but instead depend only on the frequencies and traits of genotypes containing each allele. Second, the coalescent processes at unlinked loci are correlated with the genealogy at the selected locus; i.e., fecundity variance polymorphism has a genomewide impact on genealogies. Third, in diploid models, there are infinitely many combinations of fecundity distributions that have the same diffusion approximation but distinct coalescent processes; i.e., in this class of models, ancestral processes and allele frequency dynamics are not in one-to-one correspondence. Similar properties are expected to hold in models that allow for heritable variation in other traits that affect the coalescent effective population size, such as sex ratio or fecundity and survival schedules. PMID:19433628

  20. Numerical simulation of variance of solar radiation and its influence on wheat growth

    Science.gov (United States)

    Zhang, Xuefen; Wang, Chunyi; Du, Zixuan; Zhai, Wei

    2007-09-01

The growth of crops is directly related to solar radiation, whose variations influence the photosynthesis of crops and their growth momentum. This study takes Zhengzhou, located in the Huanghuai farmland ecological system of China, as an example to analyze the patterns of variation in total solar radiation, direct radiation and diffuse radiation. With the help of linear trend fitting, it is identified that total radiation (TR) drops as a whole at a rate of 1.6482 J/m2, a drop that has been particularly apparent in recent years, with a period of 7 to 16 years; diffuse radiation (DF) tends to increase at a rate of 15.149 J/m2 with a period of 20 years; direct radiation (DR) tends to drop at a rate of 15.843 J/m2 without an apparent period. Total radiation has been decreasing since 1980 during the growth period of wheat. Having modified the relevant parameters in the Carbon and Nitrogen Biogeochemistry in Agroecosystems (DNDC) model and simulated the influence of solar radiation variations on the development phase, leaf area index (LAI), grain weight, etc. during the growth period of wheat, it is found that solar radiation is positively proportional to LAI and grain weight (GRNWT) but not apparently related to development phase (DP). The change in total radiation delays the maximization of wheat LAI, reduces wheat LAI before winter but has no apparent effect in winter, and decreases wheat LAI from the jointing period to the filling period; it has no apparent influence on grain formation at the early stage, slows down the weight increase of grains during the filling period and accelerates it at the end of the filling period. Variation in radiation does not affect the DP of wheat much.
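
The trend rates quoted above are slopes from linear trend fitting of annual radiation totals against year; a minimal sketch on a hypothetical synthetic series:

```python
import numpy as np

rng = np.random.default_rng(3)
years = np.arange(1961, 2001)
# hypothetical annual total radiation series with a declining trend plus noise
radiation = 5000.0 - 1.65 * (years - years[0]) + rng.normal(0, 10, years.size)

slope, intercept = np.polyfit(years, radiation, 1)
print(slope)   # estimated trend rate per year; negative sign indicates a declining series
```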

  1. Dominance genetic variance for traits under directional selection in Drosophila serrata.

    Science.gov (United States)

    Sztepanacz, Jacqueline L; Blows, Mark W

    2015-05-01

    In contrast to our growing understanding of patterns of additive genetic variance in single- and multi-trait combinations, the relative contribution of nonadditive genetic variance, particularly dominance variance, to multivariate phenotypes is largely unknown. While mechanisms for the evolution of dominance genetic variance have been, and to some degree remain, subject to debate, the pervasiveness of dominance is widely recognized and may play a key role in several evolutionary processes. Theoretical and empirical evidence suggests that the contribution of dominance variance to phenotypic variance may increase with the correlation between a trait and fitness; however, direct tests of this hypothesis are few. Using a multigenerational breeding design in an unmanipulated population of Drosophila serrata, we estimated additive and dominance genetic covariance matrices for multivariate wing-shape phenotypes, together with a comprehensive measure of fitness, to determine whether there is an association between directional selection and dominance variance. Fitness, a trait unequivocally under directional selection, had no detectable additive genetic variance, but significant dominance genetic variance contributing 32% of the phenotypic variance. For single and multivariate morphological traits, however, no relationship was observed between trait-fitness correlations and dominance variance. A similar proportion of additive and dominance variance was found to contribute to phenotypic variance for single traits, and double the amount of additive compared to dominance variance was found for the multivariate trait combination under directional selection. These data suggest that for many fitness components a positive association between directional selection and dominance genetic variance may not be expected. Copyright © 2015 by the Genetics Society of America.

  2. Variance estimation in the analysis of microarray data

    KAUST Repository

    Wang, Yuedong

    2009-04-01

    Microarrays are one of the most widely used high throughput technologies. One of the main problems in the area is that conventional estimates of the variances that are required in the t-statistic and other statistics are unreliable owing to the small number of replications. Various methods have been proposed in the literature to overcome this lack of degrees of freedom problem. In this context, it is commonly observed that the variance increases proportionally with the intensity level, which has led many researchers to assume that the variance is a function of the mean. Here we concentrate on estimation of the variance as a function of an unknown mean in two models: the constant coefficient of variation model and the quadratic variance-mean model. Because the means are unknown and estimated with few degrees of freedom, naive methods that use the sample mean in place of the true mean are generally biased because of the errors-in-variables phenomenon. We propose three methods for overcoming this bias. The first two are variations on the theme of the so-called heteroscedastic simulation-extrapolation estimator, modified to estimate the variance function consistently. The third class of estimators is entirely different, being based on semiparametric information calculations. Simulations show the power of our methods and their lack of bias compared with the naive method that ignores the measurement error. The methodology is illustrated by using microarray data from leukaemia patients.
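
The errors-in-variables bias discussed above is easy to reproduce for the constant coefficient of variation model: plugging the sample mean in for the unknown true mean underestimates the squared CV when replicates are few. A hypothetical simulation with gamma-distributed intensities and a true squared CV of 1:

```python
import numpy as np

rng = np.random.default_rng(4)
G = 20000                                  # number of genes
c_true = 1.0                               # true squared coefficient of variation
mu = rng.uniform(50, 500, G)               # per-gene true mean intensities

def naive_cv2(n):
    """Average over genes of s^2 / xbar^2 with n replicates (naive plug-in)."""
    shape = 1.0 / c_true                   # gamma distribution: CV^2 = 1 / shape
    y = rng.gamma(shape, mu[:, None] * c_true, size=(G, n))
    return float((y.var(axis=1, ddof=1) / y.mean(axis=1) ** 2).mean())

cv2_n2 = naive_cv2(2)                      # biased low with 2 replicates (about 2/3 here)
cv2_n50 = naive_cv2(50)                    # close to the true value of 1 with 50 replicates
print(cv2_n2, cv2_n50)
```

This is the bias the proposed simulation-extrapolation and semiparametric estimators are designed to remove.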

  3. Subjective neighborhood assessment and physical inactivity: An examination of neighborhood-level variance.

    Science.gov (United States)

    Prochaska, John D; Buschmann, Robert N; Jupiter, Daniel; Mutambudzi, Miriam; Peek, M Kristen

    2018-06-01

Research suggests a linkage between perceptions of neighborhood quality and the likelihood of engaging in leisure-time physical activity. Often in these studies, intra-neighborhood variance is viewed as something to be controlled for statistically. However, we hypothesized that intra-neighborhood variance in perceptions of neighborhood quality may be contextually relevant. We examined the relationship between intra-neighborhood variance of subjective neighborhood quality and neighborhood-level reported physical inactivity across 48 neighborhoods within a medium-sized city, Texas City, Texas, using survey data from 2706 residents collected between 2004 and 2006. Neighborhoods where the aggregated perception of neighborhood quality was poor also had a larger proportion of residents reporting being physically inactive. However, a higher degree of disagreement among residents within neighborhoods about their neighborhood quality was significantly associated with a lower proportion of residents reporting being physically inactive (p=0.001). Our results suggest that intra-neighborhood variability may be contextually relevant in studies seeking to better understand the relationship between neighborhood quality and behaviors sensitive to neighborhood environments, like physical activity. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. A New Approach for Predicting the Variance of Random Decrement Functions

    DEFF Research Database (Denmark)

    Asmussen, J. C.; Brincker, Rune

mean Gaussian distributed processes the RD functions are proportional to the correlation functions of the processes. If a linear structure is loaded by Gaussian white noise the modal parameters can be extracted from the correlation functions of the response, only. One of the weaknesses of the RD...... technique is that no consistent approach to estimate the variance of the RD functions is known. Only approximate relations are available, which can only be used under special conditions. The variance of the RD functions contains valuable information about the accuracy of the estimates. Furthermore, the variance...... can be used as a basis for a decision about how many time lags from the RD functions should be used in the modal parameter extraction procedure. This paper suggests a new method for estimating the variance of the RD functions. The method is consistent in the sense that the accuracy of the approach...

  5. A New Approach for Predicting the Variance of Random Decrement Functions

    DEFF Research Database (Denmark)

    Asmussen, J. C.; Brincker, Rune

    1998-01-01

mean Gaussian distributed processes the RD functions are proportional to the correlation functions of the processes. If a linear structure is loaded by Gaussian white noise the modal parameters can be extracted from the correlation functions of the response, only. One of the weaknesses of the RD...... technique is that no consistent approach to estimate the variance of the RD functions is known. Only approximate relations are available, which can only be used under special conditions. The variance of the RD functions contains valuable information about the accuracy of the estimates. Furthermore, the variance...... can be used as a basis for a decision about how many time lags from the RD functions should be used in the modal parameter extraction procedure. This paper suggests a new method for estimating the variance of the RD functions. The method is consistent in the sense that the accuracy of the approach...

  6. Generalized Forecast Error Variance Decomposition for Linear and Nonlinear Multivariate Models

    DEFF Research Database (Denmark)

    Lanne, Markku; Nyberg, Henri

    We propose a new generalized forecast error variance decomposition with the property that the proportions of the impact accounted for by innovations in each variable sum to unity. Our decomposition is based on the well-established concept of the generalized impulse response function. The use of t...
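
    The summing-to-unity property can be illustrated with a small numpy sketch of a Pesaran-Shin-style generalized FEVD with the row normalization the abstract proposes; the function name, toy MA coefficients, and horizon are illustrative, not the authors' code.

```python
import numpy as np

def gfevd(ma_coefs, sigma):
    """Generalized forecast error variance decomposition (Pesaran-Shin
    style), renormalized so the shares in each row sum to unity."""
    n = sigma.shape[0]
    num = np.zeros((n, n))
    den = np.zeros(n)
    for a in ma_coefs:                    # MA(infinity) truncated at horizon H
        num += (a @ sigma) ** 2           # (e_i' A_h Sigma e_j)^2
        den += np.diag(a @ sigma @ a.T)   # h-step forecast error variance of i
    theta = num / (np.diag(sigma)[None, :] * den[:, None])
    return theta / theta.sum(axis=1, keepdims=True)   # rows now sum to one

# Toy bivariate example: identity impact matrix plus one lag of spillover.
ma = [np.eye(2), np.array([[0.5, 0.2], [0.1, 0.3]])]
sigma = np.array([[1.0, 0.3], [0.3, 1.0]])
shares = gfevd(ma, sigma)
```

    Without the final renormalization, the generalized shares of correlated shocks would not add up to one, which is the deficiency this decomposition addresses.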

  7. How does variance in fertility change over the demographic transition?

    Science.gov (United States)

    Hruschka, Daniel J; Burger, Oskar

    2016-04-19

    Most work on the human fertility transition has focused on declines in mean fertility. However, understanding changes in the variance of reproductive outcomes can be equally important for evolutionary questions about the heritability of fertility, individual determinants of fertility and changing patterns of reproductive skew. Here, we document how variance in completed fertility among women (45-49 years) differs across 200 surveys in 72 low- to middle-income countries where fertility transitions are currently in progress at various stages. Nearly all (91%) of samples exhibit variance consistent with a Poisson process of fertility, which places systematic, and often severe, theoretical upper bounds on the proportion of variance that can be attributed to individual differences. In contrast to the pattern of total variance, these upper bounds increase from high- to mid-fertility samples, then decline again as samples move from mid to low fertility. Notably, the lowest fertility samples often deviate from a Poisson process. This suggests that as populations move to low fertility their reproduction shifts from a rate-based process to a focus on an ideal number of children. We discuss the implications of these findings for predicting completed fertility from individual-level variables. © 2016 The Author(s).
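
    The Poisson upper bound the abstract invokes is easy to compute: under a pure Poisson process the variance equals the mean, so at most the excess variance share, 1 - mean/variance, can reflect stable individual differences. A small simulation sketch (the gamma mixture of rates is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical completed-fertility counts: a gamma-Poisson mixture, so the
# variance exceeds the Poisson (variance == mean) baseline.
lam = rng.gamma(shape=20.0, scale=0.25, size=10_000)   # heterogeneous rates
kids = rng.poisson(lam)

m, v = kids.mean(), kids.var()
fano = v / m                      # equals 1 under a pure Poisson process
# Upper bound on the share of variance attributable to individual differences:
bound = max(0.0, 1.0 - m / v)
```

    Here the mixture contributes variance 1.25 on top of the Poisson variance of 5, so roughly a fifth of the total variance can, at most, be attributed to between-woman differences.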

  8. Variance heterogeneity in Saccharomyces cerevisiae expression data: trans-regulation and epistasis.

    Science.gov (United States)

    Nelson, Ronald M; Pettersson, Mats E; Li, Xidan; Carlborg, Örjan

    2013-01-01

    Here, we describe the results from the first variance heterogeneity Genome Wide Association Study (VGWAS) on yeast expression data. Using this forward genetics approach, we show that the genetic regulation of gene-expression in the budding yeast, Saccharomyces cerevisiae, includes mechanisms that can lead to variance heterogeneity in the expression between genotypes. Additionally, we performed a mean effect association study (GWAS). Comparing the mean and variance heterogeneity analyses, we find that the mean expression level is under genetic regulation from a larger absolute number of loci but that a higher proportion of the variance controlling loci were trans-regulated. Both mean and variance regulating loci cluster in regulatory hotspots that affect a large number of phenotypes; a single variance-controlling locus, mapping close to DIA2, was found to be involved in more than 10% of the significant associations. It has been suggested in the literature that variance-heterogeneity between the genotypes might be due to genetic interactions. We therefore screened the multi-locus genotype-phenotype maps for several traits where multiple associations were found, for indications of epistasis. Several examples of two and three locus genetic interactions were found to involve variance-controlling loci, with reports from the literature corroborating the functional connections between the loci. By using a new analytical approach to re-analyze a powerful existing dataset, we are thus able to both provide novel insights to the genetic mechanisms involved in the regulation of gene-expression in budding yeast and experimentally validate epistasis as an important mechanism underlying genetic variance-heterogeneity between genotypes.
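
    A variance-heterogeneity association of the kind described here can be screened for with a Brown-Forsythe-type test comparing expression dispersion across genotype classes; this numpy sketch is a generic illustration, not the VGWAS pipeline used in the study.

```python
import numpy as np

def brown_forsythe_F(*groups):
    """One-way ANOVA F statistic on absolute deviations from each group's
    median: large values indicate variance heterogeneity between groups."""
    z = [np.abs(np.asarray(g) - np.median(g)) for g in groups]
    k, n = len(z), sum(len(g) for g in z)
    grand = np.concatenate(z).mean()
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in z)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in z)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

rng = np.random.default_rng(0)
aa = rng.normal(0.0, 1.0, 300)   # genotype class with low dispersion
bb = rng.normal(0.0, 3.0, 300)   # same mean expression, inflated dispersion
f_het = brown_forsythe_F(aa, bb)
f_same = brown_forsythe_F(aa, rng.normal(0.0, 1.0, 300))
```

    The two classes above have identical mean expression, so a mean-effect GWAS would miss the locus entirely while the dispersion contrast is strongly detected.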

  9. Increased gender variance in autism spectrum disorders and attention deficit hyperactivity disorder.

    Science.gov (United States)

    Strang, John F; Kenworthy, Lauren; Dominska, Aleksandra; Sokoloff, Jennifer; Kenealy, Laura E; Berl, Madison; Walsh, Karin; Menvielle, Edgardo; Slesaransky-Poe, Graciela; Kim, Kyung-Eun; Luong-Tran, Caroline; Meagher, Haley; Wallace, Gregory L

    2014-11-01

    Evidence suggests over-representation of autism spectrum disorders (ASDs) and behavioral difficulties among people referred for gender issues, but rates of the wish to be the other gender (gender variance) among different neurodevelopmental disorders are unknown. This chart review study explored rates of gender variance as reported by parents on the Child Behavior Checklist (CBCL). Children with different neurodevelopmental disorders, namely ASD (N = 147, 24 females and 123 males), attention deficit hyperactivity disorder (ADHD; N = 126, 38 females and 88 males), or a medical neurodevelopmental disorder (N = 116, 57 females and 59 males), were compared with two non-referred groups [a control sample (N = 165, 61 females and 104 males) and non-referred participants in the CBCL standardization sample (N = 1,605, 754 females and 851 males)]. Significantly greater proportions of participants with ASD (5.4%) or ADHD (4.8%) had parent-reported gender variance than in the combined medical group (1.7%) or non-referred comparison groups (0-0.7%). As compared to non-referred comparisons, participants with ASD were 7.59 times more likely to express gender variance; participants with ADHD were 6.64 times more likely to express gender variance. The medical neurodevelopmental disorder group did not differ from non-referred samples in likelihood to express gender variance. Gender variance was related to elevated emotional symptoms in ADHD, but not in ASD. After accounting for sex ratio differences between the neurodevelopmental disorder and non-referred comparison groups, gender variance occurred equally in females and males.

  10. Estimating an Effect Size in One-Way Multivariate Analysis of Variance (MANOVA)

    Science.gov (United States)

    Steyn, H. S., Jr.; Ellis, S. M.

    2009-01-01

    When two or more univariate population means are compared, the proportion of variation in the dependent variable accounted for by population group membership is eta-squared. This effect size can be generalized by using multivariate measures of association, based on the multivariate analysis of variance (MANOVA) statistics, to establish whether…
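
    As a concrete univariate starting point, eta-squared is simply the between-group share of the total sum of squares; a minimal sketch (the data are made up):

```python
import numpy as np

def eta_squared(*groups):
    """Proportion of variation in the dependent variable accounted for by
    group membership: SS_between / SS_total in a one-way layout."""
    pooled = np.concatenate(groups)
    grand = pooled.mean()
    ss_total = ((pooled - grand) ** 2).sum()
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    return ss_between / ss_total

eta2 = eta_squared(np.array([1.0, 2.0, 3.0]), np.array([4.0, 5.0, 6.0]))
```

    For these two groups, SS_between = 13.5 and SS_total = 17.5, so about 77% of the variation is accounted for by group membership; the multivariate measures discussed in the record generalize this single ratio to several dependent variables at once.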

  11. The Trend Odds Model for Ordinal Data‡

    Science.gov (United States)

    Capuano, Ana W.; Dawson, Jeffrey D.

    2013-01-01

    Ordinal data appear in a wide variety of scientific fields. These data are often analyzed using ordinal logistic regression models that assume proportional odds. When this assumption is not met, it may be possible to capture the lack of proportionality using a constrained structural relationship between the odds and the cut-points of the ordinal values (Peterson and Harrell, 1990). We consider a trend odds version of this constrained model, where the odds parameter increases or decreases in a monotonic manner across the cut-points. We demonstrate algebraically and graphically how this model is related to latent logistic, normal, and exponential distributions. In particular, we find that scale changes in these potential latent distributions are consistent with the trend odds assumption, with the logistic and exponential distributions having odds that increase in a linear or nearly linear fashion. We show how to fit this model using SAS Proc Nlmixed, and perform simulations under proportional odds and trend odds processes. We find that the added complexity of the trend odds model gives improved power over the proportional odds model when there are moderate to severe departures from proportionality. A hypothetical dataset is used to illustrate the interpretation of the trend odds model, and we apply this model to a Swine Influenza example where the proportional odds assumption appears to be violated. PMID:23225520

  12. The trend odds model for ordinal data.

    Science.gov (United States)

    Capuano, Ana W; Dawson, Jeffrey D

    2013-06-15

    Ordinal data appear in a wide variety of scientific fields. These data are often analyzed using ordinal logistic regression models that assume proportional odds. When this assumption is not met, it may be possible to capture the lack of proportionality using a constrained structural relationship between the odds and the cut-points of the ordinal values. We consider a trend odds version of this constrained model, wherein the odds parameter increases or decreases in a monotonic manner across the cut-points. We demonstrate algebraically and graphically how this model is related to latent logistic, normal, and exponential distributions. In particular, we find that scale changes in these potential latent distributions are consistent with the trend odds assumption, with the logistic and exponential distributions having odds that increase in a linear or nearly linear fashion. We show how to fit this model using SAS Proc NLMIXED and perform simulations under proportional odds and trend odds processes. We find that the added complexity of the trend odds model gives improved power over the proportional odds model when there are moderate to severe departures from proportionality. A hypothetical data set is used to illustrate the interpretation of the trend odds model, and we apply this model to a swine influenza example wherein the proportional odds assumption appears to be violated. Copyright © 2012 John Wiley & Sons, Ltd.
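
    The link between a latent scale change and trend odds can be checked numerically: cutting a logistic latent variable at fixed cut-points, a pure location shift gives a constant log cumulative odds ratio (proportional odds), whereas a scale change makes it drift monotonically across cut-points. A small simulation sketch (the cut-points and scales are arbitrary choices, not values from the paper):

```python
import numpy as np

def cum_odds(latent, cuts):
    """Cumulative odds P(Y<=j)/P(Y>j) for an ordinal Y built by cutting
    a latent variable at the given cut-points."""
    return np.array([(latent <= c).mean() / (latent > c).mean() for c in cuts])

rng = np.random.default_rng(1)
cuts = np.array([-1.0, 0.0, 1.0])
base = rng.logistic(loc=0.0, scale=1.0, size=200_000)
scaled = rng.logistic(loc=0.0, scale=2.0, size=200_000)  # scale change, no shift

log_or = np.log(cum_odds(base, cuts) / cum_odds(scaled, cuts))
# Under proportional odds these three values would be equal; the latent
# scale change makes them increase monotonically across the cut-points.
```

    Analytically the log cumulative odds at cut-point c are c/s for logistic scale s, so the log odds ratios here are c/2: a linear trend across cut-points, matching the abstract's observation about latent logistic scale changes.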

  13. Genetic factors explain half of all variance in serum eosinophil cationic protein

    DEFF Research Database (Denmark)

    Elmose, Camilla; Sverrild, Asger; van der Sluis, Sophie

    2014-01-01

    The aims were to study factors associated with variation in serum ECP and to determine the relative proportion of the variation in ECP due to genetic and non-genetic factors, in an adult twin sample. METHODS: A sample of 575 twins, selected through a proband with self-reported asthma, had serum ECP, lung function, airway responsiveness to methacholine, exhaled nitric oxide, and skin test reactivity measured. Linear regression analysis and variance component models were used to study factors associated with variation in ECP and the relative genetic influence on ECP levels. RESULTS: Sex (regression coefficient = -0.107, P ...) ... was statistically non-significant (r = -0.11, P = 0.50). CONCLUSION: Around half of all variance in serum ECP is explained by genetic factors. Serum ECP is influenced by sex, BMI, and airway responsiveness. Serum ECP and airway responsiveness seem not to share genetic variance.
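
    For intuition, the classical twin-design logic behind a statement like "half of all variance is genetic" can be sketched with Falconer's approximation, h2 = 2(r_MZ - r_DZ). The correlations below are hypothetical, not values from this study, which used formal variance-component models:

```python
def falconer_h2(r_mz, r_dz):
    """Falconer's approximation of heritability from monozygotic (MZ) and
    dizygotic (DZ) twin correlations: h2 = 2 * (r_MZ - r_DZ)."""
    return 2.0 * (r_mz - r_dz)

# Hypothetical twin correlations that would imply ~50% heritability:
h2 = falconer_h2(0.55, 0.30)
```

    The logic is that MZ twins share all and DZ twins on average half of their segregating genes, so doubling the correlation gap isolates the additive genetic share of variance.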

  14. Deterministic mean-variance-optimal consumption and investment

    DEFF Research Database (Denmark)

    Christiansen, Marcus; Steffensen, Mogens

    2013-01-01

    In dynamic optimal consumption-investment problems one typically aims to find an optimal control from the set of adapted processes. This is also the natural starting point in the case of a mean-variance objective. In contrast, we solve the optimization problem with the special feature that the consumption rate and the investment proportion are constrained to be deterministic processes. As a result we get rid of a series of unwanted features of the stochastic solution, including diffusive consumption, satisfaction points and consistency problems. Deterministic strategies typically appear in unit-linked life insurance contracts, where the life-cycle investment strategy is age dependent but wealth independent. We explain how optimal deterministic strategies can be found numerically and present an example from life insurance where we compare the optimal solution with suboptimal deterministic strategies...

  15. Sample Size Calculation for Controlling False Discovery Proportion

    Directory of Open Access Journals (Sweden)

    Shulian Shang

    2012-01-01

    The false discovery proportion (FDP), the proportion of incorrect rejections among all rejections, is a direct measure of the abundance of false positive findings in multiple testing. Many methods have been proposed to control the FDP, but they are too conservative to be useful for power analysis. Study designs controlling the mean of the FDP, which is the false discovery rate, have been commonly used. However, there has been little attempt to design studies with direct FDP control to achieve a certain level of efficiency. We provide a sample size calculation method using the variance formula of the FDP under weak-dependence assumptions to achieve the desired overall power. The relationship between design parameters and sample size is explored. The adequacy of the procedure is assessed by simulation. We illustrate the method using estimated correlations from a prostate cancer dataset.
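
    The distinction between FDP and its mean (the FDR) can be made concrete: FDP is the realized ratio V/R in a single experiment. A sketch using Benjamini-Hochberg rejections on simulated p-values (the mixture below is illustrative and does not reproduce the paper's design-stage variance formula):

```python
import numpy as np

def bh_mask(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure; True marks a rejection."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    below = p[order] <= q * np.arange(1, m + 1) / m
    k = below.nonzero()[0].max() + 1 if below.any() else 0
    mask = np.zeros(m, dtype=bool)
    mask[order[:k]] = True
    return mask

rng = np.random.default_rng(1)
p = np.concatenate([rng.uniform(size=1000),         # true nulls
                    rng.uniform(size=200) * 1e-6])  # strong signals
is_null = np.arange(1200) < 1000

rej = bh_mask(p, q=0.05)
fdp = (rej & is_null).sum() / max(rej.sum(), 1)   # realized FDP, not FDR
```

    Repeating the experiment would give a different FDP each time; FDR control only pins down the average, which is why the record's variance formula for the FDP is needed for design with direct FDP control.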

  16. Poisson pre-processing of nonstationary photonic signals: Signals with equality between mean and variance.

    Science.gov (United States)

    Poplová, Michaela; Sovka, Pavel; Cifra, Michal

    2017-01-01

    Photonic signals are broadly exploited in communication and sensing and they typically exhibit Poisson-like statistics. In a common scenario where the intensity of the photonic signals is low and one needs to remove a nonstationary trend of the signals for any further analysis, one faces an obstacle: due to the dependence between the mean and variance typical for a Poisson-like process, information about the trend remains in the variance even after the trend has been subtracted, possibly yielding artifactual results in further analyses. Commonly available detrending or normalizing methods cannot cope with this issue. To alleviate this issue we developed a suitable pre-processing method for the signals that originate from a Poisson-like process. In this paper, a Poisson pre-processing method for nonstationary time series with Poisson distribution is developed and tested on computer-generated model data and experimental data of chemiluminescence from human neutrophils and mung seeds. The presented method transforms a nonstationary Poisson signal into a stationary signal with a Poisson distribution while preserving the type of photocount distribution and phase-space structure of the signal. The importance of the suggested pre-processing method is shown in Fano factor and Hurst exponent analysis of both computer-generated model signals and experimental photonic signals. It is demonstrated that our pre-processing method is superior to standard detrending-based methods whenever further signal analysis is sensitive to variance of the signal.
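
    One classic way to remove a rate trend while preserving both the Poisson distribution and the mean-variance equality is binomial thinning: keeping each count with probability lam_min/lam_t maps Poisson(lam_t) to Poisson(lam_min). This is a generic sketch, not necessarily the transform the paper develops, and it assumes the trend (rate) is known or separately estimated:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(2_000)
lam = 5.0 + 3.0 * np.sin(2 * np.pi * t / 500)   # nonstationary Poisson rate
x = rng.poisson(lam)

# Binomial thinning: unlike subtracting the trend (which leaves the trend
# imprinted in the variance), thinning keeps the series Poisson, so the
# detrended signal still satisfies mean == variance.
p_keep = lam.min() / lam                 # here the true rate is known
y = rng.binomial(x, p_keep)              # stationary, ~Poisson(lam.min())
```

    In real photonic data the rate would have to be estimated (e.g., by a moving average) before thinning, at the cost of discarding some counts in high-rate segments.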

  17. Inverse sampled Bernoulli (ISB) procedure for estimating a population proportion, with nuclear material applications

    International Nuclear Information System (INIS)

    Wright, T.

    1982-01-01

    A new sampling procedure is introduced for estimating a population proportion. The procedure combines the ideas of inverse binomial sampling and Bernoulli sampling. An unbiased estimator is given with its variance. The procedure can be viewed as a generalization of inverse binomial sampling
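
    The plain inverse-binomial ingredient of such procedures has a well-known unbiased estimator: sampling until the k-th success out of N total trials, Haldane's (k-1)/(N-1) estimates the proportion without bias, unlike the naive k/N. A simulation sketch (the ISB hybrid of the record itself is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(3)
p_true, k = 0.1, 5

# Inverse binomial sampling: draw Bernoulli trials until the k-th success;
# the total trial count is k plus a negative-binomial number of failures.
n_trials = k + rng.negative_binomial(k, p_true, size=50_000)

# Haldane's estimator (k-1)/(N-1) is unbiased under this stopping rule;
# the naive k/N is biased upward.
est_unbiased = np.mean((k - 1) / (n_trials - 1))
est_naive = np.mean(k / n_trials)
```

    Per realization k/N >= (k-1)/(N-1) always holds, which is where the upward bias of the naive estimator comes from.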

  18. Variance components for body weight in Japanese quails (Coturnix japonica)

    Directory of Open Access Journals (Sweden)

    RO Resende

    2005-03-01

    The objective of this study was to estimate the variance components for body weight in Japanese quails by Bayesian procedures. The body weight at hatch (BWH) and at 7 (BW07), 14 (BW14), 21 (BW21) and 28 days of age (BW28) of 3,520 quails was recorded from August 2001 to June 2002. A multiple-trait animal model with additive genetic, maternal environment and residual effects was implemented by Gibbs sampling methodology. A single Gibbs sampling with 80,000 rounds was generated by the program MTGSAM (Multiple Trait Gibbs Sampling in Animal Model). Normal and inverted Wishart distributions were used as prior distributions for the random effects and the variance components, respectively. Variance components were estimated based on the 500 samples that were left after elimination of 30,000 rounds in the burn-in period and 100 rounds of each thinning interval. The posterior means of additive genetic variance components were 0.15; 4.18; 14.62; 27.18 and 32.68; the posterior means of maternal environment variance components were 0.23; 1.29; 2.76; 4.12 and 5.16; and the posterior means of residual variance components were 0.084; 6.43; 22.66; 31.21 and 30.85, at hatch, 7, 14, 21 and 28 days old, respectively. The posterior means of heritability were 0.33; 0.35; 0.36; 0.43 and 0.47 at hatch, 7, 14, 21 and 28 days old, respectively. These results indicate that heritability increased with age. On the other hand, after hatch there was a marked reduction in the maternal environment variance proportion of the phenotypic variance, whose estimates were 0.50; 0.11; 0.07; 0.07 and 0.08 for BWH, BW07, BW14, BW21 and BW28, respectively. The genetic correlation between weights at different ages was high, except for those estimates between BWH and weight at other ages. Changes in body weight of quails can be efficiently achieved by selection.
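
    The quoted variance proportions are ratios of each component to the phenotypic total, and can be checked directly from the posterior means listed above. Ratios of posterior means land exactly on the reported maternal-environment proportions and close to (though not exactly on) the reported heritabilities, which were presumably averaged over the full posterior:

```python
# Posterior means quoted in the abstract, at hatch, 7, 14, 21 and 28 days:
va = [0.15, 4.18, 14.62, 27.18, 32.68]   # additive genetic
vm = [0.23, 1.29, 2.76, 4.12, 5.16]      # maternal environment
ve = [0.084, 6.43, 22.66, 31.21, 30.85]  # residual

# Heritability h2 = Va / (Va + Vm + Ve); maternal proportion m2 = Vm / total.
h2 = [a / (a + m + e) for a, m, e in zip(va, vm, ve)]
m2 = [m / (a + m + e) for a, m, e in zip(va, vm, ve)]
```

    The m2 ratios reproduce the quoted 0.50, 0.11, 0.07, 0.07, 0.08 pattern, confirming the sharp post-hatch drop in the maternal-environment share.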

  19. Downside Variance Risk Premium

    OpenAIRE

    Feunou, Bruno; Jahan-Parvar, Mohammad; Okou, Cedric

    2015-01-01

    We propose a new decomposition of the variance risk premium in terms of upside and downside variance risk premia. The difference between upside and downside variance risk premia is a measure of skewness risk premium. We establish that the downside variance risk premium is the main component of the variance risk premium, and that the skewness risk premium is a priced factor with significant prediction power for aggregate excess returns. Our empirical investigation highlights the positive and s...

  20. Hydrograph variances over different timescales in hydropower production networks

    Science.gov (United States)

    Zmijewski, Nicholas; Wörman, Anders

    2016-08-01

    The operation of water reservoirs involves a spectrum of timescales based on the distribution of stream flow travel times between reservoirs, as well as the technical, environmental, and social constraints imposed on the operation. In this research, a hydrodynamically based description of the flow between hydropower stations was implemented to study the relative importance of wave diffusion on the spectrum of hydrograph variance in a regulated watershed. Using spectral decomposition of the effluence hydrograph of a watershed, an exact expression of the variance in the outflow response was derived as a function of the trends of hydraulic and geomorphologic dispersion and the management of production and reservoirs. We show that the power spectra of the involved time series follow nearly fractal patterns, which facilitates examination of the relative importance of wave diffusion and possible changes in production demand on the outflow spectrum. The exact spectral solution can also identify statistical bounds of future demand patterns due to limitations in storage capacity. The impact of the hydraulic description of the stream flow on the reservoir discharge was examined for a given power demand in River Dalälven, Sweden, as a function of a stream flow Peclet number. The regulation of hydropower production on the River Dalälven generally increased the short-term variance in the effluence hydrograph, whereas wave diffusion decreased the short-term variance over periods of white noise as a result of current production objectives.

  1. R package MVR for Joint Adaptive Mean-Variance Regularization and Variance Stabilization.

    Science.gov (United States)

    Dazard, Jean-Eudes; Xu, Hua; Rao, J Sunil

    2011-01-01

    We present an implementation in the R language for statistical computing of our recent non-parametric joint adaptive mean-variance regularization and variance stabilization procedure. The method is specifically suited for handling difficult problems posed by high-dimensional multivariate datasets (the p ≫ n paradigm), such as in 'omics'-type data, among which are that the variance is often a function of the mean, variable-specific estimators of variances are not reliable, and test statistics have low power due to a lack of degrees of freedom. The implementation offers a complete set of features including: (i) normalization and/or variance stabilization functions, (ii) computation of mean-variance-regularized t and F statistics, (iii) generation of diverse diagnostic plots, (iv) synthetic and real 'omics' test datasets, (v) a computationally efficient implementation using C interfacing, and an option for parallel computing, (vi) a manual and documentation on how to set up a cluster. To make each feature as user-friendly as possible, only one subroutine per functionality is to be handled by the end-user. It is available as an R package, called MVR ('Mean-Variance Regularization'), downloadable from CRAN.

  2. Modality-Driven Classification and Visualization of Ensemble Variance

    Energy Technology Data Exchange (ETDEWEB)

    Bensema, Kevin; Gosink, Luke; Obermaier, Harald; Joy, Kenneth I.

    2016-10-01

    Advances in computational power now enable domain scientists to address conceptual and parametric uncertainty by running simulations multiple times in order to sufficiently sample the uncertain input space. While this approach helps address conceptual and parametric uncertainties, the ensemble datasets produced by this technique present a special challenge to visualization researchers as the ensemble dataset records a distribution of possible values for each location in the domain. Contemporary visualization approaches that rely solely on summary statistics (e.g., mean and variance) cannot convey the detailed information encoded in ensemble distributions that are paramount to ensemble analysis; summary statistics provide no information about modality classification and modality persistence. To address this problem, we propose a novel technique that classifies high-variance locations based on the modality of the distribution of ensemble predictions. Additionally, we develop a set of confidence metrics to inform the end-user of the quality of fit between the distribution at a given location and its assigned class. We apply a similar method to time-varying ensembles to illustrate the relationship between peak variance and bimodal or multimodal behavior. These classification schemes enable a deeper understanding of the behavior of the ensemble members by distinguishing between distributions that can be described by a single tendency and distributions which reflect divergent trends in the ensemble.

  3. Estimation of noise-free variance to measure heterogeneity.

    Directory of Open Access Journals (Sweden)

    Tilo Winkler

    Variance is a statistical parameter used to characterize heterogeneity or variability in data sets. However, measurements commonly include noise, as random errors superimposed on the actual value, which may substantially increase the variance compared to a noise-free data set. Our aim was to develop and validate a method to estimate noise-free spatial heterogeneity of pulmonary perfusion using dynamic positron emission tomography (PET) scans. On theoretical grounds, we demonstrate a linear relationship between the total variance of a data set derived from averages of n multiple measurements and the reciprocal of n. Using multiple measurements with varying n yields estimates of the linear relationship, including the noise-free variance as the constant parameter. In PET images, n is proportional to the number of registered decay events, and the variance of the image is typically normalized by the square of its mean value, yielding a coefficient of variation squared (CV²). The method was evaluated with a Jaszczak phantom as reference spatial heterogeneity (CVr²) for comparison with our estimate of noise-free or 'true' heterogeneity (CVt²). We found that CVt² was only 5.4% higher than CVr². Additional evaluations were conducted on 38 PET scans of pulmonary perfusion using ¹³NN-saline injection. The mean CVt² was 0.10 (range: 0.03-0.30), while the mean CV² including noise was 0.24 (range: 0.10-0.59). CVt² was on average 41.5% of the CV² measured including noise (range: 17.8-71.2%). The reproducibility of CVt² was evaluated using three repeated PET scans from five subjects. Individual CVt² were within 16% of each subject's mean, and paired t-tests revealed no difference among the results from the three consecutive PET scans. In conclusion, our method provides reliable noise-free estimates of CVt² in PET scans, and may be useful for similar statistical problems in experimental data.
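
    The linear relationship at the heart of the method is easy to demonstrate: averaging n replicates leaves the true spatial variance untouched but divides the noise variance by n, so the total variance is linear in 1/n and its intercept is the noise-free variance. A simulation sketch with invented numbers:

```python
import numpy as np

rng = np.random.default_rng(4)
true_field = rng.normal(10.0, 2.0, size=500)   # spatial values, variance 4.0
noise_sd = 6.0                                 # per-measurement noise

ns = np.array([1, 2, 4, 8, 16, 32])
total_var = []
for n in ns:
    # Averaging n replicates shrinks the noise variance by a factor 1/n.
    meas = true_field + rng.normal(0.0, noise_sd, (n, 500)).mean(axis=0)
    total_var.append(meas.var())

# Var_total(n) = Var_true + Var_noise / n: a straight line in 1/n whose
# intercept estimates the noise-free variance.
slope, intercept = np.polyfit(1.0 / ns, total_var, 1)
```

    The fitted intercept recovers the true spatial variance of 4.0 (and the slope the noise variance of 36) despite every individual image being noise-dominated.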

  4. Trend Analysis Using Microcomputers.

    Science.gov (United States)

    Berger, Carl F.

    A trend analysis statistical package and additional programs for the Apple microcomputer are presented. They illustrate strategies of data analysis suitable to the graphics and processing capabilities of the microcomputer. The programs analyze data sets using examples of: (1) analysis of variance with multiple linear regression; (2) exponential…

  5. Estimation of measurement variances

    International Nuclear Information System (INIS)

    Anon.

    1981-01-01

    In the previous two sessions, it was assumed that the measurement error variances were known quantities when the variances of the safeguards indices were calculated. These known quantities are actually estimates based on historical data and on data generated by the measurement program. Session 34 discusses how measurement error parameters are estimated for different situations. The various error types are considered. The purpose of the session is to enable participants to: (1) estimate systematic error variances from standards data; (2) estimate random error variances from data such as replicate measurement data; (3) perform a simple analysis of variance to characterize the measurement error structure when biases vary over time.
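
    Point (2), estimating the random error variance from replicate data, amounts to pooling the within-item sample variances; a minimal sketch with invented numbers:

```python
import numpy as np

def random_error_variance(replicates):
    """Pooled within-item variance across replicate measurements of the
    same items: an estimate of the random measurement-error variance."""
    reps = np.asarray(replicates, dtype=float)   # shape (items, replicates)
    return reps.var(axis=1, ddof=1).mean()

rng = np.random.default_rng(7)
true_values = rng.uniform(10.0, 20.0, size=(2000, 1))
measured = true_values + rng.normal(0.0, 0.5, size=(2000, 3))  # 3 replicates

err_var = random_error_variance(measured)   # estimates 0.5**2 = 0.25
```

    Because each item's true value is constant across its replicates, the within-item spread reflects only the random error, regardless of how widely the true values themselves vary.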

  6. Simulation study on heterogeneous variance adjustment for observations with different measurement error variance

    DEFF Research Database (Denmark)

    Pitkänen, Timo; Mäntysaari, Esa A; Nielsen, Ulrik Sander

    2013-01-01

    The Nordic Holstein yield evaluation model describes all available milk, protein and fat test-day yields from Denmark, Finland and Sweden. In its current form all variance components are estimated from observations recorded under conventional milking systems. Also the model for heterogeneity of variance correction is developed for the same observations. As automated milking systems are becoming more popular the current evaluation model needs to be enhanced to account for the different measurement error variances of observations from automated milking systems. In this simulation study different models and different approaches to account for heterogeneous variance when observations have different measurement error variances were investigated. Based on the results we propose to upgrade the currently applied models and to calibrate the heterogeneous variance adjustment method to yield the same genetic...

  7. Intercentre variance in patient reported outcomes is lower than objective rheumatoid arthritis activity measures

    DEFF Research Database (Denmark)

    Khan, Nasim Ahmed; Spencer, Horace Jack; Nikiphorou, Elena

    2017-01-01

    Objective: To assess intercentre variability in the ACR core set measures, DAS28 based on three variables (DAS28v3) and Routine Assessment of Patient Index Data 3 in a multinational study. Methods: Seven thousand and twenty-three patients were recruited (84 centres; 30 countries) using a standard ... Models were built to adjust for the remaining ACR core set measures (for each ACR core set measure or each composite index), socio-demographics and medical characteristics. ANOVA and analysis of covariance models yielded similar results, and ANOVA tables were used to present variance attributable to recruiting centre. Results: The proportion of variance attributable to recruiting centre was lower for patient-reported outcomes (PROs: pain, HAQ, patient global) compared with objective measures (joint counts, ESR, physician global) in all models. In the full model, variance in PROs attributable to recruiting...

  8. A COSMIC VARIANCE COOKBOOK

    International Nuclear Information System (INIS)

    Moster, Benjamin P.; Rix, Hans-Walter; Somerville, Rachel S.; Newman, Jeffrey A.

    2011-01-01

    Deep pencil beam surveys (<1 deg²) are of fundamental importance for studying the high-redshift universe. However, inferences about galaxy population properties (e.g., the abundance of objects) are in practice limited by 'cosmic variance'. This is the uncertainty in observational estimates of the number density of galaxies arising from the underlying large-scale density fluctuations. This source of uncertainty can be significant, especially for surveys which cover only small areas and for massive high-redshift galaxies. Cosmic variance for a given galaxy population can be determined using predictions from cold dark matter theory and the galaxy bias. In this paper, we provide tools for experiment design and interpretation. For a given survey geometry, we present the cosmic variance of dark matter as a function of mean redshift z-bar and redshift bin size Δz. Using a halo occupation model to predict galaxy clustering, we derive the galaxy bias as a function of mean redshift for galaxy samples of a given stellar mass range. In the linear regime, the cosmic variance of these galaxy samples is the product of the galaxy bias and the dark matter cosmic variance. We present a simple recipe using a fitting function to compute cosmic variance as a function of the angular dimensions of the field, z-bar, Δz, and stellar mass m*. We also provide tabulated values and a software tool. The accuracy of the resulting cosmic variance estimates (δσ_v/σ_v) is shown to be better than 20%. We find that for GOODS at z-bar = 2 and with Δz = 0.5, the relative cosmic variance of galaxies with m* > 10^11 M_sun is ∼38%, while it is ∼27% for GEMS and ∼12% for COSMOS. For galaxies of m* ∼ 10^10 M_sun, the relative cosmic variance is ∼19% for GOODS, ∼13% for GEMS, and ∼6% for COSMOS. This implies that cosmic variance is a significant source of uncertainty at z-bar = 2 for small fields and massive galaxies, while for larger fields and intermediate-mass galaxies, cosmic...

  9. Change-Point and Trend Analysis on Annual Maximum Discharge in Continental United States

    Science.gov (United States)

    Serinaldi, F.; Villarini, G.; Smith, J. A.; Krajewski, W. F.

    2008-12-01

    Annual maximum discharge records from 36 stations representing different hydro-climatic regimes in the continental United States with at least 100 years of records are used to investigate the presence of temporal trends and abrupt changes in mean and variance. Change-point analysis is performed by means of two non-parametric (Pettitt and CUSUM), one semi-parametric (Guan), and two parametric (Rodionov and Bayesian Change Point) tests. Two non-parametric (Mann-Kendall and Spearman) and one parametric (Pearson) tests are applied to detect the presence of temporal trends. Generalized Additive Model for Location Scale and Shape (GAMLSS) models are also used to parametrically model the streamflow data, exploiting their flexibility to account for changes and temporal trends in the parameters of distribution functions. Additionally, serial correlation is assessed in advance by computing the autocorrelation function (ACF), and the Hurst parameter is estimated using two estimators (aggregated variance and differenced variance methods) to investigate the presence of long-range dependence. The results of this study indicate a lack of long-range dependence in the maximum streamflow series. At some stations the authors found a statistically significant change point in the mean and/or variance, while in general they detected no statistically significant temporal trends.
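
    Of the listed tests, the CUSUM idea is the simplest to sketch: the cumulative sum of deviations from the overall mean peaks in magnitude near a change in mean. A minimal numpy illustration on a synthetic series with a single break (not the authors' implementation):

```python
import numpy as np

def cusum_changepoint(x):
    """Single change-in-mean locator: the CUSUM of deviations from the
    overall mean, S_k, is largest in magnitude near the break."""
    s = np.cumsum(x - np.mean(x))
    return int(np.argmax(np.abs(s[:-1]))) + 1   # first index of new regime

rng = np.random.default_rng(5)
series = np.concatenate([rng.normal(0.0, 1.0, 60),    # regime 1
                         rng.normal(1.5, 1.0, 40)])   # shifted mean
cp = cusum_changepoint(series)   # close to the true break at index 60
```

    In practice the peak CUSUM value would be compared against a reference distribution (as in the Pettitt test) before declaring the change point significant.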

  10. MCNP variance reduction overview

    International Nuclear Information System (INIS)

    Hendricks, J.S.; Booth, T.E.

    1985-01-01

    The MCNP code is rich in variance reduction features. Standard variance reduction methods found in most Monte Carlo codes are available as well as a number of methods unique to MCNP. We discuss the variance reduction features presently in MCNP as well as new ones under study for possible inclusion in future versions of the code

  11. Spectral Ambiguity of Allan Variance

    Science.gov (United States)

    Greenhall, C. A.

    1996-01-01

    We study the extent to which knowledge of Allan variance and other finite-difference variances determines the spectrum of a random process. The variance of first differences is known to determine the spectrum. We show that, in general, the Allan variance does not. A complete description of the ambiguity is given.
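
    For context, the (non-overlapping) Allan variance averages squared differences of adjacent bin means. A minimal sketch, run on white noise, for which the Allan variance falls off as 1/m; this single decay curve is exactly the kind of summary that, as the abstract notes, does not in general pin down the full spectrum:

```python
import numpy as np

def allan_variance(y, m):
    """Non-overlapping Allan variance of a data series y at averaging
    factor m: half the mean squared difference of adjacent bin means."""
    y = np.asarray(y, dtype=float)
    n_bins = len(y) // m
    means = y[:n_bins * m].reshape(n_bins, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(means) ** 2)

# White noise: AVAR(m) ≈ sigma^2 / m, so it decreases with m; flicker or
# random-walk noise would flatten or rise instead.
rng = np.random.default_rng(0)
y = rng.normal(0.0, 1.0, 100_000)
avars = [allan_variance(y, m) for m in (1, 10, 100)]
print(avars)   # roughly 1.0, 0.1, 0.01 for unit-variance white noise
```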

  12. Association between labour market trends and trends in young people's mental health in ten European countries 1983-2005.

    Science.gov (United States)

    Lager, Anton C J; Bremberg, Sven G

    2009-09-08

    Mental health problems have become more common among young people over the last twenty years, especially in certain countries. The reasons for this have remained unclear. The hypothesis tested in this study is that national trends in young people's mental health are associated with national trends in young people's labour market. National secular changes in the proportion of young people with mental health problems and national secular labour market changes were studied from 1983 to 2005 in Austria, Belgium, Denmark, Finland, Hungary, Norway, Spain, Sweden, Switzerland and the United Kingdom. The correlation between the national secular changes in the proportion of young people not in the labour force and the national secular changes in the proportion of young people with mental health symptoms was 0.77 for boys and 0.92 for girls. Labour market trends may have contributed to the deteriorating trend in mental health among young people. A true relationship, should other studies confirm it, would be an important aspect to take into account when forming labour market policies or policies concerning the delivery of higher education.

  13. Food Trends.

    Science.gov (United States)

    Schwenk, Nancy E.

    1991-01-01

    An overall perspective on trends in food consumption is presented. Nutrition awareness is at an all-time high; consumption is influenced by changes in disposable income, availability of convenience foods, smaller household size, and an increasing proportion of ethnic minorities in the population. (18 references) (LB)

  14. Association between labour market trends and trends in young people's mental health in ten European countries 1983-2005

    Directory of Open Access Journals (Sweden)

    Lager Anton CJ

    2009-09-01

    Background: Mental health problems have become more common among young people over the last twenty years, especially in certain countries. The reasons for this have remained unclear. The hypothesis tested in this study is that national trends in young people's mental health are associated with national trends in young people's labour market. Methods: National secular changes in the proportion of young people with mental health problems and national secular labour market changes were studied from 1983 to 2005 in Austria, Belgium, Denmark, Finland, Hungary, Norway, Spain, Sweden, Switzerland and the United Kingdom. Results: The correlation between the national secular changes in the proportion of young people not in the labour force and the national secular changes in the proportion of young people with mental health symptoms was 0.77 for boys and 0.92 for girls. Conclusion: Labour market trends may have contributed to the deteriorating trend in mental health among young people. A true relationship, should other studies confirm it, would be an important aspect to take into account when forming labour market policies or policies concerning the delivery of higher education.

  15. The Efficiency of Split Panel Designs in an Analysis of Variance Model

    Science.gov (United States)

    Wang, Wei-Guo; Liu, Hai-Jun

    2016-01-01

    We consider the efficiency of split panel designs in analysis of variance models, that is, the determination of the optimal proportion of cross-section series in the full sample so as to minimize the variances of best linear unbiased estimators of linear parameter combinations. An orthogonal matrix is constructed to obtain a manageable expression for the variances. On this basis, we derive a theorem for analyzing split panel design efficiency irrespective of the interest and budget parameters. Additionally, the relative efficiency of an estimator based on the split panel to an estimator based on a pure panel or a pure cross-section is presented. The analysis shows that the gains from the split panel can be quite substantial. We further consider the efficiency of the split panel design given a budget, and transform the problem into a constrained nonlinear integer program. Specifically, an efficient algorithm is designed to solve this program. Moreover, we combine one-at-a-time designs and factorial designs to illustrate the algorithm's efficiency with an empirical example concerning monthly consumer expenditure on food in 1985 in the Netherlands, and efficient ranges of the algorithm parameters are given to ensure a good solution. PMID:27163447

  16. Reexamining financial and economic predictability with new estimators of realized variance and variance risk premium

    DEFF Research Database (Denmark)

    Casas, Isabel; Mao, Xiuping; Veiga, Helena

    This study explores the predictive power of new estimators of the equity variance risk premium and conditional variance for future excess stock market returns, economic activity, and financial instability, both during and after the last global financial crisis. These estimators are obtained from … time-varying coefficient models are the ones showing considerably higher predictive power for stock market returns and financial instability during the financial crisis, suggesting that an extreme volatility period requires models that can adapt quickly to turmoil. … Moreover, a comparison of the overall results reveals that the conditional variance gains predictive power during the global financial crisis period. Furthermore, both the variance risk premium and conditional variance are determined to be predictors of future financial instability, whereas conditional …

  17. SYSTEMATIC SAMPLING FOR NON-LINEAR TREND IN MILK YIELD DATA

    OpenAIRE

    Tanuj Kumar Pandey; Vinod Kumar

    2014-01-01

    The present paper utilizes systematic sampling procedures for milk yield data exhibiting some non-linear trends. The best fitted mathematical forms of non-linear trend present in the milk yield data are obtained, and the expressions of average variances of the estimators of population mean under simple random, usual systematic and modified systematic sampling procedures have been derived for populations showing non-linear trend. A comparative study is made among the three sampling procedures.

  18. Trends in the training of female urology residents in Canada.

    Science.gov (United States)

    Anderson, Katherine; Tennankore, Karthik; Cox, Ashley

    2017-12-22

    There is limited research on why females do or do not choose a career in urology. Considering the increasing proportion of female medical students, we assessed trends in female applicants to urology programs in Canada and their post-residency career choices. Data from the Canadian Residency Matching Service (CaRMS) were used (1998-2015). Trends in the proportions of females applying and matching to surgical subspecialties, and applying and matching to urology, were computed. Surveys were sent to urology program directors to assess female residents' chosen career paths over the last decade. A significant increasing trend in the proportion of females applying to urology as their first-choice program was found (0.19 in 1998-99 to 0.27 in 2012-15; p=0.04). An increasing trend in the proportion of females successfully matching to urology was found, although it was not statistically significant (0.13 in 1998-99 to 0.24 in 2012-15; p=0.07). This was in keeping with the trends found for surgical programs overall. Female graduates choose a variety of career paths, with urogynecology being the most common fellowship (26%). The last two decades have seen an increase in the proportion of female students applying to urology in Canada. Female urology graduates pursue a variety of career paths. It remains imperative that both female and male medical students have early exposure and education about our subspecialty to ensure we continue to recruit the most talented candidates.

  19. A zero-variance-based scheme for variance reduction in Monte Carlo criticality

    Energy Technology Data Exchange (ETDEWEB)

    Christoforou, S.; Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands)

    2006-07-01

    A zero-variance scheme is derived and proven theoretically for criticality cases, and a simplified transport model is used for numerical demonstration. It is shown in practice that by appropriate biasing of the transition and collision kernels, a significant reduction in variance can be achieved. This is done using the adjoint forms of the emission and collision densities, obtained from a deterministic calculation, according to the zero-variance scheme. By using an appropriate algorithm, the figure of merit of the simulation increases by up to a factor of 50, with the possibility of an even larger improvement. In addition, it is shown that the biasing speeds up the convergence of the initial source distribution. (authors)
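
    The zero-variance idea can be illustrated with a toy integral: if samples are drawn from a biasing density proportional to the integrand, every weighted sample equals the answer and the estimator's spread vanishes. This sketch is only an analogy for the transport scheme described above, not the criticality algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Target integral: I = ∫_0^1 x^3 dx = 1/4.
plain = rng.random(n) ** 3                 # analog (uniform) sampling

# "Zero-variance" biasing: sample from g(x) = 4x^3 via the inverse CDF
# x = U^(1/4). Each weighted sample is x^3 / g(x) = 1/4 exactly, so the
# sample variance collapses to zero.
x = rng.random(n) ** 0.25
weighted = x ** 3 / (4.0 * x ** 3)

print(plain.mean(), plain.var())        # ≈ 0.25 with nonzero variance
print(weighted.mean(), weighted.var())  # 0.25 with (essentially) zero variance
```

    In practice the exact biasing density is unknown (it requires the answer), so schemes like the one above use an approximate adjoint from a deterministic calculation; the closer the biasing is to the ideal, the smaller the residual variance.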

  20. A zero-variance-based scheme for variance reduction in Monte Carlo criticality

    International Nuclear Information System (INIS)

    Christoforou, S.; Hoogenboom, J. E.

    2006-01-01

    A zero-variance scheme is derived and proven theoretically for criticality cases, and a simplified transport model is used for numerical demonstration. It is shown in practice that by appropriate biasing of the transition and collision kernels, a significant reduction in variance can be achieved. This is done using the adjoint forms of the emission and collision densities, obtained from a deterministic calculation, according to the zero-variance scheme. By using an appropriate algorithm, the figure of merit of the simulation increases by up to a factor of 50, with the possibility of an even larger improvement. In addition, it is shown that the biasing speeds up the convergence of the initial source distribution. (authors)

  1. Meaningful Effect Sizes, Intraclass Correlations, and Proportions of Variance Explained by Covariates for Planning Two- and Three-Level Cluster Randomized Trials of Social and Behavioral Outcomes.

    Science.gov (United States)

    Dong, Nianbo; Reinke, Wendy M; Herman, Keith C; Bradshaw, Catherine P; Murray, Desiree W

    2016-09-30

    There is a need for greater guidance regarding design parameters and empirical benchmarks for social and behavioral outcomes to inform assumptions in the design and interpretation of cluster randomized trials (CRTs). We calculated empirical reference values for critical research design parameters associated with statistical power for children's social and behavioral outcomes, including effect sizes, intraclass correlations (ICCs), and proportions of variance explained by a covariate at different levels (R²). The samples comprised children from kindergarten to Grade 5 in four large CRTs evaluating the effectiveness of two classroom-level and two school-level preventive interventions. Outcomes were teacher ratings of students' social and behavioral functioning on the Teacher Observation of Classroom Adaptation-Checklist and the Social Competence Scale-Teacher. Two types of effect size benchmarks were calculated: (1) normative expectations for change and (2) policy-relevant demographic performance gaps. The ICCs and R² were calculated using two-level hierarchical linear modeling (HLM), where students are nested within schools, and three-level HLM, where students are nested within classrooms and classrooms are nested within schools. Comprehensive tables of benchmarks and ICC values are provided to help prevention researchers interpret the effect sizes of interventions and conduct power analyses when designing CRTs of children's social and behavioral outcomes. The discussion also demonstrates how to use the parameter reference values provided in this article to calculate the sample size for two- and three-level CRT designs. © The Author(s) 2016.
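
    As a hedged illustration of how such parameters enter a power analysis, the sketch below uses a standard normal-approximation formula for a two-level CRT in which a cluster-level covariate explains part of the between-cluster variance. The numeric inputs are assumed for illustration, not taken from the article's tables:

```python
from math import ceil
from statistics import NormalDist

def clusters_per_arm(delta, icc, n, r2_l2=0.0, alpha=0.05, power=0.80):
    """Approximate clusters per arm for a two-level CRT (normal approximation).

    delta  : standardized effect size (mean difference / total SD)
    icc    : intraclass correlation (between-cluster share of variance)
    n      : students per cluster
    r2_l2  : proportion of between-cluster variance explained by a
             cluster-level covariate
    Formula: J = 2 (z_{1-alpha/2} + z_{power})^2
                 * [icc*(1 - r2_l2) + (1 - icc)/n] / delta^2
    """
    z = NormalDist()
    za, zb = z.inv_cdf(1 - alpha / 2), z.inv_cdf(power)
    var_term = icc * (1 - r2_l2) + (1 - icc) / n
    return ceil(2 * (za + zb) ** 2 * var_term / delta ** 2)

# Assumed illustrative inputs: effect size 0.25, ICC 0.15, 20 students per
# school, and a covariate explaining half the between-school variance.
print(clusters_per_arm(0.25, 0.15, 20, r2_l2=0.5))   # → 30
```

    Note how the covariate pays off: with r2_l2=0 the same design needs 49 clusters per arm, which is exactly why the article's tabulated R² values matter for planning.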

  2. Heritability of blood pressure traits and the genetic contribution to blood pressure variance explained by four blood-pressure-related genes.

    NARCIS (Netherlands)

    Rijn, M.J. van; Schut, A.F.; Aulchenko, Y.S.; Deinum, J.; Sayed-Tabatabaei, F.A.; Yazdanpanah, M.; Isaacs, A.; Axenovich, T.I.; Zorkoltseva, I.V.; Zillikens, M.C.; Pols, H.A.; Witteman, J.C.; Oostra, B.A.; Duijn, C.M. van

    2007-01-01

    OBJECTIVE: To study the heritability of four blood pressure traits and the proportion of variance explained by four blood-pressure-related genes. METHODS: All participants are members of an extended pedigree from a Dutch genetically isolated population. Heritability and genetic correlations of

  3. Trends in workforce diversity in vascular surgery programs in the United States.

    Science.gov (United States)

    Kane, Katherine; Rosero, Eric B; Clagett, G Patrick; Adams-Huet, Beverley; Timaran, Carlos H

    2009-06-01

    U.S. black and Hispanic populations are growing at a steady pace. In contrast, the medical profession lacks the same minority growth and representation. Women are also under-represented in many surgical disciplines. The purpose of this study was to assess trends in the proportion of women, blacks, and Hispanics admitted to vascular surgery (VS) and related specialties, and to compare them with each other and with a surgical specialty, orthopedic surgery (OS), that has a formal diversity initiative. Data on the fellowship pools of VS, interventional radiology (IR), and interventional cardiology (IC), as well as the resident pools of general surgery (GS) and orthopedic surgery (OS), were obtained from U.S. graduate medical education reports for 1999 through 2005. Cochran-Armitage trend tests were used to assess trends in the proportion of females, blacks, and Hispanics in relation to the total physician workforce for each subspecialty. No significant trends in the proportion of females, blacks, or Hispanics accepted into VS and IC fellowship programs occurred during the study period. In contrast, IR, GS, and OS programs revealed significant trends toward increasing proportions of at least one of the under-represented study groups. In particular, OS, which has implemented a diversity awareness program, showed a positive trend in female and Hispanic trainees (P < .05), suggesting that formal initiatives may help improve workforce diversity.

  4. Joint Adaptive Mean-Variance Regularization and Variance Stabilization of High Dimensional Data.

    Science.gov (United States)

    Dazard, Jean-Eudes; Rao, J Sunil

    2012-07-01

    The paper addresses a common problem in the analysis of high-dimensional high-throughput "omics" data: parameter estimation across multiple variables in a set of data where the number of variables is much larger than the sample size. Among the problems posed by this type of data are that variable-specific estimators of variances are not reliable and variable-wise test statistics have low power, both due to a lack of degrees of freedom. In addition, it has been observed in this type of data that the variance increases as a function of the mean. We introduce a non-parametric adaptive regularization procedure that is innovative in that: (i) it employs a novel "similarity statistic"-based clustering technique to generate local-pooled or regularized shrinkage estimators of population parameters; (ii) the regularization is done jointly on population moments, benefiting from C. Stein's result on inadmissibility, which implies that the usual sample variance estimator is improved by a shrinkage estimator using information contained in the sample mean. From these joint regularized shrinkage estimators, we derive regularized t-like statistics and show in simulation studies that they offer more statistical power in hypothesis testing than their standard sample counterparts, than regular common-value shrinkage estimators, or than when the information contained in the sample mean is simply ignored. Finally, we show that these estimators feature interesting properties of variance stabilization and normalization that can be used for preprocessing high-dimensional multivariate data. The method is available as an R package, called 'MVR' ('Mean-Variance Regularization'), downloadable from the CRAN website.
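
    The benefit of pooling information across variables can be seen with a generic shrinkage sketch (much simpler than the MVR procedure itself): with tiny per-variable sample sizes, shrinking each sample variance toward the pooled variance reduces mean squared error whenever the variables share a common scale. The shrinkage intensity below is a fixed illustrative choice, not an adaptive one:

```python
import numpy as np

rng = np.random.default_rng(7)
p, n = 2000, 4                  # many variables, tiny sample size
true_var = 1.0
data = rng.normal(0.0, np.sqrt(true_var), size=(p, n))

s2 = data.var(axis=1, ddof=1)   # noisy per-variable sample variances
pooled = s2.mean()              # borrows strength across all p variables
lam = 0.5                       # illustrative (fixed) shrinkage intensity
s2_shrunk = lam * pooled + (1 - lam) * s2

mse_raw = np.mean((s2 - true_var) ** 2)
mse_shrunk = np.mean((s2_shrunk - true_var) ** 2)
print(mse_raw, mse_shrunk)      # shrinkage roughly quarters the MSE here
```

    Adaptive methods pick the shrinkage intensity (and the pooling neighborhoods) from the data rather than fixing them, which is where approaches like MVR do their real work.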

  5. Argentine Population Genetic Structure: Large Variance in Amerindian Contribution

    Science.gov (United States)

    Seldin, Michael F.; Tian, Chao; Shigeta, Russell; Scherbarth, Hugo R.; Silva, Gabriel; Belmont, John W.; Kittles, Rick; Gamron, Susana; Allevi, Alberto; Palatnik, Simon A.; Alvarellos, Alejandro; Paira, Sergio; Caprarulo, Cesar; Guillerón, Carolina; Catoggio, Luis J.; Prigione, Cristina; Berbotto, Guillermo A.; García, Mercedes A.; Perandones, Carlos E.; Pons-Estel, Bernardo A.; Alarcon-Riquelme, Marta E.

    2011-01-01

    Argentine population genetic structure was examined using a set of 78 ancestry informative markers (AIMs) to assess the contributions of European, Amerindian, and African ancestry in 94 individual members of this population. Using the Bayesian clustering algorithm STRUCTURE, the mean European contribution was 78%, the Amerindian contribution was 19.4%, and the African contribution was 2.5%. Similar results were found using the weighted least mean squares method: European, 80.2%; Amerindian, 18.1%; and African, 1.7%. Consistent with previous studies, the current results showed very few individuals (four of 94) with greater than 10% African admixture. Notably, when individual admixture was examined, the Amerindian and European admixture showed a very large variance, and individual Amerindian contribution ranged from 1.5 to 84.5% in the 94 individual Argentine subjects. These results indicate that admixture must be considered when clinical epidemiology or case-control genetic analyses are studied in this population. Moreover, the current study provides a set of informative SNPs that can be used to ascertain or control for this potentially hidden stratification. In addition, the large variance in admixture proportions in individual Argentine subjects shown by this study suggests that this population is appropriate for future admixture mapping studies. PMID:17177183

  6. Portfolio optimization using median-variance approach

    Science.gov (United States)

    Wan Mohd, Wan Rosanisah; Mohamad, Daud; Mohamed, Zulkifli

    2013-04-01

    Optimization models have been applied in many decision-making problems, particularly in portfolio selection. Since the introduction of Markowitz's theory of portfolio selection, various approaches based on mathematical programming have been introduced, such as mean-variance, mean-absolute deviation, mean-variance-skewness and conditional value-at-risk (CVaR), mainly to maximize return and minimize risk. However, most of the approaches assume that the distribution of data is normal, and this is not generally true. As an alternative, in this paper, we employ the median-variance approach to improve the portfolio optimization. This approach caters for both normal and non-normal distributions of data. With this more faithful representation, we analyze and compare the rate of return and risk between the mean-variance and the median-variance based portfolios, which consist of 30 stocks from Bursa Malaysia. The results of this study show that the median-variance approach is capable of producing a lower risk for each return earned as compared to the mean-variance approach.

  7. Efficient Cardinality/Mean-Variance Portfolios

    OpenAIRE

    Brito, R. Pedro; Vicente, Luís Nunes

    2014-01-01

    We propose a novel approach to handle cardinality in portfolio selection, by means of a biobjective cardinality/mean-variance problem, allowing the investor to analyze the efficient tradeoff between return-risk and number of active positions. Recent progress in multiobjective optimization without derivatives allows us to robustly compute (in-sample) the whole cardinality/mean-variance efficient frontier, for a variety of data sets and mean-variance models. Our results s...

  8. Fringe biasing: A variance reduction technique for optically thick meshes

    Energy Technology Data Exchange (ETDEWEB)

    Smedley-Stevenson, R. P. [AWE PLC, Aldermaston Reading, Berkshire, RG7 4PR (United Kingdom)

    2013-07-01

    Fringe biasing is a stratified sampling scheme applicable to Monte Carlo thermal radiation transport codes. The thermal emission source in optically thick cells is partitioned into separate contributions from the cell interiors (where the likelihood of the particles escaping the cells is virtually zero) and the 'fringe' regions close to the cell boundaries. Thermal emission in the cell interiors can now be modelled with fewer particles, the remaining particles being concentrated in the fringes so that they are more likely to contribute to the energy exchange between cells. Unlike other techniques for improving the efficiency in optically thick regions (such as random walk and discrete diffusion treatments), fringe biasing has the benefit of simplicity, as the associated changes are restricted to the sourcing routines with the particle tracking routines being unaffected. This paper presents an analysis of the potential for variance reduction achieved from employing the fringe biasing technique. The aim of this analysis is to guide the implementation of this technique in Monte Carlo thermal radiation codes, specifically in order to aid the choice of the fringe width and the proportion of particles allocated to the fringe (which are interrelated) in multi-dimensional simulations, and to confirm that the significant levels of variance reduction achieved in simulations can be understood by studying the behaviour for simple test cases. The variance reduction properties are studied for a single cell in a slab geometry purely absorbing medium, investigating the accuracy of the scalar flux and current tallies on one of the interfaces with the surrounding medium. (authors)
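
    The underlying mechanism, stratified sampling, can be illustrated on a toy integration problem: placing one sample in each equal-width stratum removes the between-stratum component of the estimator's variance. This is only an analogy for the fringe-biasing scheme, not the thermal radiation code described above:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000
f = lambda x: x ** 2              # E[f(U)] over [0,1] is exactly 1/3

def plain_mc():
    return f(rng.random(n)).mean()

def stratified():
    # One uniform draw inside each of n equal-width strata of [0,1].
    return f((np.arange(n) + rng.random(n)) / n).mean()

strat = stratified()
plain_spread = np.std([plain_mc() for _ in range(200)])
strat_spread = np.std([stratified() for _ in range(200)])
print(plain_spread, strat_spread)  # stratified spread is dramatically smaller
```

    Fringe biasing stratifies the emission source in space (interior vs fringe) and then allocates particles unevenly across strata, concentrating effort where particles can still contribute to inter-cell energy exchange.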

  9. Fringe biasing: A variance reduction technique for optically thick meshes

    International Nuclear Information System (INIS)

    Smedley-Stevenson, R. P.

    2013-01-01

    Fringe biasing is a stratified sampling scheme applicable to Monte Carlo thermal radiation transport codes. The thermal emission source in optically thick cells is partitioned into separate contributions from the cell interiors (where the likelihood of the particles escaping the cells is virtually zero) and the 'fringe' regions close to the cell boundaries. Thermal emission in the cell interiors can now be modelled with fewer particles, the remaining particles being concentrated in the fringes so that they are more likely to contribute to the energy exchange between cells. Unlike other techniques for improving the efficiency in optically thick regions (such as random walk and discrete diffusion treatments), fringe biasing has the benefit of simplicity, as the associated changes are restricted to the sourcing routines with the particle tracking routines being unaffected. This paper presents an analysis of the potential for variance reduction achieved from employing the fringe biasing technique. The aim of this analysis is to guide the implementation of this technique in Monte Carlo thermal radiation codes, specifically in order to aid the choice of the fringe width and the proportion of particles allocated to the fringe (which are interrelated) in multi-dimensional simulations, and to confirm that the significant levels of variance reduction achieved in simulations can be understood by studying the behaviour for simple test cases. The variance reduction properties are studied for a single cell in a slab geometry purely absorbing medium, investigating the accuracy of the scalar flux and current tallies on one of the interfaces with the surrounding medium. (authors)

  10. Approximation errors during variance propagation

    International Nuclear Information System (INIS)

    Dinsmore, Stephen

    1986-01-01

    Risk and reliability analyses are often performed by constructing and quantifying large fault trees. The inputs to these models are component failure events whose probabilities of occurrence are best represented as random variables. This paper examines the errors inherent in two approximation techniques used to calculate the top event's variance from the inputs' variances. Two sample fault trees are evaluated and several three-dimensional plots illustrating the magnitude of the error over a wide range of input means and variances are given
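
    A common first-order ("delta method") propagation sets Var[f(X)] ≈ f'(μ)² Var[X]; the sketch below compares this approximation against a Monte Carlo estimate for a smooth function, in the spirit of the error study above (the function and moments are illustrative, not the paper's fault-tree models):

```python
import numpy as np

rng = np.random.default_rng(0)

mu, sigma = 2.0, 0.1
f, fprime = np.exp, np.exp       # f(x) = e^x, so f'(mu) = e^mu

# First-order propagation: Var[f(X)] ≈ f'(mu)^2 * Var[X].
approx_var = fprime(mu) ** 2 * sigma ** 2

# Monte Carlo reference; the first-order error grows as sigma grows,
# since it ignores the curvature of f.
x = rng.normal(mu, sigma, 1_000_000)
mc_var = f(x).var()
print(approx_var, mc_var)        # close for a small input variance
```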

  11. The phenotypic variance gradient - a novel concept.

    Science.gov (United States)

    Pertoldi, Cino; Bundgaard, Jørgen; Loeschcke, Volker; Barker, James Stuart Flinton

    2014-11-01

    Evolutionary ecologists commonly use reaction norms, which show the range of phenotypes produced by a set of genotypes exposed to different environments, to quantify the degree of phenotypic variance and the magnitude of plasticity of morphometric and life-history traits. Significant differences among the values of the slopes of the reaction norms are interpreted as significant differences in phenotypic plasticity, whereas significant differences among phenotypic variances (variance or coefficient of variation) are interpreted as differences in the degree of developmental instability or canalization. We highlight some potential problems with this approach to quantifying phenotypic variance and suggest a novel and more informative way to plot reaction norms: namely "a plot of log (variance) on the y-axis versus log (mean) on the x-axis, with a reference line added". This approach gives an immediate impression of how the degree of phenotypic variance varies across an environmental gradient, taking into account the consequences of the scaling effect of the variance with the mean. The evolutionary implications of the variation in the degree of phenotypic variance, which we call a "phenotypic variance gradient", are discussed together with its potential interactions with variation in the degree of phenotypic plasticity and canalization.
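
    The authors' suggested plot reduces to a regression: the slope of log(variance) on log(mean), where a slope of 2 is the reference line for pure scaling (constant coefficient of variation), so deviations from 2 signal real changes in variability. A minimal sketch on synthetic data (values assumed for illustration):

```python
import numpy as np

# Synthetic reaction-norm summaries: trait means along an environmental
# gradient, with variance scaling as variance = c * mean^2 (constant CV).
means = np.array([2.0, 4.0, 8.0, 16.0, 32.0])
variances = 0.05 * means ** 2

# Slope of log(variance) on log(mean); exactly 2 here because the
# variance tracks the mean squared, i.e. no change in relative variability.
slope, intercept = np.polyfit(np.log(means), np.log(variances), 1)
print(round(slope, 3))   # → 2.0
```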

  12. Evolution of Genetic Variance during Adaptive Radiation.

    Science.gov (United States)

    Walter, Greg M; Aguirre, J David; Blows, Mark W; Ortiz-Barrientos, Daniel

    2018-04-01

    Genetic correlations between traits can concentrate genetic variance into fewer phenotypic dimensions that can bias evolutionary trajectories along the axis of greatest genetic variance and away from optimal phenotypes, constraining the rate of evolution. If genetic correlations limit adaptation, rapid adaptive divergence between multiple contrasting environments may be difficult. However, if natural selection increases the frequency of rare alleles after colonization of new environments, an increase in genetic variance in the direction of selection can accelerate adaptive divergence. Here, we explored adaptive divergence of an Australian native wildflower by examining the alignment between divergence in phenotype mean and divergence in genetic variance among four contrasting ecotypes. We found divergence in mean multivariate phenotype along two major axes represented by different combinations of plant architecture and leaf traits. Ecotypes also showed divergence in the level of genetic variance in individual traits and the multivariate distribution of genetic variance among traits. Divergence in multivariate phenotypic mean aligned with divergence in genetic variance, with much of the divergence in phenotype among ecotypes associated with changes in trait combinations containing substantial levels of genetic variance. Overall, our results suggest that natural selection can alter the distribution of genetic variance underlying phenotypic traits, increasing the amount of genetic variance in the direction of natural selection and potentially facilitating rapid adaptive divergence during an adaptive radiation.

  13. Confidence Interval Approximation For Treatment Variance In ...

    African Journals Online (AJOL)

    In a random effects model with a single factor, variation is partitioned into two as residual error variance and treatment variance. While a confidence interval can be imposed on the residual error variance, it is not possible to construct an exact confidence interval for the treatment variance. This is because the treatment ...

  14. Proportionality lost - proportionality regained?

    DEFF Research Database (Denmark)

    Werlauff, Erik

    2010-01-01

    In recent years, the European Court of Justice (the ECJ) seems to have accepted restrictions on the freedom of establishment and other basic freedoms, despite the fact that a more thorough proportionality test would have revealed that the restriction in question did not pass the 'rule of reason' ...

  15. TechTrends 2010-2015: A Content Analysis

    Science.gov (United States)

    Stauffer, Eric

    2017-01-01

    This study is a content analysis of articles published within the journal "TechTrends" from 2000 to 2015. The study reveals that the publication "TechTrends" has increased the overall number of peer reviewed original papers over the last 6 years. The author describes the proportion of these original papers per volume and…

  16. Portfolio optimization with mean-variance model

    Science.gov (United States)

    Hoe, Lam Weng; Siew, Lam Weng

    2016-06-01

    Investors wish to achieve a target rate of return at the minimum level of risk in their investment. Portfolio optimization is an investment strategy that can be used to minimize portfolio risk while achieving the target rate of return. The mean-variance model, which minimizes the portfolio risk as measured by the portfolio variance, has been proposed for this purpose. The objective of this study is to construct the optimal portfolio using the mean-variance model. The data of this study consist of weekly returns of 20 component stocks of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI). The results of this study show that the portfolio compositions of the stocks differ. Moreover, investors can obtain the return at the minimum level of risk with the constructed optimal mean-variance portfolio.
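
    For the risk-only corner of the model, the global minimum-variance portfolio has the closed form w = Σ⁻¹1 / (1ᵀΣ⁻¹1). The sketch below uses synthetic returns as stand-ins for the FBMKLCI stocks (the data here are assumed, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic weekly returns standing in for 5 component stocks.
returns = rng.normal(0.002, 0.02, size=(260, 5))
cov = np.cov(returns, rowvar=False)

# Global minimum-variance portfolio: w = Sigma^{-1} 1 / (1' Sigma^{-1} 1),
# the fully invested portfolio with the smallest variance.
ones = np.ones(cov.shape[0])
w = np.linalg.solve(cov, ones)
w /= w.sum()

port_var = float(w @ cov @ w)
eq_var = float((ones / 5) @ cov @ (ones / 5))
print(port_var < eq_var)   # the minimum-variance portfolio beats equal weights
```

    Adding a target-return constraint (the full mean-variance problem) turns this into a two-constraint quadratic program, but the unconstrained minimum above is the anchor of the efficient frontier.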

  17. Space-time trends in U.S. meteorological droughts

    Directory of Open Access Journals (Sweden)

    Poulomi Ganguli

    2016-12-01

    New hydrological insights for the region: The paper finds that the spatial coverage of extreme meteorological drought in recent years (post-2010) exceeds that of the iconic droughts of the 1930s (the Dust Bowl era) and the 1950s. These results contrast with the spatial variance, which exhibits no statistically significant trend. In addition, we find that drought persistence has remained relatively stationary over the last half century. The findings can inform drought monitoring and planning, and improve future drought resilience.

  18. Quantitative genetic variance and multivariate clines in the Ivyleaf morning glory, Ipomoea hederacea.

    Science.gov (United States)

    Stock, Amanda J; Campitelli, Brandon E; Stinchcombe, John R

    2014-08-19

    Clinal variation is commonly interpreted as evidence of adaptive differentiation, although clines can also be produced by stochastic forces. Understanding whether clines are adaptive therefore requires comparing clinal variation to background patterns of genetic differentiation at presumably neutral markers. Although this approach has frequently been applied to single traits at a time, we have comparatively fewer examples of how multiple correlated traits vary clinally. Here, we characterize multivariate clines in the Ivyleaf morning glory, examining how suites of traits vary with latitude, with the goal of testing for divergence in trait means that would indicate past evolutionary responses. We couple this with analysis of genetic variance in clinally varying traits in 20 populations to test whether past evolutionary responses have depleted genetic variance, or whether genetic variance declines approaching the range margin. We find evidence of clinal differentiation in five quantitative traits, with little evidence of isolation by distance at neutral loci that would suggest non-adaptive or stochastic mechanisms. Within and across populations, the traits that contribute most to population differentiation and clinal trends in the multivariate phenotype are genetically variable as well, suggesting that a lack of genetic variance will not cause absolute evolutionary constraints. Our data are broadly consistent with theoretical predictions of polygenic clines in response to shallow environmental gradients. Ecologically, our results are consistent with past findings of natural selection on flowering phenology, presumably due to season-length variation across the range. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  19. Least-squares variance component estimation

    NARCIS (Netherlands)

    Teunissen, P.J.G.; Amiri-Simkooei, A.R.

    2007-01-01

    Least-squares variance component estimation (LS-VCE) is a simple, flexible and attractive method for the estimation of unknown variance and covariance components. LS-VCE is simple because it is based on the well-known principle of LS; it is flexible because it works with a user-defined weight

  20. Photon and neutron dose discrimination using low pressure proportional counters with graphite and A150 walls

    International Nuclear Information System (INIS)

    Kylloenen, J.; Lindborg, L.

    2005-01-01

    Full text: The determination of both the low- and high-LET components of ambient dose equivalent in mixed fields is possible with microdosimetric methods. With the multiple-event microdosimetric variance-covariance method, the sum of those components is directly obtained, also in pulsed beams. However, if the value of each dose component is needed, a more extended analysis is required. The use of a graphite-walled proportional counter, in combination with a tissue-equivalent proportional counter (TEPC) and the variance-covariance method, was investigated here. MCNP simulations were carried out for relevant energies to investigate the photon and neutron responses of the two detectors. The combined graphite and TEPC system, the Sievert instrument, was used for measurements at IRSN, Cadarache, in the workplace calibration fields of CANEL+, SIGMA, a Cf-252 and a moderated Cf(D2O,Cd) radiation field. The response of the instrument in various monoenergetic neutron fields is also known from measurements at PTB. The instrument took part in the measurement campaigns in workplace fields in the nuclear industry organized within the EVIDOS contract. The results are analyzed, and the method of using a graphite detector is compared with alternative methods of analysis. (author)

  1. Variance in total levels of phospholipase C zeta (PLC-ζ) in human sperm may limit the applicability of quantitative immunofluorescent analysis as a diagnostic indicator of oocyte activation capability.

    Science.gov (United States)

    Kashir, Junaid; Jones, Celine; Mounce, Ginny; Ramadan, Walaa M; Lemmon, Bernadette; Heindryckx, Bjorn; de Sutter, Petra; Parrington, John; Turner, Karen; Child, Tim; McVeigh, Enda; Coward, Kevin

    2013-01-01

    To examine whether similar levels of phospholipase C zeta (PLC-ζ) protein are present in sperm from men whose ejaculates resulted in normal oocyte activation, and to examine whether a predominant pattern of PLC-ζ localization is linked to normal oocyte activation ability. Laboratory study. University laboratory. Control subjects (men with proven oocyte activation capacity; n = 16) and men whose sperm resulted in recurrent intracytoplasmic sperm injection failure (oocyte activation deficient [OAD]; n = 5). Quantitative immunofluorescent analysis of PLC-ζ protein in human sperm. Total levels of PLC-ζ fluorescence, proportions of sperm exhibiting PLC-ζ immunoreactivity, and proportions of PLC-ζ localization patterns in sperm from control and OAD men. Sperm from control subjects presented a significantly higher proportion of sperm exhibiting PLC-ζ immunofluorescence compared with infertile men diagnosed with OAD (82.6% and 27.4%, respectively). Total levels of PLC-ζ in sperm from individual control and OAD patients exhibited significant variance, with sperm from 10 out of 16 (62.5%) exhibiting levels similar to OAD samples. Predominant PLC-ζ localization patterns varied between control and OAD samples with no predictable or consistent pattern. The results indicate that sperm from control men exhibited significant variance in total levels of PLC-ζ protein, as well as significant variance in the predominant localization pattern. Such variance may hinder the diagnostic application of quantitative PLC-ζ immunofluorescent analysis. Copyright © 2013 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.

  2. Genetic variants influencing phenotypic variance heterogeneity.

    Science.gov (United States)

    Ek, Weronica E; Rask-Andersen, Mathias; Karlsson, Torgny; Enroth, Stefan; Gyllensten, Ulf; Johansson, Åsa

    2018-03-01

    Most genetic studies identify genetic variants associated with disease risk or with the mean value of a quantitative trait. More rarely, genetic variants associated with variance heterogeneity are considered. In this study, we have identified such variance single-nucleotide polymorphisms (vSNPs) and examined if these represent biological gene × gene or gene × environment interactions or statistical artifacts caused by multiple linked genetic variants influencing the same phenotype. We have performed a genome-wide study, to identify vSNPs associated with variance heterogeneity in DNA methylation levels. Genotype data from over 10 million single-nucleotide polymorphisms (SNPs), and DNA methylation levels at over 430 000 CpG sites, were analyzed in 729 individuals. We identified vSNPs for 7195 CpG sites (P mean DNA methylation levels. We further showed that variance heterogeneity between genotypes mainly represents additional, often rare, SNPs in linkage disequilibrium (LD) with the respective vSNP and for some vSNPs, multiple low frequency variants co-segregating with one of the vSNP alleles. Therefore, our results suggest that variance heterogeneity of DNA methylation mainly represents phenotypic effects by multiple SNPs, rather than biological interactions. Such effects may also be important for interpreting variance heterogeneity of more complex clinical phenotypes.
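The study's scan for variance heterogeneity is genome-wide, but the core idea — testing whether trait variance differs between genotype groups — can be sketched with a simple permutation test on the difference in group variances. This is a stand-in illustration with synthetic data; the paper's actual test statistic is not specified in this abstract:

```python
import random
from statistics import pvariance

def variance_permutation_test(a, b, n_perm=2000, seed=0):
    """Permutation p-value for the difference in variance between two groups."""
    rng = random.Random(seed)
    observed = abs(pvariance(a) - pvariance(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if abs(pvariance(pooled[:len(a)]) - pvariance(pooled[len(a):])) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)

rng = random.Random(42)
g_low = [rng.gauss(0, 1.0) for _ in range(60)]   # genotype group, low spread
g_high = [rng.gauss(0, 3.0) for _ in range(60)]  # genotype group, high spread
p = variance_permutation_test(g_low, g_high)
print(p)
```

In practice variance-heterogeneity scans use robust tests such as Levene's or Brown-Forsythe's, which are less sensitive to non-normality than a raw variance comparison.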

  3. Proportional reasoning

    DEFF Research Database (Denmark)

    Dole, Shelley; Hilton, Annette; Hilton, Geoff

    2015-01-01

    Proportional reasoning is widely acknowledged as a key to success in school mathematics, yet students’ continual difficulties with proportion-related tasks are well documented. This paper draws on a large research study that aimed to support 4th to 9th grade teachers to design and implement tasks...

  4. Time trends in heavy drinking among middle-aged and older adults in Denmark

    DEFF Research Database (Denmark)

    Bjørk, Christina; Thygesen, Lau Caspar; Vinther-Larsen, Mathilde

    2008-01-01

    BACKGROUND: Studies have indicated an increasing proportion of heavy drinking among middle-aged and older Danes. Trends in consumption are often extremely sensitive to influence from various components of the time trends but only few have explored the age, period and cohort-related influences...... that the proportion of heavy drinking women increases in younger birth cohorts. This trend is not observed for men as their drinking pattern mainly increase slightly by calendar year. CONCLUSIONS: Our Danish observations for older aged individuals correspond to the social and cultural changes in the 1960s and 1970s...

  5. Speed Variance and Its Influence on Accidents.

    Science.gov (United States)

    Garber, Nicholas J.; Gadirau, Ravi

    A study was conducted to investigate the traffic engineering factors that influence speed variance and to determine to what extent speed variance affects accident rates. Detailed analyses were carried out to relate speed variance with posted speed limit, design speeds, and other traffic variables. The major factor identified was the difference…

  6. Trends in scale and shape of survival curves.

    Science.gov (United States)

    Weon, Byung Mook; Je, Jung Ho

    2012-01-01

    The ageing of the population is an issue in wealthy countries worldwide because of increasing costs for health care and welfare. Survival curves taken from demographic life tables may help shed light on the hypotheses that humans are living longer and that human populations are growing older. We describe a methodology that enables us to obtain separate measurements of scale and shape variances in survival curves. Specifically, 'living longer' is associated with the scale variance of survival curves, whereas 'growing older' is associated with the shape variance. We show how the scale and shape of survival curves have changed over time during recent decades, based on period and cohort female life tables for selected wealthy countries. Our methodology will be useful for performing better tracking of ageing statistics and it is possible that this methodology can help identify the causes of current trends in human ageing.
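The scale/shape separation described above can be illustrated with a stretched-exponential (Weibull-type) survival function, the kind of form used in this line of work; the parameter values below are purely illustrative assumptions:

```python
from math import exp

def survival(t, alpha, beta):
    """Stretched-exponential survival curve: S(t) = exp(-(t/alpha)**beta)."""
    return exp(-((t / alpha) ** beta))

# S(alpha) = exp(-1) for any beta, so alpha sets the scale ("living longer"),
# while beta sets the shape: a larger beta gives a more rectangular curve
# ("growing older"). Parameter values are invented for illustration.
for alpha, beta in [(80, 6), (90, 6), (80, 12)]:
    print(alpha, beta, round(survival(70, alpha, beta), 3))
```

Raising alpha shifts the whole curve to later ages, while raising beta flattens the curve before the characteristic life and steepens it after — the two separate trends the paper tracks over time.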

  7. Volatility and variance swaps : A comparison of quantitative models to calculate the fair volatility and variance strike

    OpenAIRE

    Röring, Johan

    2017-01-01

    Volatility is a common risk measure in the field of finance that describes the magnitude of an asset’s up and down movement. From only being a risk measure, volatility has become an asset class of its own and volatility derivatives enable traders to get an isolated exposure to an asset’s volatility. Two kinds of volatility derivatives are volatility swaps and variance swaps. The problem with volatility swaps and variance swaps is that they require estimations of the future variance and volati...

  8. Local variances in biomonitoring

    International Nuclear Information System (INIS)

    Wolterbeek, H.T.

    1999-01-01

    The present study deals with the (larger-scaled) biomonitoring survey and specifically focuses on the sampling site. In most surveys, the sampling site is simply selected or defined as a spot of (geographical) dimensions which is small relative to the dimensions of the total survey area. Implicitly it is assumed that the sampling site is essentially homogeneous with respect to the investigated variation in survey parameters. As such, the sampling site is mostly regarded as 'the basic unit' of the survey. As a logical consequence, the local (sampling site) variance should also be seen as a basic and important characteristic of the survey. During the study, work is carried out to gain more knowledge of the local variance. Multiple sampling is carried out at a specific site (tree bark, mosses, soils), multi-elemental analyses are carried out by NAA, and local variances are investigated by conventional statistics, factor analytical techniques, and bootstrapping. Consequences of the outcomes are discussed in the context of sampling, sample handling and survey quality. (author)
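The abstract names bootstrapping as one way to characterize the local (sampling-site) variance. A minimal sketch with synthetic replicate measurements — the element, units, and values are invented, not the study's data:

```python
import random
from statistics import fmean, pvariance

def bootstrap_mean_variance(samples, n_boot=5000, seed=0):
    """Bootstrap estimate of the variance of a site's sample mean:
    resample the site's replicate measurements with replacement."""
    rng = random.Random(seed)
    means = [fmean(rng.choices(samples, k=len(samples))) for _ in range(n_boot)]
    return fmean(means), pvariance(means)

rng = random.Random(7)
site = [rng.gauss(50.0, 8.0) for _ in range(30)]  # hypothetical replicate
                                                  # concentrations at one site
boot_mean, boot_var = bootstrap_mean_variance(site)
print(boot_mean, boot_var, pvariance(site) / len(site))
```

For a single site the bootstrap variance of the mean should be close to s²/n; the value of the approach in a survey context is that it needs no distributional assumptions about the local variability.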

  9. Trends in hospital-physician integration in medical oncology.

    Science.gov (United States)

    Clough, Jeffrey D; Dinan, Michaela A; Schulman, Kevin A

    2017-10-01

    Hospitals have rapidly acquired medical oncology practices in recent years. Experts disagree as to whether these trends are related to oncology-specific market factors or reflect a general trend of hospital-physician integration. The objective of this study was to compare the prevalence, geographic variation, and trends in physicians billing from hospital outpatient departments in medical oncology with other specialties. Retrospective analysis of Medicare claims data for 2012 and 2013. We calculated the proportion of physicians and practitioners in the 15 highest-volume specialties who billed the majority of evaluation and management visits from hospital outpatient departments in each year, nationally and by state. We included 338,998 and 352,321 providers in 2012 and 2013, respectively, of whom 9715 and 9969 were medical oncologists. Among the 15 specialties examined, medical oncology had the highest proportion of hospital outpatient department billing in 2012 and 2013 (35.0% and 38.3%, respectively). Medical oncology also experienced the greatest absolute change (3.3%) between the years, followed by thoracic surgery (2.4%) and cardiology (2.0%). There was marked state-level variation, with the proportion of medical oncologists based in hospital outpatient departments ranging from 0% in Nevada to 100% in Idaho. Hospital-physician integration has been more pronounced in medical oncology than in other high-volume specialties and is increasing at a faster rate. Policy makers should take these findings into consideration, particularly with respect to recent proposals that may continue to fuel these trends.

  10. A Matter of Balance: Motor Control is Related to Children’s Spatial and Proportional Reasoning Skills

    Science.gov (United States)

    Frick, Andrea; Möhring, Wenke

    2016-01-01

    Recent research has shown close links between spatial and mathematical thinking and between spatial abilities and motor skills. However, longitudinal research examining the relations between motor, spatial, and mathematical skills is rare, and the nature of these relations remains unclear. The present study thus investigated the relation between children’s motor control and their spatial and proportional reasoning. We measured 6-year-olds’ spatial scaling (i.e., the ability to reason about different-sized spaces), their mental transformation skills, and their ability to balance on one leg as an index for motor control. One year later (N = 126), we tested the same children’s understanding of proportions. We also assessed several control variables (verbal IQ and socio-economic status) as well as inhibitory control, visuo-spatial and verbal working memory. Stepwise hierarchical regressions showed that, after accounting for effects of control variables, children’s balance skills significantly increased the explained variance in their spatial performance and proportional reasoning. Our results suggest specific relations between balance skills and spatial as well as proportional reasoning skills that cannot be explained by general differences in executive functioning or intelligence. PMID:26793157

  11. Determinations of dose mean of specific energy for conventional x-rays by variance-measurements

    International Nuclear Information System (INIS)

    Forsberg, B.; Jensen, M.; Lindborg, L.; Samuelson, G.

    1978-05-01

    The dose mean value (zeta) of specific energy of a single event distribution is related to the variance of a multiple event distribution in a simple way. It is thus possible to determine zeta from measurements at high dose rates through observations of the variations in the ionization current from, for instance, an ionization chamber, if other parameters contribute negligibly to the total variance. With this method it has earlier been possible to obtain results down to about 10 nm in a beam of Co60-γ rays, which is one order of magnitude smaller than the sizes obtainable with the traditional technique. This advantage, together with the suggestion that zeta could be an important parameter in radiobiology, motivates further studies of the technique's applications. So far, only data from measurements in beams of a radioactive nuclide have been reported. This paper contains results from measurements in a highly stabilized X-ray beam. The preliminary analysis shows that the variance technique has given reasonable results for object sizes in the region of 0.08 μm to 20 μm (100 kV, 1.6 Al, HVL 0.14 mm Cu). The results were obtained with a proportional counter except for the larger object sizes, where an ionization chamber was used. The measurements were performed at dose rates between 1 Gy/h and 40 Gy/h. (author)
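The "simple way" the abstract alludes to: if the multiple-event specific energy is a compound-Poisson sum of single events, then Var(Z)/E(Z) equals the dose mean value of the single-event distribution, E[z²]/E[z]. A toy simulation, with an exponential single-event spectrum assumed purely for illustration:

```python
import math
import random
from statistics import fmean, pvariance

rng = random.Random(1)

def poisson(lam):
    """Knuth's product method for a Poisson variate (fine for small lam)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

# Single-event z ~ Exponential(mean m): E[z] = m, E[z^2] = 2*m*m, so the
# dose mean value is zeta = E[z^2]/E[z] = 2*m. For a Poisson number of events
# per observation interval, Var(Z)/E(Z) of the multiple-event total equals zeta.
m, lam, n = 1.0, 5.0, 20000
totals = [sum(rng.expovariate(1.0 / m) for _ in range(poisson(lam)))
          for _ in range(n)]
zeta_est = pvariance(totals) / fmean(totals)
print(zeta_est)  # close to 2*m
```

This is why observing only the fluctuations of an ionization current at high dose rate is enough to recover zeta, provided other noise sources contribute negligibly to the total variance.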

  12. Dynamic Mean-Variance Asset Allocation

    OpenAIRE

    Basak, Suleyman; Chabakauri, Georgy

    2009-01-01

    Mean-variance criteria remain prevalent in multi-period problems, and yet not much is known about their dynamically optimal policies. We provide a fully analytical characterization of the optimal dynamic mean-variance portfolios within a general incomplete-market economy, and recover a simple structure that also inherits several conventional properties of static models. We also identify a probability measure that incorporates intertemporal hedging demands and facilitates much tractability in ...

  13. TRENDS OF ROMANIAN BANKING NETWORK DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    Nicoleta Georgeta PANAIT

    2015-07-01

    Full Text Available Since 2009, two trends have occurred in the banking world: downsizing of personnel, on the one hand, and reduction of the retail network, on the other. The first trend was most notable in countries with unstable or weak economies, and its effects were seen immediately. Reducing operating costs and making the territorial structure and staffing more efficient was a decision that credit institutions in Romania took relatively late. Worldwide, banks began a restructuring dictated this time not so much by the economic crisis as by new market trends: increasing access to the internet for the population, and the use of internet banking in a growing proportion.

  14. The Variance Composition of Firm Growth Rates

    Directory of Open Access Journals (Sweden)

    Luiz Artur Ledur Brito

    2009-04-01

    Full Text Available Firms exhibit a wide variability in growth rates. This can be seen as another manifestation of the fact that firms are different from one another in several respects. This study investigated this variability using the variance components technique previously used to decompose the variance of financial performance. The main source of variation in growth rates, responsible for more than 40% of total variance, corresponds to individual, idiosyncratic firm aspects and not to industry, country, or macroeconomic conditions prevailing in specific years. Firm growth, similar to financial performance, is mostly unique to specific firms and not an industry or country related phenomenon. This finding also justifies using growth as an alternative outcome of superior firm resources and as a complementary dimension of competitive advantage. This also links this research with the resource-based view of strategy. Country was the second source of variation with around 10% of total variance. The analysis was done using the Compustat Global database with 80,320 observations, comprising 13,221 companies in 47 countries, covering the years of 1994 to 2002. It also compared the variance structure of growth to the variance structure of financial performance in the same sample.

  15. Investigations of X-ray response of single wire anode Ar-N2 flow type gas scintillation proportional counters

    International Nuclear Information System (INIS)

    Garg, S.P.; Sharma, R.C.

    1984-01-01

    The X-ray response of single wire anode gas scintillation proportional counters of two different geometries operated with argon+nitrogen gases in continuous flow has been investigated with wire anodes of diameters 25 μm to 1.7 mm. An energy resolution of 19% is obtained for 5.9 keV X-rays entering the counter perpendicular to the anode in pill-box geometry with a 25 μm diameter anode. With cylindrical-geometry counters, the energy resolutions obtained at 5.9 keV are 18%, 24% and 33% for 50 μm, 0.5 mm and 1.7 mm diameter anodes respectively. An analysis of the observed resolution shows that the contribution from photon counting statistics to the relative variance of scintillation pulses, even for X-rays in Ar-N2 single wire anode gas scintillation proportional counters, is small and is not a limiting factor. The energy resolution with thicker anodes, where the contribution from the variance of the charge multiplication factor has also been minimised, is found to deteriorate mainly because of interactions in the scintillation production region. Comments are made on the possibility of improvement in energy resolution by suppression of pulses due to such interactions with the help of the pulse risetime discrimination technique. (orig.)

  16. Towards the ultimate variance-conserving convection scheme

    International Nuclear Information System (INIS)

    Os, J.J.A.M. van; Uittenbogaard, R.E.

    2004-01-01

    In the past various arguments have been used for applying kinetic energy-conserving advection schemes in numerical simulations of incompressible fluid flows. One argument is obeying the programmed dissipation by viscous stresses or by sub-grid stresses in Direct Numerical Simulation and Large Eddy Simulation, see e.g. [Phys. Fluids A 3 (7) (1991) 1766]. Another argument is that, according to e.g. [J. Comput. Phys. 6 (1970) 392; 1 (1966) 119], energy-conserving convection schemes are more stable i.e. by prohibiting a spurious blow-up of volume-integrated energy in a closed volume without external energy sources. In the above-mentioned references it is stated that nonlinear instability is due to spatial truncation rather than to time truncation and therefore these papers are mainly concerned with the spatial integration. In this paper we demonstrate that discretized temporal integration of a spatially variance-conserving convection scheme can induce non-energy conserving solutions. In this paper the conservation of the variance of a scalar property is taken as a simple model for the conservation of kinetic energy. In addition, the derivation and testing of a variance-conserving scheme allows for a clear definition of kinetic energy-conserving advection schemes for solving the Navier-Stokes equations. Consequently, we first derive and test a strictly variance-conserving space-time discretization for the convection term in the convection-diffusion equation. Our starting point is the variance-conserving spatial discretization of the convection operator presented by Piacsek and Williams [J. Comput. Phys. 6 (1970) 392]. In terms of its conservation properties, our variance-conserving scheme is compared to other spatially variance-conserving schemes as well as with the non-variance-conserving schemes applied in our shallow-water solver, see e.g. [Direct and Large-eddy Simulation Workshop IV, ERCOFTAC Series, Kluwer Academic Publishers, 2001, pp. 409-287
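The paper's central observation — a spatially variance-conserving scheme can lose conservation through the time discretization — is visible in a few lines. With central differences on a periodic grid (the classic spatially variance-conserving discretization for constant-coefficient advection), the semi-discrete operator is exactly variance-neutral, yet forward-Euler stepping adds dt²·Σf² of variance at every step. This is a deliberately crude sketch, not the paper's scheme:

```python
from math import sin, pi

n, c, dx, dt = 64, 1.0, 1.0 / 64, 0.005
u = [sin(2 * pi * i / n) for i in range(n)]

def rhs(u):
    """Central-difference (spatially variance-conserving) convection -c*du/dx
    on a periodic grid."""
    return [-c * (u[(i + 1) % n] - u[(i - 1) % n]) / (2 * dx) for i in range(n)]

# Semi-discrete variance neutrality: sum(u_i * f_i) telescopes to zero.
dot0 = sum(ui * fi for ui, fi in zip(u, rhs(u)))

var0 = sum(ui * ui for ui in u)
for _ in range(200):  # forward Euler adds dt^2 * sum(f^2) of variance per step
    f = rhs(u)
    u = [ui + dt * fi for ui, fi in zip(u, f)]
var1 = sum(ui * ui for ui in u)
print(dot0, var0, var1)
```

The discrete variance grows even though the spatial operator conserves it exactly, which is why the paper insists on a strictly variance-conserving space-time discretization rather than a spatial one alone.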

  17. The Distribution of the Sample Minimum-Variance Frontier

    OpenAIRE

    Raymond Kan; Daniel R. Smith

    2008-01-01

    In this paper, we present a finite sample analysis of the sample minimum-variance frontier under the assumption that the returns are independent and multivariate normally distributed. We show that the sample minimum-variance frontier is a highly biased estimator of the population frontier, and we propose an improved estimator of the population frontier. In addition, we provide the exact distribution of the out-of-sample mean and variance of sample minimum-variance portfolios. This allows us t...

  18. Minimum variance and variance of outgoing quality limit MDS-1(c1, c2) plans

    Science.gov (United States)

    Raju, C.; Vidya, R.

    2016-06-01

    In this article, the outgoing quality (OQ) and total inspection (TI) of multiple deferred state sampling plans MDS-1(c1,c2) are studied. It is assumed that the inspection is rejection rectification. Procedures for designing MDS-1(c1,c2) sampling plans with minimum variance of OQ and TI are developed. A procedure for obtaining a plan for a designated upper limit for the variance of the OQ (VOQL) is outlined.

  19. Trends in childhood injury mortality in South African population ...

    African Journals Online (AJOL)

    Trends in major causes of injury mortality and the proportion of total deaths attributable to injuries from 1968 to 1985 for white, coloured and Asian children < 15 years in the RSA were examined. There were 937 injury deaths in 1968 and 853 in 1985 but no clear trends in overall mortality rates were observed. There were ...

  20. Trends of tuberculosis prevalence and treatment outcome in an ...

    African Journals Online (AJOL)

    The annual number of all TB cases showed a rising trend from 914 cases in the year 2000 to 1684 in 2009; but the proportion of new sputum smear (ss+) pulmonary tuberculosis (PTB) cases declined (trend χ² = 7.37, P = 0.007). The average number of extra-pulmonary TB cases increased fourfold from 2000-2004 to ...

  1. Genetic and environmental variances of bone microarchitecture and bone remodeling markers: a twin study.

    Science.gov (United States)

    Bjørnerem, Åshild; Bui, Minh; Wang, Xiaofang; Ghasem-Zadeh, Ali; Hopper, John L; Zebaze, Roger; Seeman, Ego

    2015-03-01

    All genetic and environmental factors contributing to differences in bone structure between individuals mediate their effects through the final common cellular pathway of bone modeling and remodeling. We hypothesized that genetic factors account for most of the population variance of cortical and trabecular microstructure, in particular intracortical porosity and medullary size - void volumes (porosity), which establish the internal bone surface areas or interfaces upon which modeling and remodeling deposit or remove bone to configure bone microarchitecture. Microarchitecture of the distal tibia and distal radius and remodeling markers were measured for 95 monozygotic (MZ) and 66 dizygotic (DZ) white female twin pairs aged 40 to 61 years. Images obtained using high-resolution peripheral quantitative computed tomography were analyzed using StrAx1.0, a nonthreshold-based software that quantifies cortical matrix and porosity. Genetic and environmental components of variance were estimated under the assumptions of the classic twin model. The data were consistent with the proportion of variance accounted for by genetic factors being: 72% to 81% (standard errors ∼18%) for the distal tibial total, cortical, and medullary cross-sectional area (CSA); 67% and 61% for total cortical porosity, before and after adjusting for total CSA, respectively; 51% for trabecular volumetric bone mineral density (vBMD; all p accounted for 47% to 68% of the variance (all p ≤ 0.001). Cross-twin cross-trait correlations between tibial cortical porosity and medullary CSA were higher for MZ (rMZ  = 0.49) than DZ (rDZ  = 0.27) pairs before (p = 0.024), but not after (p = 0.258), adjusting for total CSA. For the remodeling markers, the data were consistent with genetic factors accounting for 55% to 62% of the variance. We infer that middle-aged women differ in their bone microarchitecture and remodeling markers more because of differences in their genetic factors than
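The "classic twin model" partitions trait variance into additive genetic (A), shared environment (C) and unique environment (E) components; in its simplest Falconer form these follow directly from the MZ and DZ twin-pair correlations. A textbook sketch with illustrative correlations, not the study's full model-fitting:

```python
def falconer_ace(r_mz, r_dz):
    """Falconer decomposition from twin correlations: additive genetic (A),
    shared environment (C) and unique environment (E) shares of trait variance.
    A = 2(rMZ - rDZ), C = 2*rDZ - rMZ, E = 1 - rMZ."""
    a = 2.0 * (r_mz - r_dz)
    c = 2.0 * r_dz - r_mz
    e = 1.0 - r_mz
    return a, c, e

# Illustrative twin-pair correlations (not the paper's data):
a, c, e = falconer_ace(r_mz=0.75, r_dz=0.45)
print(a, c, e)  # the three shares sum to 1 by construction
```

The intuition is the same as in the abstract's cross-twin comparisons: the larger the excess of the MZ over the DZ correlation, the larger the share of variance attributed to genetic factors.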

  2. Genotypic-specific variance in Caenorhabditis elegans lifetime fecundity.

    Science.gov (United States)

    Diaz, S Anaid; Viney, Mark

    2014-06-01

    Organisms live in heterogeneous environments, so strategies that maximize fitness in such environments will evolve. Variation in traits is important because it is the raw material on which natural selection acts during evolution. Phenotypic variation is usually thought to be due to genetic variation and/or environmentally induced effects. Therefore, genetically identical individuals in a constant environment should have invariant traits. Clearly, genetically identical individuals do differ phenotypically, usually thought to be due to stochastic processes. It is now becoming clear, especially from studies of unicellular species, that phenotypic variance among genetically identical individuals in a constant environment can be genetically controlled and that therefore, in principle, this can be subject to selection. However, there has been little investigation of these phenomena in multicellular species. Here, we have studied the mean lifetime fecundity (thus a trait likely to be relevant to reproductive success), and variance in lifetime fecundity, in recent wild isolates of the model nematode Caenorhabditis elegans. We found that these genotypes differed in their variance in lifetime fecundity: some had high variance in fecundity, others very low variance. We find that this variance in lifetime fecundity was negatively related to the mean lifetime fecundity of the lines, and that the variance of the lines was positively correlated between environments. We suggest that the variance in lifetime fecundity may be a bet-hedging strategy used by this species.
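Why variance in fecundity can act as a bet-hedging strategy: in a fluctuating environment, long-run growth tracks the geometric rather than the arithmetic mean fitness, so a genotype can trade a lower mean for a lower variance and still win. A toy comparison with invented fitness values:

```python
from math import prod

def arithmetic_mean(xs):
    return sum(xs) / len(xs)

def geometric_mean(xs):
    return prod(xs) ** (1.0 / len(xs))

# Hypothetical per-generation fitness across alternating good/bad years:
steady = [1.10, 1.10, 1.10, 1.10]    # low variance in fecundity
volatile = [1.80, 0.50, 1.80, 0.50]  # higher arithmetic mean, high variance

am_s, gm_s = arithmetic_mean(steady), geometric_mean(steady)
am_v, gm_v = arithmetic_mean(volatile), geometric_mean(volatile)
print(am_s, gm_s)  # the steady genotype grows in the long run
print(am_v, gm_v)  # the volatile genotype shrinks despite its higher mean
```

Because generations multiply rather than add, a single bad year punishes the high-variance genotype more than its good years compensate — the essence of bet hedging.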

  3. Discrete and continuous time dynamic mean-variance analysis

    OpenAIRE

    Reiss, Ariane

    1999-01-01

    Contrary to static mean-variance analysis, very few papers have dealt with dynamic mean-variance analysis. Here, the mean-variance efficient self-financing portfolio strategy is derived for n risky assets in discrete and continuous time. In the discrete setting, the resulting portfolio is mean-variance efficient in a dynamic sense. It is shown that the optimal strategy for n risky assets may be dominated if the expected terminal wealth is constrained to exactly attain a certain goal instead o...

  4. Nonlinear Epigenetic Variance: Review and Simulations

    Science.gov (United States)

    Kan, Kees-Jan; Ploeger, Annemie; Raijmakers, Maartje E. J.; Dolan, Conor V.; van Der Maas, Han L. J.

    2010-01-01

    We present a review of empirical evidence that suggests that a substantial portion of phenotypic variance is due to nonlinear (epigenetic) processes during ontogenesis. The role of such processes as a source of phenotypic variance in human behaviour genetic studies is not fully appreciated. In addition to our review, we present simulation studies…

  5. Impact of intrauterine growth retardation and body proportionality on fetal and neonatal outcome.

    Science.gov (United States)

    Kramer, M S; Olivier, M; McLean, F H; Willis, D M; Usher, R H

    1990-11-01

    Previous prognostic studies of infants with intrauterine growth retardation (IUGR) have not adequately considered the heterogeneity of IUGR in terms of cause, severity, and body proportionality and have been prone to misclassification of IUGR because of errors in estimation of gestational age. Based on a cohort of 8719 infants with early-ultrasound-validated gestational ages and indexes of body proportionality standardized for birth weight, the consequences of severity and cause-specific IUGR and proportionality for fetal and neonatal morbidity and mortality were assessed. With progressive severity of IUGR, there were significant (all P less than .001) linear trends for increasing risks of stillbirth, fetal distress (abnormal electronic fetal heart tracings) during parturition, neonatal hypoglycemia (minimum plasma glucose less than 40 mg/dL), hypocalcemia (minimum Ca less than 7 mg/dL), polycythemia (maximum capillary hemoglobin greater than or equal to 21 g/dL), severe depression at birth (manual ventilation greater than 3 minutes), 1-minute and 5-minute Apgar scores less than or equal to 6, 1-minute Apgar score less than or equal to 3, and in-hospital death. These trends persisted for the more common outcomes even after restriction to term (37 to 42 weeks) births. There was no convincing evidence that outcome among infants with a given degree of growth retardation varied as a function of cause of that growth retardation. Among infants with IUGR, increased length-for-weight had significant crude associations with hypoglycemia and polycythemia, but these associations disappeared after adjustment for severity of growth retardation and gestational age.(ABSTRACT TRUNCATED AT 250 WORDS)

  6. De-trending of wind speed variance based on first-order and second-order statistical moments only

    DEFF Research Database (Denmark)

    Larsen, Gunner Chr.; Hansen, Kurt Schaldemose

    2014-01-01

    The lack of efficient methods for de-trending of wind speed resource data may lead to erroneous wind turbine fatigue and ultimate load predictions. The present paper presents two models, which quantify the effect of an assumed linear trend on wind speed standard deviations as based on available...... statistical data only. The first model is a pure time series analysis approach, which quantifies the effect of non-stationary characteristics of ensemble mean wind speeds on the estimated wind speed standard deviations as based on mean wind speed statistics only. This model is applicable to statistics...... of arbitrary types of time series. The second model uses the full set of information and includes thus additionally observed wind speed standard deviations to estimate the effect of ensemble mean non-stationarities on wind speed standard deviations. This model takes advantage of a simple physical relationship...
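
    The effect the paper models can be illustrated with a toy example (the synthetic wind record and the simple least-squares de-trending below are my assumptions, not the paper's two models): a linear trend in the ensemble mean inflates the raw standard deviation of a record, and removing the fitted trend recovers the stationary variability.

```python
import numpy as np

# Hypothetical 10-minute wind speed record: stationary fluctuations
# (sigma = 0.8 m/s) superimposed on a linear mean-wind ramp.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 600.0, 601)          # seconds
trend = 8.0 + 0.005 * t                    # mean wind ramps 3 m/s over 10 min
u = trend + rng.normal(0.0, 0.8, t.size)   # measured series

sigma_raw = u.std(ddof=1)                  # inflated by the trend

# De-trend with a least-squares line and recompute the standard deviation.
a, b = np.polyfit(t, u, 1)
sigma_detrended = (u - (a * t + b)).std(ddof=1)

print(sigma_raw, sigma_detrended)          # raw sigma exceeds detrended sigma
```

Here the trend alone contributes roughly (3 m/s)^2 / 12 of spurious variance, which is why naive statistics overestimate turbulence-driven loads.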

  7. Revision: Variance Inflation in Regression

    Directory of Open Access Journals (Sweden)

    D. R. Jensen

    2013-01-01

    the intercept; and (iv) variance deflation may occur, where ill-conditioned data yield smaller variances than their orthogonal surrogates. Conventional VIFs have all regressors linked, or none, often untenable in practice. Beyond these, our models enable the unlinking of regressors that can be unlinked, while preserving dependence among those intrinsically linked. Moreover, known collinearity indices are extended to encompass angles between subspaces of regressors. To reassess ill-conditioned data, we consider case studies ranging from elementary examples to data from the literature.

  8. Variance estimation for generalized Cavalieri estimators

    OpenAIRE

    Johanna Ziegel; Eva B. Vedel Jensen; Karl-Anton Dorph-Petersen

    2011-01-01

    The precision of stereological estimators based on systematic sampling is of great practical importance. This paper presents methods of data-based variance estimation for generalized Cavalieri estimators where errors in sampling positions may occur. Variance estimators are derived under perturbed systematic sampling, systematic sampling with cumulative errors and systematic sampling with random dropouts. Copyright 2011, Oxford University Press.

  9. Implementation of an approximate zero-variance scheme in the TRIPOLI Monte Carlo code

    Energy Technology Data Exchange (ETDEWEB)

    Christoforou, S.; Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands); Dumonteil, E.; Petit, O.; Diop, C. [Commissariat a l' Energie Atomique CEA, Gif-sur-Yvette (France)

    2006-07-01

    In an accompanying paper it is shown that theoretically a zero-variance Monte Carlo scheme can be devised for criticality calculations if the space, energy and direction dependent adjoint function is exactly known. This requires biasing of the transition and collision kernels with the appropriate adjoint function. In this paper it is discussed how an existing general purpose Monte Carlo code like TRIPOLI can be modified to approach the zero-variance scheme. This requires modifications for reading in the adjoint function obtained from a separate deterministic calculation for a number of space intervals, energy groups and discrete directions. Furthermore, a function has to be added to supply the direction dependent and the averaged adjoint function at a specific position in the system by interpolation. The initial particle weights of a certain batch must be set inversely proportional to the averaged adjoint function, and proper normalization of the initial weights must be secured. The sampling of the biased transition kernel requires cumulative integrals of the biased kernel along the flight path until a certain value, which depends on a selected random number, is reached to determine a new collision site. The weight of the particle must be adapted accordingly. The sampling of the biased collision kernel (in a multigroup treatment) is much more like the normal sampling procedure. A numerical example is given for a 3-group calculation with a simplified transport model (two-direction model), demonstrating that the zero-variance scheme can be approximated quite well for this simplified case. (authors)

  10. Multiperiod Mean-Variance Portfolio Optimization via Market Cloning

    International Nuclear Information System (INIS)

    Ankirchner, Stefan; Dermoune, Azzouz

    2011-01-01

    The problem of finding the mean variance optimal portfolio in a multiperiod model cannot be solved directly by means of dynamic programming. In order to find a solution, we therefore first introduce independent market clones having the same distributional properties as the original market, and we replace the portfolio mean and variance by their empirical counterparts. We then use dynamic programming to derive portfolios maximizing a weighted sum of the empirical mean and variance. By letting the number of market clones converge to infinity we are able to solve the original mean variance problem.

  11. Multiperiod Mean-Variance Portfolio Optimization via Market Cloning

    Energy Technology Data Exchange (ETDEWEB)

    Ankirchner, Stefan, E-mail: ankirchner@hcm.uni-bonn.de [Rheinische Friedrich-Wilhelms-Universitaet Bonn, Institut fuer Angewandte Mathematik, Hausdorff Center for Mathematics (Germany); Dermoune, Azzouz, E-mail: Azzouz.Dermoune@math.univ-lille1.fr [Universite des Sciences et Technologies de Lille, Laboratoire Paul Painleve UMR CNRS 8524 (France)

    2011-08-15

    The problem of finding the mean variance optimal portfolio in a multiperiod model cannot be solved directly by means of dynamic programming. In order to find a solution, we therefore first introduce independent market clones having the same distributional properties as the original market, and we replace the portfolio mean and variance by their empirical counterparts. We then use dynamic programming to derive portfolios maximizing a weighted sum of the empirical mean and variance. By letting the number of market clones converge to infinity we are able to solve the original mean variance problem.

  12. Reference population for international comparisons and time trend surveillance of preterm delivery proportions in three countries

    DEFF Research Database (Denmark)

    Morken, N.H.; Vogel, I.; Kallen, K.

    2008-01-01

    BACKGROUND: International comparison and time trend surveillance of preterm delivery rates is complex. New techniques that could facilitate interpretation of such rates are needed. METHODS: We studied all live births and stillbirths (>or= 28 weeks gestation) registered in the medical birth...

  13. Minimum Variance Portfolios in the Brazilian Equity Market

    Directory of Open Access Journals (Sweden)

    Alexandre Rubesam

    2013-03-01

    We investigate minimum variance portfolios in the Brazilian equity market using different methods to estimate the covariance matrix, from the simple model of using the sample covariance to multivariate GARCH models. We compare the performance of the minimum variance portfolios to those of the following benchmarks: (i) the IBOVESPA equity index, (ii) an equally-weighted portfolio, (iii) the maximum Sharpe ratio portfolio and (iv) the maximum growth portfolio. Our results show that the minimum variance portfolio has higher returns with lower risk compared to the benchmarks. We also consider long-short 130/30 minimum variance portfolios and obtain similar results. The minimum variance portfolio invests in relatively few stocks with low βs measured with respect to the IBOVESPA index, being easily replicable by individual and institutional investors alike.
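
    For the sample-covariance case, the fully-invested minimum variance weights have the closed form w = S^{-1} 1 / (1' S^{-1} 1). A minimal sketch with a hypothetical 3-asset covariance matrix (my numbers, not the paper's Brazilian data):

```python
import numpy as np

# Hypothetical annualized covariance matrix for three assets.
S = np.array([[0.040, 0.010, 0.008],
              [0.010, 0.090, 0.012],
              [0.008, 0.012, 0.060]])

# Closed-form minimum variance weights: w = S^{-1} 1 / (1' S^{-1} 1).
ones = np.ones(3)
w = np.linalg.solve(S, ones)
w /= w.sum()                    # normalize so the weights sum to 1

port_var = w @ S @ w            # variance of the resulting portfolio
print(w, port_var)
```

At the optimum, S @ w is a constant vector (the first-order condition), and the portfolio variance cannot exceed that of any single asset.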

  14. The Golden Ratio of Gait Harmony: Repetitive Proportions of Repetitive Gait Phases

    Directory of Open Access Journals (Sweden)

    Marco Iosa

    2013-01-01

    In nature, many physical and biological systems have structures showing harmonic properties. Some of them have been found to be related to the irrational number known as the golden ratio (φ, approximately 1.6180), which has important symmetric and harmonic properties. In this study, the spatiotemporal gait parameters of 25 healthy subjects were analyzed using a stereophotogrammetric system with 25 retroreflective markers located on their skin. The proportions of the gait phases were compared with φ. The ratio between the entire gait cycle and the stance phase was 1.620 ± 0.058, that between stance and the swing phase was 1.629 ± 0.173, and that between swing and the double support phase was 1.684 ± 0.357. These ratios did not differ significantly from each other (repeated-measures analysis of variance) or from φ (t-tests). The repetitive gait phases of physiological walking were thus found in repetitive proportions with each other, revealing an intrinsic harmonic structure. Harmony could be the key for facilitating the control of repetitive walking. Harmony is a powerful unifying factor between seemingly disparate fields of nature, including human gait.
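
    The self-similar structure reported above follows from the identity φ - 1 = 1/φ: if cycle/stance equals φ, the remaining phase ratios equal φ as well. A small sketch with idealized (not measured) phase durations:

```python
# Golden ratio, approximately 1.6180.
phi = (1 + 5 ** 0.5) / 2

cycle = 1.0                        # normalized gait cycle duration
stance = cycle / phi               # stance phase chosen so cycle/stance = phi
swing = cycle - stance             # swing phase = remainder of the cycle
double_support = stance - swing    # idealized double support phase

r1 = cycle / stance
r2 = stance / swing                # = phi, since swing = cycle / phi**2
r3 = swing / double_support        # = phi, since double_support = cycle / phi**3
print(r1, r2, r3)                  # all three equal phi
```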

  15. Why risk is not variance: an expository note.

    Science.gov (United States)

    Cox, Louis Anthony Tony

    2008-08-01

    Variance (or standard deviation) of return is widely used as a measure of risk in financial investment risk analysis applications, where mean-variance analysis is applied to calculate efficient frontiers and undominated portfolios. Why, then, do health, safety, and environmental (HS&E) and reliability engineering risk analysts insist on defining risk more flexibly, as being determined by probabilities and consequences, rather than simply by variances? This note suggests an answer by providing a simple proof that mean-variance decision making violates the principle that a rational decisionmaker should prefer higher to lower probabilities of receiving a fixed gain, all else being equal. Indeed, simply hypothesizing a continuous increasing indifference curve for mean-variance combinations at the origin is enough to imply that a decisionmaker must find unacceptable some prospects that offer a positive probability of gain and zero probability of loss. Unlike some previous analyses of limitations of variance as a risk metric, this expository note uses only simple mathematics and does not require the additional framework of von Neumann Morgenstern utility theory.
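
    The core argument can be reproduced with a toy calculation (the gain, the probabilities, and the variance-aversion weight below are my choices, not the author's): for a single prospect paying a fixed gain G with probability p, the variance p(1-p)G^2 is not monotone in p, so a mean-variance decisionmaker can prefer a lower probability of winning.

```python
# Mean-variance utility U = mean - lam * variance for a prospect that pays
# a fixed gain G with probability p and zero otherwise.
G = 100.0      # fixed gain (hypothetical)
lam = 0.03     # variance-aversion weight (hypothetical)

def mv_utility(p, gain=G, lam=lam):
    mean = p * gain
    var = p * (1 - p) * gain ** 2   # Bernoulli variance of the payoff
    return mean - lam * var

u_low = mv_utility(0.1)    # win with probability 0.1
u_high = mv_utility(0.4)   # win with probability 0.4 (stochastically dominant)
print(u_low, u_high)       # u_low > u_high: dominance is violated
```

Note also that mv_utility(0.0) is zero, which exceeds both values above: this decisionmaker rejects a free prospect with positive probability of gain and zero probability of loss, exactly the pathology the note proves.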

  16. Is proportion burned severely related to daily area burned?

    International Nuclear Information System (INIS)

    Birch, Donovan S; Morgan, Penelope; Smith, Alistair M S; Kolden, Crystal A; Hudak, Andrew T

    2014-01-01

    The ecological effects of forest fires burning with high severity are long-lived and have the greatest impact on vegetation successional trajectories, as compared to low-to-moderate severity fires. The primary drivers of high severity fire are unclear, but it has been hypothesized that wind-driven, large fire-growth days play a significant role, particularly on large fires in forested ecosystems. Here, we examined the relative proportion of classified burn severity for individual daily areas burned that occurred during 42 large forest fires in central Idaho and western Montana from 2005 to 2007 and 2011. Using infrared perimeter data for wildfires with five or more consecutive days of mapped perimeters, we delineated 2697 individual daily areas burned from which we calculated the proportions of each of three burn severity classes (high, moderate, and low) using the differenced normalized burn ratio as mapped for large fires by the Monitoring Trends in Burn Severity project. We found that the proportion of high burn severity was weakly correlated (Kendall τ = 0.299) with size of daily area burned (DAB). Burn severity was highly variable, even for the largest (95th percentile) in DAB, suggesting that other variables than fire extent influence the ecological effects of fires. We suggest that these results do not support the prioritization of large runs during fire rehabilitation efforts, since the underlying assumption in this prioritization is a positive relationship between severity and area burned in a day. (letters)

  17. Variance bias analysis for the Gelbard's batch method

    Energy Technology Data Exchange (ETDEWEB)

    Seo, Jae Uk; Shim, Hyung Jin [Seoul National Univ., Seoul (Korea, Republic of)

    2014-05-15

    In this paper, the variance and the bias are derived analytically for the case where Gelbard's batch method is applied, and the real variance estimated from this bias is compared with the real variance calculated from replicas. When the batch method is applied to calculate the sample variance, the covariance terms between tallies within a batch are eliminated from the bias. With the 2 by 2 fission matrix problem, the real variance could be calculated regardless of whether or not the batch method was applied; however, as the batch size grew larger, the standard deviation of the real variance increased. When we perform a Monte Carlo estimation, we obtain a sample variance as its statistical uncertainty. However, this value is smaller than the real variance because the sample variance is biased. To reduce this bias, Gelbard devised what is called the batch method, and it has been demonstrated that the sample variance gets closer to the real variance when the batch method is applied; in other words, the bias is reduced. This fact is well known in the Monte Carlo field, but so far no one has given an analytical interpretation of it.
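
    The bias in question can be illustrated numerically. A minimal sketch, using an AR(1) series as a stand-in for positively correlated Monte Carlo tallies (my construction, not the paper's fission-matrix example): the naive sample variance of the mean ignores the covariance terms and is biased low, while batching the tallies recovers a larger, less biased estimate.

```python
import numpy as np

# AR(1) "tallies" with lag-1 correlation rho and unit marginal variance.
rng = np.random.default_rng(1)
n, rho = 100_000, 0.9
x = np.empty(n)
x[0] = rng.normal()
for i in range(1, n):
    x[i] = rho * x[i - 1] + rng.normal() * (1 - rho ** 2) ** 0.5

# Naive estimate of Var(mean): treats tallies as independent, biased low.
naive = x.var(ddof=1) / n

# Batch-means estimate: average within batches, then take the variance of
# the batch means. Within-batch covariances are absorbed into the batches.
batch = 100
means = x[: n // batch * batch].reshape(-1, batch).mean(axis=1)
batched = means.var(ddof=1) / means.size

print(naive, batched)    # batched is roughly an order of magnitude larger
```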

  18. [Trends among medical students towards general practice or specialization].

    Science.gov (United States)

    Breinbauer K, Hayo; Fromm R, Germán; Fleck L, Daniela; Araya C, Luis

    2009-07-01

    A 60/40 ratio has been estimated as a country's ideal proportion between general practitioners and specialists. In Chile this proportion was 36/64 in 2004, exactly the opposite of the ideal. Trends towards specialization or general practice among medical students have not been thoroughly studied. To assess trends among medical students towards becoming general practitioners or specialists, exploring associated factors. Descriptive survey of 822 first to seventh year medical students at the University of Chile, School of Medicine. Desired activity to pursue (general practice or specialization) after graduation and general orientations within clinical practice were explored. Fifty-three percent of students desired to enter a specialization program. Only 20% would work as a general practitioner (27% were still indecisive). Furthermore, a trend in early years of medical training towards integral medicine is gradually reversed within later years. Seventh year students give significantly more importance to specialization than to integral medicine (p specialized medicine in the teaching environment. Most students prefer to enter a specialization program immediately after finishing medical school. Moreover, there is a social trend, at least within the teacher-attending environment, promoting not only the desire to specialize, but a pro-specialist culture.

  19. Integrating Variances into an Analytical Database

    Science.gov (United States)

    Sanchez, Carlos

    2010-01-01

    For this project, I enrolled in numerous SATERN courses that taught the basics of database programming. These include: Basic Access 2007 Forms, Introduction to Database Systems, Overview of Database Design, and others. My main job was to create an analytical database that can handle many stored forms and make it easy to interpret and organize. Additionally, I helped improve an existing database and populate it with information. These databases were designed to be used with data from Safety Variances and DCR forms. The research consisted of analyzing the database and comparing the data to find out which entries were repeated the most. If an entry happened to be repeated several times in the database, that would mean that the rule or requirement targeted by that variance has been bypassed many times already and so the requirement may not really be needed, but rather should be changed to allow the variance's conditions permanently. This project did not only restrict itself to the design and development of the database system, but also worked on exporting the data from the database to a different format (e.g. Excel or Word) so it could be analyzed in a simpler fashion. Thanks to the change in format, the data was organized in a spreadsheet that made it possible to sort the data by categories or types and helped speed up searches. Once my work with the database was done, the records of variances could be arranged so that they were displayed in numerical order, or one could search for a specific document targeted by the variances and restrict the search to only include variances that modified a specific requirement. A great part that contributed to my learning was SATERN, NASA's resource for education. Thanks to the SATERN online courses I took over the summer, I was able to learn many new things about computers and databases and also go more in depth into topics I already knew about.

  20. Regional sensitivity analysis using revised mean and variance ratio functions

    International Nuclear Information System (INIS)

    Wei, Pengfei; Lu, Zhenzhou; Ruan, Wenbin; Song, Jingwen

    2014-01-01

    The variance ratio function, derived from the contribution to sample variance (CSV) plot, is a regional sensitivity index for studying how much the output deviates from the original mean of the model output when the distribution range of one input is reduced, and for measuring the contribution of different distribution ranges of each input to the variance of the model output. In this paper, revised mean and variance ratio functions are developed for quantifying the actual change of the model output mean and variance, respectively, when one reduces the range of one input. The connection between the revised variance ratio function and the original one is derived and discussed. It is shown that, compared with the classical variance ratio function, the revised one is more suitable for evaluating model output variance under reduced ranges of model inputs. A Monte Carlo procedure, which needs only a single set of samples, is developed for efficiently computing the revised mean and variance ratio functions. The revised mean and variance ratio functions are compared with the classical ones using the Ishigami function. Finally, they are applied to a planar 10-bar structure
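
    A rough Monte Carlo sketch of a variance ratio of this kind, using the Ishigami test function cited in the abstract (the choice of restricted input, the sub-range, and the single-sample-set reuse are my assumptions, not the paper's revised estimators):

```python
import numpy as np

# Ishigami function with the standard constants a = 7, b = 0.1 and
# inputs uniform on (-pi, pi).
rng = np.random.default_rng(2)
a, b = 7.0, 0.1
X = rng.uniform(-np.pi, np.pi, size=(200_000, 3))
Y = (np.sin(X[:, 0]) + a * np.sin(X[:, 1]) ** 2
     + b * X[:, 2] ** 4 * np.sin(X[:, 0]))

var_full = Y.var()                      # unconditional output variance

# Reduce x3's range to the middle half and reuse the same evaluations:
# the ratio measures how much the output variance shrinks.
mask = np.abs(X[:, 2]) < np.pi / 2
ratio = Y[mask].var() / var_full
print(var_full, ratio)                  # ratio < 1 for this input
```

The analytical variance of the Ishigami function for these constants is about 13.84, so the printed full variance can be checked directly.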

  1. The genotype-environment interaction variance in rice-seed protein determination

    International Nuclear Information System (INIS)

    Ismachin, M.

    1976-01-01

    Many environmental factors influence the protein content of cereal seed, which causes difficulties in breeding for protein. Yield is another trait influenced by many environmental factors. The length of time required by the plant to reach maturity is also affected by environmental factors, although less decisively. In this investigation, the genotypic variance and the genotype-environment interaction variance, which contribute to the total (phenotypic) variance, were analysed with the purpose of giving the breeder an idea of how selection should be made. It was found that the genotype-environment interaction variance contributes more than the genotypic variance to the total variance of seed protein content and of yield. In the analysis of the time required to reach maturity, the genotypic variance was larger than the genotype-environment interaction variance. It is therefore clear why selection for time to maturity is much easier than selection for protein or yield: protein selected in one location may differ from that selected in other locations. (author)

  2. Variance components estimation for farrowing traits of three purebred pigs in Korea

    Directory of Open Access Journals (Sweden)

    Bryan Irvine Lopez

    2017-09-01

    Objective This study was conducted to estimate breed-specific variance components for total number born (TNB, number born alive (NBA and mortality rate from birth through weaning including stillbirths (MORT of three main swine breeds in Korea. In addition, the importance of including maternal genetic and service sire effects in estimation models was evaluated. Methods Records of farrowing traits from 6,412 Duroc, 18,020 Landrace, and 54,254 Yorkshire sows collected from January 2001 to September 2016 from different farms in Korea were used in the analysis. Animal models and the restricted maximum likelihood method were used to estimate the additive genetic, permanent environmental, maternal genetic, service sire and residual variances. Results The heritability estimates ranged from 0.072 to 0.102, 0.090 to 0.099, and 0.109 to 0.121 for TNB; 0.087 to 0.110, 0.088 to 0.100, and 0.099 to 0.107 for NBA; and 0.027 to 0.031, 0.050 to 0.053, and 0.073 to 0.081 for MORT in the Duroc, Landrace and Yorkshire breeds, respectively. The proportion of the total variation due to permanent environmental effects, maternal genetic effects, and service sire effects ranged from 0.042 to 0.088, 0.001 to 0.031, and 0.001 to 0.021, respectively. Spearman rank correlations among models ranged from 0.98 to 0.99, demonstrating that the maternal genetic and service sire effects have little effect on the precision of the breeding value. Conclusion Models that include additive genetic and permanent environmental effects are suitable for farrowing traits in Duroc, Landrace, and Yorkshire populations in Korea. These breed-specific variance component estimates for litter traits can be utilized for pig improvement programs in Korea.

  3. Estimation of measurement variances

    International Nuclear Information System (INIS)

    Jaech, J.L.

    1984-01-01

    The estimation of measurement error parameters in safeguards systems is discussed. Both systematic and random errors are considered. A simple analysis of variances to characterize the measurement error structure with biases varying over time is presented

  4. Trends in wilderness recreation use characteristics

    Science.gov (United States)

    Alan E. Watson; David N. Cole; Joseph W. Roggenbuck

    1995-01-01

    Recent studies at the Leopold Institute have included analysis of use and user trends at the Boundary Waters Canoe Area Wilderness, Desolation Wilderness, Shining Rock Wilderness, the Bob Marshall Wilderness Complex, Great Smoky Mountains National Park and Eagle Cap Wilderness. Some sociodemographics, like age, education, and the proportion of female visitors, have...

  5. 29 CFR 1905.5 - Effect of variances.

    Science.gov (United States)

    2010-07-01

    ...-STEIGER OCCUPATIONAL SAFETY AND HEALTH ACT OF 1970 General § 1905.5 Effect of variances. All variances... Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR... concerning a proposed penalty or period of abatement is pending before the Occupational Safety and Health...

  6. Realized range-based estimation of integrated variance

    DEFF Research Database (Denmark)

    Christensen, Kim; Podolskij, Mark

    2007-01-01

    We provide a set of probabilistic laws for estimating the quadratic variation of continuous semimartingales with the realized range-based variance-a statistic that replaces every squared return of the realized variance with a normalized squared range. If the entire sample path of the process is a...

  7. Variance Function Partially Linear Single-Index Models.

    Science.gov (United States)

    Lian, Heng; Liang, Hua; Carroll, Raymond J

    2015-01-01

    We consider heteroscedastic regression models where the mean function is a partially linear single index model and the variance function depends upon a generalized partially linear single index model. We do not insist that the variance function depend only upon the mean function, as happens in the classical generalized partially linear single index model. We develop efficient and practical estimation methods for the variance function and for the mean function. Asymptotic theory for the parametric and nonparametric parts of the model is developed. Simulations illustrate the results. An empirical example involving ozone levels is used to further illustrate the results, and is shown to be a case where the variance function does not depend upon the mean function.

  8. Trends in substance use admissions among older adults.

    Science.gov (United States)

    Chhatre, Sumedha; Cook, Ratna; Mallik, Eshita; Jayadevappa, Ravishankar

    2017-08-22

    Substance abuse is a growing, but mostly silent, epidemic among older adults. We sought to analyze the trends in admissions for substance abuse treatment among older adults (aged 55 and older). The Treatment Episode Data Set - Admissions (TEDS-A) for the period between 2000 and 2012 was used. The trends in admissions for primary substances, demographic attributes, characteristics of substances abused and type of admission were analyzed. While the total number of substance abuse treatment admissions between 2000 and 2012 changed only slightly, the proportion attributable to older adults increased from 3.4% to 7.0%. Substantial changes in the demographic, substance use pattern, and treatment characteristics of the older adult admissions were noted. The majority of admissions were for alcohol as the primary substance; however, there was a decreasing trend in this proportion (77% to 64%). The proportion of admissions for the following primary substances increased: cocaine/crack, marijuana/hashish, heroin, non-prescription methadone, and other opiates and synthetics. Also, admissions for older adults increased between 2000 and 2012 for African Americans (21% to 28%), females (20% to 24%), high school graduates (63% to 75%), homeless (15% to 19%), unemployed (77% to 84%), and those with psychiatric problems (17% to 32%). The proportion of admissions with a prior history of substance abuse treatment increased from 39% to 46%, and there was an increase in admissions where more than one problem substance was reported. Ambulatory settings continued to be the most frequent treatment setting, and individual (including self-referral) was the most common referral source. The use of medication assisted therapy remained low over the years (7% to 9%). The changing demographic and substance use pattern of older adults implies that a wide array of psychological, social, and physiological needs will arise. Integrated, multidisciplinary and tailored policies for prevention and treatment are necessary to

  9. Discrete time and continuous time dynamic mean-variance analysis

    OpenAIRE

    Reiss, Ariane

    1999-01-01

    Contrary to static mean-variance analysis, very few papers have dealt with dynamic mean-variance analysis. Here, the mean-variance efficient self-financing portfolio strategy is derived for n risky assets in discrete and continuous time. In the discrete setting, the resulting portfolio is mean-variance efficient in a dynamic sense. It is shown that the optimal strategy for n risky assets may be dominated if the expected terminal wealth is constrained to exactly attain a certain goal instead o...

  10. CMB-S4 and the hemispherical variance anomaly

    Science.gov (United States)

    O'Dwyer, Márcio; Copi, Craig J.; Knox, Lloyd; Starkman, Glenn D.

    2017-09-01

    Cosmic microwave background (CMB) full-sky temperature data show a hemispherical asymmetry in power nearly aligned with the Ecliptic. In real space, this anomaly can be quantified by the temperature variance in the Northern and Southern Ecliptic hemispheres, with the Northern hemisphere displaying an anomalously low variance while the Southern hemisphere appears unremarkable [consistent with expectations from the best-fitting theory, Lambda Cold Dark Matter (ΛCDM)]. While this is a well-established result in temperature, the low signal-to-noise ratio in current polarization data prevents a similar comparison. This will change with a proposed ground-based CMB experiment, CMB-S4. With that in mind, we generate realizations of polarization maps constrained by the temperature data and predict the distribution of the hemispherical variance in polarization considering two different sky coverage scenarios possible in CMB-S4: full Ecliptic north coverage and just the portion of the North that can be observed from a ground-based telescope at the high Chilean Atacama plateau. We find that even in the set of realizations constrained by the temperature data, the low Northern hemisphere variance observed in temperature is not expected in polarization. Therefore, observing an anomalously low variance in polarization would make the hypothesis that the temperature anomaly is simply a statistical fluke more unlikely and thus increase the motivation for physical explanations. We show, within ΛCDM, how variance measurements in both sky coverage scenarios are related. We find that the variance makes for a good statistic in cases where the sky coverage is limited, however, full northern coverage is still preferable.

  11. Estimation of the false discovery proportion with unknown dependence.

    Science.gov (United States)

    Fan, Jianqing; Han, Xu

    2017-09-01

    Large-scale multiple testing with correlated test statistics arises frequently in many areas of scientific research. Incorporating correlation information in approximating the false discovery proportion has attracted increasing attention in recent years. When the covariance matrix of the test statistics is known, Fan, Han & Gu (2012) provided an accurate approximation of the False Discovery Proportion (FDP) under arbitrary dependence structure and some sparsity assumption. However, the covariance matrix is often unknown in many applications, and such dependence information has to be estimated before approximating FDP. The estimation accuracy can greatly affect the FDP approximation. In the current paper, we aim to theoretically study the impact of unknown dependence on the testing procedure and establish a general framework such that FDP can be well approximated. The impact of unknown dependence on approximating FDP arises in two major aspects: through estimating eigenvalues/eigenvectors and through estimating marginal variances. To address the challenges in these two aspects, we first develop general requirements on estimates of eigenvalues and eigenvectors for a good approximation of FDP. We then give conditions on the structures of covariance matrices that satisfy such requirements. Such dependence structures include banded/sparse covariance matrices and (conditional) sparse precision matrices. Within this framework, we also consider a special example to illustrate our method where data are sampled from an approximate factor model, which encompasses most practical situations. We provide a good approximation of FDP by exploiting this specific dependence structure. The results are further generalized to the situation where the multivariate normality assumption is relaxed. Our results are demonstrated by simulation studies and some real data applications.

  12. Expected Stock Returns and Variance Risk Premia

    DEFF Research Database (Denmark)

    Bollerslev, Tim; Zhou, Hao

    risk premium with the P/E ratio results in an R2 for the quarterly returns of more than twenty-five percent. The results depend crucially on the use of "model-free", as opposed to standard Black-Scholes, implied variances, and realized variances constructed from high-frequency intraday, as opposed...

  13. Allowable variance set on left ventricular function parameter

    International Nuclear Information System (INIS)

    Zhou Li'na; Qi Zhongzhi; Zeng Yu; Ou Xiaohong; Li Lin

    2010-01-01

    Purpose: To evaluate the influence of allowable variance settings on left ventricular function parameters in arrhythmia patients during gated myocardial perfusion imaging. Method: 42 patients with evident arrhythmia underwent myocardial perfusion SPECT; three different allowable variance settings (20%, 60% and 100%) were applied before acquisition for every patient, and the acquisitions were performed simultaneously. After reconstruction with Astonish, end-diastolic volume (EDV), end-systolic volume (ESV) and left ventricular ejection fraction (LVEF) were computed with Quantitative Gated SPECT (QGS). The EDV, ESV and LVEF values were compared by analysis of variance using SPSS software. Result: There was no statistical difference between the three groups. Conclusion: For arrhythmia patients undergoing gated myocardial perfusion imaging, the allowable variance setting has no statistically significant effect on the EDV, ESV and LVEF values. (authors)
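
The comparison described above reduces to a one-way ANOVA across the three allowable-variance settings. A minimal stdlib sketch of the F statistic it relies on; the LVEF values below are hypothetical, not the study's data:

```python
import statistics

def one_way_anova_F(groups):
    """F statistic for a one-way ANOVA across the given groups."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    # Between-group and within-group sums of squares
    ss_between = sum(len(g) * (statistics.mean(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - statistics.mean(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical LVEF values under the three allowable-variance settings
lvef_20 = [55, 60, 58, 62, 57]
lvef_60 = [56, 59, 58, 61, 58]
lvef_100 = [54, 61, 57, 60, 59]
F = one_way_anova_F([lvef_20, lvef_60, lvef_100])
print(F)
```

An F value near zero, as here, is consistent with the study's finding of no statistical difference between settings.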

  14. Deviation of the Variances of Classical Estimators and Negative Integer Moment Estimator from Minimum Variance Bound with Reference to Maxwell Distribution

    Directory of Open Access Journals (Sweden)

    G. R. Pasha

    2006-07-01

    Full Text Available In this paper, we show how much the variances of the classical estimators, namely the maximum likelihood estimator and the moment estimator, deviate from the minimum variance bound when estimating for the Maxwell distribution. We also sketch this difference for the negative integer moment estimator. We note the poor performance of the negative integer moment estimator in this respect, while the maximum likelihood estimator attains the minimum variance bound and is therefore an attractive choice.

  15. Towards a mathematical foundation of minimum-variance theory

    Energy Technology Data Exchange (ETDEWEB)

    Feng Jianfeng [COGS, Sussex University, Brighton (United Kingdom); Zhang Kewei [SMS, Sussex University, Brighton (United Kingdom); Wei Gang [Mathematical Department, Baptist University, Hong Kong (China)

    2002-08-30

    The minimum-variance theory which accounts for arm and eye movements with noise signal inputs was proposed by Harris and Wolpert (1998 Nature 394 780-4). Here we present a detailed theoretical analysis of the theory and analytical solutions of the theory are obtained. Furthermore, we propose a new version of the minimum-variance theory, which is more realistic for a biological system. For the new version we show numerically that the variance is considerably reduced. (author)

  16. Direct encoding of orientation variance in the visual system.

    Science.gov (United States)

    Norman, Liam J; Heywood, Charles A; Kentridge, Robert W

    2015-01-01

    Our perception of regional irregularity, an example of which is orientation variance, seems effortless when we view two patches of texture that differ in this attribute. Little is understood, however, of how the visual system encodes a regional statistic like orientation variance, but there is some evidence to suggest that it is directly encoded by populations of neurons tuned broadly to high or low levels. The present study shows that selective adaptation to low or high levels of variance results in a perceptual aftereffect that shifts the perceived level of variance of a subsequently viewed texture in the direction away from that of the adapting stimulus (Experiments 1 and 2). Importantly, the effect is durable across changes in mean orientation, suggesting that the encoding of orientation variance is independent of global first moment orientation statistics (i.e., mean orientation). In Experiment 3 it was shown that the variance-specific aftereffect did not show signs of being encoded in a spatiotopic reference frame, similar to the equivalent aftereffect of adaptation to the first moment orientation statistic (the tilt aftereffect), which is represented in the primary visual cortex and exists only in retinotopic coordinates. Experiment 4 shows that a neuropsychological patient with damage to ventral areas of the cortex but spared intact early areas retains sensitivity to orientation variance. Together these results suggest that orientation variance is encoded directly by the visual system and possibly at an early cortical stage.

  17. Network Structure and Biased Variance Estimation in Respondent Driven Sampling.

    Science.gov (United States)

    Verdery, Ashton M; Mouw, Ted; Bauldry, Shawn; Mucha, Peter J

    2015-01-01

    This paper explores bias in the estimation of sampling variance in Respondent Driven Sampling (RDS). Prior methodological work on RDS has focused on its problematic assumptions and the biases and inefficiencies of its estimators of the population mean. Nonetheless, researchers have given only slight attention to the topic of estimating sampling variance in RDS, despite the importance of variance estimation for the construction of confidence intervals and hypothesis tests. In this paper, we show that the estimators of RDS sampling variance rely on a critical assumption that the network is First Order Markov (FOM) with respect to the dependent variable of interest. We demonstrate, through intuitive examples, mathematical generalizations, and computational experiments that current RDS variance estimators will always underestimate the population sampling variance of RDS in empirical networks that do not conform to the FOM assumption. Analysis of 215 observed university and school networks from Facebook and Add Health indicates that the FOM assumption is violated in every empirical network we analyze, and that these violations lead to substantially biased RDS estimators of sampling variance. We propose and test two alternative variance estimators that show some promise for reducing biases, but which also illustrate the limits of estimating sampling variance with only partial information on the underlying population social network.

  18. Leprosy and gender in Brazil: trends in an endemic area of the Northeast region, 2001-2014.

    Science.gov (United States)

    Souza, Eliana Amorim de; Ferreira, Anderson Fuentes; Boigny, Reagan Nzundu; Alencar, Carlos Henrique; Heukelbach, Jorg; Martins-Melo, Francisco Rogerlândio; Barbosa, Jaqueline Caracas; Ramos, Alberto Novaes

    2018-01-01

    OBJECTIVE To analyze, stratified by gender, trends of the leprosy new case detection rates in the general population and in children, of grade 2 disability, and of the proportion of multibacillary cases, in the state of Bahia, Brazil, from 2001 to 2014. METHODS A time series study based on leprosy data from the National Information System for Notifiable Diseases. The time trend analysis included Poisson regression models by inflection points (Joinpoint) stratified by gender. RESULTS There was a total of 40,054 new leprosy cases, with a downward trend of the overall detection rate (Average Annual Percent Change [AAPC] = -0.4, 95%CI -2.8-1.9) and a non-significant increase in children under 15 years (AAPC = 0.2, 95%CI -3.9-4.5). The proportion of grade 2 disability among new cases increased significantly (AAPC = 4.0, 95%CI 1.3-6.8), as did the proportion of multibacillary cases (AAPC = 2.2, 95%CI 0.1-4.3). Stratification by gender showed a downward trend of detection rates in females and no significant change in males; in females, there was a more pronounced upward trend of the proportion of multibacillary and grade 2 disability cases. CONCLUSIONS Leprosy is still highly endemic in the state of Bahia, with active transmission, late diagnosis, and a probable hidden endemic. There are different gender patterns, indicating the importance of early diagnosis and prompt treatment, specifically in males, without neglecting the situation among females.

  19. Local variances in biomonitoring

    International Nuclear Information System (INIS)

    Wolterbeek, H.Th; Verburg, T.G.

    2001-01-01

    The present study was undertaken to explore possibilities to judge survey quality on the basis of a limited and restricted number of a-priori observations. Here, quality is defined as the ratio between survey and local variance (signal-to-noise ratio). The results indicate that the presented surveys do not permit such judgement; the discussion also suggests that the 5-fold local sampling strategies do not merit any sound judgement. As it stands, uncertainties in local determinations may largely obscure possibilities to judge survey quality. The results further imply that surveys will benefit from procedures, controls and approaches in sampling and sample handling, to assess the average, the variance and the nature of the distribution of elemental concentrations in local sites. This reasoning is compatible with the idea of the site as a basic homogeneous survey unit, which implicitly and conceptually underlies any survey performed. (author)

  20. Proportionality for Military Leaders

    National Research Council Canada - National Science Library

    Brown, Gary D

    2000-01-01

    .... Especially lacking is a realization that there are four distinct types of proportionality. In determining whether a particular resort to war is just, national leaders must consider the proportionality of the conflict (i.e...

  1. Genetics of human body size and shape: body proportions and indices.

    Science.gov (United States)

    Livshits, Gregory; Roset, A; Yakovenko, K; Trofimov, S; Kobyliansky, E

    2002-01-01

    The study of the genetic component in morphological variables such as body height and weight, head and chest circumference, etc. has a rather long history. However, only a few studies have investigated body proportions and configuration. The major aim of the present study was to evaluate the extent of the possible genetic effects on the inter-individual variation of a number of body configuration indices amenable to clear functional interpretation. Two ethnically different pedigree samples were used in the study: (1) Turkmenians (805 individuals) from Central Asia, and (2) Chuvasha (732 individuals) from the Volga riverside, Russian Federation. To achieve the aim of the present study we proposed three new indices, which were subjected to a statistical-genetic analysis using a modified version of the "FISHER" software. The proposed indices were: (1) an integral index of torso volume (IND#1), (2) an index reflecting a predisposition of body proportions to maintain balance in a vertical position (IND#2), and (3) an index of skeletal extremities volume (IND#3). Additionally, the first two principal factors (PF1 and PF2) obtained on 19 measurements of body length and breadth were subjected to genetic analysis. Variance decomposition analysis, which simultaneously assesses the contributions of gender, age, additive genetic effects and effects of environment shared by nuclear family members, was applied to fit the variation of the above three indices, and of PF1 and PF2. The raw familial correlations of all study traits in both samples showed: (1) all marital correlations did not differ significantly from zero; (2) parent-offspring and sibling correlations were all positive and statistically significant. The parameter estimates obtained in the variance analyses showed that from 40% to 75% of inter-individual variation of the studied traits (adjusted for age and sex) was attributable to genetic effects. For PF1 and PF2 in both samples, and for IND#2 (in Chuvasha pedigrees), significant common sib

  2. Some variance reduction methods for numerical stochastic homogenization.

    Science.gov (United States)

    Blanc, X; Le Bris, C; Legoll, F

    2016-04-28

    We give an overview of a series of recent studies devoted to variance reduction techniques for numerical stochastic homogenization. Numerical homogenization requires that a set of problems is solved at the microscale, the so-called corrector problems. In a random environment, these problems are stochastic and therefore need to be repeatedly solved, for several configurations of the medium considered. An empirical average over all configurations is then performed using the Monte Carlo approach, so as to approximate the effective coefficients necessary to determine the macroscopic behaviour. Variance severely affects the accuracy and the cost of such computations. Variance reduction approaches, borrowed from other contexts in the engineering sciences, can be useful. Some of these variance reduction techniques are presented, studied and tested here. © 2016 The Author(s).
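
As a concrete illustration of the general idea (not of the homogenization-specific estimators studied in the paper), here is a minimal stdlib sketch of one classical variance reduction technique, antithetic variates, for a plain Monte Carlo average; the integrand f is a toy choice:

```python
import random
import statistics

def mc_estimate(f, n, antithetic=False, seed=0):
    """Monte Carlo estimate of E[f(U)], U ~ Uniform(0, 1)."""
    rng = random.Random(seed)
    if antithetic:
        # Pair each draw u with its mirror 1 - u; averaging the pair
        # cancels much of the fluctuation when f is monotone.
        draws = [0.5 * (f(u) + f(1.0 - u))
                 for u in (rng.random() for _ in range(n // 2))]
    else:
        draws = [f(rng.random()) for _ in range(n)]
    return statistics.mean(draws), draws

f = lambda u: u * u  # E[f(U)] = 1/3
plain_mean, plain_draws = mc_estimate(f, 10_000)
anti_mean, anti_draws = mc_estimate(f, 10_000, antithetic=True)
print(statistics.pvariance(plain_draws), statistics.pvariance(anti_draws))
```

Both estimators target the same mean, but the per-draw variance of the antithetic version is far smaller, which is exactly the kind of gain the techniques surveyed above pursue in the corrector-problem setting.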

  3. variance components and genetic parameters for live weight

    African Journals Online (AJOL)

    admin

    Against this background the present study estimated the (co)variance .... Starting values for the (co)variance components of two-trait models were ..... Estimates of genetic parameters for weaning weight of beef accounting for direct-maternal.

  4. Restricted Variance Interaction Effects

    DEFF Research Database (Denmark)

    Cortina, Jose M.; Köhler, Tine; Keeler, Kathleen R.

    2018-01-01

    Although interaction hypotheses are increasingly common in our field, many recent articles point out that authors often have difficulty justifying them. The purpose of this article is to describe a particular type of interaction: the restricted variance (RV) interaction. The essence of the RV int...

  5. Variance Swaps in BM&F: Pricing and Viability of Hedge

    Directory of Open Access Journals (Sweden)

    Richard John Brostowicz Junior

    2010-07-01

    A variance swap can theoretically be priced with an infinite set of vanilla call and put options, assuming that the realized variance follows a purely diffusive process with continuous monitoring. In this article we analyze the possible differences in pricing under discrete monitoring of the realized variance. We analyze the pricing of variance swaps with payoff in dollars, since there is an OTC market that works this way and that can potentially serve as a hedge for the variance swaps traded at BM&F. Additionally, we test the feasibility of hedging variance swaps when there is liquidity in only a few exercise prices, as is the case for the FX options traded at BM&F. To this end, we assemble portfolios containing variance swaps and their replicating portfolios using the available exercise prices, as proposed in (DEMETERFI et al., 1999). With these portfolios, the effectiveness of the hedge was not robust in most of the tests conducted in this work.

  6. Genetic selection for increased mean and reduced variance of twinning rate in Belclare ewes.

    Science.gov (United States)

    Cottle, D J; Gilmour, A R; Pabiou, T; Amer, P R; Fahey, A G

    2016-04-01

    It is sometimes possible to breed for more uniform individuals by selecting animals with a greater tendency to be less variable, that is, those with a smaller environmental variance. This approach has been applied to reproduction traits in various animal species. We have evaluated fecundity in the Irish Belclare sheep breed by analysing flocks with differing average litter size (number of lambs per ewe per year, NLB) and have estimated the genetic variance in the environmental variance of lambing traits using double hierarchical generalized linear models (DHGLM). The data set comprised 9470 litter size records from 4407 ewes collected in 56 flocks. The percentage of pedigreed lambing ewes with singles, twins and triplets was 30, 54 and 14%, respectively, in 2013 and has been relatively constant for the last 15 years. The variance of NLB increases with the mean in these data; the correlation of mean and standard deviation across sires is 0.50. The breeding goal is to increase the mean NLB without unduly increasing the incidence of triplets and higher litter sizes. The heritability estimates for lambing traits were NLB, 0.09; triplet occurrence (TRI), 0.07; and twin occurrence (TWN), 0.02. The highest and lowest twinning flocks differed by 23% (75% versus 52%) in the proportion of ewes lambing twins. Fitting bivariate sire models to NLB and the residual from the NLB model using a DHGLM found a strong genetic correlation (0.88 ± 0.07) between the sire effect for the magnitude of the residual (VE) and the sire effects for NLB, confirming the general observation that increased average litter size is associated with increased variability in litter size. We propose a threshold model that may help breeders with low litter size increase the percentage of twin bearers without unduly increasing the percentage of ewes bearing triplets in Belclare sheep. © 2015 Blackwell Verlag GmbH.

  7. Integrating mean and variance heterogeneities to identify differentially expressed genes.

    Science.gov (United States)

    Ouyang, Weiwei; An, Qiang; Zhao, Jinying; Qin, Huaizhen

    2016-12-06

    In functional genomics studies, tests of mean heterogeneity have been widely employed to identify differentially expressed genes with distinct mean expression levels under different experimental conditions. Variance heterogeneity (i.e., the difference between condition-specific variances) of gene expression levels is simply neglected or calibrated for as an impediment. The mean heterogeneity in the expression level of a gene reflects one aspect of its distribution alteration; variance heterogeneity induced by condition change may reflect another. A change in condition may alter both the mean and some higher-order characteristics of the distributions of expression levels of susceptible genes. In this report, we put forth a conception of mean-variance differentially expressed (MVDE) genes, whose expression means and variances are sensitive to the change in experimental condition. We mathematically proved the null independence of existing mean heterogeneity tests and variance heterogeneity tests. Based on this independence, we proposed an integrative mean-variance test (IMVT) to combine gene-wise mean heterogeneity and variance heterogeneity induced by condition change. The IMVT outperformed its competitors under comprehensive simulations of normality and Laplace settings. For moderate samples, the IMVT well controlled type I error rates, as did the existing mean heterogeneity tests (i.e., the Welch t test (WT) and the moderated Welch t test (MWT)) and the procedure of separate tests on mean and variance heterogeneities (SMVT), but the likelihood ratio test (LRT) severely inflated type I error rates. In the presence of variance heterogeneity, the IMVT appeared noticeably more powerful than all the valid mean heterogeneity tests. Application to the gene profiles of peripheral circulating B raised solid evidence of informative variance heterogeneity. After adjusting for background data structure, the IMVT replicated previous discoveries and identified novel experiment

  8. Comparing estimates of genetic variance across different relationship models.

    Science.gov (United States)

    Legarra, Andres

    2016-02-01

    Use of relationships between individuals to estimate genetic variances and heritabilities via mixed models is standard practice in human, plant and livestock genetics. Different models or information for relationships may give different estimates of genetic variances. However, comparing these estimates across different relationship models is not straightforward as the implied base populations differ between relationship models. In this work, I present a method to compare estimates of variance components across different relationship models. I suggest referring genetic variances obtained using different relationship models to the same reference population, usually a set of individuals in the population. Expected genetic variance of this population is the estimated variance component from the mixed model times a statistic, Dk, which is the average self-relationship minus the average (self- and across-) relationship. For most typical models of relationships, Dk is close to 1. However, this is not true for very deep pedigrees, for identity-by-state relationships, or for non-parametric kernels, which tend to overestimate the genetic variance and the heritability. Using mice data, I show that heritabilities from identity-by-state and kernel-based relationships are overestimated. Weighting these estimates by Dk scales them to a base comparable to genomic or pedigree relationships, avoiding wrong comparisons, for instance, "missing heritabilities". Copyright © 2015 Elsevier Inc. All rights reserved.
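
The statistic Dk described above is straightforward to compute from a relationship matrix. A minimal sketch, using a small hypothetical matrix K (not data from the paper):

```python
def dk(K):
    """Dk = average self-relationship minus the average of all
    (self- and across-) relationships in the matrix."""
    n = len(K)
    avg_self = sum(K[i][i] for i in range(n)) / n
    avg_all = sum(sum(row) for row in K) / (n * n)
    return avg_self - avg_all

# Hypothetical 4x4 additive relationship matrix
K = [[1.00, 0.50, 0.25, 0.00],
     [0.50, 1.00, 0.25, 0.00],
     [0.25, 0.25, 1.00, 0.50],
     [0.00, 0.00, 0.50, 1.00]]
print(dk(K))
```

Multiplying an estimated variance component by Dk, as the abstract proposes, scales it to the reference population defined by the individuals in K.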

  9. Trends in marine fish catches at Pattani Fishery Port (1999-2003

    Directory of Open Access Journals (Sweden)

    Wanchamai Karntanut

    2006-07-01

    This study aims to develop statistical models for forecasting the quantity of the various types of marine fish landed at Pattani Fishery Port, allowing for trend and seasonality, using official data from 1999-2003. The data comprise daily and monthly totals by weight for eight types of fish (mackerel, other food fish, squid, scads, trash fish, shrimp, lobster and crab). The statistical methods are one-way analysis of variance, multiple linear regression and time series forecasting using trend and seasonal models. It is found that mackerel, other food fish and squid catches tend to decrease, whereas the catches of scads tend to increase, and trash fish catches have no detectable trend up or down. Shrimp and lobster tend to decrease exponentially, and the trend of the crab catch is constant. This study raises questions about the ecological and economic sustainability of the current fisheries policy in Thailand.
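
The trend component of such models can be illustrated with an ordinary least-squares slope against the time index; a minimal sketch on a hypothetical declining catch series (the seasonal terms of the paper's models are omitted):

```python
def trend_slope(y):
    """Least-squares slope of y regressed on the time index 0..n-1."""
    n = len(y)
    t_mean = (n - 1) / 2
    y_mean = sum(y) / n
    num = sum((t - t_mean) * (yi - y_mean) for t, yi in enumerate(y))
    den = sum((t - t_mean) ** 2 for t in range(n))
    return num / den

# Hypothetical monthly catch totals (tonnes) with a visible decline
catches = [100, 96, 93, 91, 88, 85, 83, 80, 78, 75]
print(trend_slope(catches))  # negative: a decreasing trend
```

A negative slope corresponds to the downward catch trends reported for mackerel, other food fish and squid; a slope near zero to the "no detectable trend" case of trash fish.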

  10. Changing trends in blood transfusion: an analysis of 244,013 hospitalizations.

    Science.gov (United States)

    Shehata, Nadine; Forster, Alan; Lawrence, Nadine; Rothwell, Deanna M; Fergusson, Dean; Tinmouth, Alan; Wilson, Kumanan

    2014-10-01

    Identifying recipients of blood transfusion and the trends in transfusion are needed to properly identify and target clinical services in need of patient blood management strategies. We determined the proportion of admissions to each clinical service that received blood, the mean number of units utilized, and the 5-year trends in utilization. We used a large administrative database, a repository for three campuses of one university-affiliated hospital, and included all adults that were hospitalized from November 1, 2006, to June 2012. The data were analyzed as the proportion of admissions transfused and the mean number units transfused per admission. Of 244,013 hospitalizations, 38,265 received at least one transfusion (29,165 for red blood cells [RBCs], 6760 for plasma, and 5795 for platelets [PLTs]). Although there has been a gradual decrease in the mean number of RBCs transfused (percent change, -9.8%; p = 0.002), an increase in the proportion of admissions receiving RBCs (17.2% increase, p conservation strategies. © 2014 AABB.

  11. Robust estimation of the proportion of treatment effect explained by surrogate marker information.

    Science.gov (United States)

    Parast, Layla; McDermott, Mary M; Tian, Lu

    2016-05-10

    In randomized treatment studies where the primary outcome requires long follow-up of patients and/or expensive or invasive obtainment procedures, the availability of a surrogate marker that could be used to estimate the treatment effect and could potentially be observed earlier than the primary outcome would allow researchers to make conclusions regarding the treatment effect with less required follow-up time and resources. The Prentice criterion for a valid surrogate marker requires that a test for treatment effect on the surrogate marker also be a valid test for treatment effect on the primary outcome of interest. Based on this criterion, methods have been developed to define and estimate the proportion of treatment effect on the primary outcome that is explained by the treatment effect on the surrogate marker. These methods aim to identify useful statistical surrogates that capture a large proportion of the treatment effect. However, current methods to estimate this proportion usually require restrictive model assumptions that may not hold in practice and thus may lead to biased estimates of this quantity. In this paper, we propose a nonparametric procedure to estimate the proportion of treatment effect on the primary outcome that is explained by the treatment effect on a potential surrogate marker and extend this procedure to a setting with multiple surrogate markers. We compare our approach with previously proposed model-based approaches and propose a variance estimation procedure based on a perturbation-resampling method. Simulation studies demonstrate that the procedure performs well in finite samples and outperforms model-based procedures when the specified models are not correct. We illustrate our proposed procedure using a data set from a randomized study investigating a group-mediated cognitive behavioral intervention for peripheral artery disease participants. Copyright © 2015 John Wiley & Sons, Ltd.

  12. Variance computations for functional of absolute risk estimates.

    Science.gov (United States)

    Pfeiffer, R M; Petracci, E

    2011-07-01

    We present a simple influence function based approach to compute the variances of estimates of absolute risk and functions of absolute risk. We apply this approach to criteria that assess the impact of changes in the risk factor distribution on absolute risk for an individual and at the population level. As an illustration we use an absolute risk prediction model for breast cancer that includes modifiable risk factors in addition to standard breast cancer risk factors. Influence function based variance estimates for absolute risk and the criteria are compared to bootstrap variance estimates.
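
As a point of reference for the bootstrap comparison mentioned above, here is a generic bootstrap variance estimate for a simple statistic (the sample mean of simulated data, not the paper's absolute-risk model); the influence-function approach replaces the resampling loop with a closed-form sum of squared influence values:

```python
import random
import statistics

def bootstrap_variance(data, stat, n_boot=2000, seed=1):
    """Bootstrap estimate of the sampling variance of stat(data)."""
    rng = random.Random(seed)
    n = len(data)
    # Resample with replacement and recompute the statistic each time
    reps = [stat([data[rng.randrange(n)] for _ in range(n)])
            for _ in range(n_boot)]
    return statistics.pvariance(reps)

rng = random.Random(0)
data = [rng.gauss(0, 1) for _ in range(200)]
v = bootstrap_variance(data, statistics.mean)
print(v)  # should be close to pvariance(data) / len(data) for the mean
```

For the sample mean the bootstrap answer can be checked against the closed form s²/n, which is the kind of agreement the paper verifies for its influence-function variances.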

  13. 76 FR 78698 - Proposed Revocation of Permanent Variances

    Science.gov (United States)

    2011-12-19

    ... Administration (``OSHA'' or ``the Agency'') granted permanent variances to 24 companies engaged in the... DEPARTMENT OF LABOR Occupational Safety and Health Administration [Docket No. OSHA-2011-0054] Proposed Revocation of Permanent Variances AGENCY: Occupational Safety and Health Administration (OSHA...

  14. Diagnostic checking in linear processes with infinite variance

    OpenAIRE

    Krämer, Walter; Runde, Ralf

    1998-01-01

    We consider empirical autocorrelations of residuals from infinite variance autoregressive processes. Unlike the finite-variance case, it emerges that the limiting distribution, after suitable normalization, is not always more concentrated around zero when residuals rather than true innovations are employed.

  15. Weighted profile likelihood-based confidence interval for the difference between two proportions with paired binomial data.

    Science.gov (United States)

    Pradhan, Vivek; Saha, Krishna K; Banerjee, Tathagata; Evans, John C

    2014-07-30

    Inference on the difference between two binomial proportions in the paired binomial setting is often an important problem in many biomedical investigations. Tang et al. (2010, Statistics in Medicine) discussed six methods to construct confidence intervals (henceforth, we abbreviate it as CI) for the difference between two proportions in paired binomial setting using method of variance estimates recovery. In this article, we propose weighted profile likelihood-based CIs for the difference between proportions of a paired binomial distribution. However, instead of the usual likelihood, we use weighted likelihood that is essentially making adjustments to the cell frequencies of a 2 × 2 table in the spirit of Agresti and Min (2005, Statistics in Medicine). We then conduct numerical studies to compare the performances of the proposed CIs with that of Tang et al. and Agresti and Min in terms of coverage probabilities and expected lengths. Our numerical study clearly indicates that the weighted profile likelihood-based intervals and Jeffreys interval (cf. Tang et al.) are superior in terms of achieving the nominal level, and in terms of expected lengths, they are competitive. Finally, we illustrate the use of the proposed CIs with real-life examples. Copyright © 2014 John Wiley & Sons, Ltd.
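
A minimal sketch of the cell-adjustment idea credited above to Agresti and Min (2005): add a small pseudo-count to each cell of the paired 2×2 table, then apply a Wald interval for the difference of proportions. The pseudo-count of 0.5 and the example table are illustrative assumptions, not values from the paper:

```python
import math

def paired_diff_ci(a, b, c, d, adj=0.5):
    """Adjusted Wald 95% CI for p1 - p2 from a paired 2x2 table.

    a: success on both, b: success on test 1 only, c: success on
    test 2 only, d: failure on both.  The adj pseudo-count per cell
    follows the spirit of Agresti & Min (2005); adj=0 gives the
    plain Wald interval.
    """
    a, b, c, d = (x + adj for x in (a, b, c, d))
    n = a + b + c + d
    diff = (b - c) / n
    se = math.sqrt(max(b + c - (b - c) ** 2 / n, 0.0)) / n
    z = 1.959963984540054  # 97.5th percentile of N(0, 1)
    return max(diff - z * se, -1.0), min(diff + z * se, 1.0)

lo, hi = paired_diff_ci(40, 12, 5, 43)
print(lo, hi)
```

Only the discordant cells b and c drive the estimated difference, which is why paired-data intervals differ from the independent two-sample case.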

  16. RR-Interval variance of electrocardiogram for atrial fibrillation detection

    Science.gov (United States)

    Nuryani, N.; Solikhah, M.; Nugoho, A. S.; Afdala, A.; Anzihory, E.

    2016-11-01

    Atrial fibrillation is a serious heart problem originating from the upper chamber of the heart. The common indication of atrial fibrillation is irregularity of the R-peak-to-R-peak time interval, known for short as the RR interval. The irregularity can be represented using the variance or spread of RR intervals. This article presents a system to detect atrial fibrillation using variances. Using clinical data of patients with atrial fibrillation attacks, it is shown that the variances of electrocardiographic RR intervals are higher during atrial fibrillation than during normal rhythm. Utilizing a simple detection technique based on the variances of RR intervals, we find good performance in atrial fibrillation detection.
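
The detection idea above can be sketched as a sliding-window variance threshold over RR intervals; the window length, threshold and RR values below are illustrative assumptions, not the paper's clinical parameters:

```python
import statistics

def af_flags(rr, window=8, threshold=0.02):
    """Flag each window of RR intervals (seconds) whose variance
    exceeds the threshold, as a crude atrial-fibrillation indicator."""
    return [statistics.pvariance(rr[i:i + window]) > threshold
            for i in range(len(rr) - window + 1)]

# Hypothetical RR sequences: a regular rhythm vs an irregular (AF-like) one
regular = [0.80, 0.81, 0.79, 0.80, 0.82, 0.80, 0.79, 0.81]
irregular = [0.55, 1.10, 0.70, 0.95, 0.50, 1.20, 0.65, 1.00]
print(af_flags(regular), af_flags(irregular))
```

The regular sequence stays well below the threshold while the irregular one exceeds it, mirroring the variance contrast the abstract reports between normal rhythm and atrial fibrillation.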

  17. Component aging and reliability trends in Loviisa Nuclear Power Plant

    International Nuclear Information System (INIS)

    Jankala, K.E.; Vaurio, J.K.

    1989-01-01

    A plant-specific reliability data collection and analysis system has been developed at the Loviisa Nuclear Power Plant to perform tests for component aging and analyses of reliability trends. The system yields both mean values and uncertainty distribution information for the reliability parameters to be used in the PSA project underway and in living-PSA applications. Several different trend models are included in the reliability analysis system. Simple analytical expressions have been derived for the parameters of these models, and their variances have been obtained using the information matrix. This paper focuses on the details of the learning/aging models and the estimation of their parameters and statistical accuracies. Applications to the historical data of the Loviisa plant are presented. The results indicate both upward and downward trends in failure rates, as well as individuality between nominally identical components.

  18. Continuous-Time Mean-Variance Portfolio Selection under the CEV Process

    OpenAIRE

    Ma, Hui-qiang

    2014-01-01

    We consider a continuous-time mean-variance portfolio selection model when stock price follows the constant elasticity of variance (CEV) process. The aim of this paper is to derive an optimal portfolio strategy and the efficient frontier. The mean-variance portfolio selection problem is formulated as a linearly constrained convex program problem. By employing the Lagrange multiplier method and stochastic optimal control theory, we obtain the optimal portfolio strategy and mean-variance effici...

  19. Variance based OFDM frame synchronization

    Directory of Open Access Journals (Sweden)

    Z. Fedra

    2012-04-01

    The paper deals with a new frame synchronization scheme for OFDM systems and calculates the complexity of this scheme. The scheme is based on computing the variance of the detection window. The variance is computed at two delayed times, so a modified early-late loop is used for frame position detection. The proposed algorithm deals with different variants of OFDM parameters, including the guard interval and cyclic prefix, and has good properties regarding the choice of the algorithm's parameters, since the parameters may be chosen within a wide range without having a high influence on system performance. The functionality of the proposed algorithm has been verified in a development environment using universal software radio peripheral (USRP) hardware.

  20. Means and Variances without Calculus

    Science.gov (United States)

    Kinney, John J.

    2005-01-01

    This article gives a method of finding discrete approximations to continuous probability density functions and shows examples of its use, allowing students without calculus access to the calculation of means and variances.
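
    The idea can be sketched numerically: attach probability mass pdf(x)·dx to each point of a fine grid, renormalize, and use the discrete sum formulas for mean and variance. Shown here for the standard normal; the grid bounds and step are arbitrary illustrative choices, not values from the article.

```python
import numpy as np

dx = 0.001
x = np.arange(-8.0, 8.0, dx)
pdf = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)  # standard normal density
w = pdf * dx                                   # mass attached to each grid point
w /= w.sum()                                   # renormalize to a discrete distribution
mean = np.sum(w * x)                           # discrete mean, no calculus needed
var = np.sum(w * (x - mean) ** 2)              # discrete variance
```

    For a fine enough grid the discrete answers approach the continuous ones (here mean ≈ 0 and variance ≈ 1).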

  1. Beyond the Mean: Sensitivities of the Variance of Population Growth.

    Science.gov (United States)

    Trotter, Meredith V; Krishna-Kumar, Siddharth; Tuljapurkar, Shripad

    2013-03-01

    Populations in variable environments are described by both a mean growth rate and a variance of stochastic population growth. Increasing variance will increase the width of confidence bounds around estimates of population size, growth, probability of and time to quasi-extinction. However, traditional sensitivity analyses of stochastic matrix models only consider the sensitivity of the mean growth rate. We derive an exact method for calculating the sensitivity of the variance in population growth to changes in demographic parameters. Sensitivities of the variance also allow a new sensitivity calculation for the cumulative probability of quasi-extinction. We apply this new analysis tool to an empirical dataset on at-risk polar bears to demonstrate its utility in conservation biology. We find that in many cases a change in life history parameters will increase both the mean and variance of population growth of polar bears. This counterintuitive behaviour of the variance complicates predictions about overall population impacts of management interventions. Sensitivity calculations for cumulative extinction risk factor in changes to both mean and variance, providing a highly useful quantitative tool for conservation management. The mean stochastic growth rate and its sensitivities do not fully describe the dynamics of population growth. The use of variance sensitivities gives a more complete understanding of population dynamics and facilitates the calculation of new sensitivities for extinction processes.

  2. Temporal trends in pharmacology publications by pharmacy institutes: A deeper dig

    OpenAIRE

    Bhatt, Parloop Amit; Patel, Zarana

    2016-01-01

    Objective: Publications in Indian Journal of Pharmacology (IJP) are the face of contemporary pharmacology practices followed in health-care profession - a knowledge-based profession. It depicts trends in terms of quantity (proportions), quality, type (preclinical/clinical), thrust areas, etc., of pharmacology followed by biomedical community professions both nationally and internationally. This article aims to establish temporal trends in pharmacology research by pharmacy institutes in light ...

  3. Evaluation of Mean and Variance Integrals without Integration

    Science.gov (United States)

    Joarder, A. H.; Omar, M. H.

    2007-01-01

    The mean and variance of some continuous distributions, in particular the exponentially decreasing probability distribution and the normal distribution, are considered. Since the calculations involve integration by parts, many students do not feel comfortable with them. In this note, a technique is demonstrated for deriving mean and variance through differential…
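
    The differentiation trick can be illustrated for the exponential distribution (a standard derivation, included here as an example rather than quoted from the note): differentiating the normalization integral with respect to the parameter gives

```latex
\int_0^\infty e^{-\lambda x}\,dx = \frac{1}{\lambda}
\quad\Rightarrow\quad
\int_0^\infty x\,e^{-\lambda x}\,dx = \frac{1}{\lambda^{2}}
\quad\Rightarrow\quad
\int_0^\infty x^{2}e^{-\lambda x}\,dx = \frac{2}{\lambda^{3}},
```

    so for the density f(x) = λe^{−λx} one reads off E[X] = 1/λ and E[X²] = 2/λ², hence Var(X) = 2/λ² − 1/λ² = 1/λ², with no integration by parts.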

  4. Approximate zero-variance Monte Carlo estimation of Markovian unreliability

    International Nuclear Information System (INIS)

    Delcoux, J.L.; Labeau, P.E.; Devooght, J.

    1997-01-01

    Monte Carlo simulation has become an important tool for the estimation of reliability characteristics, since conventional numerical methods are no longer efficient when the size of the system to solve increases. However, evaluating by simulation the probability of occurrence of very rare events means playing a very large number of histories of the system, which leads to unacceptable computation times. Acceleration and variance reduction techniques have to be worked out. We show in this paper how to write the equations of Markovian reliability as a transport problem, and how the well-known zero-variance scheme can be adapted to this application. But such a method is always specific to the estimation of one quantity, while a Monte Carlo simulation allows one to perform simultaneously estimations of diverse quantities. Therefore, the estimation of one of them could be made more accurate while degrading at the same time the variance of other estimations. We propound here a method to reduce simultaneously the variance for several quantities, by using probability laws that would lead to zero variance in the estimation of a mean of these quantities. Just like the zero-variance one, the method we propound is impossible to perform exactly. However, we show that simple approximations of it may be very efficient. (author)
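
    The zero-variance principle can be seen in a toy setting (my illustration, not the authors' transport formulation): to estimate I = E_p[f(X)] with f ≥ 0, sampling from the density q*(x) ∝ f(x)p(x) makes every sample's weight f(x)p(x)/q*(x) equal to I exactly, so the estimator has zero variance.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Target: I = E[X] = 1 for X ~ Exp(1), i.e. p(x) = exp(-x) and f(x) = x.
x_p = rng.exponential(1.0, n)
est_plain = x_p.mean()                  # ordinary estimator, nonzero variance

# Zero-variance density: q*(x) = x * exp(-x), which is the Gamma(2, 1) law.
x_q = rng.gamma(2.0, 1.0, n)
f_times_p = x_q * np.exp(-x_q)          # f(x) * p(x) at each sample
q_star = x_q * np.exp(-x_q)             # q*(x); the ratio is identically I = 1
est_zv = np.mean(f_times_p / q_star)    # every sample contributes exactly 1
```

    In practice q* depends on the unknown answer, so — as the abstract notes — only approximations of it can be used, but even rough approximations can reduce the variance dramatically.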

  5. A Categorical Content Analysis of Highly Cited Literature Related to Trends and Issues in Special Education.

    Science.gov (United States)

    Arden, Sarah V; Pentimonti, Jill M; Cooray, Rochana; Jackson, Stephanie

    2017-07-01

    This investigation employs categorical content analysis processes as a mechanism to examine trends and issues in a sampling of highly cited (100+) literature in special education journals. The authors had two goals: (a) broadly identifying trends across publication type, content area, and methodology and (b) specifically identifying articles with disaggregated outcomes for students with learning disabilities (LD). Content analyses were conducted across highly cited (100+) articles published during a 20-year period (1992-2013) in a sample (n = 3) of journals focused primarily on LD, and in one broad, cross-categorical journal recognized for its impact in the field. Results indicated trends in the article type (i.e., commentary and position papers), content (i.e., reading and behavior), and methodology (i.e., small proportions of experimental and quasi-experimental designs). Results also revealed stability in the proportion of intervention research studies when compared to previous analyses and a decline in the proportion of those that disaggregated data specifically for students with LD.

  6. Continuous-Time Mean-Variance Portfolio Selection under the CEV Process

    Directory of Open Access Journals (Sweden)

    Hui-qiang Ma

    2014-01-01

    Full Text Available We consider a continuous-time mean-variance portfolio selection model when stock price follows the constant elasticity of variance (CEV process. The aim of this paper is to derive an optimal portfolio strategy and the efficient frontier. The mean-variance portfolio selection problem is formulated as a linearly constrained convex program problem. By employing the Lagrange multiplier method and stochastic optimal control theory, we obtain the optimal portfolio strategy and mean-variance efficient frontier analytically. The results show that the mean-variance efficient frontier is still a parabola in the mean-variance plane, and the optimal strategies depend not only on the total wealth but also on the stock price. Moreover, some numerical examples are given to analyze the sensitivity of the efficient frontier with respect to the elasticity parameter and to illustrate the results presented in this paper. The numerical results show that the price of risk decreases as the elasticity coefficient increases.
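
    For reference, one common parameterization of the CEV price dynamics the abstract refers to is (standard textbook notation, not quoted from the paper):

```latex
dS_t = \mu S_t\,dt + \sigma S_t^{\beta}\,dW_t ,
```

    which recovers geometric Brownian motion when β = 1; the sensitivity analysis described in the abstract varies the elasticity parameter entering this diffusion term.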

  7. Meta-analyses of the proportion of Japanese encephalitis virus infection in vectors and vertebrate hosts.

    Science.gov (United States)

    Oliveira, Ana R S; Cohnstaedt, Lee W; Strathe, Erin; Hernández, Luciana Etcheverry; McVey, D Scott; Piaggio, José; Cernicchiaro, Natalia

    2017-09-07

    Japanese encephalitis (JE) is a zoonosis in Southeast Asia vectored by mosquitoes infected with the Japanese encephalitis virus (JEV). Japanese encephalitis is considered an emerging exotic infectious disease with potential for introduction in currently JEV-free countries. Pigs and ardeid birds are reservoir hosts and play a major role in the transmission dynamics of the disease. The objective of the study was to quantitatively summarize the proportion of JEV infection in vectors and vertebrate hosts from data pertaining to observational studies obtained in a systematic review of the literature on vector and host competence for JEV, using meta-analyses. Data gathered in this study pertained to three outcomes: proportion of JEV infection in vectors, proportion of JEV infection in vertebrate hosts, and minimum infection rate (MIR) in vectors. Random-effects subgroup meta-analysis models were fitted by species (mosquito or vertebrate host species) to estimate pooled summary measures, as well as to compute the variance between studies. Meta-regression models were fitted to assess the association between different predictors and the outcomes of interest and to identify sources of heterogeneity among studies. Predictors included in all models were mosquito/vertebrate host species, diagnostic methods, mosquito capture methods, season, country/region, age category, and number of mosquitoes per pool. Mosquito species, diagnostic method, country, and capture method represented important sources of heterogeneity associated with the proportion of JEV infection; host species and region were considered sources of heterogeneity associated with the proportion of JEV infection in hosts; and diagnostic and mosquito capture methods were deemed important contributors of heterogeneity for the MIR outcome. Our findings provide reference pooled summary estimates of vector competence for JEV for some mosquito species, as well as of sources of variability for these outcomes.
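
    A random-effects pooled proportion of the kind described can be sketched with the DerSimonian–Laird estimator (a generic sketch; the paper's exact models, moderators, and any transformation of the proportions are not reproduced here, and the study counts below are invented for illustration):

```python
import numpy as np

def pooled_proportion_dl(events, totals):
    """DerSimonian-Laird random-effects pooled proportion (illustrative)."""
    x, n = np.asarray(events, float), np.asarray(totals, float)
    p = x / n
    v = p * (1 - p) / n                  # within-study (sampling) variance
    w = 1.0 / v
    p_fixed = np.sum(w * p) / np.sum(w)  # fixed-effect pooled proportion
    q = np.sum(w * (p - p_fixed) ** 2)   # Cochran's Q heterogeneity statistic
    k = len(p)
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_star = 1.0 / (v + tau2)            # random-effects weights
    pooled = np.sum(w_star * p) / np.sum(w_star)
    return pooled, tau2                  # pooled proportion, between-study variance

pooled, tau2 = pooled_proportion_dl([12, 30, 9], [100, 120, 60])
```

    The between-study variance tau² is the quantity that subgroup and meta-regression models then try to explain with predictors such as species or diagnostic method.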

  8. Variance in binary stellar population synthesis

    Science.gov (United States)

    Breivik, Katelyn; Larson, Shane L.

    2016-03-01

    In the years preceding LISA, Milky Way compact binary population simulations can be used to inform the science capabilities of the mission. Galactic population simulation efforts generally focus on high fidelity models that require extensive computational power to produce a single simulated population for each model. Each simulated population represents an incomplete sample of the functions governing compact binary evolution, thus introducing variance from one simulation to another. We present a rapid Monte Carlo population simulation technique that can simulate thousands of populations in less than a week, thus allowing a full exploration of the variance associated with a binary stellar evolution model.

  9. High Proportions of Multidrug-Resistant Acinetobacter spp. Isolates in a District in Western India: A Four-Year Antibiotic Susceptibility Study of Clinical Isolates

    Directory of Open Access Journals (Sweden)

    Ingvild Odsbu

    2018-01-01

    Full Text Available The purpose of the study was to determine the proportions of multidrug-resistant (MDR) Acinetobacter spp. isolates from the district of Nashik in Western India during the period 2011–2014. Antibacterial susceptibility testing of isolates from inpatients and outpatients was performed using the Kirby–Bauer disc diffusion method to determine inhibitory zone diameters. Proportions of non-susceptible isolates were calculated from the antibacterial susceptibility data. MDR was defined as an isolate being non-susceptible to at least one antibacterial agent in at least three antibacterial categories. The change over calendar time in the proportions of MDR isolates, extended-spectrum β-lactamase (ESBL)-producing isolates, and isolates non-susceptible to specific antibacterial categories was investigated by logistic regression. The proportions of MDR and ESBL-producing isolates ranged from 89.4% to 95.9% and from 87.9% to 94.0%, respectively. The proportions of isolates non-susceptible to aminoglycosides, carbapenems, antipseudomonal penicillins/β-lactamase inhibitors, cephalosporins, folate pathway inhibitors, or penicillins/β-lactamase inhibitors exceeded 77.5%. Proportions of fluoroquinolone and tetracycline non-susceptible isolates ranged from 65.3% to 83.3% and from 71.3% to 75.9%, respectively. No changes in trends were observed over time, except for a decreasing trend in fluoroquinolone non-susceptible isolates (OR = 0.75; 95% CI 0.62–0.91). Significantly higher proportions of non-susceptible, MDR, and ESBL-producing isolates were found among isolates from the respiratory system compared to isolates from all other specimen types (p < 0.05). High proportions of MDR Acinetobacter spp. isolates were observed in the period 2011–2014. Antimicrobial stewardship programmes are needed to prevent the emergence and spread of antibiotic resistance.

  10. Analogical proportions: another logical view

    Science.gov (United States)

    Prade, Henri; Richard, Gilles

    This paper investigates the logical formalization of a restricted form of analogical reasoning based on analogical proportions, i.e. statements of the form a is to b as c is to d. Starting from a naive set theoretic interpretation, we highlight the existence of two noticeable companion proportions: one states that a is to b the converse of what c is to d (reverse analogy), while the other called paralogical proportion expresses that what a and b have in common, c and d have it also. We identify the characteristic postulates of the three types of proportions and examine their consequences from an abstract viewpoint. We further study the properties of the set theoretic interpretation and of the Boolean logic interpretation, and we provide another light on the understanding of the role of permutations in the modeling of the three types of proportions. Finally, we address the use of these proportions as a basis for inference in a propositional setting, and relate it to more general schemes of analogical reasoning. The differences between analogy, reverse-analogy, and paralogy are still emphasized in a three-valued setting, which is also briefly presented.
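
    In the Boolean interpretation, the three proportions can be checked directly on truth values. The sketch below uses one standard logical reading of each (the paper's postulate systems are richer than this): analogy requires the changes from a to b and from c to d to match in both directions, reverse analogy swaps c and d, and paralogy requires a and b to share exactly what c and d share.

```python
def analogy(a: bool, b: bool, c: bool, d: bool) -> bool:
    """a is to b as c is to d (Boolean reading)."""
    return ((a and not b) == (c and not d)) and ((not a and b) == (not c and d))

def reverse_analogy(a: bool, b: bool, c: bool, d: bool) -> bool:
    """a is to b the converse of what c is to d: swap c and d."""
    return analogy(a, b, d, c)

def paralogy(a: bool, b: bool, c: bool, d: bool) -> bool:
    """What a and b have in common, c and d have it also."""
    return ((a and b) == (c and d)) and ((not a and not b) == (not c and not d))
```

    For example, (1, 0, 1, 0) is a valid analogy while (1, 0, 0, 1) is not, though the latter is a valid reverse analogy.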

  11. Female scarcity reduces women's marital ages and increases variance in men's marital ages.

    Science.gov (United States)

    Kruger, Daniel J; Fitzgerald, Carey J; Peterson, Tom

    2010-08-05

    When women are scarce in a population relative to men, they have greater bargaining power in romantic relationships and thus may be able to secure male commitment at earlier ages. Male motivation for long-term relationship commitment may also be higher, in conjunction with the motivation to secure a prospective partner before another male retains her. However, men may also need to acquire greater social status and resources to be considered marriageable. This could increase the variance in male marital age, as well as the average male marital age. We calculated the Operational Sex Ratio, and means, medians, and standard deviations in marital ages for women and men for the 50 largest Metropolitan Statistical Areas in the United States with 2000 U.S. Census data. As predicted, where women are scarce they marry earlier on average. However, there was no significant relationship with mean male marital ages. The variance in male marital age increased with higher female scarcity, contrasting with a non-significant inverse trend for female marital age variation. These findings advance the understanding of the relationship between the OSR and marital patterns. We believe that these results are best accounted for by sex specific attributes of reproductive value and associated mate selection criteria, demonstrating the power of an evolutionary framework for understanding human relationships and demographic patterns.

  12. Female Scarcity Reduces Women's Marital Ages and Increases Variance in Men's Marital Ages

    Directory of Open Access Journals (Sweden)

    Daniel J. Kruger

    2010-07-01

    Full Text Available When women are scarce in a population relative to men, they have greater bargaining power in romantic relationships and thus may be able to secure male commitment at earlier ages. Male motivation for long-term relationship commitment may also be higher, in conjunction with the motivation to secure a prospective partner before another male retains her. However, men may also need to acquire greater social status and resources to be considered marriageable. This could increase the variance in male marital age, as well as the average male marital age. We calculated the Operational Sex Ratio, and means, medians, and standard deviations in marital ages for women and men for the 50 largest Metropolitan Statistical Areas in the United States with 2000 U.S. Census data. As predicted, where women are scarce they marry earlier on average. However, there was no significant relationship with mean male marital ages. The variance in male marital age increased with higher female scarcity, contrasting with a non-significant inverse trend for female marital age variation. These findings advance the understanding of the relationship between the OSR and marital patterns. We believe that these results are best accounted for by sex specific attributes of reproductive value and associated mate selection criteria, demonstrating the power of an evolutionary framework for understanding human relationships and demographic patterns.

  13. Correction of stream quality trends for the effects of laboratory measurement bias

    Science.gov (United States)

    Alexander, Richard B.; Smith, Richard A.; Schwarz, Gregory E.

    1993-01-01

    We present a statistical model relating measurements of water quality to associated errors in laboratory methods. Estimation of the model allows us to correct trends in water quality for long-term and short-term variations in laboratory measurement errors. An illustration of the bias correction method for a large national set of stream water quality and quality assurance data shows that reductions in the bias of estimates of water quality trend slopes are achieved at the expense of increases in the variance of these estimates. Slight improvements occur in the precision of estimates of trend in bias by using correlative information on bias and water quality to estimate random variations in measurement bias. The results of this investigation stress the need for reliable, long-term quality assurance data and efficient statistical methods to assess the effects of measurement errors on the detection of water quality trends.

  14. A Mean-Variance Analysis of Arbitrage Portfolios

    Science.gov (United States)

    Fang, Shuhong

    2007-03-01

    Based on the careful analysis of the definition of arbitrage portfolio and its return, the author presents a mean-variance analysis of the return of arbitrage portfolios, which implies that Korkie and Turtle's results ( B. Korkie, H.J. Turtle, A mean-variance analysis of self-financing portfolios, Manage. Sci. 48 (2002) 427-443) are misleading. A practical example is given to show the difference between the arbitrage portfolio frontier and the usual portfolio frontier.

  15. Mean-Variance Optimization in Markov Decision Processes

    OpenAIRE

    Mannor, Shie; Tsitsiklis, John N.

    2011-01-01

    We consider finite horizon Markov decision processes under performance measures that involve both the mean and the variance of the cumulative reward. We show that either randomized or history-based policies can improve performance. We prove that the complexity of computing a policy that maximizes the mean reward under a variance constraint is NP-hard for some cases, and strongly NP-hard for others. We finally offer pseudo-polynomial exact and approximation algorithms.

  16. Capturing Option Anomalies with a Variance-Dependent Pricing Kernel

    DEFF Research Database (Denmark)

    Christoffersen, Peter; Heston, Steven; Jacobs, Kris

    2013-01-01

    We develop a GARCH option model with a new pricing kernel allowing for a variance premium. While the pricing kernel is monotonic in the stock return and in variance, its projection onto the stock return is nonmonotonic. A negative variance premium makes it U shaped. We present new semiparametric evidence to confirm this U-shaped relationship between the risk-neutral and physical probability densities. The new pricing kernel substantially improves our ability to reconcile the time-series properties of stock returns with the cross-section of option prices. It provides a unified explanation for the implied volatility puzzle, the overreaction of long-term options to changes in short-term variance, and the fat tails of the risk-neutral return distribution relative to the physical distribution.

  17. Gender Variance and Educational Psychology: Implications for Practice

    Science.gov (United States)

    Yavuz, Carrie

    2016-01-01

    The area of gender variance appears to be more visible in both the media and everyday life. Within educational psychology literature gender variance remains underrepresented. The positioning of educational psychologists working across the three levels of child and family, school or establishment and education authority/council, means that they are…

  18. A phenomenological SMA model for combined axial–torsional proportional/non-proportional loading conditions

    International Nuclear Information System (INIS)

    Bodaghi, M.; Damanpack, A.R.; Aghdam, M.M.; Shakeri, M.

    2013-01-01

    In this paper, a simple and robust phenomenological model for shape memory alloys (SMAs) is proposed to simulate main features of SMAs under uniaxial as well as biaxial combined axial–torsional proportional/non-proportional loadings. The constitutive model for polycrystalline SMAs is developed within the framework of continuum thermodynamics of irreversible processes. The model nominates the volume fractions of self-accommodated and oriented martensite as scalar internal variables and the preferred direction of oriented martensitic variants as a directional internal variable. An algorithm is introduced to develop explicit relationships for the thermo-mechanical behavior of SMAs under uniaxial and biaxial combined axial–torsional proportional/non-proportional loading conditions and also thermal loading. It is shown that the model is able to simulate main aspects of SMAs including self-accommodation, martensitic transformation, orientation and reorientation of martensite, shape memory effect, ferro-elasticity and pseudo-elasticity. A description of the time-discrete counterpart of the proposed SMA model is presented. Experimental results of uniaxial tension and biaxial combined tension–torsion non-proportional tests are simulated and a good qualitative correlation between numerical and experimental responses is achieved. Due to simplicity and accuracy, the model is expected to be used in future studies dealing with the analysis of SMA devices in which two stress components, one normal and one shear stress, are dominant.

  19. Demonstration of a zero-variance based scheme for variance reduction to a mini-core Monte Carlo calculation

    Energy Technology Data Exchange (ETDEWEB)

    Christoforou, Stavros, E-mail: stavros.christoforou@gmail.com [Kirinthou 17, 34100, Chalkida (Greece); Hoogenboom, J. Eduard, E-mail: j.e.hoogenboom@tudelft.nl [Department of Applied Sciences, Delft University of Technology (Netherlands)

    2011-07-01

    A zero-variance based scheme is implemented and tested in the MCNP5 Monte Carlo code. The scheme is applied to a mini-core reactor using the adjoint function obtained from a deterministic calculation for biasing the transport kernels. It is demonstrated that the variance of the k_eff estimate is halved compared to a standard criticality calculation. In addition, the biasing does not affect source distribution convergence of the system. However, since the code lacked optimisations for speed, we were not able to demonstrate an appropriate increase in the efficiency of the calculation, because of the higher CPU time cost. (author)

  20. Variance-in-Mean Effects of the Long Forward-Rate Slope

    DEFF Research Database (Denmark)

    Christiansen, Charlotte

    2005-01-01

    This paper contains an empirical analysis of the dependence of the long forward-rate slope on the long-rate variance. The long forward-rate slope and the long rate are described by a bivariate GARCH-in-mean model. In accordance with theory, a negative long-rate variance-in-mean effect for the long forward-rate slope is documented. Thus, the greater the long-rate variance, the steeper the long forward-rate curve slopes downward (the long forward-rate slope is negative). The variance-in-mean effect is both statistically and economically significant.

  1. Leprosy and gender in Brazil: trends in an endemic area of the Northeast region, 2001–2014

    Science.gov (United States)

    de Souza, Eliana Amorim; Ferreira, Anderson Fuentes; Boigny, Reagan Nzundu; Alencar, Carlos Henrique; Heukelbach, Jorg; Martins-Melo, Francisco Rogerlândio; Barbosa, Jaqueline Caracas; Ramos, Alberto Novaes

    2018-01-01

    ABSTRACT OBJECTIVE To analyze, stratified by gender, trends of new leprosy case detection rates in the general population and in children; of grade 2 disability; and of the proportion of multibacillary cases, in the state of Bahia, Brazil, from 2001 to 2014. METHODS A time series study based on leprosy data from the National Information System for Notifiable Diseases. The time trend analysis included Poisson regression models by inflection points (Joinpoint) stratified by gender. RESULTS There was a total of 40,054 new leprosy cases, with a downward trend of the overall detection rate (Average Annual Percent Change [AAPC] = -0.4, 95%CI -2.8–1.9) and a non-significant increase in children under 15 years (AAPC = 0.2, 95%CI -3.9–4.5). The proportion of grade 2 disability among new cases increased significantly (AAPC = 4.0, 95%CI 1.3–6.8), as did the proportion of multibacillary cases (AAPC = 2.2, 95%CI 0.1–4.3). Stratification by gender showed a downward trend of detection rates in females and no significant change in males; in females, there was a more pronounced upward trend in the proportions of multibacillary and grade 2 disability cases. CONCLUSIONS Leprosy is still highly endemic in the state of Bahia, with active transmission, late diagnosis, and a probable hidden endemic. There are different gender patterns, indicating the importance of early diagnosis and prompt treatment, specifically in males, without neglecting the situation among females. PMID:29489990

  2. Variance-based sensitivity indices for models with dependent inputs

    International Nuclear Information System (INIS)

    Mara, Thierry A.; Tarantola, Stefano

    2012-01-01

    Computational models are intensively used in engineering for risk analysis or prediction of future outcomes. Uncertainty and sensitivity analyses are of great help in these purposes. Although several methods exist to perform variance-based sensitivity analysis of model output with independent inputs, only a few are proposed in the literature in the case of dependent inputs. This is explained by the fact that the theoretical framework for the independent case is set and a univocal set of variance-based sensitivity indices is defined. In the present work, we propose a set of variance-based sensitivity indices to perform sensitivity analysis of models with dependent inputs. These measures allow us to distinguish between the mutual dependent contribution and the independent contribution of an input to the model response variance. Their definition relies on a specific orthogonalisation of the inputs and ANOVA-representations of the model output. In the applications, we show the interest of the new sensitivity indices in a model simplification setting. - Highlights: ► Uncertainty and sensitivity analyses are of great help in engineering. ► Several methods exist to perform variance-based sensitivity analysis of model output with independent inputs. ► We define a set of variance-based sensitivity indices for models with dependent inputs. ► Inputs mutual contributions are distinguished from their independent contributions. ► Analytical and computational tests are performed and discussed.
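
    For the independent-input baseline this abstract starts from, a first-order variance-based index S_i = Var(E[Y|X_i])/Var(Y) can be estimated with a standard pick-freeze scheme. The sketch below uses a toy linear model with known indices S1 = 0.2 and S2 = 0.8; it is my illustration of the classical independent case, not the authors' dependent-input construction.

```python
import numpy as np

rng = np.random.default_rng(1)

def model(x):
    # Toy model Y = X1 + 2*X2 with X1, X2 ~ U(0,1): Var(Y) = 5/12,
    # so the exact first-order indices are S1 = 0.2 and S2 = 0.8.
    return x[:, 0] + 2.0 * x[:, 1]

n, d = 100_000, 2
A = rng.uniform(size=(n, d))          # two independent sample matrices
B = rng.uniform(size=(n, d))
fA, fB = model(A), model(B)
var_y = np.var(np.concatenate([fA, fB]))

first_order = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]               # "pick" column i from B, "freeze" the rest
    first_order.append(np.mean(fB * (model(ABi) - fA)) / var_y)
```

    With dependent inputs this decomposition is no longer univocal, which is exactly the gap the proposed orthogonalisation-based indices address.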

  3. Simultaneous Monte Carlo zero-variance estimates of several correlated means

    International Nuclear Information System (INIS)

    Booth, T.E.

    1997-08-01

    Zero variance procedures have been in existence since the dawn of Monte Carlo. Previous works all treat the problem of zero variance solutions for a single tally. One often wants to get low variance solutions to more than one tally. When the sets of random walks needed for two tallies are similar, it is more efficient to do zero variance biasing for both tallies in the same Monte Carlo run, instead of two separate runs. The theory presented here correlates the random walks of particles by the similarity of their tallies. Particles with dissimilar tallies rapidly become uncorrelated whereas particles with similar tallies will stay correlated through most of their random walk. The theory herein should allow practitioners to make efficient use of zero-variance biasing procedures in practical problems.

  4. Variance swap payoffs, risk premia and extreme market conditions

    DEFF Research Database (Denmark)

    Rombouts, Jeroen V.K.; Stentoft, Lars; Violante, Francesco

    This paper estimates the Variance Risk Premium (VRP) directly from synthetic variance swap payoffs. Since variance swap payoffs are highly volatile, we extract the VRP by using signal extraction techniques based on a state-space representation of our model in combination with a simple economic....... The latter variables and the VRP generate different return predictability on the major US indices. A factor model is proposed to extract a market VRP which turns out to be priced when considering Fama and French portfolios....

  5. Estimating quadratic variation using realized variance

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Shephard, N.

    2002-01-01

    with a rather general SV model - which is a special case of the semimartingale model. Then QV is integrated variance and we can derive the asymptotic distribution of the RV and its rate of convergence. These results do not require us to specify a model for either the drift or volatility functions, although we have to impose some weak regularity assumptions. We illustrate the use of the limit theory on some exchange rate data and some stock data. We show that even with large values of M the RV is sometimes a quite noisy estimator of integrated variance. Copyright © 2002 John Wiley & Sons, Ltd.
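
    The realized variance estimator itself is just the sum of squared high-frequency returns over the interval. A minimal simulation in the constant-volatility special case (sampling frequency and the integrated variance level are illustrative choices of this sketch):

```python
import numpy as np

rng = np.random.default_rng(7)

M = 288                                    # e.g. 5-minute returns over 24 hours
iv = 0.02                                  # true (integrated) variance of the day
r = rng.normal(0.0, np.sqrt(iv / M), M)    # M intraday returns, constant volatility
rv = np.sum(r**2)                          # realized variance: estimates iv
```

    Even here the estimator is noisy: rv is distributed as iv times a chi-squared with M degrees of freedom over M, so its standard deviation shrinks only like iv·sqrt(2/M), consistent with the abstract's remark about large M.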

  6. Dynamics of Variance Risk Premia, Investors' Sentiment and Return Predictability

    DEFF Research Database (Denmark)

    Rombouts, Jerome V.K.; Stentoft, Lars; Violante, Francesco

    We develop a joint framework linking the physical variance and its risk neutral expectation implying variance risk premia that are persistent, appropriately reacting to changes in level and variability of the variance and naturally satisfying the sign constraint. Using option market data and real...... events and only marginally by the premium associated with normal price fluctuations....

  7. A note on minimum-variance theory and beyond

    Energy Technology Data Exchange (ETDEWEB)

    Feng Jianfeng [Department of Informatics, Sussex University, Brighton, BN1 9QH (United Kingdom); Tartaglia, Giangaetano [Physics Department, Rome University ' La Sapienza' , Rome 00185 (Italy); Tirozzi, Brunello [Physics Department, Rome University ' La Sapienza' , Rome 00185 (Italy)

    2004-04-30

    We revisit the minimum-variance theory proposed by Harris and Wolpert (1998 Nature 394 780-4), discuss the implications of the theory on modelling the firing patterns of single neurons and analytically find the optimal control signals, trajectories and velocities. Under the rate coding assumption, input control signals employed in the minimum-variance theory should be Fitts processes rather than Poisson processes. Only if information is coded by interspike intervals are Poisson processes in agreement with the inputs employed in the minimum-variance theory. For the integrate-and-fire model with Fitts process inputs, interspike intervals of efferent spike trains are very irregular. We introduce diffusion approximations to approximate neural models with renewal process inputs and present theoretical results on calculating moments of interspike intervals of the integrate-and-fire model. Results in Feng et al (2002 J. Phys. A: Math. Gen. 35 7287-304) are generalized. In conclusion, we present a complete picture of the minimum-variance theory, ranging from input control signals, to model outputs, and to its implications for modelling firing patterns of single neurons.

  8. A note on minimum-variance theory and beyond

    International Nuclear Information System (INIS)

    Feng Jianfeng; Tartaglia, Giangaetano; Tirozzi, Brunello

    2004-01-01

We revisit the minimum-variance theory proposed by Harris and Wolpert (1998 Nature 394 780-4), discuss the implications of the theory for modelling the firing patterns of single neurons, and analytically find the optimal control signals, trajectories and velocities. Under the rate coding assumption, the input control signals employed in the minimum-variance theory should be Fitts processes rather than Poisson processes. Only if information is coded by interspike intervals are Poisson processes in agreement with the inputs employed in the minimum-variance theory. For the integrate-and-fire model with Fitts process inputs, the interspike intervals of efferent spike trains are very irregular. We introduce diffusion approximations to approximate neural models with renewal process inputs and present theoretical results on calculating moments of interspike intervals of the integrate-and-fire model. Results in Feng et al (2002 J. Phys. A: Math. Gen. 35 7287-304) are generalized. In conclusion, we present a complete picture of the minimum-variance theory, ranging from input control signals to model outputs and its implications for modelling the firing patterns of single neurons.
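The contrast between Poisson and more irregular renewal inputs can be conveyed with a minimal, hypothetical simulation (not from the paper): a perfect integrate-and-fire unit that fires after accumulating N Poisson input spikes produces interspike intervals (ISIs) whose coefficient of variation is 1/√N, i.e. increasingly regular output as N grows.

```python
# Hypothetical sketch: ISI regularity of a perfect integrate-and-fire unit
# driven by Poisson input spikes. Each output ISI is a sum of N exponential
# waiting times (a Gamma variate), so its CV is 1/sqrt(N).
import math
import random

def isi_cv(n_inputs_to_threshold, rate, trials=20000, seed=1):
    """Empirical coefficient of variation of the output interspike intervals."""
    rng = random.Random(seed)
    isis = [sum(rng.expovariate(rate) for _ in range(n_inputs_to_threshold))
            for _ in range(trials)]
    m = sum(isis) / len(isis)
    var = sum((x - m) ** 2 for x in isis) / (len(isis) - 1)
    return math.sqrt(var) / m

for n in (1, 4, 25):
    print(n, round(isi_cv(n, rate=100.0), 3), "theory:", round(1 / math.sqrt(n), 3))
```

Under rate coding (many input spikes per output spike) Poisson inputs thus give nearly clock-like output, which is why irregular efferent trains point to a different input process.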

  9. Optimal allocation of trend following strategies

    Science.gov (United States)

    Grebenkov, Denis S.; Serror, Jeremy

    2015-09-01

We consider a portfolio allocation problem for trend following (TF) strategies on multiple correlated assets. Under simplifying assumptions of a Gaussian market and linear TF strategies, we derive analytical formulas for the mean and variance of the portfolio return. We then construct the optimal portfolio that maximizes risk-adjusted return by accounting for inter-asset correlations. The dynamic allocation problem for n assets is shown to be equivalent to the classical static allocation problem for n² virtual assets that include lead-lag corrections in the positions of TF strategies. The respective roles of asset auto-correlations and inter-asset correlations are investigated in depth for the two-asset case and a sector model. In contrast to the principle of diversification, which suggests treating uncorrelated assets, we show that inter-asset correlations allow one to estimate apparent trends more reliably and to adjust TF positions more efficiently. If properly accounted for, inter-asset correlations are not detrimental but beneficial for portfolio management, and can open new profit opportunities for trend followers. These concepts are illustrated using daily returns of three highly correlated futures markets: the E-mini S&P 500, Euro Stoxx 50 index, and the US 10-year T-note futures.
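The static mean-variance step behind such an allocation can be sketched generically (a textbook computation with made-up numbers, not the paper's formulas): maximizing the risk-adjusted return μᵀw − (λ/2)wᵀΣw gives w* = Σ⁻¹μ/λ, so inter-asset correlation directly reshapes the weights.

```python
# Generic mean-variance sketch (illustrative numbers): two trend-following
# strategies with expected returns mu and covariance Sigma; the weights
# maximizing mu'w - (lam/2) w'Sigma w are w* = Sigma^{-1} mu / lam.
def optimal_weights(mu, sigma, lam=1.0):
    (a, b), (c, d) = sigma              # 2x2 covariance matrix
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]   # explicit 2x2 inverse
    return [(row[0] * mu[0] + row[1] * mu[1]) / lam for row in inv]

mu = [0.04, 0.03]
print(optimal_weights(mu, [[0.04, 0.00], [0.00, 0.04]]))  # uncorrelated assets
print(optimal_weights(mu, [[0.04, 0.02], [0.02, 0.04]]))  # positively correlated
```

With positive correlation the second strategy's weight shrinks sharply, since part of its trend signal is already carried by the first asset.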

  10. Estimating High-Frequency Based (Co-) Variances: A Unified Approach

    DEFF Research Database (Denmark)

    Voev, Valeri; Nolte, Ingmar

    We propose a unified framework for estimating integrated variances and covariances based on simple OLS regressions, allowing for a general market microstructure noise specification. We show that our estimators can outperform, in terms of the root mean squared error criterion, the most recent...... and commonly applied estimators, such as the realized kernels of Barndorff-Nielsen, Hansen, Lunde & Shephard (2006), the two-scales realized variance of Zhang, Mykland & Aït-Sahalia (2005), the Hayashi & Yoshida (2005) covariance estimator, and the realized variance and covariance with the optimal sampling...

  11. Demonstration of a zero-variance based scheme for variance reduction to a mini-core Monte Carlo calculation

    International Nuclear Information System (INIS)

    Christoforou, Stavros; Hoogenboom, J. Eduard

    2011-01-01

A zero-variance based scheme is implemented and tested in the MCNP5 Monte Carlo code. The scheme is applied to a mini-core reactor using the adjoint function obtained from a deterministic calculation to bias the transport kernels. It is demonstrated that the variance of the k_eff estimate is halved compared to a standard criticality calculation. In addition, the biasing does not affect source distribution convergence of the system. However, since the code lacked speed optimisations, the higher CPU time cost meant that a corresponding increase in the efficiency of the calculation could not be demonstrated. (author)

  12. The Principle of Proportionality

    DEFF Research Database (Denmark)

    Bennedsen, Morten; Meisner Nielsen, Kasper

    2005-01-01

    Recent policy initiatives within the harmonization of European company laws have promoted a so-called "principle of proportionality" through proposals that regulate mechanisms opposing a proportional distribution of ownership and control. We scrutinize the foundation for these initiatives...... in relationship to the process of harmonization of the European capital markets.JEL classifications: G30, G32, G34 and G38Keywords: Ownership Structure, Dual Class Shares, Pyramids, EU companylaws....

  13. On Mean-Variance Analysis

    OpenAIRE

    Li, Yang; Pirvu, Traian A

    2011-01-01

This paper considers the mean-variance portfolio management problem. We examine portfolios which contain both primary and derivative securities. The challenge in this context is due to the portfolio's nonlinearities. The delta-gamma approximation is employed to overcome it; the optimization problem is thus reduced to a well-posed quadratic program. The methodology developed in this paper can also be applied to pricing and hedging in incomplete markets.

  14. Modelling volatility by variance decomposition

    DEFF Research Database (Denmark)

    Amado, Cristina; Teräsvirta, Timo

    In this paper, we propose two parametric alternatives to the standard GARCH model. They allow the variance of the model to have a smooth time-varying structure of either additive or multiplicative type. The suggested parameterisations describe both nonlinearity and structural change in the condit...

  15. A proposal simulated annealing algorithm for proportional parallel flow shops with separated setup times

    Directory of Open Access Journals (Sweden)

    Helio Yochihiro Fuchigami

    2014-08-01

Full Text Available This article addresses the problem of minimizing makespan on two parallel flow shops with proportional processing and setup times. The setup times are separated and sequence-independent. The parallel flow shop scheduling problem is a specific case of the well-known hybrid flow shop, characterized by a multistage production system with more than one machine working in parallel at each stage. This situation is very common in various kinds of companies, such as the chemical, electronics, automotive, pharmaceutical and food industries. This work proposes six Simulated Annealing algorithms, their perturbation schemes and an algorithm for initial sequence generation. The study can be classified as "applied research" regarding its nature, "exploratory" in its objectives and "experimental" in its procedures, with a "quantitative" approach. The proposed algorithms were effective in solution quality and computationally efficient. Results of Analysis of Variance (ANOVA) revealed no significant difference between the schemes in terms of makespan. The PS4 scheme, which moves a subsequence of jobs, is suggested for providing the best percentage of success. It was also found that there is a significant difference between the results of the algorithms for each value of the proportionality factor of the processing and setup times of the flow shops.
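As a rough, generic sketch of the approach (not one of the article's six algorithms, and ignoring the separated setup times), simulated annealing over job permutations with a subsequence-relocation perturbation in the spirit of the PS4 scheme might look like:

```python
# Illustrative simulated annealing for a permutation flow shop makespan.
# The neighborhood move relocates a short subsequence of jobs (PS4-like).
import math
import random

def makespan(seq, p1, p2):
    """Two-machine flow shop completion time for job order `seq`."""
    t1 = t2 = 0
    for j in seq:
        t1 += p1[j]
        t2 = max(t1, t2) + p2[j]
    return t2

def anneal(p1, p2, iters=5000, t0=10.0, alpha=0.999, seed=7):
    rng = random.Random(seed)
    n = len(p1)
    cur = list(range(n))
    rng.shuffle(cur)
    cur_c = makespan(cur, p1, p2)
    best, best_c = cur[:], cur_c
    temp = t0
    for _ in range(iters):
        i, L = rng.randrange(n), rng.randint(1, 3)
        cand = cur[:i] + cur[i + L:]          # remove a subsequence of jobs...
        k = rng.randrange(len(cand) + 1)
        cand[k:k] = cur[i:i + L]              # ...and reinsert it elsewhere
        c = makespan(cand, p1, p2)
        if c < cur_c or rng.random() < math.exp((cur_c - c) / temp):
            cur, cur_c = cand, c
            if c < best_c:
                best, best_c = cand[:], c
        temp *= alpha                          # geometric cooling
    return best, best_c

p1, p2 = [3, 5, 1, 6, 2], [4, 2, 5, 3, 6]
seq, cmax = anneal(p1, p2)
print(seq, cmax)
```

The acceptance rule takes any improving move and accepts worsening moves with probability exp(-Δ/temp), which shrinks as the temperature cools.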

  16. Smoking Trends and Disparities Among Black and Non-Hispanic Whites in California.

    Science.gov (United States)

    Sakuma, Kari-Lyn Kobayakawa; Felicitas, Jamie; Fagan, Pebbles; Gruder, Charles L; Blanco, Lyzette; Cappelli, Christopher; Trinidad, Dennis R

    2015-12-01

The current study examined disparities in smoking trends across Blacks and non-Hispanic Whites in California. Data from the 1996 to 2008 California Tobacco Survey were analyzed to examine trends in smoking behaviors and cessation across Blacks and non-Hispanic Whites. A decrease in overall ever and current smoking was observed for both Blacks and non-Hispanic Whites across the 12-year time period. A striking decrease in the proportion of heavy daily smokers for both Blacks and non-Hispanic Whites was observed. Proportions of light and intermittent smokers and moderate daily smokers displayed modest increases for Blacks, but large increases for non-Hispanic Whites. Increases in successful cessation were also observed for Blacks and, to a lesser extent, for non-Hispanic Whites. Smoking behavior and cessation trends across Blacks and non-Hispanic Whites were revealing. The decline in heavy daily and former smokers may demonstrate the success and effectiveness of tobacco control efforts in California. However, the increase in proportions of light and intermittent smokers and moderate daily smokers for both Blacks and non-Hispanic Whites demonstrates a need for tobacco cessation efforts focused on lighter smokers. © The Author 2015. Published by Oxford University Press on behalf of the Society for Research on Nicotine and Tobacco. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  17. Genetic Variance in Homophobia: Evidence from Self- and Peer Reports.

    Science.gov (United States)

    Zapko-Willmes, Alexandra; Kandler, Christian

    2018-01-01

The present twin study combined self- and peer assessments of twins' general homophobia targeting gay men in order to replicate previous behavior genetic findings across different rater perspectives and to disentangle self-rater-specific variance from common variance in self- and peer-reported homophobia (i.e., rater-consistent variance). We hypothesized rater-consistent variance in homophobia to be attributable to genetic and nonshared environmental effects, and self-rater-specific variance to be partially accounted for by genetic influences. A sample of 869 twins and 1329 peer raters completed a seven-item scale containing cognitive, affective, and discriminatory homophobic tendencies. After correction for age and sex differences, we found most of the genetic contributions (62%) and significant nonshared environmental contributions (16%) to individual differences in self-reports on homophobia to be also reflected in peer-reported homophobia. A significant genetic component, however, was self-report-specific (38%), suggesting that self-assessments alone produce inflated heritability estimates to some degree. Different explanations are discussed.

  18. Decomposition of Variance for Spatial Cox Processes.

    Science.gov (United States)

    Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus

    2013-03-01

Spatial Cox point processes provide a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models with additive or log-linear random intensity functions. We moreover consider a new and flexible class of pair correlation function models given in terms of normal variance mixture covariance functions. The proposed methodology is applied to point pattern data sets of locations of tropical rain forest trees.

  19. Grammatical and lexical variance in English

    CERN Document Server

    Quirk, Randolph

    2014-01-01

    Written by one of Britain's most distinguished linguists, this book is concerned with the phenomenon of variance in English grammar and vocabulary across regional, social, stylistic and temporal space.

  20. Variance decomposition in stochastic simulators.

    Science.gov (United States)

    Le Maître, O P; Knio, O M; Moraes, A

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
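The Sobol-Hoeffding idea of attributing output variance to individual inputs can be illustrated generically (a toy pick-freeze estimator on a closed-form model, not the paper's reformulation of reaction channels):

```python
# Toy first-order Sobol' indices by pick-freeze sampling. For the model
# Y = X1 + 2*X2 with independent standard normal inputs, Var(Y) = 5 and
# the exact first-order indices are 1/5 and 4/5.
import random

def sobol_first_order(model, dim, n=100000, seed=3):
    rng = random.Random(seed)
    A = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n)]
    B = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n)]
    yA = [model(x) for x in A]
    mean = sum(yA) / n
    var = sum((y - mean) ** 2 for y in yA) / n
    indices = []
    for i in range(dim):
        # freeze coordinate i at its A-value, resample the rest from B
        yC = [model(B[k][:i] + [A[k][i]] + B[k][i + 1:]) for k in range(n)]
        cov = sum(yA[k] * yC[k] for k in range(n)) / n - mean ** 2
        indices.append(cov / var)
    return indices

print(sobol_first_order(lambda x: x[0] + 2 * x[1], 2))  # near [0.2, 0.8]
```

Each index is the share of output variance explained by one input on its own; the shortfall of their sum from 1 measures interaction effects.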

  1. Variance decomposition in stochastic simulators

    Science.gov (United States)

    Le Maître, O. P.; Knio, O. M.; Moraes, A.

    2015-06-01

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  2. Variance decomposition in stochastic simulators

    Energy Technology Data Exchange (ETDEWEB)

    Le Maître, O. P., E-mail: olm@limsi.fr [LIMSI-CNRS, UPR 3251, Orsay (France); Knio, O. M., E-mail: knio@duke.edu [Department of Mechanical Engineering and Materials Science, Duke University, Durham, North Carolina 27708 (United States); Moraes, A., E-mail: alvaro.moraesgutierrez@kaust.edu.sa [King Abdullah University of Science and Technology, Thuwal (Saudi Arabia)

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  3. Variance-based Salt Body Reconstruction

    KAUST Repository

    Ovcharenko, Oleg

    2017-05-26

Seismic inversions of salt bodies are challenging when updating velocity models based on Born approximation-inspired gradient methods. We propose a variance-based method for velocity model reconstruction in regions complicated by massive salt bodies. The novel idea lies in retrieving useful information from simultaneous updates corresponding to different single frequencies. Instead of the commonly used averaging of single-iteration monofrequency gradients, our algorithm iteratively reconstructs salt bodies in an outer loop based on updates from a set of multiple frequencies after a few iterations of full-waveform inversion. The variance among these updates is used to identify areas where considerable cycle-skipping occurs. In such areas, we update velocities by interpolating maximum velocities within a certain region. The result of several recursive interpolations is later used as a new starting model to improve results of conventional full-waveform inversion. An application on part of the BP 2004 model highlights the evolution of the proposed approach and demonstrates its effectiveness.

  4. Variance decomposition in stochastic simulators

    KAUST Repository

Le Maître, O. P.; Knio, O. M.; Moraes, Alvaro

    2015-01-01

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  5. Statistical analysis of strait time index and a simple model for trend and trend reversal

    Science.gov (United States)

    Chen, Kan; Jayaprakash, C.

    2003-06-01

    We analyze the daily closing prices of the Strait Time Index (STI) as well as the individual stocks traded in Singapore's stock market from 1988 to 2001. We find that the Hurst exponent is approximately 0.6 for both the STI and individual stocks, while the normal correlation functions show the random walk exponent of 0.5. We also investigate the conditional average of the price change in an interval of length T given the price change in the previous interval. We find strong correlations for price changes larger than a threshold value proportional to T; this indicates that there is no uniform crossover to Gaussian behavior. A simple model based on short-time trend and trend reversal is constructed. We show that the model exhibits statistical properties and market swings similar to those of the real market.
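A Hurst exponent of the kind reported here can be estimated with a simple aggregated-variance sketch (a hypothetical helper, not the authors' code): the variance of m-step price changes scales as m^(2H), so H is half the slope of log Var against log m, and an uncorrelated random walk gives H ≈ 0.5.

```python
# Aggregated-variance estimate of the Hurst exponent of a price series.
import math
import random

def hurst(prices, scales=(1, 2, 4, 8, 16)):
    xs, ys = [], []
    for m in scales:
        rets = [prices[i + m] - prices[i] for i in range(0, len(prices) - m, m)]
        mu = sum(rets) / len(rets)
        var = sum((r - mu) ** 2 for r in rets) / len(rets)
        xs.append(math.log(m)); ys.append(math.log(var))
    # least-squares slope of log Var(m) against log m
    n, sx, sy = len(xs), sum(xs), sum(ys)
    slope = (n * sum(x * y for x, y in zip(xs, ys)) - sx * sy) / \
            (n * sum(x * x for x in xs) - sx ** 2)
    return slope / 2

rng = random.Random(0)
walk = [0.0]
for _ in range(20000):
    walk.append(walk[-1] + rng.gauss(0, 1))
print(round(hurst(walk), 2))  # close to 0.5 for a plain random walk
```

A persistent (trending) series would push the estimate above 0.5, as found for the STI.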

  6. Forecast and analysis of the ratio of electric energy to terminal energy consumption for global energy internet

    Science.gov (United States)

    Wang, Wei; Zhong, Ming; Cheng, Ling; Jin, Lu; Shen, Si

    2018-02-01

Against the background of building a global energy internet, forecasting and analysing the ratio of electric energy to terminal energy consumption has both theoretical and practical significance. This paper first analyses the factors influencing the ratio of electric energy to terminal energy, and then uses a combination method to forecast and analyse the global proportion of electric energy. A cointegration model for the proportion of electric energy is constructed using influencing factors such as the electricity price index, GDP, economic structure, energy use efficiency and total population. Finally, a forecast of the proportion of electric energy is obtained from a combination-forecasting model based on the multiple linear regression, trend analysis, and variance-covariance methods. The forecast describes the development trend of the proportion of electric energy over 2017-2050, and the proportion in 2050 is analysed in detail using scenario analysis.
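A minimal sketch of the variance-covariance combination step (simplified to ignore covariances between the individual forecasts; all numbers are made up): each method's forecast is weighted in inverse proportion to its historical error variance, so more reliable methods count more.

```python
# Inverse-variance forecast combination (diagonal variance-covariance case).
def combine(forecasts, error_vars):
    w = [1.0 / v for v in error_vars]               # precision weights
    total = sum(w)
    return sum(f * wi for f, wi in zip(forecasts, w)) / total

# e.g. regression, trend-analysis and a third forecast of the electricity share
print(combine([0.32, 0.36, 0.30], [0.0004, 0.0016, 0.0025]))
```

The combined value sits closest to the forecast with the smallest error variance; with full covariances one would instead solve for the minimum-variance weight vector.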

  7. Sport and Sex-Specific Reporting Trends in the Epidemiology of Concussions Sustained by High School Athletes.

    Science.gov (United States)

    Schallmo, Michael S; Weiner, Joseph A; Hsu, Wellington K

    2017-08-02

Approximately 300,000 U.S. adolescents sustain concussions annually while participating in organized athletics. This study aimed to track sex- and sport-specific trends among high school sports-related concussions over time, to identify whether a particular sport predisposes athletes to a higher risk, and to assess whether traumatic brain injury law enactments have been successful in improving recognition. Injury data for academic years 2005 to 2014 were collected from annual reports generated by High School RIO (Reporting Information Online). The relative proportions of total estimated concussions to total estimated injuries were compared using an injury proportion ratio. The concussion rate was defined as the number of concussions per 10,000 athlete exposures (1 athlete participating in 1 practice or competition), with rates compared using a rate ratio. To evaluate the impact of legislation on sports-related concussions in this population, trends in concussion rates and proportions were analyzed before enactment (academic years 2005-2009) and after enactment (academic years 2010-2014). Between 2005-2006 and 2014-2015, a significant increase was seen in the overall number of reported concussions for all sports combined, in the overall concussion rate (rate ratio, 2.30 [95% confidence interval, 2.04 to 2.59]), and in the overall proportion of concussions (injury proportion ratio, 2.68 [95% confidence interval, 2.66 to 2.70]). Based on the injury proportion ratio, during the 2014-2015 academic year, concussions were more common in girls' soccer than in any other sport. Concussion prevention and recognition measures continue to be emphasized in high school contact sports. The data in our study suggest that significant increases in the overall rate and proportion of reported concussions during the past decade could have been affected by traumatic brain injury legislation.
To our knowledge, this is the first study to show that girls' soccer players may have an even greater risk of sustaining a concussion

  8. Minimum variance Monte Carlo importance sampling with parametric dependence

    International Nuclear Information System (INIS)

    Ragheb, M.M.H.; Halton, J.; Maynard, C.W.

    1981-01-01

An approach for Monte Carlo importance sampling with parametric dependence is proposed. It depends upon obtaining, by proper weighting over a single stage, the overall functional dependence of the variance on the importance function parameter over a broad range of its values. Results corresponding to minimum variance are adopted and other results rejected. Numerical calculations for the estimation of integrals are compared to crude Monte Carlo. The results explain the occurrence of effective biases (even though the theoretical bias is zero) and infinite variances which arise in calculations involving severe biasing and a moderate number of histories. Extension to particle transport applications is briefly discussed. The approach constitutes an extension of a theory on the application of Monte Carlo to the calculation of functional dependences, introduced by Frolov and Chentsov, to biasing or importance sampling calculations; and is a generalization which avoids nonconvergence to the optimal values that occurs in some cases of a multistage method for variance reduction introduced by Spanier. (orig.)
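The flavor of the method can be conveyed with a simplified toy (pilot runs per parameter value rather than the paper's single-stage weighting): estimate I = ∫₀¹ eˣ dx by importance sampling from an exponentially tilted family p_t(x) = t e^(tx)/(e^t − 1), scan the parameter t, and keep the value with the smallest sample variance.

```python
# Toy parametric importance sampling: scan the tilt parameter t and pick
# the minimum-variance estimator of I = e - 1. At t = 1 the density is
# proportional to the integrand e^x, so the weighted samples are constant
# and the variance collapses (the zero-variance choice of parameter).
import math
import random

def is_estimate(t, n, rng):
    z = (math.exp(t) - 1) / t                        # normalizer of p_t
    vals = []
    for _ in range(n):
        u = rng.random()
        x = math.log(1 + u * (math.exp(t) - 1)) / t  # inverse-CDF sample
        vals.append(math.exp(x) * z * math.exp(-t * x))   # f(x) / p_t(x)
    m = sum(vals) / n
    var = sum((v - m) ** 2 for v in vals) / (n - 1)
    return m, var

rng = random.Random(5)
results = [(t,) + is_estimate(t, 4000, rng) for t in (0.25, 0.5, 1.0, 2.0)]
best = min(results, key=lambda r: r[2])
print(best)  # t = 1.0 wins, with estimate e - 1 and (numerically) zero variance
```

This also hints at the pathology the abstract mentions: a badly chosen parameter inflates the weights and can make the empirical variance estimate unreliable.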

  9. Host nutrition alters the variance in parasite transmission potential.

    Science.gov (United States)

    Vale, Pedro F; Choisy, Marc; Little, Tom J

    2013-04-23

    The environmental conditions experienced by hosts are known to affect their mean parasite transmission potential. How different conditions may affect the variance of transmission potential has received less attention, but is an important question for disease management, especially if specific ecological contexts are more likely to foster a few extremely infectious hosts. Using the obligate-killing bacterium Pasteuria ramosa and its crustacean host Daphnia magna, we analysed how host nutrition affected the variance of individual parasite loads, and, therefore, transmission potential. Under low food, individual parasite loads showed similar mean and variance, following a Poisson distribution. By contrast, among well-nourished hosts, parasite loads were right-skewed and overdispersed, following a negative binomial distribution. Abundant food may, therefore, yield individuals causing potentially more transmission than the population average. Measuring both the mean and variance of individual parasite loads in controlled experimental infections may offer a useful way of revealing risk factors for potential highly infectious hosts.
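The Poisson-versus-negative-binomial contrast can be sketched numerically (a generic illustration with made-up parameters): the dispersion index (variance/mean) is ≈1 for Poisson loads but well above 1 for a Gamma-mixed Poisson, i.e. negative-binomial, distribution of parasite loads.

```python
# Generic sketch: dispersion index of simulated parasite loads.
import math
import random

rng = random.Random(2)

def poisson(lam):
    """Knuth's multiplicative sampler (fine for moderate lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def dispersion(xs):
    m = sum(xs) / len(xs)
    var = sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    return var / m

pois = [poisson(10.0) for _ in range(5000)]                          # "low food"
negbin = [poisson(rng.gammavariate(2.0, 5.0)) for _ in range(5000)]  # "high food"
print(round(dispersion(pois), 2), round(dispersion(negbin), 2))
```

Both samples have mean load near 10, but only the Gamma-mixed loads are overdispersed, mimicking a few extremely infectious, well-nourished hosts.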

  10. Evaluating Middle Years Students' Proportional Reasoning

    Science.gov (United States)

    Hilton, Annette; Dole, Shelley; Hilton, Geoff; Goos, Merrilyn; O'Brien, Mia

    2012-01-01

    Proportional reasoning is a key aspect of numeracy that is not always developed naturally by students. Understanding the types of proportional reasoning that students apply to different problem types is a useful first step to identifying ways to support teachers and students to develop proportional reasoning in the classroom. This paper describes…

  11. Mean-variance portfolio optimization by using time series approaches based on logarithmic utility function

    Science.gov (United States)

    Soeryana, E.; Fadhlina, N.; Sukono; Rusyaman, E.; Supian, S.

    2017-01-01

Investors in stocks also face risk, since daily stock prices fluctuate. To minimize the level of risk, investors usually form an investment portfolio; a portfolio consisting of several stocks is intended to obtain the optimal composition of the investment. This paper discusses Mean-Variance portfolio optimization for stocks with non-constant mean and volatility, based on a logarithmic utility function. The non-constant mean is analysed using Autoregressive Moving Average (ARMA) models, while the non-constant volatility is analysed using Generalized Autoregressive Conditional Heteroscedasticity (GARCH) models. The optimization is performed using the Lagrangian multiplier technique. As a numerical illustration, the method is applied to several Islamic stocks in Indonesia. The expected result is the investment proportion for each Islamic stock analysed.
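The Lagrangian step can be sketched in its simplest static form (a standard closed-form result with illustrative numbers; the paper plugs ARMA/GARCH forecasts of the mean and variance into this kind of problem): minimizing wᵀΣw subject to the budget constraint 1ᵀw = 1 gives w* = Σ⁻¹1/(1ᵀΣ⁻¹1).

```python
# Minimum-variance weights for two stocks via the Lagrange-multiplier
# closed form; the covariance numbers are illustrative.
def min_variance_weights(sigma):
    (a, b), (c, d) = sigma
    det = a * d - b * c
    s1 = (d - b) / det        # first component of Sigma^{-1} @ [1, 1]
    s2 = (a - c) / det        # second component of Sigma^{-1} @ [1, 1]
    total = s1 + s2
    return [s1 / total, s2 / total]

w = min_variance_weights([[0.09, 0.01], [0.01, 0.04]])
print([round(x, 3) for x in w])  # more weight goes to the lower-variance stock
```

In the time-varying setting, Σ would be refreshed each period from the GARCH volatility forecasts before recomputing the weights.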

  12. Exploring variance in residential electricity consumption: Household features and building properties

    International Nuclear Information System (INIS)

    Bartusch, Cajsa; Odlare, Monica; Wallin, Fredrik; Wester, Lars

    2012-01-01

Highlights: ► Statistical analyses of variance are of considerable value in identifying key indicators for policy update. ► Variance in residential electricity use is partly explained by household features. ► Variance in residential electricity use is partly explained by building properties. ► Household behavior has a profound impact on individual electricity use. -- Abstract: Improved means of controlling electricity consumption play an important part in boosting energy efficiency in the Swedish power market. Developing policy instruments to that end requires more in-depth statistics on electricity use in the residential sector, among other things. The aim of the study has accordingly been to assess the extent of variance in annual electricity consumption in single-family homes, as well as to estimate the impact of household features and building properties in this respect, using independent-samples t-tests and one-way as well as univariate independent-samples analyses of variance. Statistically significant variances associated with geographic area, heating system, number of family members, family composition, year of construction, electric water heater and electric underfloor heating have been established. The overall result of the analyses is nevertheless that variance in residential electricity consumption cannot be fully explained by independent variables related to household and building characteristics alone. As for the methodological approach, the results further suggest that methods for statistical analysis of variance are of considerable value in identifying key indicators for policy update and development.

  13. Discussion on variance reduction technique for shielding

    Energy Technology Data Exchange (ETDEWEB)

    Maekawa, Fujio [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1998-03-01

As part of the engineering design activity of the International Thermonuclear Experimental Reactor (ITER), a shielding experiment on type 316 stainless steel (SS316) and on the compound system of SS316 and water was carried out using the D-T neutron source of FNS at the Japan Atomic Energy Research Institute. In these analyses, however, enormous working time and computing time were required to determine the Weight Window parameters, and the variance reduction by the Weight Window method of the MCNP code proved limited and complicated to carry out. To avoid this difficulty, the effectiveness of variance reduction by the cell importance method was investigated. The calculation conditions for all cases are shown. As the results, the distribution of the fractional standard deviation (FSD) of the neutron and gamma-ray fluxes along the shield depth is reported. There is an optimal importance change: when the importance is increased at the same rate as the attenuation of the neutron or gamma-ray flux, optimal variance reduction is achieved. (K.I.)

  14. Capturing option anomalies with a variance-dependent pricing kernel

    NARCIS (Netherlands)

    Christoffersen, P.; Heston, S.; Jacobs, K.

    2013-01-01

    We develop a GARCH option model with a variance premium by combining the Heston-Nandi (2000) dynamic with a new pricing kernel that nests Rubinstein (1976) and Brennan (1979). While the pricing kernel is monotonic in the stock return and in variance, its projection onto the stock return is

  15. 29 CFR 1904.38 - Variances from the recordkeeping rule.

    Science.gov (United States)

    2010-07-01

    ..., DEPARTMENT OF LABOR RECORDING AND REPORTING OCCUPATIONAL INJURIES AND ILLNESSES Other OSHA Injury and Illness... he or she finds appropriate. (iv) If the Assistant Secretary grants your variance petition, OSHA will... Secretary is reviewing your variance petition. (4) If I have already been cited by OSHA for not following...

  16. Optical fusions and proportional syntheses

    Science.gov (United States)

    Albert-Vanel, Michel

    2002-06-01

A tragic error is being made in the literature on color when dealing with optical fusions. They are still considered to be additive in nature, whereas experience shows somewhat different results. The goal of this presentation is to show that fusions are in fact 'proportional' in nature, tending to be additive or subtractive depending on each individual case. Pointillist paintings in the manner of Seurat, or the spinning-disc experiment, can highlight this intermediate sector of the proportional. Let us therefore examine more closely what in fact occurs, by reviewing additive, subtractive and proportional syntheses.

  17. Analysis of ulnar variance as a risk factor for developing scaphoid nonunion.

    Science.gov (United States)

    Lirola-Palmero, S; Salvà-Coll, G; Terrades-Cladera, F J

    2015-01-01

    Ulnar variance may be a risk factor for developing scaphoid nonunion. A review was made of the posteroanterior wrist radiographs of 95 patients who were diagnosed with scaphoid fracture. All fractures with displacement of less than 1 mm treated conservatively were included, and ulnar variance was measured on standard posteroanterior wrist radiographs in all 95 patients. Eighteen patients (19%) developed scaphoid nonunion, with a mean ulnar variance of -1.34 (±0.85) mm (CI -2.25 to 0.41). Seventy-seven patients (81%) healed correctly, with a mean ulnar variance of -0.04 (±1.85) mm (CI -0.46 to 0.38). A significant difference was observed in the distribution of ulnar variance between patients with ulnar variance less than -1 mm and those with ulnar variance greater than -1 mm. Patients with ulnar variance less than -1 mm had a greater risk of developing scaphoid nonunion, OR 4.58 (CI 1.51 to 13.89), p<.007. Copyright © 2014 SECOT. Published by Elsevier Espana. All rights reserved.

  18. Decomposition of variance in terms of conditional means

    Directory of Open Access Journals (Sweden)

    Alessandro Figà Talamanca

    2013-05-01

    Full Text Available Two different sets of data are used to test an apparently new approach to the analysis of the variance of a numerical variable which depends on qualitative variables. We suggest that this approach be used to complement other existing techniques to study the interdependence of the variables involved. According to our method, the variance is expressed as a sum of orthogonal components, obtained as differences of conditional means, with respect to the qualitative characters. The resulting expression for the variance depends on the ordering in which the characters are considered. We suggest an algorithm which leads to an ordering which is deemed natural. The first set of data concerns the score achieved by a population of students on an entrance examination based on a multiple choice test with 30 questions. In this case the qualitative characters are dyadic and correspond to correct or incorrect answer to each question. The second set of data concerns the delay to obtain the degree for a population of graduates of Italian universities. The variance in this case is analyzed with respect to a set of seven specific qualitative characters of the population studied (gender, previous education, working condition, parent's educational level, field of study, etc.).
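
    For a single qualitative character, the orthogonal split the abstract describes reduces to the classical between-/within-group decomposition, sketched below on toy data (the paper's ordering algorithm for multiple characters is not reproduced).

```python
# Illustrative sketch (toy data, not the paper's algorithm): for a single
# qualitative character x, the variance of y splits into a between-group
# term, Var(E[y|x]), plus a within-group term, E[Var(y|x)].
def variance_decomposition(pairs):
    """Population-variance split of y with respect to a qualitative x."""
    n = len(pairs)
    groups = {}
    for x, y in pairs:
        groups.setdefault(x, []).append(y)
    grand_mean = sum(y for _, y in pairs) / n
    total = sum((y - grand_mean) ** 2 for _, y in pairs) / n
    between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                  for g in groups.values()) / n
    within = sum(sum((y - sum(g) / len(g)) ** 2 for y in g)
                 for g in groups.values()) / n
    return total, between, within

# x = incorrect (0) / correct (1) answer, y = total score (toy values)
data = [(0, 10), (0, 12), (1, 20), (1, 24), (1, 22)]
total, between, within = variance_decomposition(data)
```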

  19. 42 CFR 456.522 - Content of request for variance.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Content of request for variance. 456.522 Section 456.522 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN... perform UR within the time requirements for which the variance is requested and its good faith efforts to...

  20. On the Endogeneity of the Mean-Variance Efficient Frontier.

    Science.gov (United States)

    Somerville, R. A.; O'Connell, Paul G. J.

    2002-01-01

    Explains that the endogeneity of the efficient frontier in the mean-variance model of portfolio selection is commonly obscured in portfolio selection literature and in widely used textbooks. Demonstrates endogeneity and discusses the impact of parameter changes on the mean-variance efficient frontier and on the beta coefficients of individual…

  1. Assessment of ulnar variance: a radiological investigation in a Dutch population

    Energy Technology Data Exchange (ETDEWEB)

    Schuurman, A.H. [Dept. of Plastic, Reconstructive and Hand Surgery, University Medical Centre, Utrecht (Netherlands); Dept. of Plastic Surgery, University Medical Centre, Utrecht (Netherlands); Maas, M.; Dijkstra, P.F. [Dept. of Radiology, Univ. of Amsterdam (Netherlands); Kauer, J.M.G. [Dept. of Anatomy and Embryology, Univ. of Nijmegen (Netherlands)

    2001-11-01

    Objective: A radiological study was performed to evaluate ulnar variance in 68 Dutch patients using an electronic digitizer compared with Palmer's concentric circle method. Using the digitizer method only, the effect of different wrist positions and grip on ulnar variance was then investigated. Finally the distribution of ulnar variance in the selected patients was investigated also using the digitizer method. Design and patients: All radiographs were performed with the wrist in a standard zero-rotation position (posteroanterior) and in supination (anteroposterior). Palmer's concentric circle method and an electronic digitizer connected to a personal computer were used to measure ulnar variance. The digitizer consists of a Plexiglas plate with an electronically activated grid beneath it. A radiograph is placed on the plate and a cursor activates a point on the grid. Three plots are marked on the radius and one plot on the most distal part of the ulnar head. The digitizer then determines the difference between a radius passing through the radius plots and the ulnar plot. Results and conclusions: Using the concentric circle method we found an ulna plus predominance, but an ulna minus predominance when using the digitizer method. Overall the ulnar variance distribution for Palmer's method was 41.9% ulna plus, 25.7% neutral and 32.4% ulna minus variance, and for the digitizer method was 40.4% ulna plus, 1.5% neutral and 58.1% ulna minus. The percentage ulnar variance greater than 1 mm on standard radiographs increased from 23% to 58% using the digitizer, with maximum grip, clearly demonstrating the (dynamic) effect of grip on ulnar variance. This almost threefold increase was found to be a significant difference. Significant differences were found between ulnar variance when different wrist positions were compared. (orig.)

  2. Genetic control of residual variance of yearling weight in Nellore beef cattle.

    Science.gov (United States)

    Iung, L H S; Neves, H H R; Mulder, H A; Carvalheiro, R

    2017-04-01

    There is evidence for genetic variability in residual variance of livestock traits, which offers the potential for selection for increased uniformity of production. Different statistical approaches have been employed to study this topic; however, little is known about the concordance between them. The aim of our study was to investigate the genetic heterogeneity of residual variance on yearling weight (YW; 291.15 ± 46.67) in a Nellore beef cattle population; to compare the results of the statistical approaches, the two-step approach and the double hierarchical generalized linear model (DHGLM); and to evaluate the effectiveness of power transformation to accommodate scale differences. The comparison was based on genetic parameters, accuracy of EBV for residual variance, and cross-validation to assess predictive performance of both approaches. A total of 194,628 yearling weight records from 625 sires were used in the analysis. The results supported the hypothesis of genetic heterogeneity of residual variance on YW in Nellore beef cattle and the opportunity of selection, measured through the genetic coefficient of variation of residual variance (0.10 to 0.12 for the two-step approach and 0.17 for DHGLM, using an untransformed data set). However, low estimates of genetic variance associated with positive genetic correlations between mean and residual variance (about 0.20 for two-step and 0.76 for DHGLM for an untransformed data set) limit the genetic response to selection for uniformity of production while simultaneously increasing YW itself. Moreover, large sire families are needed to obtain accurate estimates of genetic merit for residual variance, as indicated by the low heritability estimates. Box-Cox transformation was able to decrease the dependence of the variance on the mean and decreased the estimates of genetic parameters for residual variance. The transformation reduced but did not eliminate all the genetic heterogeneity of residual variance, highlighting

  3. Variance and covariance calculations for nuclear materials accounting using ''MAVARIC''

    International Nuclear Information System (INIS)

    Nasseri, K.K.

    1987-07-01

    Determination of the detection sensitivity of a materials accounting system to the loss of special nuclear material (SNM) requires (1) obtaining a relation for the variance of the materials balance by propagation of the instrument errors for the measured quantities that appear in the materials balance equation and (2) substituting measured values and their error standard deviations into this relation and calculating the variance of the materials balance. MAVARIC (Materials Accounting VARIance Calculations) is a custom spreadsheet, designed using the second release of Lotus 1-2-3, that significantly reduces the effort required to make the necessary variance (and covariance) calculations needed to determine the detection sensitivity of a materials accounting system. Predefined macros within the spreadsheet allow the user to carry out long, tedious procedures with only a few keystrokes. MAVARIC requires that the user enter the following data into one of four data tables, depending on the type of the term in the materials balance equation: the SNM concentration, the bulk mass (or solution volume), the measurement error standard deviations, and the number of measurements made during an accounting period. The user can also specify if there are correlations between transfer terms. Based on these data entries, MAVARIC can calculate the variance of the materials balance and the square root of this variance, from which the detection sensitivity of the accounting system can be determined.
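
    The propagation step (1) can be sketched generically as below; MAVARIC itself is a Lotus 1-2-3 spreadsheet, and the concentration × bulk-mass term structure and all numerical values here are illustrative assumptions, not the tool's actual tables.

```python
# Hedged sketch of the error-propagation step MAVARIC automates. Each
# balance term is assumed to be concentration * bulk mass; first-order
# propagation gives the term variance, and independent terms add. All
# numbers are illustrative.
import math

def term_variance(conc, mass, sd_conc, sd_mass, n_meas=1):
    """First-order variance of an SNM term (conc * mass), averaged over
    n_meas independent measurements."""
    return ((mass * sd_conc) ** 2 + (conc * sd_mass) ** 2) / n_meas

def balance_std_dev(terms):
    """Standard deviation of the materials balance, assuming independent terms."""
    return math.sqrt(sum(term_variance(**t) for t in terms))

terms = [
    dict(conc=0.20, mass=100.0, sd_conc=0.002, sd_mass=0.5),  # input term
    dict(conc=0.19, mass=98.0, sd_conc=0.002, sd_mass=0.5),   # output term
]
sigma_mb = balance_std_dev(terms)  # sets the detection sensitivity scale
```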

  4. A versatile omnibus test for detecting mean and variance heterogeneity.

    Science.gov (United States)

    Cao, Ying; Wei, Peng; Bailey, Matthew; Kauwe, John S K; Maxwell, Taylor J

    2014-01-01

    Recent research has revealed loci that display variance heterogeneity through various means such as biological disruption, linkage disequilibrium (LD), gene-by-gene (G × G), or gene-by-environment interaction. We propose a versatile likelihood ratio test that allows joint testing for mean and variance heterogeneity (LRT(MV)) or either effect alone (LRT(M) or LRT(V)) in the presence of covariates. Using extensive simulations for our method and others, we found that all parametric tests were sensitive to nonnormality regardless of any trait transformations. Coupling our test with the parametric bootstrap solves this issue. Using simulations and empirical data from a known mean-only functional variant, we demonstrate how LD can produce variance-heterogeneity loci (vQTL) in a predictable fashion based on differential allele frequencies, high D', and relatively low r² values. We propose that a joint test for mean and variance heterogeneity is more powerful than a variance-only test for detecting vQTL. This takes advantage of loci that also have mean effects without sacrificing much power to detect variance only effects. We discuss using vQTL as an approach to detect G × G interactions and also how vQTL are related to relationship loci, and how both can create prior hypothesis for each other and reveal the relationships between traits and possibly between components of a composite trait.
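
    A stripped-down sketch of a joint mean-variance likelihood ratio test for a single two-group locus follows; the paper's LRT(MV) additionally handles covariates and pairs the test with a parametric bootstrap, which are omitted here, and the data are illustrative.

```python
# Minimal two-group sketch of a joint mean-variance LRT. Null model: one
# normal distribution for everyone. Alternative: group-specific mean and
# variance. The statistic 2*(llA - ll0) is compared to chi-square, 2 df.
import math

def normal_loglik(xs, mu, var):
    n = len(xs)
    return (-0.5 * n * math.log(2 * math.pi * var)
            - sum((x - mu) ** 2 for x in xs) / (2 * var))

def lrt_mv(group_a, group_b):
    pooled = group_a + group_b
    mu0 = sum(pooled) / len(pooled)
    v0 = sum((x - mu0) ** 2 for x in pooled) / len(pooled)
    ll0 = normal_loglik(pooled, mu0, v0)
    llA = 0.0
    for g in (group_a, group_b):
        mu = sum(g) / len(g)
        v = sum((x - mu) ** 2 for x in g) / len(g)
        llA += normal_loglik(g, mu, v)
    return 2.0 * (llA - ll0)  # refer to chi-square with 2 df

# Toy genotype groups differing in both mean and spread
stat = lrt_mv([1.0, 1.2, 0.9, 1.1], [2.0, 2.8, 1.6, 2.4])
```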

  5. Variance and covariance calculations for nuclear materials accounting using 'MAVARIC'

    International Nuclear Information System (INIS)

    Nasseri, K.K.

    1987-01-01

    Determination of the detection sensitivity of a materials accounting system to the loss of special nuclear material (SNM) requires (1) obtaining a relation for the variance of the materials balance by propagation of the instrument errors for the measured quantities that appear in the materials balance equation and (2) substituting measured values and their error standard deviations into this relation and calculating the variance of the materials balance. MAVARIC (Materials Accounting VARIance Calculations) is a custom spreadsheet, designed using the second release of Lotus 1-2-3, that significantly reduces the effort required to make the necessary variance (and covariance) calculations needed to determine the detection sensitivity of a materials accounting system. Predefined macros within the spreadsheet allow the user to carry out long, tedious procedures with only a few keystrokes. MAVARIC requires that the user enter the following data into one of four data tables, depending on the type of the term in the materials balance equation: the SNM concentration, the bulk mass (or solution volume), the measurement error standard deviations, and the number of measurements made during an accounting period. The user can also specify if there are correlations between transfer terms. Based on these data entries, MAVARIC can calculate the variance of the materials balance and the square root of this variance, from which the detection sensitivity of the accounting system can be determined.

  6. Global Variance Risk Premium and Forex Return Predictability

    OpenAIRE

    Aloosh, Arash

    2014-01-01

    In a long-run risk model with stochastic volatility and frictionless markets, I express expected forex returns as a function of consumption growth variances and stock variance risk premiums (VRPs)—the difference between the risk-neutral and statistical expectations of market return variation. This provides a motivation for using the forward-looking information available in stock market volatility indices to predict forex returns. Empirically, I find that stock VRPs predict forex returns at a ...

  7. Proportioning of U3O8 powder

    International Nuclear Information System (INIS)

    Cermak, V.; Markvart, M.; Novy, P.; Vanka, M.

    1989-01-01

    The tests are briefly described of proportioning U3O8 powder of a granulometric grain size range of 0-160 μm using a vertical screw, a horizontal dual screw and a vibration dispenser, with a view to proportioning the very fine U3O8 powder fractions produced in the oxidation of UO2 fuel pellets. In the tests, the evenness of proportioning was assessed by the percentage spread of the proportioning rate measured at one-minute intervals at a proportioning rate of 1-3 kg/h. In feeding the U3O8 into a flame fluorator, it is advantageous to monitor the continuity of the powder column being proportioned and to assess it radiometrically by the spread of the proportioning rate over very short intervals (0.1 s). (author). 10 figs., 1 tab., 12 refs

  8. Proportional Symbol Mapping in R

    Directory of Open Access Journals (Sweden)

    Susumu Tanimura

    2006-01-01

    Full Text Available Visualization of spatial data on a map aids not only in data exploration but also in communication to impart spatial conception or ideas to others. Although recent cartographic functions in R are rapidly becoming richer, proportional symbol mapping, which is one of the common mapping approaches, has not been packaged thus far. Based on the theories of proportional symbol mapping developed in cartography, the authors developed some functions for proportional symbol mapping using R, including mathematical and perceptual scaling. An example of these functions demonstrated the new expressive power and options available in R, particularly for the visualization of conceptual point data.
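
    The two scaling rules mentioned, mathematical and perceptual, can be sketched outside R as follows; the package's actual function names are not reproduced, and Flannery's compensation exponent of about 0.57 is the classical cartographic value rather than a figure from the paper.

```python
# Sketch of the two symbol-scaling rules. Mathematical scaling makes the
# symbol AREA proportional to the data value (radius ~ sqrt of the value
# ratio); perceptual (Flannery) scaling inflates large symbols because
# viewers underestimate area. Values below are illustrative.

def mathematical_radius(value, ref_value, ref_radius):
    """Area-true scaling: r = r_ref * sqrt(v / v_ref)."""
    return ref_radius * (value / ref_value) ** 0.5

def perceptual_radius(value, ref_value, ref_radius, exponent=0.57):
    """Flannery-style scaling: r = r_ref * (v / v_ref)**0.57 (classical exponent)."""
    return ref_radius * (value / ref_value) ** exponent

r_math = mathematical_radius(400, 100, 5.0)  # 4x the value -> 2x the radius
r_perc = perceptual_radius(400, 100, 5.0)    # slightly larger than 2x
```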

  9. 29 CFR 1920.2 - Variances.

    Science.gov (United States)

    2010-07-01

    ...) PROCEDURE FOR VARIATIONS FROM SAFETY AND HEALTH REGULATIONS UNDER THE LONGSHOREMEN'S AND HARBOR WORKERS...) or 6(d) of the Williams-Steiger Occupational Safety and Health Act of 1970 (29 U.S.C. 655). The... under the Williams-Steiger Occupational Safety and Health Act of 1970, and any variance from §§ 1910.13...

  10. Zero-intelligence realized variance estimation

    NARCIS (Netherlands)

    Gatheral, J.; Oomen, R.C.A.

    2010-01-01

    Given a time series of intra-day tick-by-tick price data, how can realized variance be estimated? The obvious estimator—the sum of squared returns between trades—is biased by microstructure effects such as bid-ask bounce and so in the past, practitioners were advised to drop most of the data and
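
    The naive estimator, and the sparse-sampling remedy the abstract alludes to, can be sketched with illustrative tick prices:

```python
# Toy sketch of the naive realized-variance estimator (sum of squared log
# returns) plus sparse subsampling (keep every k-th price), the classical
# remedy for microstructure noise such as bid-ask bounce. Prices are
# illustrative, not market data.
import math

def realized_variance(prices, every=1):
    """Sum of squared log returns over prices sampled every `every` ticks."""
    p = prices[::every]
    return sum(math.log(p[i + 1] / p[i]) ** 2 for i in range(len(p) - 1))

prices = [100.0, 100.02, 99.99, 100.03, 100.01, 100.05]
rv_all = realized_variance(prices)        # uses every tick
rv_sparse = realized_variance(prices, 2)  # drops half the data
```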

  11. Empirical analyses on the development trend of non-ferrous metal industry under China’s new normal

    Science.gov (United States)

    Li, C. X.; Liu, C. X.; Zhang, Q. L.

    2017-08-01

    A CGE model of Yunnan's macro economy was constructed based on Yunnan's 2012 input-output data, and the development trend of the non-ferrous metals industry (NMI) under China's new normal was simulated. On this basis, the impact on the development of Yunnan's NMI was simulated under different expected economic growth rates and an optimized economic structure. The results show that the NMI growth rate is expected to decline when economic growth trends downward, but the change in the industry's share is relatively small. When the economic structure is adjusted toward optimization, the proportion of the NMI in GDP declines; indeed, the change of economic structure has the biggest influence on the NMI. Statistics from the last two years show that the NMI is growing while its share is declining, which is consistent with the simulation results. The adjustment of economic structure will, however, take a long time: it is necessary to raise the proportion of deep-processing industry, extend the industrial chain and enhance the value chain, so as to make good use of the resource advantage.

  12. The demographic transition influences variance in fitness and selection on height and BMI in rural Gambia.

    Science.gov (United States)

    Courtiol, Alexandre; Rickard, Ian J; Lummaa, Virpi; Prentice, Andrew M; Fulford, Anthony J C; Stearns, Stephen C

    2013-05-20

    Recent human history is marked by demographic transitions characterized by declines in mortality and fertility. By influencing the variance in those fitness components, demographic transitions can affect selection on other traits. Parallel to changes in selection triggered by demography per se, relationships between fitness and anthropometric traits are also expected to change due to modification of the environment. Here we explore for the first time these two main evolutionary consequences of demographic transitions using a unique data set containing survival, fertility, and anthropometric data for thousands of women in rural Gambia from 1956-2010. We show how the demographic transition influenced directional selection on height and body mass index (BMI). We observed a change in selection for both traits mediated by variation in fertility: selection initially favored short females with high BMI values but shifted across the demographic transition to favor tall females with low BMI values. We demonstrate that these differences resulted both from changes in fitness variance that shape the strength of selection and from shifts in selective pressures triggered by environmental changes. These results suggest that demographic and environmental trends encountered by current human populations worldwide are likely to modify, but not stop, natural selection in humans. Copyright © 2013 Elsevier Ltd. All rights reserved.

  13. A Variance Distribution Model of Surface EMG Signals Based on Inverse Gamma Distribution.

    Science.gov (United States)

    Hayashi, Hideaki; Furui, Akira; Kurita, Yuichi; Tsuji, Toshio

    2017-11-01

    Objective: This paper describes the formulation of a surface electromyogram (EMG) model capable of representing the variance distribution of EMG signals. Methods: In the model, EMG signals are handled based on a Gaussian white noise process with a mean of zero for each variance value. EMG signal variance is taken as a random variable that follows inverse gamma distribution, allowing the representation of noise superimposed onto this variance. Variance distribution estimation based on marginal likelihood maximization is also outlined in this paper. The procedure can be approximated using rectified and smoothed EMG signals, thereby allowing the determination of distribution parameters in real time at low computational cost. Results: A simulation experiment was performed to evaluate the accuracy of distribution estimation using artificially generated EMG signals, with results demonstrating that the proposed model's accuracy is higher than that of maximum-likelihood-based estimation. Analysis of variance distribution using real EMG data also suggested a relationship between variance distribution and signal-dependent noise. Conclusion: The study reported here was conducted to examine the performance of a proposed surface EMG model capable of representing variance distribution and a related distribution parameter estimation method. Experiments using artificial and real EMG data demonstrated the validity of the model. Significance: Variance distribution estimated using the proposed model exhibits potential in the estimation of muscle force.
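
    A minimal simulation of the stated generative model, zero-mean Gaussian samples whose variance is itself an inverse-gamma draw, is sketched below; the shape and scale parameters are illustrative assumptions, not estimates from the paper, and the estimation procedure is not reproduced.

```python
# Hedged sketch of the generative model: per-sample variance is drawn
# from an inverse gamma distribution, and the EMG sample is zero-mean
# Gaussian with that variance. Parameters alpha and beta are illustrative.
import random

def simulate_emg(n, alpha=3.0, beta=2.0, seed=0):
    rng = random.Random(seed)
    signal = []
    for _ in range(n):
        # If G ~ Gamma(alpha, scale=1), then beta / G ~ InvGamma(alpha, beta).
        var = beta / rng.gammavariate(alpha, 1.0)
        signal.append(rng.gauss(0.0, var ** 0.5))
    return signal

emg = simulate_emg(10000)
mean = sum(emg) / len(emg)
# For alpha > 1 the variance's expectation is beta / (alpha - 1) = 1.0 here.
```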

  14. The mean and variance of phylogenetic diversity under rarefaction.

    Science.gov (United States)

    Nipperess, David A; Matsen, Frederick A

    2013-06-01

    Phylogenetic diversity (PD) depends on sampling depth, which complicates the comparison of PD between samples of different depth. One approach to dealing with differing sample depth for a given diversity statistic is to rarefy, which means to take a random subset of a given size of the original sample. Exact analytical formulae for the mean and variance of species richness under rarefaction have existed for some time but no such solution exists for PD. We have derived exact formulae for the mean and variance of PD under rarefaction. We confirm that these formulae are correct by comparing exact solution mean and variance to that calculated by repeated random (Monte Carlo) subsampling of a dataset of stem counts of woody shrubs of Toohey Forest, Queensland, Australia. We also demonstrate the application of the method using two examples: identifying hotspots of mammalian diversity in Australasian ecoregions, and characterising the human vaginal microbiome. There is a very high degree of correspondence between the analytical and random subsampling methods for calculating mean and variance of PD under rarefaction, although the Monte Carlo method requires a large number of random draws to converge on the exact solution for the variance. Rarefaction of mammalian PD of ecoregions in Australasia to a common standard of 25 species reveals very different rank orderings of ecoregions, indicating quite different hotspots of diversity than those obtained for unrarefied PD. The application of these methods to the vaginal microbiome shows that a classical score used to quantify bacterial vaginosis is correlated with the shape of the rarefaction curve. The analytical formulae for the mean and variance of PD under rarefaction are both exact and more efficient than repeated subsampling. Rarefaction of PD allows for many applications where comparisons of samples of different depth are required.
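
    The exact-versus-Monte-Carlo comparison can be illustrated with the classical species-richness analogue mentioned in the abstract (the paper's PD formulae generalize this by weighting branches of the phylogeny; data below are toy counts):

```python
# Exact mean species richness under rarefaction (classical hypergeometric
# formula) checked against Monte Carlo subsampling, mirroring the paper's
# validation strategy. Counts are toy values, not the Toohey Forest data.
import random
from math import comb

def expected_richness(counts, m):
    """Exact mean number of species in a random subsample of m individuals."""
    n = sum(counts)
    return sum(1 - comb(n - c, m) / comb(n, m) for c in counts)

def monte_carlo_richness(counts, m, reps=20000, seed=1):
    rng = random.Random(seed)
    pool = [sp for sp, c in enumerate(counts) for _ in range(c)]
    total = 0
    for _ in range(reps):
        total += len(set(rng.sample(pool, m)))
    return total / reps

counts = [10, 5, 2, 1]           # individuals per species
exact = expected_richness(counts, 6)
approx = monte_carlo_richness(counts, 6)
```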

  15. Using variances to comply with resource conservation and recovery act treatment standards

    International Nuclear Information System (INIS)

    Ranek, N.L.

    2002-01-01

    When a waste generated, treated, or disposed of at a site in the United States is classified as hazardous under the Resource Conservation and Recovery Act and is destined for land disposal, the waste manager responsible for that site must select an approach to comply with land disposal restrictions (LDR) treatment standards. This paper focuses on the approach of obtaining a variance from existing, applicable LDR treatment standards. It describes the types of available variances, which include (1) determination of equivalent treatment (DET); (2) treatability variance; and (3) treatment variance for contaminated soil. The process for obtaining each type of variance is also described. Data are presented showing that historically the U.S. Environmental Protection Agency (EPA) processed DET petitions within one year of their date of submission. However, a 1999 EPA policy change added public participation to the DET petition review, which may lengthen processing time in the future. Regarding site-specific treatability variances, data are presented showing an EPA processing time of between 10 and 16 months. Only one generically applicable treatability variance has been granted, which took 30 months to process. No treatment variances for contaminated soil, which were added to the federal LDR program in 1998, are identified as having been granted.

  16. Impossibility Theorem in Proportional Representation Problem

    International Nuclear Information System (INIS)

    Karpov, Alexander

    2010-01-01

    The study examines the general axiomatics of Balinski and Young and analyzes existing proportional representation methods using this approach. The second part of the paper provides a new axiomatics based on rational choice models. The new system of axioms is applied to study known proportional representation systems. It is shown that there is no proportional representation method satisfying a minimal set of the axioms (monotonicity and neutrality).
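
    For concreteness, one classical apportionment rule of the kind such axiomatics constrains, Hamilton's largest-remainder method, can be sketched as follows (vote counts are illustrative; the paper's own axiom system is not reproduced):

```python
# Hamilton (largest-remainder) apportionment: give each party the floor of
# its exact quota, then hand the leftover seats to the largest fractional
# remainders. Vote totals below are illustrative.
def hamilton(votes, seats):
    total = sum(votes)
    quotas = [v * seats / total for v in votes]
    alloc = [int(q) for q in quotas]
    leftover = seats - sum(alloc)
    # award remaining seats by descending fractional remainder
    order = sorted(range(len(votes)),
                   key=lambda i: quotas[i] - alloc[i], reverse=True)
    for i in order[:leftover]:
        alloc[i] += 1
    return alloc

result = hamilton([340, 280, 160, 60, 15], 10)
```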

  17. Gender Trends in Academic Radiology Publication in the United States Revisited.

    Science.gov (United States)

    O'Connor, Erin E; Chen, Pauline; Weston, Brian; Anderson, Redmond; Zeffiro, Timothy; Ahmed, Awad; Zeffiro, Thomas A

    2018-02-12

    Although substantial increases in publications by female academic radiologists have appeared over the last several decades, it is possible that the rate of increase is decreasing. We examined temporal trends in gender composition for full-time radiology faculty, radiology residents, and medical students over a 46-year period. We examined authorship gender trends to determine if the increases in female authorship seen since 1970 have been sustained in recent years and whether female radiologists continue to publish in proportion to their numbers in academic departments. Original articles for selected years in Radiology and in the American Journal of Roentgenology between 1970 and 2016 were examined to determine the gender of first, corresponding, and last authors. Generalized linear models evaluated (1) changes in proportions of female authorship over time and (2) associations between proportions of female authorship and female radiology faculty representation. While linear increases in first, corresponding, and senior authorships were observed for female radiologists from 1970 to 2000, the rate of increase in female first and corresponding authorships then changed, with the slope of the first author relationship decreasing from 0.81 to 0.34, corresponding to 47% fewer female first authors added per year. In contrast, the proportion of female last authorship continued to increase at the same rate. The proportion of female first authorship was linearly related to the proportion of female radiology faculty from 1970 to 2016. Annual increases in first author academic productivity of female radiologists have lessened in the past 16 years, possibly related to reductions in the growth of female radiology faculty and trainees. As mixed-gender authorship teams are associated with more citations than homogeneous ones, efforts to encourage more women to pursue careers in academic radiology could benefit the radiology research community. Copyright © 2018 The Association

  18. Gini estimation under infinite variance

    NARCIS (Netherlands)

    A. Fontanari (Andrea); N.N. Taleb (Nassim Nicholas); P. Cirillo (Pasquale)

    2018-01-01

    We study the problems related to the estimation of the Gini index in presence of a fat-tailed data generating process, i.e. one in the stable distribution class with finite mean but infinite variance (i.e. with tail index α∈(1,2)). We show that, in such a case, the Gini coefficient
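
    The nonparametric estimator whose fat-tailed behaviour is at issue can be sketched as follows (toy data; the paper's α-stable analysis is not reproduced):

```python
# Standard nonparametric sample Gini coefficient in its sorted-weights
# form: G = 2 * sum(i * x_(i)) / (n * sum(x)) - (n + 1) / n, i = 1..n.
# Under infinite variance this estimator converges slowly and is biased
# in small samples, which is the paper's subject. Data are illustrative.
def gini(xs):
    xs = sorted(xs)
    n = len(xs)
    total = sum(xs)
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return 2.0 * weighted / (n * total) - (n + 1) / n

g_equal = gini([1, 1, 1, 1])    # perfect equality -> 0
g_concentrated = gini([0, 0, 0, 10])  # one holder has everything
```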

  19. Variance analysis of forecasted streamflow maxima in a wet temperate climate

    Science.gov (United States)

    Al Aamery, Nabil; Fox, James F.; Snyder, Mark; Chandramouli, Chandra V.

    2018-05-01

    Coupling global climate models, hydrologic models and extreme value analysis provides a method to forecast streamflow maxima; however, the elusive variance structure of the results hinders confidence in application. Directly correcting the bias of forecasts using the relative change between forecast and control simulations has been shown to marginalize hydrologic uncertainty, reduce model bias, and remove systematic variance when predicting mean monthly and mean annual streamflow, prompting our investigation for streamflow maxima. We assess the variance structure of streamflow maxima using realizations of emission scenario, global climate model type and project phase, downscaling methods, bias correction, extreme value methods, and hydrologic model inputs and parameterization. Results show that the relative change of streamflow maxima was not dependent on systematic variance from the annual maxima versus peak over threshold method applied, though we stress that researchers must strictly adhere to rules from extreme value theory when applying the peak over threshold method. Regardless of which method is applied, extreme value model fitting does add variance to the projection, and the variance is an increasing function of the return period. Unlike the relative change of mean streamflow, results show that the variance of the maxima's relative change was dependent on all climate model factors tested as well as hydrologic model inputs and calibration. Ensemble projections forecast an increase of streamflow maxima for 2050 with pronounced forecast standard error, including an increase of +30(±21), +38(±34) and +51(±85)% for 2, 20 and 100 year streamflow events for the wet temperate region studied. The variance of maxima projections was dominated by climate model factors and extreme value analyses.
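
    The extreme-value step can be sketched with the simplest member of the GEV family, a Gumbel fit to annual maxima by the method of moments; the discharge values and the fitting method here are illustrative assumptions, not the study's data or procedure.

```python
# Gumbel (GEV with zero shape) fit to annual maxima by the method of
# moments, then return-period quantiles x_T = mu - beta*ln(-ln(1 - 1/T)).
# Annual maxima below are illustrative discharges, not study data.
import math
import statistics

def gumbel_fit(maxima):
    beta = statistics.stdev(maxima) * math.sqrt(6) / math.pi
    mu = statistics.mean(maxima) - 0.5772 * beta  # Euler-Mascheroni constant
    return mu, beta

def return_level(mu, beta, T):
    """Discharge exceeded on average once every T years."""
    return mu - beta * math.log(-math.log(1 - 1 / T))

annual_maxima = [120, 150, 98, 210, 175, 133, 160, 188, 142, 205]
mu, beta = gumbel_fit(annual_maxima)
q2, q20, q100 = (return_level(mu, beta, T) for T in (2, 20, 100))
```

Note that the quantile, and hence the forecast variance, grows with the return period, consistent with the result reported above.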

  20. Trends in marine fish catches at Pattani Fishery Port (1999-2003)

    OpenAIRE

    Wanchamai Karntanut; Premwadee Komontree; Phattrawan Tongkumchum

    2006-01-01

    This study aims to develop statistical models for forecasting the quantity of the various types of marine fish landed at Pattani Fishery Port, allowing for trend and seasonality, using official data during 1999-2003. The data comprise daily and monthly totals by weight for eight types of fish (mackerel, other food fish, squid, scads, trash fish, shrimp, lobster and crab). The statistical methods are one-way analysis of variance, multiple linear regression and time series forecasting using tre...

  1. Continuous-Time Mean-Variance Portfolio Selection: A Stochastic LQ Framework

    International Nuclear Information System (INIS)

    Zhou, X.Y.; Li, D.

    2000-01-01

    This paper is concerned with a continuous-time mean-variance portfolio selection model that is formulated as a bicriteria optimization problem. The objective is to maximize the expected terminal return and minimize the variance of the terminal wealth. By putting weights on the two criteria one obtains a single objective stochastic control problem which is however not in the standard form due to the variance term involved. It is shown that this nonstandard problem can be 'embedded' into a class of auxiliary stochastic linear-quadratic (LQ) problems. The stochastic LQ control model proves to be an appropriate and effective framework to study the mean-variance problem in light of the recent development on general stochastic LQ problems with indefinite control weighting matrices. This gives rise to the efficient frontier in a closed form for the original portfolio selection problem
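
    For the static single-period analogue of this problem, the efficient frontier has a well-known closed form, sketched here for two assets with illustrative parameters (the paper's continuous-time stochastic LQ machinery is not reproduced): with A = 1'Σ⁻¹1, B = 1'Σ⁻¹μ, C = μ'Σ⁻¹μ, the minimum variance at target mean m is (A m² − 2 B m + C) / (A C − B²).

```python
# Closed-form single-period mean-variance frontier for two assets.
# mu: expected returns, cov: covariance matrix, m: target portfolio mean.
# All numbers are illustrative.
def frontier_variance(mu, cov, m):
    # invert the 2x2 covariance matrix directly
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    ones = [1.0, 1.0]
    quad = lambda x, y: sum(x[i] * inv[i][j] * y[j]
                            for i in range(2) for j in range(2))
    A, B, C = quad(ones, ones), quad(ones, mu), quad(mu, mu)
    return (A * m * m - 2 * B * m + C) / (A * C - B * B)

mu = [0.08, 0.12]
cov = [[0.04, 0.01], [0.01, 0.09]]
var_at_10 = frontier_variance(mu, cov, 0.10)
```

With two assets and a target return midway between the asset means, the constraints force the 50/50 portfolio, so the formula can be checked by hand: 0.25·0.04 + 0.25·0.09 + 2·0.25·0.01 = 0.0375.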

  2. Trends in teen driver licensure, driving patterns and crash involvement in the United States, 2006-2015.

    Science.gov (United States)

    Shults, Ruth A; Williams, Allan F

    2017-09-01

    The Monitoring the Future (MTF) survey provides nationally-representative annual estimates of licensure and driving patterns among U.S. teens. A previous study using MTF data reported substantial declines in the proportion of high school seniors that were licensed to drive and increases in the proportion of nondrivers following the recent U.S. economic recession. To explore whether licensure and driving patterns among U.S. high school seniors have rebounded in the post-recession years, we analyzed MTF licensure and driving data for the decade of 2006-2015. We also examined trends in teen driver involvement in fatal and nonfatal injury crashes for that decade using data from the Fatality Analysis Reporting System and National Automotive Sampling System General Estimates System, respectively. During 2006-2015, the proportion of high school seniors that reported having a driver's license declined by 9 percentage points (11%) from 81% to 72% and the proportion that did not drive during an average week increased by 8 percentage points (44%) from 18% to 26%. The annual proportion of black seniors that did not drive was consistently greater than twice the proportion of nondriving white seniors. Overall during the decade, 17- and 18-year-old drivers experienced large declines in fatal and nonfatal injury crashes, although crashes increased in both 2014 and 2015. The MTF data indicate that licensure and driving patterns among U.S. high school seniors have not rebounded since the economic recession. The recession had marked negative effects on teen employment opportunities, which likely influenced teen driving patterns. Possible explanations for the apparent discrepancies between the MTF data and the 2014 and 2015 increases in crashes are explored. MTF will continue to be an important resource for clarifying teen driving trends in relation to crash trends and informing strategies to improve teen driver safety. Published by Elsevier Ltd.

  3. Statistical significance of trends in monthly heavy precipitation over the US

    KAUST Repository

    Mahajan, Salil

    2011-05-11

    Trends in monthly heavy precipitation, defined by a return period of one year, are assessed for statistical significance in observations and Global Climate Model (GCM) simulations over the contiguous United States using Monte Carlo non-parametric and parametric bootstrapping techniques. The results from the two Monte Carlo approaches are found to be similar to each other, and also to the traditional non-parametric Kendall's τ test, implying the robustness of the approach. Two different observational data-sets are employed to test for trends in monthly heavy precipitation and are found to exhibit consistent results. Both data-sets demonstrate upward trends, one of which is found to be statistically significant at the 95% confidence level. Upward trends similar to observations are observed in some climate model simulations of the twentieth century, but their statistical significance is marginal. For projections of the twenty-first century, a statistically significant upward trend is observed in most of the climate models analyzed. The change in the simulated precipitation variance appears to be more important in the twenty-first century projections than changes in the mean precipitation. Stochastic fluctuations of the climate system are found to dominate monthly heavy precipitation, as some GCM simulations show a downward trend even in the twenty-first century projections when the greenhouse gas forcings are strong. © 2011 Springer-Verlag.
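    A minimal sketch of the kind of Monte Carlo trend-significance test described above, using Kendall's τ on a synthetic series under a permutation (no-trend) null. The series, slope, and resample count are invented; the paper's actual bootstrap procedures are more elaborate.

```python
import numpy as np

rng = np.random.default_rng(0)

def kendall_tau(y):
    """Kendall's tau of y against its time index (no ties in the index):
    net fraction of concordant minus discordant pairs."""
    n = len(y)
    signs = np.sign(y[None, :] - y[:, None])
    return np.triu(signs, 1).sum() / (n * (n - 1) / 2)

def trend_pvalue(series, n_resamples=500):
    """Monte Carlo permutation test: compare the observed tau with taus
    of randomly shuffled copies, which realize the no-trend null."""
    tau_obs = kendall_tau(series)
    null = np.array([kendall_tau(rng.permutation(series))
                     for _ in range(n_resamples)])
    return float(np.mean(np.abs(null) >= abs(tau_obs)))

years = np.arange(50)
rising = 0.05 * years + rng.normal(0, 1, 50)  # synthetic upward trend
p = trend_pvalue(rising)                      # small p: trend is significant
```

With a trend this strong relative to the noise, the permuted taus essentially never exceed the observed one, so the estimated p-value is near zero.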

  4. Gender Trends in Radiation Oncology in the United States: A 30-Year Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Ahmed, Awad A. [Temple University School of Medicine, Philadelphia, Pennsylvania (United States); Egleston, Brian [Department of Biostatistics, Fox Chase Cancer Center, Philadelphia, Pennsylvania (United States); Holliday, Emma [Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Eastwick, Gary [Temple University School of Medicine, Philadelphia, Pennsylvania (United States); Takita, Cristiane [Department of Radiation Oncology, University of Miami Miller School of Medicine, Miami, Florida (United States); Jagsi, Reshma, E-mail: rjagsi@med.umich.edu [Department of Radiation Oncology, University of Michigan, Ann Arbor, Michigan (United States)

    2014-01-01

    Purpose: Although considerable research exists regarding the role of women in the medical profession in the United States, little work has described the participation of women in academic radiation oncology. We examined women's participation in authorship of radiation oncology literature, a visible and influential activity that merits specific attention. Methods and Materials: We examined the gender of first and senior US physician-authors of articles published in the Red Journal in 1980, 1990, 2000, 2004, 2010, and 2012. The significance of trends over time was evaluated using logistic regression. Results were compared with female representation in journals of general medicine and other major medical specialties. Findings were also placed in the context of trends in the representation of women among radiation oncology faculty and residents over the past 3 decades, using Association of American Medical Colleges data. Results: The proportion of women among Red Journal first authors increased from 13.4% in 1980 to 29.7% in 2012, and the proportion among senior authors increased from 3.2% to 22.6%. The proportion of women among radiation oncology full-time faculty increased from 11% to 26.7% from 1980 to 2012. The proportion of women among radiation oncology residents increased from 27.1% to 33.3% from 1980 to 2010. Conclusions: Female first and senior authorship in the Red Journal has increased significantly, as has women's participation among full-time faculty, but women remain underrepresented among radiation oncology residents compared with their representation in the medical student body. Understanding such trends is necessary to develop appropriately targeted interventions to improve gender equity in radiation oncology.

  5. GENDER TRENDS IN RADIATION ONCOLOGY IN THE UNITED STATES: A 30 YEAR ANALYSIS

    Science.gov (United States)

    Ahmed, Awad A; Egleston, Brian; Holliday, Emma; Eastwick, Gary; Takita, Cristiane; Jagsi, Reshma

    2013-01-01

    Purpose/Objective Although considerable research exists regarding the role of women in the medical profession in the United States, little work has described the participation of women in academic radiation oncology. We examined women’s participation in authorship of radiation oncology literature, a visible and influential activity that merits specific attention. Methods and Materials We examined the gender of first and senior U.S. physician-authors of articles published in the Red Journal in 1980, 1990, 2000, 2004, 2010 and 2012. The significance of trends over time was evaluated using logistic regression. Results were compared to female representation in journals of general medicine and other major medical specialties. Findings were also placed in the context of trends in the representation of women among radiation oncology faculty and residents over the last three decades, using AAMC data. Results The proportion of women among Red Journal first authors increased from 13.4% in 1980 to 29.7% in 2012, and the proportion among senior authors increased from 3.2% to 22.6%. The proportion of women among radiation oncology full-time faculty increased from 11% to 26.7% from 1980 to 2012. The proportion of women among radiation oncology residents increased from 27.1% to 33.3% from 1980 to 2010. Conclusion Female first and senior authorship in the Red Journal has increased significantly, as has women’s participation among full-time faculty, but women remain under-represented among radiation oncology residents as compared to their representation in the medical student body. Understanding such trends is necessary to develop appropriately targeted interventions to improve gender equity in radiation oncology. PMID:24189127

  7. Gender trends in radiation oncology in the United States: a 30-year analysis.

    Science.gov (United States)

    Ahmed, Awad A; Egleston, Brian; Holliday, Emma; Eastwick, Gary; Takita, Cristiane; Jagsi, Reshma

    2014-01-01

    Although considerable research exists regarding the role of women in the medical profession in the United States, little work has described the participation of women in academic radiation oncology. We examined women's participation in authorship of radiation oncology literature, a visible and influential activity that merits specific attention. We examined the gender of first and senior US physician-authors of articles published in the Red Journal in 1980, 1990, 2000, 2004, 2010, and 2012. The significance of trends over time was evaluated using logistic regression. Results were compared with female representation in journals of general medicine and other major medical specialties. Findings were also placed in the context of trends in the representation of women among radiation oncology faculty and residents over the past 3 decades, using Association of American Medical Colleges data. The proportion of women among Red Journal first authors increased from 13.4% in 1980 to 29.7% in 2012, and the proportion among senior authors increased from 3.2% to 22.6%. The proportion of women among radiation oncology full-time faculty increased from 11% to 26.7% from 1980 to 2012. The proportion of women among radiation oncology residents increased from 27.1% to 33.3% from 1980 to 2010. Female first and senior authorship in the Red Journal has increased significantly, as has women's participation among full-time faculty, but women remain underrepresented among radiation oncology residents compared with their representation in the medical student body. Understanding such trends is necessary to develop appropriately targeted interventions to improve gender equity in radiation oncology. Copyright © 2014 Elsevier Inc. All rights reserved.

  8. Gender Trends in Radiation Oncology in the United States: A 30-Year Analysis

    International Nuclear Information System (INIS)

    Ahmed, Awad A.; Egleston, Brian; Holliday, Emma; Eastwick, Gary; Takita, Cristiane; Jagsi, Reshma

    2014-01-01

    Purpose: Although considerable research exists regarding the role of women in the medical profession in the United States, little work has described the participation of women in academic radiation oncology. We examined women's participation in authorship of radiation oncology literature, a visible and influential activity that merits specific attention. Methods and Materials: We examined the gender of first and senior US physician-authors of articles published in the Red Journal in 1980, 1990, 2000, 2004, 2010, and 2012. The significance of trends over time was evaluated using logistic regression. Results were compared with female representation in journals of general medicine and other major medical specialties. Findings were also placed in the context of trends in the representation of women among radiation oncology faculty and residents over the past 3 decades, using Association of American Medical Colleges data. Results: The proportion of women among Red Journal first authors increased from 13.4% in 1980 to 29.7% in 2012, and the proportion among senior authors increased from 3.2% to 22.6%. The proportion of women among radiation oncology full-time faculty increased from 11% to 26.7% from 1980 to 2012. The proportion of women among radiation oncology residents increased from 27.1% to 33.3% from 1980 to 2010. Conclusions: Female first and senior authorship in the Red Journal has increased significantly, as has women's participation among full-time faculty, but women remain underrepresented among radiation oncology residents compared with their representation in the medical student body. Understanding such trends is necessary to develop appropriately targeted interventions to improve gender equity in radiation oncology

  9. Proportioning of light weight concrete

    DEFF Research Database (Denmark)

    Palmus, Lars

    1996-01-01

    Development of a method to determine the proportions of the raw materials in lightweight concrete made with light expanded clay aggregate. The method is based on composite theory.

  10. Realized Variance and Market Microstructure Noise

    DEFF Research Database (Denmark)

    Hansen, Peter R.; Lunde, Asger

    2006-01-01

    We study market microstructure noise in high-frequency data and analyze its implications for the realized variance (RV) under a general specification for the noise. We show that kernel-based estimators can unearth important characteristics of market microstructure noise and that a simple kernel-based estimator dominates the RV for the estimation of integrated variance (IV). An empirical analysis of the Dow Jones Industrial Average stocks reveals that market microstructure noise is time-dependent and correlated with increments in the efficient price. This has important implications for volatility estimation based on high-frequency data. Finally, we apply cointegration techniques to decompose transaction prices and bid-ask quotes into an estimate of the efficient price and noise. This framework enables us to study the dynamic effects on transaction prices and quotes caused by changes in the efficient...
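    The flavor of a kernel-based noise correction can be sketched as follows: plain RV on noisy returns is biased upward, while adding Bartlett-weighted autocovariances offsets the bias induced by i.i.d. noise. This is a generic Bartlett realized kernel applied to a simulated noisy random walk, not the authors' estimator; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated efficient log-price (random walk) observed with i.i.d.
# microstructure noise -- a stylized version of the setting studied.
n = 23400                       # one trading day of 1-second observations
true_iv = 0.04 ** 2             # integrated variance for the day
efficient = np.cumsum(rng.normal(0, np.sqrt(true_iv / n), n))
noisy = efficient + rng.normal(0, 1e-4, n)
r = np.diff(noisy)              # observed high-frequency returns

def realized_variance(r):
    """Plain realized variance: sum of squared returns."""
    return np.sum(r ** 2)

def bartlett_realized_kernel(r, H):
    """RV plus Bartlett-weighted autocovariances up to lag H; the
    negative first-order autocovariance induced by the noise largely
    cancels the noise bias in the squared-return term."""
    rk = np.sum(r ** 2)
    for h in range(1, H + 1):
        w = 1 - h / (H + 1)
        rk += 2 * w * np.sum(r[h:] * r[:-h])
    return rk

rv = realized_variance(r)              # inflated by the noise
rk = bartlett_realized_kernel(r, H=30) # much closer to true_iv
```

The comparison `abs(rk - true_iv) < abs(rv - true_iv)` is the whole point of the kernel construction in this stylized setting.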

  11. Spot Variance Path Estimation and its Application to High Frequency Jump Testing

    NARCIS (Netherlands)

    Bos, C.S.; Janus, P.; Koopman, S.J.

    2012-01-01

    This paper considers spot variance path estimation from datasets of intraday high-frequency asset prices in the presence of diurnal variance patterns, jumps, leverage effects, and microstructure noise. We rely on parametric and nonparametric methods. The estimated spot variance path can be used to

  12. RESAMPLED EFFICIENT FRONTIER PORTFOLIO ANALYSIS BASED ON MEAN-VARIANCE OPTIMIZATION

    OpenAIRE

    Abdurakhman, Abdurakhman

    2008-01-01

    The right asset allocation decision in a portfolio investment can maximize returns and/or minimize risk. The method most often used in portfolio optimization is the Markowitz Mean-Variance method. In practice, this method has the weakness of not being very stable: a small change in the input parameter estimates causes a large change in the portfolio composition. For this reason, a portfolio optimization method has been developed that can overcome the instability of the Mean-Variance ...

  13. The asymptotic variance of departures in critically loaded queues

    NARCIS (Netherlands)

    Al Hanbali, Ahmad; Mandjes, M.R.H.; Nazarathy, Y.; Whitt, W.

    2011-01-01

    We consider the asymptotic variance of the departure counting process D(t) of the GI/G/1 queue; D(t) denotes the number of departures up to time t. We focus on the case where the system load ϱ equals 1, and prove that the asymptotic variance rate satisfies lim_{t→∞} var D(t)/t = λ(1 − 2/π)(c_a² +

  14. Coupled bias-variance tradeoff for cross-pose face recognition.

    Science.gov (United States)

    Li, Annan; Shan, Shiguang; Gao, Wen

    2012-01-01

    Subspace-based face representation can be viewed as a regression problem. From this viewpoint, we first revisit the problem of recognizing faces across pose differences, which is a bottleneck in face recognition. Then we propose a new approach for cross-pose face recognition using a regressor with a coupled bias-variance tradeoff. We found that striking a coupled balance between bias and variance in regression for different poses could improve the regressor-based cross-pose face representation, i.e., the regressor can be more stable against a pose difference. Based on this idea, ridge regression and lasso regression are explored. Experimental results on CMU PIE, the FERET, and the Multi-PIE face databases show that the proposed bias-variance tradeoff can achieve considerable reinforcement in recognition performance.
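    The bias-variance tradeoff the authors exploit can be made concrete with a generic ridge-regression experiment: heavier regularization shrinks the estimator, raising its bias while lowering its variance across training draws. This is a plain linear-regression illustration, not the paper's coupled cross-pose formulation; the design, weights, and noise level are invented.

```python
import numpy as np

rng = np.random.default_rng(4)
n, d = 40, 5
X = rng.normal(size=(n, d))     # fixed design matrix
w_true = np.ones(d)             # fixed "true" linear map

def ridge_bias_variance(alpha, n_trials=400):
    """Empirical squared bias and variance of ridge weight estimates,
    averaged over repeated noisy training draws on the same design."""
    H = np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T)
    ests = np.array([H @ (X @ w_true + rng.normal(0.0, 1.0, n))
                     for _ in range(n_trials)])
    mean_est = ests.mean(axis=0)
    bias_sq = np.sum((mean_est - w_true) ** 2)
    variance = np.mean(np.sum((ests - mean_est) ** 2, axis=1))
    return bias_sq, variance

b_small, v_small = ridge_bias_variance(alpha=0.1)
b_large, v_large = ridge_bias_variance(alpha=50.0)
# larger alpha: more bias, less variance
```

Tuning `alpha` to balance the two terms, per pose pair in the paper's setting, is what "coupled tradeoff" refers to.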

  15. Monte Carlo variance reduction approaches for non-Boltzmann tallies

    International Nuclear Information System (INIS)

    Booth, T.E.

    1992-12-01

    Quantities that depend on the collective effects of groups of particles cannot be obtained from the standard Boltzmann transport equation. Monte Carlo estimates of these quantities are called non-Boltzmann tallies and have become increasingly important recently. Standard Monte Carlo variance reduction techniques were designed for tallies based on individual particles rather than groups of particles. Experience with non-Boltzmann tallies and analog Monte Carlo has demonstrated the severe limitations of analog Monte Carlo for many non-Boltzmann tallies. In fact, many calculations absolutely require variance reduction methods to achieve practical computation times. Three different approaches to variance reduction for non-Boltzmann tallies are described and shown to be unbiased. The advantages and disadvantages of each of the approaches are discussed

  16. Mourning dove population trend estimates from Call-Count and North American Breeding Bird Surveys

    Science.gov (United States)

    Sauer, J.R.; Dolton, D.D.; Droege, S.

    1994-01-01

    The mourning dove (Zenaida macroura) Callcount Survey and the North American Breeding Bird Survey provide information on population trends of mourning doves throughout the continental United States. Because surveys are an integral part of the development of hunting regulations, a need exists to determine which survey provides precise information. We estimated population trends from 1966 to 1988 by state and dove management unit, and assessed the relative efficiency of each survey. Estimates of population trend differ (P < 0.05) between surveys in 11 of 48 states; 9 of 11 states with divergent results occur in the Eastern Management Unit. Differences were probably a consequence of smaller sample sizes in the Callcount Survey. The Breeding Bird Survey generally provided trend estimates with smaller variances than did the Callcount Survey. Although the Callcount Survey probably provides more within-route accuracy because of survey methods and timing, the Breeding Bird Survey has a larger sample size of survey routes and greater consistency of coverage in the Eastern Unit.

  17. Trends in obstetric radiography, 1939-81

    International Nuclear Information System (INIS)

    Gilman, E.A.; Stewart, A.M.; Knox, E.G.; Kneale, G.W.

    1989-01-01

    Trends in obstetric radiography in Britain between 1939 and 1981 are reported. During this period the number of films needed to complete each examination decreased. The timing of x-rays also changed towards late pregnancy, and there was virtual elimination of all first trimester exposures following the introduction of the '10-day rule' in 1972. After the introduction of ultrasound, x-rays for twins decreased, x-rays for breech presentations remained unchanged and x-rays for foetal maturity increased. Despite repeated demonstrations of the cancer risk, the proportion of exposed infants was higher in 1970-81 (14%) than in 1960-9 (11%) or 1950-9 (12%). There were fewer x-rays in 1976-81 (12%) than in 1970-5 (15%), but it is possible that withdrawal of the '10-day rule' in 1985 will reverse this trend. (author)

  18. Development of multiwire proportional chambers

    CERN Multimedia

    Charpak, G

    1969-01-01

    It has happened quite often in the history of science that theoreticians, confronted with some major difficulty, have successfully gone back thirty years to look at ideas that had then been thrown overboard. But it is rare that experimentalists go back thirty years to look again at equipment which had become out-dated. This is what Charpak and his colleagues did to emerge with the 'multiwire proportional chamber' which has several new features making it a very useful addition to the armoury of particle detectors. In the 1930s, ion-chambers, Geiger-Müller counters and proportional counters were vital pieces of equipment in nuclear physics research. Other types of detectors have since largely replaced them but now the proportional counter, in a new array, is making a comeback.

  19. An elementary components of variance analysis for multi-center quality control

    International Nuclear Information System (INIS)

    Munson, P.J.; Rodbard, D.

    1977-01-01

    The serious variability of RIA results from different laboratories indicates the need for multi-laboratory collaborative quality control (QC) studies. Statistical analysis methods for such studies using an 'analysis of variance with components of variance estimation' are discussed. This technique allocates the total variance into components corresponding to between-laboratory, between-assay, and residual or within-assay variability. Components of variance analysis also provides an intelligent way to combine the results of several QC samples run at different levels, from which we may decide if any component varies systematically with dose level; if not, pooling of estimates becomes possible. We consider several possible relationships of standard deviation to the laboratory mean. Each relationship corresponds to an underlying statistical model, and an appropriate analysis technique. Tests for homogeneity of variance may be used to determine if an appropriate model has been chosen, although the exact functional relationship of standard deviation to lab mean may be difficult to establish. Appropriate graphical display of the data aids in visual understanding of the data. A plot of the ranked standard deviation vs. ranked laboratory mean is a convenient way to summarize a QC study. This plot also allows determination of the rank correlation, which indicates a net relationship of variance to laboratory mean. (orig.) [de
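    In its simplest balanced form, the between-laboratory/within-assay decomposition described above is one-way random-effects ANOVA. A method-of-moments sketch on simulated lab data (the lab count, replicate count, and standard deviations are all invented):

```python
import numpy as np

def variance_components(data):
    """One-way random-effects ANOVA by the method of moments: split
    total variance into between-laboratory and within-laboratory
    components for a balanced table (rows = labs, cols = replicates)."""
    k, n = data.shape
    lab_means = data.mean(axis=1)
    grand = data.mean()
    ms_between = n * np.sum((lab_means - grand) ** 2) / (k - 1)
    ms_within = np.sum((data - lab_means[:, None]) ** 2) / (k * (n - 1))
    var_within = ms_within
    # Negative moment estimates are truncated at zero, as is conventional
    var_between = max((ms_between - ms_within) / n, 0.0)
    return var_between, var_within

rng = np.random.default_rng(2)
lab_effects = rng.normal(100, 8, size=(30, 1))        # lab-to-lab sd 8
data = lab_effects + rng.normal(0, 2, size=(30, 8))   # within-lab sd 2
vb, vw = variance_components(data)   # vb near 64, vw near 4
```

Recovering `vb` much larger than `vw` here mirrors the multi-center situation the abstract describes, where between-laboratory variability dominates.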

  20. Explicit formulas for the variance of discounted life-cycle cost

    International Nuclear Information System (INIS)

    Noortwijk, Jan M. van

    2003-01-01

    In life-cycle costing analyses, optimal design is usually achieved by minimising the expected value of the discounted costs. As well as the expected value, the corresponding variance may be useful for estimating, for example, the uncertainty bounds of the calculated discounted costs. However, general explicit formulas for calculating the variance of the discounted costs over an unbounded time horizon are not yet available. In this paper, explicit formulas for this variance are presented. They can be easily implemented in software to optimise structural design and maintenance management. The use of the mathematical results is illustrated with some examples
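    For the special case of a Poisson failure process with a fixed cost per event and continuous discounting, Campbell's theorem gives closed forms for both the mean and the variance of the discounted cost over an unbounded horizon, which a simulation can check. This is a simplified illustration of the setting, not the paper's general formulas; all parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

def discounted_cost_samples(lam, cost, rate, horizon, n_sims):
    """Monte Carlo samples of total discounted cost when failures occur
    as a Poisson process with intensity `lam`, each failure costs
    `cost`, and costs are continuously discounted at `rate`."""
    out = np.empty(n_sims)
    for i in range(n_sims):
        n = rng.poisson(lam * horizon)
        times = rng.uniform(0.0, horizon, n)  # Poisson points are uniform given n
        out[i] = cost * np.exp(-rate * times).sum()
    return out

lam, cost, rate = 0.5, 10.0, 0.05
samples = discounted_cost_samples(lam, cost, rate, horizon=200.0, n_sims=20000)
mean_mc, var_mc = samples.mean(), samples.var()

# Campbell's theorem: E = c*lam*integral(e^{-rt}) and
# Var = c^2*lam*integral(e^{-2rt}) over [0, inf)
mean_exact = cost * lam / rate            # = 100
var_exact = cost ** 2 * lam / (2 * rate)  # = 500
```

With a 200-year horizon the truncation error is negligible (e^{-10} of the unbounded tail), so the simulated moments agree closely with the closed forms.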

  1. An elementary components of variance analysis for multi-centre quality control

    International Nuclear Information System (INIS)

    Munson, P.J.; Rodbard, D.

    1978-01-01

    The serious variability of RIA results from different laboratories indicates the need for multi-laboratory collaborative quality-control (QC) studies. Simple graphical display of data in the form of histograms is useful but insufficient. The paper discusses statistical analysis methods for such studies using an ''analysis of variance with components of variance estimation''. This technique allocates the total variance into components corresponding to between-laboratory, between-assay, and residual or within-assay variability. Problems with RIA data, e.g. severe non-uniformity of variance and/or departure from a normal distribution violate some of the usual assumptions underlying analysis of variance. In order to correct these problems, it is often necessary to transform the data before analysis by using a logarithmic, square-root, percentile, ranking, RIDIT, ''Studentizing'' or other transformation. Ametric transformations such as ranks or percentiles protect against the undue influence of outlying observations, but discard much intrinsic information. Several possible relationships of standard deviation to the laboratory mean are considered. Each relationship corresponds to an underlying statistical model and an appropriate analysis technique. Tests for homogeneity of variance may be used to determine whether an appropriate model has been chosen, although the exact functional relationship of standard deviation to laboratory mean may be difficult to establish. Appropriate graphical display aids visual understanding of the data. A plot of the ranked standard deviation versus ranked laboratory mean is a convenient way to summarize a QC study. This plot also allows determination of the rank correlation, which indicates a net relationship of variance to laboratory mean

  2. Using variance structure to quantify responses to perturbation in fish catches

    Science.gov (United States)

    Vidal, Tiffany E.; Irwin, Brian J.; Wagner, Tyler; Rudstam, Lars G.; Jackson, James R.; Bence, James R.

    2017-01-01

    We present a case study evaluation of gill-net catches of Walleye Sander vitreus to assess potential effects of large-scale changes in Oneida Lake, New York, including the disruption of trophic interactions by double-crested cormorants Phalacrocorax auritus and invasive dreissenid mussels. We used the empirical long-term gill-net time series and a negative binomial linear mixed model to partition the variability in catches into spatial and coherent temporal variance components, hypothesizing that variance partitioning can help quantify spatiotemporal variability and determine whether variance structure differs before and after large-scale perturbations. We found that the mean catch and the total variability of catches decreased following perturbation but that not all sampling locations responded in a consistent manner. There was also evidence of some spatial homogenization concurrent with a restructuring of the relative productivity of individual sites. Specifically, offshore sites generally became more productive following the estimated break point in the gill-net time series. These results provide support for the idea that variance structure is responsive to large-scale perturbations; therefore, variance components have potential utility as statistical indicators of response to a changing environment more broadly. The modeling approach described herein is flexible and would be transferable to other systems and metrics. For example, variance partitioning could be used to examine responses to alternative management regimes, to compare variability across physiographic regions, and to describe differences among climate zones. Understanding how individual variance components respond to perturbation may yield finer-scale insights into ecological shifts than focusing on patterns in the mean responses or total variability alone.

  3. A mean–variance objective for robust production optimization in uncertain geological scenarios

    DEFF Research Database (Denmark)

    Capolei, Andrea; Suwartadi, Eka; Foss, Bjarne

    2014-01-01

    directly. In the mean–variance bi-criterion objective function, risk appears directly; the formulation also considers an ensemble of reservoir models, and has robust optimization as a special extreme case. The mean–variance objective is common for portfolio optimization problems in finance. The Markowitz portfolio optimization problem is the original and simplest example of a mean–variance criterion for mitigating risk. Risk is mitigated in oil production by including both the expected NPV (mean of NPV) and the risk (variance of NPV) for the ensemble of possible reservoir models. With the inclusion of the risk...

  4. Asymptotic variance of grey-scale surface area estimators

    DEFF Research Database (Denmark)

    Svane, Anne Marie

    Grey-scale local algorithms have been suggested as a fast way of estimating surface area from grey-scale digital images. Their asymptotic mean has already been described. In this paper, the asymptotic behaviour of the variance is studied in isotropic and sufficiently smooth settings, resulting in a general asymptotic bound. For compact convex sets with nowhere vanishing Gaussian curvature, the asymptotics can be described more explicitly. As in the case of volume estimators, the variance is decomposed into a lattice sum and an oscillating term of at most the same magnitude.

  5. Prediction-error variance in Bayesian model updating: a comparative study

    Science.gov (United States)

    Asadollahi, Parisa; Li, Jian; Huang, Yong

    2017-04-01

    In Bayesian model updating, the likelihood function is commonly formulated by stochastic embedding, in which the maximum information entropy probability model of prediction error variances plays an important role; it is a Gaussian distribution subject to the first two moments as constraints. The selection of prediction error variances can be formulated as a model class selection problem, which automatically involves a trade-off between the average data-fit of the model class and the information it extracts from the data. Therefore, it is critical for the robustness in the updating of the structural model, especially in the presence of modeling errors. To date, three ways of considering prediction error variances have been seen in the literature: 1) setting constant values empirically, 2) estimating them based on the goodness-of-fit of the measured data, and 3) updating them as uncertain parameters by applying Bayes' Theorem at the model class level. In this paper, the effect of different strategies to deal with the prediction error variances on the model updating performance is investigated explicitly. A six-story shear building model with six uncertain stiffness parameters is employed as an illustrative example. Transitional Markov Chain Monte Carlo is used to draw samples of the posterior probability density function of the structure model parameters as well as the uncertain prediction variances. The different levels of modeling uncertainty and complexity are modeled through three FE models, including a true model, a model with more complexity, and a model with modeling error. Bayesian updating is performed for the three FE models considering the three aforementioned treatments of the prediction error variances. The effect of number of measurements on the model updating performance is also examined in the study. The results are compared based on model class assessment and indicate that updating the prediction error variances as uncertain parameters at the model

  6. Much more medicine for the oldest old: trends in UK electronic clinical records.

    Science.gov (United States)

    Melzer, David; Tavakoly, Behrooz; Winder, Rachel E; Masoli, Jane A H; Henley, William E; Ble, Alessandro; Richards, Suzanne H

    2015-01-01

    The oldest old (85+) pose complex medical challenges. Both underdiagnosis and overdiagnosis are claimed in this group. To estimate diagnosis, prescribing and hospital admission prevalence from 2003/4 to 2011/12, to monitor trends in medicalisation. Observational study of Clinical Practice Research Datalink (CPRD) electronic medical records from general practice populations (eligible; n = 27,109) with oversampling of the oldest old. We identified 18 common diseases and five geriatric syndromes (dizziness, incontinence, skin ulcers, falls and fractures) from Read codes. We counted medications prescribed ≥1 time in all quarters of studied years. There were major increases in recorded prevalence of most conditions in the 85+ group, especially chronic kidney disease (stages 3-5: prevalence trends were less marked. In the 85+ age group the proportion receiving no chronically prescribed medications fell from 29.6 to 13.6%, while the proportion on ≥3 rose from 44.6 to 66.2%. The proportion of 85+ year olds with ≥1 hospital admissions per year rose from 27.6 to 35.4%. There has been a dramatic increase in the medicalisation of the oldest old, evident in increased diagnosis (likely partly due to better record keeping) but also in increased prescribing and hospitalisation. Diagnostic trends, especially for chronic kidney disease, may raise concerns about overdiagnosis. These findings give new urgency to questions about the appropriateness of multiple diagnostic labelling. © The Author 2014. Published by Oxford University Press on behalf of the British Geriatrics Society.

  7. A characterization of optimal portfolios under the tail mean-variance criterion

    OpenAIRE

    Owadally, I.; Landsman, Z.

    2013-01-01

    The tail mean–variance model was recently introduced for use in risk management and portfolio choice; it involves a criterion that focuses on the risk of rare but large losses, which is particularly important when losses have heavy-tailed distributions. If returns or losses follow a multivariate elliptical distribution, the use of risk measures that satisfy certain well-known properties is equivalent to risk management in the classical mean–variance framework. The tail mean–variance criterion...

  8. Gender variance in childhood and sexual orientation in adulthood: a prospective study.

    Science.gov (United States)

    Steensma, Thomas D; van der Ende, Jan; Verhulst, Frank C; Cohen-Kettenis, Peggy T

    2013-11-01

    Several retrospective and prospective studies have reported on the association between childhood gender variance and sexual orientation and gender discomfort in adulthood. In most of the retrospective studies, samples were drawn from the general population. The samples in the prospective studies consisted of clinically referred children. To understand the extent to which the association applies to the general population, prospective studies using random samples are needed. This prospective study examined the association between childhood gender variance, and sexual orientation and gender discomfort in adulthood in the general population. In 1983, we measured childhood gender variance in 406 boys and 473 girls. In 2007, sexual orientation and gender discomfort were assessed. Childhood gender variance was measured with two items from the Child Behavior Checklist/4-18. Sexual orientation was assessed on four parameters (attraction, fantasy, behavior, and identity). Gender discomfort was assessed by four questions (unhappiness and/or uncertainty about one's gender, wish or desire to be of the other gender, and consideration of living in the role of the other gender). For both men and women, the presence of childhood gender variance was associated with homosexuality for all four parameters of sexual orientation, but not with bisexuality. The report of adulthood homosexuality was 8 to 15 times higher for participants with a history of gender variance (10.2% to 12.2%), compared to participants without a history of gender variance (1.2% to 1.7%). The presence of childhood gender variance was not significantly associated with gender discomfort in adulthood. This study clearly showed a significant association between childhood gender variance and a homosexual sexual orientation in adulthood in the general population. In contrast to the findings in clinically referred gender-variant children, the presence of a homosexual sexual orientation in

  9. 29 CFR 1926.2 - Variances from safety and health standards.

    Science.gov (United States)

    2010-07-01

    29 Labor 8, 2010-07-01. Variances from safety and health standards. Section 1926.2, Labor Regulations Relating to Labor (Continued), Occupational Safety and Health Administration. (a) Variances from standards which are, or may be, published in this...

  10. No Trend in the Intergenerational Transmission of Divorce

    Science.gov (United States)

    LI, JUI-CHUNG ALLEN; WU, LAWRENCE L.

    2008-01-01

    Previous studies on trends in the intergenerational transmission of divorce have produced mixed findings, with two studies (McLanahan and Bumpass 1988; Teachman 2002) reporting no trend in divorce transmission and one study (Wolfinger 1999) finding that divorce transmission has weakened substantially. Using a stratified Cox proportional hazard model, we analyze data from the National Survey of Families and Households and find no evidence for any trend in divorce transmission. To reconcile apparent differences in results, we note that the General Social Survey data used by Wolfinger lack information on marital duration, permitting analysis only for whether respondents have divorced by interview. As a result, an apparent decline in divorce transmission could be due to inadequate adjustments for the longer exposures to risk by earlier marriage cohorts, yielding a higher probability of divorce by interview for earlier cohorts relative to more recent cohorts even if divorce risks are identical across all marriage cohorts. We confirm this possibility by using a series of discrete-time hazard logistic regressions to investigate the sensitivity of estimates of trends in divorce transmission to different adjustments for exposure to risk. We conclude that there has been no trend in the intergenerational transmission of divorce. PMID:19110902
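    The exposure-bias argument above can be checked with simple arithmetic: under an identical constant hazard, earlier marriage cohorts mechanically show a higher probability of having divorced by the interview. A toy calculation (the hazard value is ours, purely illustrative):

    ```python
    # Constant per-year divorce hazard h, identical for every marriage cohort.
    # Probability of having divorced by the interview after t years of exposure:
    #   P(t) = 1 - (1 - h)**t
    h = 0.02

    def p_divorced_by_interview(years_exposed):
        return 1.0 - (1.0 - h) ** years_exposed

    early = p_divorced_by_interview(30)   # cohort married 30 years before interview
    recent = p_divorced_by_interview(10)  # cohort married 10 years before interview

    # Same hazard, yet the crude "divorced by interview" measure is much higher
    # for the earlier cohort -- an apparent (spurious) decline across cohorts.
    ```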

  11. Recent change of vegetation growth trend in China

    International Nuclear Information System (INIS)

    Peng Shushi; Fang Jingyun; Piao Shilong; Chen, Anping; Xu Liang; Myneni, Ranga B; Cao Chunxiang; Pinzon, Jorge E; Tucker, Compton J

    2011-01-01

    Using satellite-derived normalized difference vegetation index (NDVI) data, several previous studies have indicated that vegetation growth significantly increased in most areas of China during the period 1982–99. In this letter, we extended the study period to 2010. We found that at the national scale the growing season (April–October) NDVI significantly increased by 0.0007 yr⁻¹ from 1982 to 2010, but the increasing trend in NDVI over the last decade decreased in comparison to that of the 1982–99 period. The trends in NDVI show significant seasonal and spatial variances. The increasing trend in April and May (AM) NDVI (0.0013 yr⁻¹) is larger than those in June, July and August (JJA) (0.0003 yr⁻¹) and September and October (SO) (0.0008 yr⁻¹). This relatively small increasing trend of JJA NDVI during 1982–2010 compared with that during 1982–99 (0.0012 yr⁻¹) (Piao et al 2003 J. Geophys. Res.—Atmos. 108 4401) implies a change in the JJA vegetation growth trend, which significantly turned from increasing (0.0039 yr⁻¹) to slightly decreasing (−0.0002 yr⁻¹) in 1988. Regarding the spatial pattern of changes in NDVI, the growing season NDVI increased (over 0.0020 yr⁻¹) from 1982 to 2010 in southern China, while its change was close to zero in northern China, as a result of a significant changing trend reversal that occurred in the 1990s and early 2000s. In northern China, the growing season NDVI significantly increased before the 1990s as a result of warming and enhanced precipitation, but decreased after the 1990s due to drought stress strengthened by warming and reduced precipitation. Our results also show that the responses of vegetation growth to climate change vary across different seasons and ecosystems.
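    Trend figures such as 0.0007 yr⁻¹ are ordinary least-squares slopes of NDVI on year. A minimal sketch with synthetic data (the values are ours, not the letter's):

    ```python
    def trend_slope(x, y):
        # ordinary least-squares slope of y on x
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
        sxx = sum((xi - mx) ** 2 for xi in x)
        return sxy / sxx

    # synthetic growing-season NDVI rising by 0.0007 per year, mimicking the
    # national-scale headline figure (hypothetical series, noise-free for clarity)
    years = list(range(1982, 2011))
    ndvi = [0.30 + 0.0007 * (y - 1982) for y in years]
    slope = trend_slope(years, ndvi)
    ```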

  12. Recent Change of Vegetation Growth Trend in China

    Science.gov (United States)

    Peng, Shushi; Chen, Anping; Xu, Liang; Cao, Chunxiang; Fang, Jingyun; Myneni, Ranga B.; Pinzon, Jorge E.; Tucker, Compton J.; Piao, Shilong

    2011-01-01

    Using satellite-derived normalized difference vegetation index (NDVI) data, several previous studies have indicated that vegetation growth significantly increased in most areas of China during the period 1982-99. In this letter, we extended the study period to 2010. We found that at the national scale the growing season (April-October) NDVI significantly increased by 0.0007/yr from 1982 to 2010, but the increasing trend in NDVI over the last decade decreased in comparison to that of the 1982-99 period. The trends in NDVI show significant seasonal and spatial variances. The increasing trend in April and May (AM) NDVI (0.0013/yr) is larger than those in June, July and August (JJA) (0.0003/yr) and September and October (SO) (0.0008/yr). This relatively small increasing trend of JJA NDVI during 1982-2010 compared with that during 1982-99 (0.0012/yr) (Piao et al 2003 J. Geophys. Res.-Atmos. 108 4401) implies a change in the JJA vegetation growth trend, which significantly turned from increasing (0.0039/yr) to slightly decreasing (−0.0002/yr) in 1988. Regarding the spatial pattern of changes in NDVI, the growing season NDVI increased (over 0.0020/yr) from 1982 to 2010 in southern China, while its change was close to zero in northern China, as a result of a significant changing trend reversal that occurred in the 1990s and early 2000s. In northern China, the growing season NDVI significantly increased before the 1990s as a result of warming and enhanced precipitation, but decreased after the 1990s due to drought stress strengthened by warming and reduced precipitation. Our results also show that the responses of vegetation growth to climate change vary across different seasons and ecosystems.

  13. American families: trends and correlates.

    Science.gov (United States)

    Davanzo, J; Rahman, M O

    1993-01-01

    Discussion focused on the nature of the roles of the family, a review of the major demographic changes (marriage, cohabitation, nonfamily households, remarriage, fertility, teenage pregnancy, and female employment) affecting the American family in the past decades, and the nature of the impact on women, men, and children. Four major trends were identified: 1) increased proportions of children living in single-parent families due to high rates of divorce and increased childbearing outside of marriage; 2) increased proportions of adults in nontraditional living arrangements; 3) increased female labor force participation during all stages of the life cycle; and 4) decreased proportions of children and increased proportions of older people in the total population due to declining mortality and fertility rates. Family formation arises out of childbearing and childrearing roles, the need for companionship and emotional support, the opportunities for specialization and trade, and economies of scale. The costs of family living may include the potential for disagreement, conflict, loss of privacy, and time and money. A number of reasons were identified for not maintaining traditional families consisting of a married couple with children. The trends were for later age at marriage (24.4 years in 1992 for women), increased cohabitation (almost 50% cohabiting prior to first marriage in 1985-86), a decreased number of married-couple households, and an increased number of adults in non-family households. The divorce rate has risen over the past 100 years, with peaks in the 1970s; the reasons identified included the large number of baby boomers entering new marriages, increased labor force participation of women, and changes in gender roles. The stabilization and slight decline in rates may be due to a natural leveling, the likelihood of greater stability within new marriages, and the aging of the baby boomers. An anticipated increase in divorce rates in the future was also justified

  14. Allowing variance may enlarge the safe operating space for exploited ecosystems.

    Science.gov (United States)

    Carpenter, Stephen R; Brock, William A; Folke, Carl; van Nes, Egbert H; Scheffer, Marten

    2015-11-17

    Variable flows of food, water, or other ecosystem services complicate planning. Management strategies that decrease variability and increase predictability may therefore be preferred. However, actions to decrease variance over short timescales (2-4 y), when applied continuously, may lead to long-term ecosystem changes with adverse consequences. We investigated the effects of managing short-term variance in three well-understood models of ecosystem services: lake eutrophication, harvest of a wild population, and yield of domestic herbivores on a rangeland. In all cases, actions to decrease variance can increase the risk of crossing critical ecosystem thresholds, resulting in less desirable ecosystem states. Managing to decrease short-term variance creates ecosystem fragility by changing the boundaries of safe operating spaces, suppressing information needed for adaptive management, cancelling signals of declining resilience, and removing pressures that may build tolerance of stress. Thus, the management of variance interacts strongly and inseparably with the management of resilience. By allowing for variation, learning, and flexibility while observing change, managers can detect opportunities and problems as they develop while sustaining the capacity to deal with them.

  15. Temperature variance study in Monte-Carlo photon transport theory

    International Nuclear Information System (INIS)

    Giorla, J.

    1985-10-01

    We study different Monte-Carlo methods for solving radiative transfer problems, and particularly Fleck's Monte-Carlo method. We first give the different time-discretization schemes and the corresponding stability criteria. Then we write the temperature variance as a function of the variances of temperature and absorbed energy at the previous time step. Finally we obtain some stability criteria for the Monte-Carlo method in the stationary case

  16. Gender Trends in Radiology Authorship: A 35-Year Analysis.

    Science.gov (United States)

    Piper, Crystal L; Scheel, John R; Lee, Christoph I; Forman, Howard P

    2016-01-01

    The purpose of this study was to describe trends over time in female authorship in the radiology literature and to investigate the tendency of female first authors to publish with female senior authors. Data on the gender of academic physician authors based in the United States for all major articles published in three general radiology journals--Radiology, AJR, and Academic Radiology--were collected and analyzed for the years 1978, 1988, 1998, 2008, and 2013. Multivariate logistic regression was used to identify significant trends over time, and a chi-square test of independence was performed to determine significant relations between the genders of first and senior authors. The gender of 4182 of 4217 (99.17%) authors with MD degrees was determined. The proportion of original research articles published by women as first authors increased from 8.33% in 1978 to 32.35% in 2013 (p < 0.001). The proportion of original research articles with women as senior authors increased from 6.75% in 1978 to 21.90% in 2013 (p < 0.001). Female first and senior authorship increased significantly over time (first author, p < 0.001; senior author, p < 0.001). There was a statistically significant relation between the genders of first and senior authors of original research articles and guest editorials (p < 0.001). Over 35 years, there was a statistically significant upward linear trend of female physician participation in authorship of academic radiology literature. Female first authors were more likely to publish with female senior authors.
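    The chi-square test of independence used above is straightforward to compute by hand for a 2×2 table. A sketch with hypothetical counts (not the study's data), where female first authors pair with female senior authors more often than independence would predict:

    ```python
    def chi2_statistic(table):
        # Pearson chi-square statistic for a 2x2 contingency table
        # (rows: first-author gender, columns: senior-author gender)
        row = [sum(r) for r in table]
        col = [sum(c) for c in zip(*table)]
        total = sum(row)
        stat = 0.0
        for i in range(2):
            for j in range(2):
                expected = row[i] * col[j] / total
                stat += (table[i][j] - expected) ** 2 / expected
        return stat

    # hypothetical counts: (female senior, male senior) per first-author gender
    table = [[40, 60],    # female first author
             [60, 240]]   # male first author
    stat = chi2_statistic(table)
    # compare against the 3.84 critical value (df = 1, alpha = 0.05)
    ```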

  17. Study of the variance of a Monte Carlo calculation. Application to weighting; Etude de la variance d'un calcul de Monte Carlo. Application a la ponderation

    Energy Technology Data Exchange (ETDEWEB)

    Lanore, Jeanne-Marie [Commissariat a l' Energie Atomique - CEA, Centre d' Etudes Nucleaires de Fontenay-aux-Roses, Direction des Piles Atomiques, Departement des Etudes de Piles, Service d' Etudes de Protections de Piles (France)

    1969-04-15

    One of the main difficulties in Monte Carlo computations is the estimation of the results variance. Generally, only an apparent variance can be observed over a few calculations, often very different from the actual variance. By studying a large number of short calculations, the authors have tried to evaluate the real variance, and then to apply the obtained results to the optimization of the computations. The program used is the Poker one-dimensional Monte Carlo program. Calculations are performed in two types of fictitious environments: a body with constant cross section, without absorption, where all shocks are elastic and isotropic; a body with variable cross section (presenting a very pronounced peak and hole), with an anisotropy for high energy elastic shocks, and with the possibility of inelastic shocks (this body presents all the features that can appear in a real case)
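    The gap between apparent and actual variance that motivates this study can be reproduced with a toy tally (all settings ours, not the Poker program): many short runs of a heavily skewed score are compared, the run-to-run spread of the estimates plays the role of the real variance, and each run's internally estimated variance scatters widely around it.

    ```python
    import random

    def short_run(n, rng):
        # One short Monte Carlo calculation: estimate E[X] for a skewed score
        # (a crude stand-in for a rare-event tally) and report the apparent
        # variance of that estimate from the run's own samples.
        xs = [rng.expovariate(1.0) ** 3 for _ in range(n)]  # rare large scores
        mean = sum(xs) / n
        s2 = sum((x - mean) ** 2 for x in xs) / (n - 1)
        return mean, s2 / n  # estimate, apparent variance of the estimate

    rng = random.Random(3)
    runs = [short_run(200, rng) for _ in range(500)]
    means = [m for m, _ in runs]
    grand = sum(means) / len(means)
    # empirical ("real") variance of the estimator, from the spread of run means
    real_var = sum((m - grand) ** 2 for m in means) / (len(means) - 1)
    apparent = [v for _, v in runs]  # each run's own apparent variance
    ```

    Individual runs report apparent variances far below (or above) the real one, which is why the authors evaluate the real variance from a large number of short calculations.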

  18. Adjustment of heterogenous variances and a calving year effect in ...

    African Journals Online (AJOL)

    Test-day records at the beginning and at the end of the lactation period have higher variances than tests in the middle of lactation. Furthermore, first lactations have lower means and variances compared to second and third lactations. This is a deviation from the basic assumptions required for the application of repeatability models.

  19. Estimating Predictive Variance for Statistical Gas Distribution Modelling

    International Nuclear Information System (INIS)

    Lilienthal, Achim J.; Asadi, Sahar; Reggente, Matteo

    2009-01-01

    Recent publications in statistical gas distribution modelling have proposed algorithms that model the mean and variance of a distribution. This paper argues that estimating the predictive concentration variance is not merely a gradual improvement but rather a significant step to advance the field. This is, first, because such models much better fit the particular structure of gas distributions, which exhibit strong fluctuations with considerable spatial variations as a result of the intermittent character of gas dispersal. Second, because estimating the predictive variance makes it possible to evaluate the model quality in terms of the data likelihood. This offers a solution to the problem of ground truth evaluation, which has always been a critical issue for gas distribution modelling. It also enables solid comparisons of different modelling approaches, and provides the means to learn meta parameters of the model, to determine when the model should be updated or re-initialised, or to suggest new measurement locations based on the current model. We also point out directions of related ongoing or potential future research work.

  20. Epidemiology and trends in non-fatal self-harm in three centres in England: 2000-2007.

    Science.gov (United States)

    Bergen, Helen; Hawton, Keith; Waters, Keith; Cooper, Jayne; Kapur, Navneet

    2010-12-01

    Self-harm is a common reason for presentation to a general hospital, with a strong association with suicide. Trends in self-harm are an important indicator of community psychopathology, with resource implications for health services and relevance to suicide prevention policy. Previous reports in the UK have come largely from single centres. To investigate trends in non-fatal self-harm in six general hospitals in three centres from the Multicentre Study of Self-harm in England, and to relate these to trends in suicide. Data on self-harm presentations to general hospital emergency departments in Oxford (one), Manchester (three) and Derby (two) were analysed over the 8-year period 1 January 2000 to 31 December 2007. Rates of self-harm declined significantly over 8 years for males in three centres (Oxford: -14%; Manchester: -25%; Derby: -18%) and females in two centres (Oxford: -2% (not significant); Manchester: -13%; Derby: -17%), in keeping with national trends in suicide. A decreasing proportion and number of episodes involved self-poisoning alone, and an increasing proportion and number involved other self-injury (e.g. hanging, jumping, traffic related). Episodes involving self-cutting alone showed a slight decrease in numbers over time. Trends in alcohol use at the time of self-harm and repetition within 1 year were stable. There were decreasing rates of non-fatal self-harm over the study period that paralleled trends in suicide in England. This was reflected mainly in a decline in emergency department presentations for self-poisoning.

  1. Estimating integrated variance in the presence of microstructure noise using linear regression

    Science.gov (United States)

    Holý, Vladimír

    2017-07-01

    Using financial high-frequency data to estimate the integrated variance of asset prices is beneficial, but as the number of observations increases, so-called microstructure noise occurs. This noise can significantly bias the realized variance estimator. We propose a method for estimating the integrated variance that is robust to microstructure noise, as well as a test for the presence of the noise. Our method utilizes linear regression in which realized variances estimated from different data subsamples act as the dependent variable while the number of observations acts as the explanatory variable. We compare the proposed estimator with other methods on simulated data for several microstructure noise structures.
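    The regression idea can be sketched on simulated data. Under i.i.d. noise with variance ω², the realized variance from n subsampled returns satisfies E[RV(n)] ≈ IV + 2ω²n, so regressing RV(n) on n gives the integrated variance as the intercept and the noise variance from the slope. All parameter values below are our own illustrative choices, not the paper's setup:

    ```python
    import random

    random.seed(42)

    # --- simulate one day of noisy high-frequency log-prices --------------------
    N = 23400          # number of intraday observations
    iv = 1e-4          # true integrated variance of the efficient price
    omega2 = 2.5e-7    # variance of i.i.d. microstructure noise
    efficient = [0.0]
    for _ in range(N):
        efficient.append(efficient[-1] + random.gauss(0.0, (iv / N) ** 0.5))
    observed = [p + random.gauss(0.0, omega2 ** 0.5) for p in efficient]

    def realized_variance(prices, n):
        # realized variance from n (approximately) evenly spaced returns
        idx = [round(i * (len(prices) - 1) / n) for i in range(n + 1)]
        return sum((prices[idx[i + 1]] - prices[idx[i]]) ** 2 for i in range(n))

    # --- regress RV(n) on n:  E[RV(n)] ~ IV + 2*omega2*n ------------------------
    ns = list(range(100, 2001, 100))
    rvs = [realized_variance(observed, n) for n in ns]
    mn = sum(ns) / len(ns)
    mr = sum(rvs) / len(rvs)
    slope = (sum((n - mn) * (r - mr) for n, r in zip(ns, rvs))
             / sum((n - mn) ** 2 for n in ns))
    iv_hat = mr - slope * mn   # intercept: noise-robust integrated variance
    omega2_hat = slope / 2     # slope: estimate of the noise variance
    ```

    At the full sampling frequency the plain realized variance is dominated by the 2ω²N noise term, while the regression intercept stays near the true integrated variance.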

  2. Adaptive bayesian analysis for binomial proportions

    CSIR Research Space (South Africa)

    Das, Sonali

    2008-10-01

    Full Text Available We consider the problem of testing the proportion of some trait. For example, say we are interested in inferring the effectiveness of a certain intervention teaching strategy by comparing the proportion of 'proficient' teachers before and after an intervention. The number...

  3. Individual and collective bodies: using measures of variance and association in contextual epidemiology.

    Science.gov (United States)

    Merlo, J; Ohlsson, H; Lynch, K F; Chaix, B; Subramanian, S V

    2009-12-01

    Social epidemiology investigates both individuals and their collectives. Although the limits that define the individual bodies are very apparent, the collective body's geographical or cultural limits (eg "neighbourhood") are more difficult to discern. Also, epidemiologists normally investigate causation as changes in group means. However, many variables of interest in epidemiology may cause a change in the variance of the distribution of the dependent variable. In spite of that, variance is normally considered a measure of uncertainty or a nuisance rather than a source of substantive information. This reasoning is also true in many multilevel investigations, whereas understanding the distribution of variance across levels should be fundamental. This means-centric reductionism is mostly concerned with risk factors and creates a paradoxical situation, as social medicine is not only interested in increasing the (mean) health of the population, but also in understanding and decreasing inappropriate health and health care inequalities (variance). Critical essay and literature review. The present study promotes (a) the application of measures of variance and clustering to evaluate the boundaries one uses in defining collective levels of analysis (eg neighbourhoods), (b) the combined use of measures of variance and means-centric measures of association, and (c) the investigation of causes of health variation (variance-altering causation). Both measures of variance and means-centric measures of association need to be included when performing contextual analyses. The variance approach, a new aspect of contextual analysis that cannot be interpreted in means-centric terms, allows perspectives to be expanded.
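    One of the variance measures advocated here, the variance partition coefficient (the intraclass correlation of a two-level linear model), is simple to compute once the level-specific variances are estimated. A minimal sketch with hypothetical values:

    ```python
    def variance_partition_coefficient(between_var, within_var):
        # share of total outcome variance that lies between collectives
        # (e.g. neighbourhoods) rather than between individuals within them
        return between_var / (between_var + within_var)

    # hypothetical two-level model estimates: neighbourhood-level variance 0.2,
    # individual-level (residual) variance 0.8
    vpc = variance_partition_coefficient(0.2, 0.8)
    ```

    A VPC near zero suggests the chosen collective boundary (e.g. administrative neighbourhoods) captures little clustering of the outcome, which is exactly the boundary-evaluation use the authors promote.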

  4. Icon arrays help younger children's proportional reasoning.

    Science.gov (United States)

    Ruggeri, Azzurra; Vagharchakian, Laurianne; Xu, Fei

    2018-06-01

    We investigated the effects of two context variables, presentation format (icon arrays or numerical frequencies) and time limitation (limited or unlimited time), on the proportional reasoning abilities of children aged 7 and 10 years, as well as adults. Participants had to select, between two sets of tokens, the one that offered the highest likelihood of drawing a gold token, that is, the set of elements with the greater proportion of gold tokens. Results show that participants performed better in the unlimited time condition. Moreover, besides a general developmental improvement in accuracy, our results show that younger children performed better when proportions were presented as icon arrays, whereas older children and adults were similarly accurate in the two presentation format conditions. Statement of contribution What is already known on this subject? There is a developmental improvement in proportional reasoning accuracy. Icon arrays facilitate reasoning in adults with low numeracy. What does this study add? Participants were more accurate when they were given more time to make the proportional judgement. Younger children's proportional reasoning was more accurate when they were presented with icon arrays. Proportional reasoning abilities correlate with working memory, approximate number system, and subitizing skills. © 2018 The British Psychological Society.

  5. Genetic heterogeneity of within-family variance of body weight in Atlantic salmon (Salmo salar).

    Science.gov (United States)

    Sonesson, Anna K; Odegård, Jørgen; Rönnegård, Lars

    2013-10-17

    Canalization is defined as the stability of a genotype against minor variations in both environment and genetics. Genetic variation in degree of canalization causes heterogeneity of within-family variance. The aims of this study are twofold: (1) quantify genetic heterogeneity of (within-family) residual variance in Atlantic salmon and (2) test whether the observed heterogeneity of (within-family) residual variance can be explained by simple scaling effects. Analysis of body weight in Atlantic salmon using a double hierarchical generalized linear model (DHGLM) revealed substantial heterogeneity of within-family variance. The 95% prediction interval for within-family variance ranged from ~0.4 to 1.2 kg², implying that the within-family variance of the most extreme high families is expected to be approximately three times larger than that of the extreme low families. For cross-sectional data, DHGLM with an animal mean sub-model resulted in severe bias, while a corresponding sire-dam model was appropriate. Heterogeneity of variance was not sensitive to Box-Cox transformations of phenotypes, which implies that heterogeneity of variance exists beyond what would be expected from simple scaling effects. Substantial heterogeneity of within-family variance was found for body weight in Atlantic salmon. A tendency towards higher variance with higher means (scaling effects) was observed, but heterogeneity of within-family variance existed beyond what could be explained by simple scaling effects. For cross-sectional data, using the animal mean sub-model in the DHGLM resulted in biased estimates of variance components, which differed substantially both from a standard linear mean animal model and a sire-dam DHGLM model. Although genetic differences in canalization were observed, selection for increased canalization is difficult, because there is limited individual information for the variance sub-model, especially when based on cross-sectional data. Furthermore, potential macro

  6. A comparative review of estimates of the proportion unchanged genes and the false discovery rate

    Directory of Open Access Journals (Sweden)

    Broberg Per

    2005-08-01

    Full Text Available Abstract Background In the analysis of microarray data one generally produces a vector of p-values that for each gene give the likelihood of obtaining equally strong evidence of change by pure chance. The distribution of these p-values is a mixture of two components corresponding to the changed genes and the unchanged ones. The focus of this article is how to estimate the proportion unchanged and the false discovery rate (FDR and how to make inferences based on these concepts. Six published methods for estimating the proportion unchanged genes are reviewed, two alternatives are presented, and all are tested on both simulated and real data. All estimates but one make do without any parametric assumptions concerning the distributions of the p-values. Furthermore, the estimation and use of the FDR and the closely related q-value is illustrated with examples. Five published estimates of the FDR and one new one are presented and tested. Implementations in R code are available. Results A simulation model based on the distribution of real microarray data plus two real data sets were used to assess the methods. The proposed alternative methods for estimating the proportion unchanged fared very well, and gave evidence of low bias and very low variance. Different methods perform well depending upon whether there are few or many regulated genes. Furthermore, the methods for estimating FDR showed a varying performance, and were sometimes misleading. The new method had a very low error. Conclusion The concept of the q-value or false discovery rate is useful in practical research, despite some theoretical and practical shortcomings. However, it seems possible to challenge the performance of the published methods, and there is likely scope for further developing the estimates of the FDR. The new methods provide the scientist with more options to choose a suitable method for any particular experiment. The article advocates the use of the conjoint information
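    A common nonparametric estimator of the proportion unchanged, in the spirit of the methods reviewed (the abstracts are in R; this Python sketch and its mixture settings are ours), uses the fact that null p-values are uniform: the fraction of p-values above a threshold λ, rescaled by 1/(1−λ), estimates π0.

    ```python
    import random

    def estimate_pi0(pvalues, lam=0.5):
        # Null p-values are Uniform(0,1), so about pi0*(1-lam) of all p-values
        # should exceed lam (regulated genes rarely do); rescaling recovers pi0.
        above = sum(1 for p in pvalues if p > lam)
        return min(1.0, above / (len(pvalues) * (1.0 - lam)))

    random.seed(1)
    # simulated experiment: 80% unchanged genes (uniform p-values),
    # 20% regulated genes (p-values concentrated near zero)
    pvals = [random.random() for _ in range(8000)]
    pvals += [random.random() * 0.01 for _ in range(2000)]
    pi0_hat = estimate_pi0(pvals)  # close to the true value of 0.8
    ```

    Estimates of this form feed directly into FDR and q-value calculations, since the FDR at a cutoff scales with π0.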

  7. The derivative based variance sensitivity analysis for the distribution parameters and its computation

    International Nuclear Information System (INIS)

    Wang, Pan; Lu, Zhenzhou; Ren, Bo; Cheng, Lei

    2013-01-01

    The output variance is an important measure of the performance of a structural system, and it is always influenced by the distribution parameters of the inputs. In order to identify the influential distribution parameters and make clear how those distribution parameters influence the output variance, this work presents a derivative-based variance sensitivity decomposition according to Sobol′s variance decomposition, and proposes derivative-based main and total sensitivity indices. By transforming the derivatives of the various orders of variance contributions into the form of expectations via kernel functions, the proposed main and total sensitivity indices can be seen as a “by-product” of Sobol′s variance-based sensitivity analysis without any additional output evaluation. Since Sobol′s variance-based sensitivity indices can be computed efficiently by the sparse grid integration method, this work also employs sparse grid integration to compute the derivative-based main and total sensitivity indices. Several examples are used to demonstrate the rationality of the proposed sensitivity indices and the accuracy of the applied method.

  8. A Mean-Variance Criterion for Economic Model Predictive Control of Stochastic Linear Systems

    DEFF Research Database (Denmark)

    Sokoler, Leo Emil; Dammann, Bernd; Madsen, Henrik

    2014-01-01

    , the tractability of the resulting optimal control problem is addressed. We use a power management case study to compare different variations of the mean-variance strategy with EMPC based on the certainty equivalence principle. The certainty equivalence strategy is much more computationally efficient than the mean......-variance strategies, but it does not account for the variance of the uncertain parameters. Openloop simulations suggest that a single-stage mean-variance approach yields a significantly lower operating cost than the certainty equivalence strategy. In closed-loop, the single-stage formulation is overly conservative...... be modified to perform almost as well as the two-stage mean-variance formulation. Nevertheless, we argue that the mean-variance approach can be used both as a strategy for evaluating less computational demanding methods such as the certainty equivalence method, and as an individual control strategy when...

  9. Investigating the minimum achievable variance in a Monte Carlo criticality calculation

    Energy Technology Data Exchange (ETDEWEB)

    Christoforou, Stavros; Eduard Hoogenboom, J. [Delft University of Technology, Mekelweg 15, 2629 JB Delft (Netherlands)

    2008-07-01

    The sources of variance in a Monte Carlo criticality calculation are identified and their contributions analyzed. A zero-variance configuration is initially simulated using analytically calculated adjoint functions for biasing. From there, the various sources are analyzed. It is shown that the minimum threshold comes from the fact that the fission source is approximated. In addition, the merits of a simple variance reduction method, such as implicit capture, are shown when compared to an analog simulation. Finally, it is shown that when non-exact adjoint functions are used for biasing, the variance reduction is rather insensitive to the quality of the adjoints, suggesting that the generation of the adjoints should have as low a CPU cost as possible, in order to offset the CPU cost of implementing the biasing of a simulation. (authors)

  10. Electronics for proportional drift tubes

    International Nuclear Information System (INIS)

    Fremont, G.; Friend, B.; Mess, K.H.; Schmidt-Parzefall, W.; Tarle, J.C.; Verweij, H.; Geske, K.; Riege, H.; Schuett, J.; Semenov, Y. (CERN-Hamburg-Amsterdam-Rome-Moscow Collaboration)

    1980-01-01

    An electronic system for the read-out of a large number of proportional drift tubes (16,000) has been designed. This system measures deposited charge and drift-time of the charge of a particle traversing a proportional drift tube. A second event can be accepted during the read-out of the system. Up to 40 typical events can be collected and buffered before a data transfer to a computer is necessary. (orig.)

  11. Automatic Bayes Factors for Testing Equality- and Inequality-Constrained Hypotheses on Variances.

    Science.gov (United States)

    Böing-Messing, Florian; Mulder, Joris

    2018-05-03

    In comparing characteristics of independent populations, researchers frequently expect a certain structure of the population variances. These expectations can be formulated as hypotheses with equality and/or inequality constraints on the variances. In this article, we consider the Bayes factor for testing such (in)equality-constrained hypotheses on variances. Application of Bayes factors requires specification of a prior under every hypothesis to be tested. However, specifying subjective priors for variances based on prior information is a difficult task. We therefore consider so-called automatic or default Bayes factors. These methods avoid the need for the user to specify priors by using information from the sample data. We present three automatic Bayes factors for testing variances. The first is a Bayes factor with equal priors on all variances, where the priors are specified automatically using a small share of the information in the sample data. The second is the fractional Bayes factor, where a fraction of the likelihood is used for automatic prior specification. The third is an adjustment of the fractional Bayes factor such that the parsimony of inequality-constrained hypotheses is properly taken into account. The Bayes factors are evaluated by investigating different properties such as information consistency and large sample consistency. Based on this evaluation, it is concluded that the adjusted fractional Bayes factor is generally recommendable for testing equality- and inequality-constrained hypotheses on variances.

  12. Cognitive and Metacognitive Aspects of Proportional Reasoning

    Science.gov (United States)

    Modestou, Modestina; Gagatsis, Athanasios

    2010-01-01

    In this study we attempt to propose a new model of proportional reasoning based on both bibliographical and research data. This is accomplished with the help of three written tests involving analogical, proportional, and non-proportional situations that were administered to pupils from grades 7 to 9. The results suggest the existence of a…

  13. UV spectral fingerprinting and analysis of variance-principal component analysis: a useful tool for characterizing sources of variance in plant materials.

    Science.gov (United States)

    Luthria, Devanand L; Mukhopadhyay, Sudarsan; Robbins, Rebecca J; Finley, John W; Banuelos, Gary S; Harnly, James M

    2008-07-23

    UV spectral fingerprints, in combination with analysis of variance-principal components analysis (ANOVA-PCA), can differentiate between cultivars and growing conditions (or treatments) and can be used to identify sources of variance. Broccoli samples, composed of two cultivars, were grown under seven different conditions or treatments (four levels of Se-enriched irrigation waters, organic farming, and conventional farming with 100 and 80% irrigation based on crop evaporation and transpiration rate). Freeze-dried powdered samples were extracted with methanol-water (60:40, v/v) and analyzed with no prior separation. Spectral fingerprints were acquired for the UV region (220-380 nm) using a 50-fold dilution of the extract. ANOVA-PCA was used to construct subset matrices that permitted easy verification of the hypothesis that cultivar and treatment contributed to a difference in the chemical expression of the broccoli. The sums of the squares of the same matrices were used to show that cultivar, treatment, and analytical repeatability contributed 30.5, 68.3, and 1.2% of the variance, respectively.
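The variance split reported above (cultivar, treatment, analytical repeatability) comes from sums of squares of factor-mean matrices. A minimal two-factor sketch under simplifying assumptions: the data, factor labels, and the omission of an interaction term are all invented for illustration.

```python
# Toy sums-of-squares partition for a balanced two-factor design
# (cultivar x treatment), the ANOVA step underlying ANOVA-PCA.

def variance_partition(data):
    """data: dict {(cultivar, treatment): value} for a balanced design.
    Returns (SS_cultivar, SS_treatment, SS_residual)."""
    obs = list(data.items())
    n = len(obs)
    grand = sum(v for _, v in obs) / n

    def level_mean(idx, level):
        vals = [v for k, v in obs if k[idx] == level]
        return sum(vals) / len(vals)

    ss_c = sum((level_mean(0, k[0]) - grand) ** 2 for k, _ in obs)
    ss_t = sum((level_mean(1, k[1]) - grand) ** 2 for k, _ in obs)
    ss_res = sum((v - level_mean(0, k[0]) - level_mean(1, k[1]) + grand) ** 2
                 for k, v in obs)
    return ss_c, ss_t, ss_res
```

In a balanced additive layout the three terms add up to the total sum of squares, so each can be reported as a percentage of variance, as in the 30.5/68.3/1.2% split above.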

  14. Levine's guide to SPSS for analysis of variance

    CERN Document Server

    Braver, Sanford L; Page, Melanie

    2003-01-01

    A greatly expanded and heavily revised second edition, this popular guide provides instructions and clear examples for running analyses of variance (ANOVA) and several other related statistical tests of significance with SPSS. No other guide offers the program statements required for the more advanced tests in analysis of variance. All of the programs in the book can be run using any version of SPSS, including versions 11 and 11.5. A table at the end of the preface indicates where each type of analysis (e.g., simple comparisons) can be found for each type of design (e.g., mixed two-factor desi

  15. An efficient sampling approach for variance-based sensitivity analysis based on the law of total variance in the successive intervals without overlapping

    Science.gov (United States)

    Yun, Wanying; Lu, Zhenzhou; Jiang, Xian

    2018-06-01

    To efficiently execute the variance-based global sensitivity analysis, the law of total variance in the successive intervals without overlapping is proved at first, on which an efficient space-partition sampling-based approach is subsequently proposed in this paper. Through partitioning the sample points of output into different subsets according to different inputs, the proposed approach can efficiently evaluate all the main effects concurrently by one group of sample points. In addition, there is no need for optimizing the partition scheme in the proposed approach. The maximum length of subintervals is decreased by increasing the number of sample points of model input variables in the proposed approach, which guarantees the convergence condition of the space-partition approach well. Furthermore, a new interpretation on the thought of partition is illuminated from the perspective of the variance ratio function. Finally, three test examples and one engineering application are employed to demonstrate the accuracy, efficiency and robustness of the proposed approach.
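The space-partition idea can be illustrated with a toy model: sort one group of sample points by a single input, split them into successive non-overlapping intervals, and estimate the main effect as Var(E[Y|Xi])/Var(Y) via the law of total variance. The model, sample size, and bin count below are illustrative assumptions, not the paper's test cases.

```python
# Toy illustration of the space-partition estimator: one set of samples,
# sorted by the input of interest and cut into equal, non-overlapping bins.
import random

def main_effect(xs, ys, n_bins=20):
    """First-order index ~ Var(E[Y|X]) / Var(Y), estimated over bins of X."""
    pairs = sorted(zip(xs, ys))
    n = len(pairs)
    size = n // n_bins
    overall = sum(ys) / n
    var_y = sum((y - overall) ** 2 for y in ys) / n
    between = 0.0
    for b in range(n_bins):
        chunk = [y for _, y in pairs[b * size:(b + 1) * size]]
        m = sum(chunk) / len(chunk)
        between += len(chunk) * (m - overall) ** 2
    return (between / n) / var_y

random.seed(0)
n = 20000
x1 = [random.random() for _ in range(n)]
x2 = [random.random() for _ in range(n)]
y = [a + 0.1 * b for a, b in zip(x1, x2)]  # x1 dominates the output
s1, s2 = main_effect(x1, y), main_effect(x2, y)
```

Note that both main effects come from the same group of sample points; no separate sample set per input is needed, which is the efficiency gain the abstract describes.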

  16. Trends in sickness absence in Denmark

    DEFF Research Database (Denmark)

    Johansen, Kristina; Bihrmann, Kristine; Mikkelsen, Sigurd

    2009-01-01

    's Authority, and the Labor Force Survey indicated a stable and largely unaltered pattern of sickness absence during the last 20 years. Findings from Statistics Denmark showed an increase in the cumulative incidence proportion from 6.6 to 7.5% among employed people between 2000 and 2007. CONCLUSION: Our data...... a linear regression analysis to analyze time trends in sickness absence based on datasets from the Danish Employers Confederation, the State Employer's Authority, the Labour Force Survey, and Statistics Denmark. RESULTS: The findings from the Confederation of Danish Employers, the State Employer...

  17. Relating arithmetical techniques of proportion to geometry

    DEFF Research Database (Denmark)

    Wijayanti, Dyana

    2015-01-01

    The purpose of this study is to investigate how textbooks introduce and treat the theme of proportion in geometry (similarity) and arithmetic (ratio and proportion), and how these themes are linked to each other in the books. To pursue this aim, we use the anthropological theory of the didactic. Considering 6 common Indonesian textbooks in use, we describe how proportion is explained and appears in examples and exercises, using an explicit reference model of the mathematical organizations of both themes. We also identify how the proportion themes of the geometry and arithmetic domains are linked. Our…

  18. A load factor based mean-variance analysis for fuel diversification

    Energy Technology Data Exchange (ETDEWEB)

    Gotham, Douglas; Preckel, Paul; Ruangpattana, Suriya [State Utility Forecasting Group, Purdue University, West Lafayette, IN (United States); Muthuraman, Kumar [McCombs School of Business, University of Texas, Austin, TX (United States); Rardin, Ronald [Department of Industrial Engineering, University of Arkansas, Fayetteville, AR (United States)

    2009-03-15

    Fuel diversification implies the selection of a mix of generation technologies for long-term electricity generation. The goal is to strike a good balance between reduced costs and reduced risk. The method of analysis that has been advocated and adopted for such studies is the mean-variance portfolio analysis pioneered by Markowitz (Markowitz, H., 1952. Portfolio selection. Journal of Finance 7(1), 77-91). However, the standard mean-variance methodology does not account for the ability of various fuels/technologies to adapt to varying loads. Such analysis often provides results that are easily dismissed by regulators and practitioners as unacceptable, since load cycles play critical roles in fuel selection. To account for such issues and still retain the convenience and elegance of the mean-variance approach, we propose a variant of the mean-variance analysis that decomposes the load into various types and utilizes the load factors of each load type. We also illustrate the approach using data for the state of Indiana and demonstrate the ability of the model to provide useful insights. (author)
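A hedged sketch of the underlying Markowitz-style trade-off for two generation technologies. The cost means, variances, and correlation are invented, not the Indiana data, and the load-factor decomposition the paper adds is omitted.

```python
# Two-technology Markowitz sketch: expected cost vs. cost variance of a
# generation mix. All numbers are toy values for illustration.

def portfolio_stats(w, mu, var, corr):
    """Mean and variance of total cost with weight w on technology 0."""
    mean = w * mu[0] + (1 - w) * mu[1]
    cov = corr * (var[0] * var[1]) ** 0.5
    variance = w ** 2 * var[0] + (1 - w) ** 2 * var[1] + 2 * w * (1 - w) * cov
    return mean, variance

# cheap-but-volatile fuel vs. expensive-but-stable fuel (invented values)
mu, var, corr = (40.0, 60.0), (100.0, 25.0), 0.2
best_w = min((w / 100 for w in range(101)),
             key=lambda w: portfolio_stats(w, mu, var, corr)[1])
```

Sweeping `w` traces the cost/risk frontier; with these toy numbers the minimum-variance mix puts about 14% on the volatile fuel.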

  19. Analysis of Gene Expression Variance in Schizophrenia Using Structural Equation Modeling

    Directory of Open Access Journals (Sweden)

    Anna A. Igolkina

    2018-06-01

    Schizophrenia (SCZ) is a psychiatric disorder of unknown etiology. There is evidence suggesting that aberrations in neurodevelopment are a significant attribute of schizophrenia pathogenesis and progression. To identify biologically relevant molecular abnormalities affecting neurodevelopment in SCZ, we used cultured neural progenitor cells derived from olfactory neuroepithelium (CNON cells). Here, we tested the hypothesis that variance in gene expression differs between individuals from the SCZ and control groups. In CNON cells, variance in gene expression was significantly higher in SCZ samples in comparison with control samples. Variance in gene expression was enriched in five molecular pathways: serine biosynthesis, PI3K-Akt, MAPK, neurotrophin and focal adhesion. More than 14% of variance in disease status was explained within the logistic regression model (C-value = 0.70) by predictors accounting for gene expression in 69 genes from these five pathways. Structural equation modeling (SEM) was applied to explore how the structure of these five pathways was altered between SCZ patients and controls. Four out of five pathways showed differences in the estimated relationships among genes: between KRAS and NF1, and KRAS and SOS1 in the MAPK pathway; between PSPH and SHMT2 in serine biosynthesis; between AKT3 and TSC2 in the PI3K-Akt signaling pathway; and between CRK and RAPGEF1 in the focal adhesion pathway. Our analysis provides evidence that variance in gene expression is an important characteristic of SCZ, and that SEM is a promising method for uncovering altered relationships between specific genes, thus suggesting affected gene regulation associated with the disease. We identified altered gene-gene interactions in pathways enriched for genes with increased variance in expression in SCZ. These pathways and loci were previously implicated in SCZ, providing further support for the hypothesis that gene expression variance plays an important role in the etiology

  20. Mixed emotions: Sensitivity to facial variance in a crowd of faces.

    Science.gov (United States)

    Haberman, Jason; Lee, Pegan; Whitney, David

    2015-01-01

    The visual system automatically represents summary information from crowds of faces, such as the average expression. This is a useful heuristic insofar as it provides critical information about the state of the world, not simply information about the state of one individual. However, the average alone is not sufficient for making decisions about how to respond to a crowd. The variance or heterogeneity of the crowd--the mixture of emotions--conveys information about the reliability of the average, essential for determining whether the average can be trusted. Despite its importance, the representation of variance within a crowd of faces has yet to be examined. This is addressed here in three experiments. In the first experiment, observers viewed a sample set of faces that varied in emotion, and then adjusted a subsequent set to match the variance of the sample set. To isolate variance as the summary statistic of interest, the average emotion of both sets was random. Results suggested that observers had information regarding crowd variance. The second experiment verified that this was indeed a uniquely high-level phenomenon, as observers were unable to derive the variance of an inverted set of faces as precisely as that of an upright set of faces. The third experiment replicated and extended the first two experiments using the method of constant stimuli. Together, these results show that the visual system is sensitive to emergent information about the emotional heterogeneity, or ambivalence, in crowds of faces.

  1. On Stabilizing the Variance of Dynamic Functional Brain Connectivity Time Series.

    Science.gov (United States)

    Thompson, William Hedley; Fransson, Peter

    2016-12-01

    Assessment of dynamic functional brain connectivity based on functional magnetic resonance imaging (fMRI) data is an increasingly popular strategy to investigate temporal dynamics of the brain's large-scale network architecture. Current practice when deriving connectivity estimates over time is to use the Fisher transformation, which aims to stabilize the variance of correlation values that fluctuate around varying true correlation values. It is, however, unclear how well the stabilization of signal variance performed by the Fisher transformation works for each connectivity time series, when the true correlation is assumed to be fluctuating. This is of importance because many subsequent analyses either assume or perform better when the time series have stable variance or adhere to an approximate Gaussian distribution. In this article, using simulations and analysis of resting-state fMRI data, we analyze the effect of applying different variance stabilization strategies on connectivity time series. We focus our investigation on the Fisher transformation, the Box-Cox (BC) transformation and an approach that combines both transformations. Our results show that, if the intention of stabilizing the variance is to use metrics on the time series, where stable variance or a Gaussian distribution is desired (e.g., clustering), the Fisher transformation is not optimal and may even skew connectivity time series away from being Gaussian. Furthermore, we show that the suboptimal performance of the Fisher transformation can be substantially improved by including an additional BC transformation after the dynamic functional connectivity time series has been Fisher transformed.
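A minimal sketch of the combined strategy examined in the article: Fisher-transform a synthetic sliding-window correlation series, then apply a Box-Cox transform. The fixed λ and the positivity shift are simplifying assumptions; in practice λ would be estimated per time series.

```python
# Synthetic illustration: Fisher z-transform of a windowed correlation
# series followed by a Box-Cox transform with an assumed fixed lambda.
import math

def fisher(rs):
    """Fisher z-transform of correlations in (-1, 1)."""
    return [math.atanh(r) for r in rs]

def box_cox(xs, lam):
    """Box-Cox transform; values are shifted to be strictly positive."""
    shift = 1e-6 - min(xs) if min(xs) <= 0 else 0.0
    zs = [x + shift for x in xs]
    if lam == 0:
        return [math.log(z) for z in zs]
    return [(z ** lam - 1.0) / lam for z in zs]

series = [0.1, 0.3, 0.5, 0.7]               # toy windowed correlations
stabilized = box_cox(fisher(series), lam=0.5)
```

Both transforms are strictly monotone, so the ordering of connectivity values is preserved while their distribution is reshaped toward Gaussianity.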

  2. Origin and consequences of the relationship between protein mean and variance.

    Science.gov (United States)

    Vallania, Francesco Luigi Massimo; Sherman, Marc; Goodwin, Zane; Mogno, Ilaria; Cohen, Barak Alon; Mitra, Robi David

    2014-01-01

    Cell-to-cell variance in protein levels (noise) is a ubiquitous phenomenon that can increase fitness by generating phenotypic differences within clonal populations of cells. An important challenge is to identify the specific molecular events that control noise. This task is complicated by the strong dependence of a protein's cell-to-cell variance on its mean expression level through a power-law like relationship (σ² ∝ μ^1.69). Here, we dissect the nature of this relationship using a stochastic model parameterized with experimentally measured values. This framework naturally recapitulates the power-law like relationship (σ² ∝ μ^1.6) and accurately predicts protein variance across the yeast proteome (r² = 0.935). Using this model we identified two distinct mechanisms by which protein variance can be increased. Variables that affect promoter activation, such as nucleosome positioning, increase protein variance by changing the exponent of the power-law relationship. In contrast, variables that affect processes downstream of promoter activation, such as mRNA and protein synthesis, increase protein variance in a mean-dependent manner following the power-law. We verified our findings experimentally using an inducible gene expression system in yeast. We conclude that the power-law-like relationship between noise and protein mean is due to the kinetics of promoter activation. Our results provide a framework for understanding how molecular processes shape stochastic variation across the genome.
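The power-law exponent in such a relationship is typically read off as the slope of a log-log regression. A small sketch on noiseless synthetic data generated with exponent 1.6 (the data and the constant 0.5 are invented; real proteome measurements would scatter around the fitted line):

```python
# Fitting the exponent b in sigma^2 = a * mu^b by ordinary least squares
# on log-transformed data.
import math

def fit_exponent(means, variances):
    xs = [math.log(m) for m in means]
    ys = [math.log(v) for v in variances]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

means = [10.0, 100.0, 1000.0, 10000.0]
variances = [0.5 * m ** 1.6 for m in means]   # synthetic, exponent 1.6
b = fit_exponent(means, variances)
```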

  3. Proportional Reasoning and the Visually Impaired

    Science.gov (United States)

    Hilton, Geoff; Hilton, Annette; Dole, Shelley L.; Goos, Merrilyn; O'Brien, Mia

    2012-01-01

    Proportional reasoning is an important aspect of formal thinking that is acquired during the developmental years that approximate the middle years of schooling. Students who fail to acquire sound proportional reasoning often experience difficulties in subjects that require quantitative thinking, such as science, technology, engineering, and…

  4. Variance Swap Replication: Discrete or Continuous?

    Directory of Open Access Journals (Sweden)

    Fabien Le Floc’h

    2018-02-01

    The popular replication formula used to price variance swaps assumes continuity of traded option strikes. In practice, however, only a discrete set of option strikes is traded on the market. We present here different discrete replication strategies and explain why the continuous replication price is more relevant.
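One common discrete strategy replaces the continuous 1/K² strip of out-of-the-money option prices with a sum over traded strikes using trapezoid-style weights. A hedged sketch with toy strikes and prices; this is one of several possible discretizations, not necessarily the one the article advocates, and the small forward-adjustment terms of the full formula are omitted.

```python
# Discretized variance-swap replication: weight each out-of-the-money
# option price Q(K_i) by dK_i / K_i^2 on a finite strike grid.
import math

def var_swap_strike(strikes, otm_prices, T, r=0.0):
    """Approximates (2 e^{rT} / T) * integral of Q(K) / K^2 dK."""
    total = 0.0
    for i, (k, q) in enumerate(zip(strikes, otm_prices)):
        lo = strikes[i - 1] if i > 0 else k
        hi = strikes[i + 1] if i + 1 < len(strikes) else k
        dk = (hi - lo) / 2.0                 # trapezoid-style spacing
        total += dk * q / k ** 2
    return 2.0 * math.exp(r * T) / T * total

strikes = [80.0, 90.0, 100.0, 110.0, 120.0]
prices = [1.0, 2.5, 4.0, 2.2, 0.8]   # OTM puts below, OTM calls above
fair_var = var_swap_strike(strikes, prices, T=1.0)
```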

  5. Impact of Damping Uncertainty on SEA Model Response Variance

    Science.gov (United States)

    Schiller, Noah; Cabell, Randolph; Grosveld, Ferdinand

    2010-01-01

    Statistical Energy Analysis (SEA) is commonly used to predict high-frequency vibroacoustic levels. This statistical approach provides the mean response over an ensemble of random subsystems that share the same gross system properties, such as density, size, and damping. Recently, techniques have been developed to predict the ensemble variance as well as the mean response. However, these techniques do not account for uncertainties in the system properties. In the present paper, uncertainty in the damping loss factor is propagated through SEA to obtain more realistic prediction bounds that account for both ensemble and damping variance. The analysis is performed on a floor-equipped cylindrical test article that resembles an aircraft fuselage. Realistic bounds on the damping loss factor are determined from measurements acquired on the sidewall of the test article. The analysis demonstrates that uncertainties in damping have the potential to significantly impact the mean and variance of the predicted response.

  6. Precuneus proportions and cortical folding: A morphometric evaluation on a racially diverse human sample.

    Science.gov (United States)

    Bruner, Emiliano; Pereira-Pedro, Ana Sofia; Chen, Xu; Rilling, James K

    2017-05-01

    Recent analyses have suggested that the size and proportions of the precuneus are remarkably variable among adult humans, representing a major source of geometrical difference in midsagittal brain morphology. The same area also represents the main midsagittal brain difference between humans and chimpanzees, being more expanded in our species. Enlargement of the upper parietal surface is a specific feature of Homo sapiens, when compared with other fossil hominids, suggesting the involvement of these cortical areas in recent modern human evolution. Here, we provide a survey on midsagittal brain morphology by investigating whether precuneus size represents the largest component of variance within a larger and racially diverse sample of 265 adult humans. Additionally, we investigate the relationship between precuneus shape variation and folding patterns. Precuneus proportions are confirmed to be a major source of human brain variation even when racial variability is considered. Larger precuneus size is associated with additional precuneal gyri, generally in its anterior district. Spatial variation is most pronounced in the dorsal areas, with no apparent differences between hemispheres, between sexes, or among different racial groups. These dorsal areas integrate somatic and visual information together with the lateral elements of the parietal cortex, representing a crucial node for self-centered mental imagery. The histological basis and functional significance of this intra-specific variation in the upper precuneus remains to be evaluated. Copyright © 2017 Elsevier GmbH. All rights reserved.

  7. The Impact of Jump Distributions on the Implied Volatility of Variance

    DEFF Research Database (Denmark)

    Nicolato, Elisa; Pisani, Camilla; Pedersen, David Sloth

    2017-01-01

    We consider a tractable affine stochastic volatility model that generalizes the seminal Heston (1993) model by augmenting it with jumps in the instantaneous variance process. In this framework, we consider both realized variance options and VIX options, and we examine the impact of the distribution of jumps on the associated implied volatility smile. We provide sufficient conditions for the asymptotic behavior of the implied volatility of variance for small and large strikes. In particular, by selecting alternative jump distributions, we show that one can obtain fundamentally different shapes…

  8. Replication Variance Estimation under Two-phase Sampling in the Presence of Non-response

    Directory of Open Access Journals (Sweden)

    Muqaddas Javed

    2014-09-01

    Kim and Yu (2011) discussed a replication variance estimator for two-phase stratified sampling. In this paper, estimators for the mean are proposed in two-phase stratified sampling for different situations of non-response at the first and second phases. The expressions for the variances of these estimators are derived. Furthermore, replication-based jackknife estimators of these variances are also derived. A simulation study has been conducted to investigate the performance of the suggested estimators.
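The delete-one jackknife that underlies such replication variance estimators is easy to sketch for the simplest case, the variance of a sample mean; the two-phase weighting and stratification discussed in the paper are omitted here.

```python
# Delete-one jackknife variance of a sample mean: the building block of
# replication variance estimation (survey weights omitted for brevity).

def jackknife_variance(sample):
    n = len(sample)
    total = sum(sample)
    full_mean = total / n
    replicates = [(total - x) / (n - 1) for x in sample]  # leave-one-out means
    return (n - 1) / n * sum((r - full_mean) ** 2 for r in replicates)
```

For the mean this reproduces the textbook s²/n exactly; the appeal of the replication approach is that the same recipe extends to nonlinear estimators and complex weighting where no closed-form variance exists.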

  9. Thermospheric mass density model error variance as a function of time scale

    Science.gov (United States)

    Emmert, J. T.; Sutton, E. K.

    2017-12-01

    In the increasingly crowded low-Earth orbit environment, accurate estimation of orbit prediction uncertainties is essential for collision avoidance. Poor characterization of such uncertainty can result in unnecessary and costly avoidance maneuvers (false positives) or disregard of a collision risk (false negatives). Atmospheric drag is a major source of orbit prediction uncertainty, and is particularly challenging to account for because it exerts a cumulative influence on orbital trajectories and is therefore not amenable to representation by a single uncertainty parameter. To address this challenge, we examine the variance of measured accelerometer-derived and orbit-derived mass densities with respect to predictions by thermospheric empirical models, using the data-minus-model variance as a proxy for model uncertainty. Our analysis focuses mainly on the power spectrum of the residuals, and we construct an empirical model of the variance as a function of time scale (from 1 hour to 10 years), altitude, and solar activity. We find that the power spectral density approximately follows a power-law process but with an enhancement near the 27-day solar rotation period. The residual variance increases monotonically with altitude between 250 and 550 km. There are two components to the variance dependence on solar activity: one component is 180 degrees out of phase (largest variance at solar minimum), and the other component lags 2 years behind solar maximum (largest variance in the descending phase of the solar cycle).

  10. Analysis of trend in temperature and rainfall time series of an Indian arid region: comparative evaluation of salient techniques

    Science.gov (United States)

    Machiwal, Deepesh; Gupta, Ankit; Jha, Madan Kumar; Kamble, Trupti

    2018-04-01

    This study investigated trends in 35 years (1979-2013) of temperature (maximum, Tmax, and minimum, Tmin) and rainfall at annual and seasonal (pre-monsoon, monsoon, post-monsoon, and winter) scales for 31 grid points in a coastal arid region of India. Box-whisker plots of the annual temperature and rainfall time series depict systematic spatial gradients. Trends were examined by applying eight tests, such as Kendall rank correlation (KRC), Spearman rank order correlation (SROC), Mann-Kendall (MK), four modified MK tests, and innovative trend analysis (ITA). Trend magnitudes were quantified by Sen's slope estimator, and a new method was adopted to assess the significance of linear trends in MK-test statistics. It was found that significant serial correlation is prominent in the annual and post-monsoon Tmax and Tmin, and pre-monsoon Tmin. The KRC and MK tests yielded similar results in close resemblance with the SROC test. The performance of two modified MK tests considering variance-correction approaches was found superior to the KRC, MK, modified MK with pre-whitening, and ITA tests. The performance of the original MK test is poor due to the presence of serial correlation, whereas the ITA method is over-sensitive in identifying trends. Significantly increasing trends are more prominent in Tmin than Tmax. Further, both the annual and monsoon rainfall time series have a significantly increasing trend of 9 mm year⁻¹. The sequential significance of the linear trend in MK test statistics is very strong (R² ≥ 0.90) in the annual and pre-monsoon Tmin (90% of grid points), and strong (R² ≥ 0.75) in monsoon Tmax (68% of grid points), monsoon, post-monsoon, and winter Tmin (respectively 65, 55, and 48% of grid points), as well as in the annual and monsoon rainfall (respectively 68 and 61% of grid points). Finally, this study recommends use of the variance-corrected MK test for the precise identification of trends. It is emphasized that rising Tmax may hamper crop growth due to enhanced
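The core of the MK test and Sen's slope estimator is compact enough to sketch; the variance-correction for serial correlation that the study recommends, and the significance calculation, are deliberately omitted from this illustration.

```python
# Core of the (uncorrected) Mann-Kendall test plus Sen's slope estimator.

def mann_kendall_s(xs):
    """S statistic: #(increasing pairs) - #(decreasing pairs)."""
    n = len(xs)
    s = 0
    for i in range(n - 1):
        for j in range(i + 1, n):
            d = xs[j] - xs[i]
            s += (d > 0) - (d < 0)
    return s

def sens_slope(xs):
    """Median of all pairwise slopes: a robust trend magnitude."""
    slopes = sorted((xs[j] - xs[i]) / (j - i)
                    for i in range(len(xs) - 1)
                    for j in range(i + 1, len(xs)))
    m = len(slopes)
    mid = m // 2
    return slopes[mid] if m % 2 else (slopes[mid - 1] + slopes[mid]) / 2.0
```

A strongly positive S with a positive Sen's slope indicates an increasing trend, as reported here for Tmin and monsoon rainfall; the variance-corrected variants change only how the significance of S is assessed, not S itself.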

  11. Proportional gas scintillation detectors and their applications

    International Nuclear Information System (INIS)

    Petr, I.

    1978-01-01

    The principle of a gas proportional scintillation detector and its function are described. The dependence of the energy resolution of Si(Li) and xenon proportional detectors on the input window size is given. A typical design is shown of a xenon detector used for X-ray spectrometry at energies of 277 eV to 5.898 keV and at gas pressures of 98 to 270 kPa. Gas proportional scintillation detectors show considerably better energy resolution than common proportional counters, and even better resolution than semiconductor Si(Li) detectors at low X-ray energies. For detection areas smaller than 25 mm², Si(Li) detectors show better resolution, especially at higher X-ray energies. For window areas of 25 to 190 mm² both types of detectors are equal; for window areas exceeding 190 mm² the proportional scintillation detector has the higher energy resolution. (B.S.)

  12. How the Weak Variance of Momentum Can Turn Out to be Negative

    Science.gov (United States)

    Feyereisen, M. R.

    2015-05-01

    Weak values are average quantities; therefore investigating their associated variance is crucial to understanding their place in quantum mechanics. We develop the concept of a position-postselected weak variance of momentum as cohesively as possible, building primarily on material from Moyal (Mathematical Proceedings of the Cambridge Philosophical Society, Cambridge University Press, Cambridge, 1949) and Sonego (Found Phys 21(10):1135, 1991). The weak variance is defined in terms of the Wigner function, using a standard construction from probability theory. We show this corresponds to a measurable quantity, which is not itself a weak value. It also leads naturally to a connection between the imaginary part of the weak value of momentum and the quantum potential. We study how the negativity of the Wigner function causes negative weak variances, and the implications this has for a class of `subquantum' theories. We also discuss the role of weak variances in studying determinism, deriving the classical limit from a variational principle.

  13. Variance gradients and uncertainty budgets for nonlinear measurement functions with independent inputs

    International Nuclear Information System (INIS)

    Campanelli, Mark; Kacker, Raghu; Kessel, Rüdiger

    2013-01-01

    A novel variance-based measure for global sensitivity analysis, termed a variance gradient (VG), is presented for constructing uncertainty budgets under the Guide to the Expression of Uncertainty in Measurement (GUM) framework for nonlinear measurement functions with independent inputs. The motivation behind VGs is the desire of metrologists to understand which inputs' variance reductions would most effectively reduce the variance of the measurand. VGs are particularly useful when the application of the first supplement to the GUM is indicated because of the inadequacy of measurement function linearization. However, VGs reduce to a commonly understood variance decomposition in the case of a linear(ized) measurement function with independent inputs for which the original GUM readily applies. The usefulness of VGs is illustrated by application to an example from the first supplement to the GUM, as well as to the benchmark Ishigami function. A comparison of VGs to other available sensitivity measures is made. (paper)

  14. Variance in parametric images: direct estimation from parametric projections

    International Nuclear Information System (INIS)

    Maguire, R.P.; Leenders, K.L.; Spyrou, N.M.

    2000-01-01

    Recent work has shown that it is possible to apply linear kinetic models to dynamic projection data in PET in order to calculate parameter projections. These can subsequently be back-projected to form parametric images - maps of parameters of physiological interest. Critical to the application of these maps, in testing for significant changes between normal physiology and pathophysiology, is an assessment of the statistical uncertainty. In this context, parametric images also include simple integral images from, e.g., [O-15]-water used to calculate statistical parametric maps (SPMs). This paper revisits the concept of parameter projections and presents a more general formulation of the parameter projection derivation as well as a method to estimate parameter variance in projection space, showing which analysis methods (models) can be used. Using simulated pharmacokinetic image data we show that a method based on an analysis in projection space inherently calculates the mathematically rigorous pixel variance. This results in an estimate that is as accurate as either estimating the variance in image space during model fitting, or estimating it by comparison across sets of parametric images - as might be done between individuals in a group pharmacokinetic PET study. The method based on projections has, however, a higher computational efficiency, and is also shown to be more precise, as reflected in smooth variance distribution images when compared to the other methods. (author)

  15. Trends in the dietary patterns and prevalence of obesity among Greenlandic school children

    DEFF Research Database (Denmark)

    Schnohr, C; Pedersen, J M; Alcón, M C G

    2004-01-01

    OBJECTIVE: The aim of the study was to examine the trends in the dietary patterns of selected food items and in the prevalence of self-perceived obesity in a population of Greenlandic schoolchildren. STUDY DESIGN: The study is based on three school surveys among Greenlandic schoolchildren, class 6......,057 and 2,010 pupils, respectively. RESULTS: The intake of vegetables has increased significantly since 1994, and the intake of fruits, sweets and soft drinks has decreased significantly at the 5% level. An unchanged high proportion of schoolchildren report being on a diet or considering themselves obese...... CONCLUSION: Most of the observed trends are positive with regard to the intake of vegetables and sweets and the consumption of soft drinks. The fact that a high proportion of schoolchildren consider themselves to be obese must be assumed to have a negative impact on the psychological well-being of this population......

  16. A geometric approach to multiperiod mean variance optimization of assets and liabilities

    OpenAIRE

    Leippold, Markus; Trojani, Fabio; Vanini, Paolo

    2005-01-01

    We present a geometric approach to discrete-time multiperiod mean-variance portfolio optimization that largely simplifies the mathematical analysis and the economic interpretation of such model settings. We show that multiperiod mean-variance optimal policies can be decomposed into an orthogonal set of basis strategies, each having a clear economic interpretation. This implies that the corresponding multiperiod mean-variance frontiers are spanned by an orthogonal basis of dynamic returns. Spec...

  17. Bayesian inference on proportional elections.

    Directory of Open Access Journals (Sweden)

    Gabriel Hideki Vatanabe Brunello

    Full Text Available Polls for majoritarian voting systems usually show estimates of the percentage of votes for each candidate. However, proportional voting systems do not necessarily guarantee that the candidate with the highest percentage of votes will be elected. Thus, traditional methods used in majoritarian elections cannot be applied to proportional elections. In this context, the purpose of this paper was to perform Bayesian inference on proportional elections considering the Brazilian system of seat distribution. More specifically, a methodology was developed to estimate the probability that a given party will have representation in the Chamber of Deputies. Inferences were made in a Bayesian framework using the Monte Carlo simulation technique, and the developed methodology was applied to data from the Brazilian elections for Members of the Legislative Assembly and Federal Chamber of Deputies in 2010. A performance rate was also presented to evaluate the efficiency of the methodology. Calculations and simulations were carried out using the free R statistical software.
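A minimal sketch of the kind of Monte Carlo computation described: a Dirichlet posterior over vote shares and d'Hondt seat allocation are our assumptions (the poll counts below are made up, and the paper's actual R methodology may differ), but they illustrate how "probability of representation" can be estimated by simulation.

```python
import numpy as np

rng = np.random.default_rng(42)

def dhondt(votes, n_seats):
    """Allocate seats by the d'Hondt highest-averages method."""
    votes = np.asarray(votes, dtype=float)
    seats = np.zeros(len(votes), dtype=int)
    for _ in range(n_seats):
        seats[np.argmax(votes / (seats + 1))] += 1
    return seats

# Hypothetical poll counts for four parties (made-up numbers).
poll_counts = np.array([420, 310, 180, 90])
n_seats = 10
n_sims = 20_000

# Posterior over vote shares: Dirichlet with a flat prior.
shares = rng.dirichlet(poll_counts + 1, size=n_sims)

# For each posterior draw, allocate seats and record who got representation.
got_seat = np.zeros(len(poll_counts))
for s in shares:
    got_seat += dhondt(s, n_seats) > 0
prob_representation = got_seat / n_sims
print(prob_representation)
```

Large parties get a seat in essentially every draw, while the smallest party's probability reflects how often its posterior share clears the last d'Hondt quotient.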

  18. Adjustment of Measurements with Multiplicative Errors: Error Analysis, Estimates of the Variance of Unit Weight, and Effect on Volume Estimation from LiDAR-Type Digital Elevation Models

    Directory of Open Access Journals (Sweden)

    Yun Shi

    2014-01-01

    Full Text Available Modern observation technology has verified that measurement errors can be proportional to the true values of measurements, as in GPS, VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada published in 2000 on multiplicative error models to analytical error analysis of quantities of practical interest and estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEMs) have been constructed as if the errors were additive. We simulate a model landslide, assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative measurement errors on DEM construction and on the estimate of landslide mass volume from the constructed DEM.
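The defining property of a multiplicative error model, error dispersion proportional to the true value, means the relative (not absolute) residual is the homoscedastic quantity to pool when estimating a unit-weight variance. A simulation sketch (not the paper's LS estimators; the values are made up):

```python
import numpy as np

rng = np.random.default_rng(7)

# Multiplicative error model: each measurement is the true value times
# (1 + e), so the error standard deviation is proportional to the true value.
true_vals = np.array([10.0, 50.0, 200.0, 1000.0])
sigma = 0.02                                   # 2% relative error
n_rep = 5000

samples = true_vals * (1.0 + sigma * rng.standard_normal((n_rep, true_vals.size)))

# Empirical standard deviations scale with the true values ...
rel_sd = samples.std(axis=0) / true_vals       # all close to sigma

# ... so the relative residual is the homoscedastic quantity, and pooling
# relative residuals gives a natural unit-weight variance estimate.
sigma_hat = ((samples - true_vals) / true_vals).std()
print(rel_sd, sigma_hat)
```

Treating such data as if the errors were additive (constant absolute variance) would over-weight the large measurements, which is the DEM-construction issue the abstract raises.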

  19. PROPORTIONS AND HUMAN SCALE IN DAMASCENE COURTYARD HOUSES

    Directory of Open Access Journals (Sweden)

    M. Salim Ferwati

    2008-03-01

    Full Text Available Interior designers, architects, landscape architects, and even urban designers agree that the environment, as a form of non-verbal communication, has a symbolic dimension to it. As for its aesthetic dimension, beauty seems to be related to a certain proportion, partially and as a whole. A suitable proportion leaves a good impression upon beholders, especially when it matches human proportion. That, in fact, was the underlying belief of Le Corbusier, according to which he developed his Modulor concept. This study searches for a modular, or proportion, system that governs the design of the Damascene traditional house. Through geometrical and mathematical examination of 28 traditional houses, it was found that certain proportional relationships existed; however, these relationships were not fixed ones. The study relied on analyzing the Iwan elevation as well as the proportion of the inner courtyard in relation to the building area. Charts, diagrams and tables are produced to summarize the results.

  20. Mean-variance portfolio selection and efficient frontier for defined contribution pension schemes

    DEFF Research Database (Denmark)

    Højgaard, Bjarne; Vigna, Elena

    We solve a mean-variance portfolio selection problem in the accumulation phase of a defined contribution pension scheme. The efficient frontier, which is found for the 2 asset case as well as the n + 1 asset case, gives the member the possibility to decide his own risk/reward profile. The mean...... as a mean-variance optimization problem. It is shown that the corresponding mean and variance of the final fund belong to the efficient frontier and also the opposite, that each point on the efficient frontier corresponds to a target-based optimization problem. Furthermore, numerical results indicate...... that the largely adopted lifestyle strategy seems to be very far from being efficient in the mean-variance setting....
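A sketch of the single-period, two-asset mean-variance frontier underlying such models (illustrative parameters of our choosing; the paper's multi-period pension setting is considerably richer):

```python
import numpy as np

# Two risky assets (illustrative parameters).
mu = np.array([0.04, 0.08])          # expected returns
sd = np.array([0.05, 0.15])          # return standard deviations
rho = 0.2
cov01 = rho * sd[0] * sd[1]

w = np.linspace(0.0, 1.0, 1001)      # weight on asset 1, remainder in asset 2
port_mu = w * mu[0] + (1 - w) * mu[1]
port_var = w**2 * sd[0]**2 + (1 - w)**2 * sd[1]**2 + 2 * w * (1 - w) * cov01

# Analytic global minimum-variance weight for two assets.
w_min = (sd[1]**2 - cov01) / (sd[0]**2 + sd[1]**2 - 2 * cov01)
grid_min = w[np.argmin(port_var)]

# The efficient frontier is the upper branch: portfolios whose mean exceeds
# that of the minimum-variance portfolio.
print(w_min, grid_min, port_mu[np.argmin(port_var)])
```

Each point on this frontier corresponds to one risk/reward trade-off; the abstract's claim is that each such point can equivalently be reached as the solution of a target-based problem.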

  1. ASYMMETRY OF MARKET RETURNS AND THE MEAN VARIANCE FRONTIER

    OpenAIRE

    SENGUPTA, Jati K.; PARK, Hyung S.

    1994-01-01

    The hypothesis that skewness and asymmetry have no significant impact on the mean-variance frontier is found to be strongly violated by monthly U.S. data over the period January 1965 through December 1974. This result raises serious doubts about whether common market portfolios such as the S&P 500, value-weighted and equal-weighted returns can serve as suitable proxies for mean-variance efficient portfolios in the CAPM framework. A new test for assessing the impact of skewness on the variance fr...

  2. Global Gravity Wave Variances from Aura MLS: Characteristics and Interpretation

    Science.gov (United States)

    2008-12-01

    ...slight longitudinal variations, with secondary high-latitude peaks occurring over Greenland and Europe. As the QBO changes to the westerly phase, the... equatorial GW temperature variances from suborbital data (e.g., Eckermann et al. 1995). The extratropical wave variances are generally larger in the... emanating from tropopause altitudes, presumably radiated from tropospheric jet stream instabilities associated with baroclinic storm systems that...

  3. Use of genomic models to study genetic control of environmental variance

    DEFF Research Database (Denmark)

    Yang, Ye; Christensen, Ole Fredslund; Sorensen, Daniel

    2011-01-01

    . The genomic model commonly found in the literature, with marker effects affecting mean only, is extended to investigate putative effects at the level of the environmental variance. Two classes of models are proposed and their behaviour, studied using simulated data, indicates that they are capable...... of detecting genetic variation at the level of mean and variance. Implementation is via Markov chain Monte Carlo (McMC) algorithms. The models are compared in terms of a measure of global fit, in their ability to detect QTL effects and in terms of their predictive power. The models are subsequently fitted...... to back fat thickness data in pigs. The analysis of back fat thickness shows that the data support genomic models with effects on the mean but not on the variance. The relative sizes of experiments necessary to detect effects on the mean and variance are discussed, and an extension of the McMC algorithm...

  4. Trends in complicated newborn hospital stays & costs, 2002-2009: implications for the future.

    Science.gov (United States)

    Trudnak Fowler, Tara; Fairbrother, Gerry; Owens, Pamela; Garro, Nicole; Pellegrini, Cynthia; Simpson, Lisa

    2014-01-01

    With the steady growth in Medicaid enrollment since the recent recession, concerns have been raised about care for newborns with complications. This paper uses all-payer administrative data from the Healthcare Cost and Utilization Project (HCUP) Nationwide Inpatient Sample (NIS), to examine trends from 2002 through 2009 in complicated newborn hospital stays, and explores the relationship between expected sources of payment and reasons for hospitalizations. Trends in complicated newborn stays, expected sources of payment, costs, and length of stay were examined. A logistic regression was conducted to explore likely payer source for the most prevalent diagnoses in 2009. Complicated births and hospital discharges within 30 days of birth remained relatively constant between 2002 and 2009, but average costs per discharge increased substantially (p<.001 for trend). Most strikingly, over time, the proportion of complicated births billed to Medicaid increased, while the proportion paid by private payers decreased. Among complicated births, the most prevalent diagnoses were preterm birth/low birth weight (23%), respiratory distress (18%), and jaundice (10%). The top two diagnoses (41% of newborns) accounted for 61% of the aggregate cost. For infants with complications, those with Medicaid were more likely to be complicated due to preterm birth/low birth weight and respiratory distress, while those with private insurance were more likely to be complicated due to jaundice. State Medicaid programs are paying for an increasing proportion of births and costly complicated births. Policies to prevent common birth complications have the potential to reduce costs for public programs and improve birth outcomes.

  5. Some novel inequalities for fuzzy variables on the variance and its rational upper bound

    Directory of Open Access Journals (Sweden)

    Xiajie Yi

    2016-02-01

    Full Text Available Variance is of great significance in measuring the degree of deviation and has gained extensive use in many practical fields. The definition of the variance on the basis of the credibility measure was first put forward in 2002. Following this idea, the calculation of the accurate value of the variance for some special fuzzy variables, like the symmetric and asymmetric triangular fuzzy numbers and the Gaussian fuzzy numbers, is presented in this paper, which turns out to be far more complicated. Thus, in order to better implement variance in real-life projects like risk control and quality management, we suggest a rational upper bound of the variance based on an inequality, together with its calculation formula, which can largely simplify the calculation process within a reasonable range. Meanwhile, some comparisons between the variance and its rational upper bound are presented to show the rationality of the latter. Furthermore, two inequalities regarding the rational upper bound of variance and standard deviation of the sum of two fuzzy variables and their individual variances and standard deviations are proved. Subsequently, some numerical examples are presented to show the effectiveness and the feasibility of the proposed inequalities.

  6. Nonparametric trend estimation in the presence of fractal noise: application to fMRI time-series analysis.

    Science.gov (United States)

    Afshinpour, Babak; Hossein-Zadeh, Gholam-Ali; Soltanian-Zadeh, Hamid

    2008-06-30

    Unknown low frequency fluctuations called "trend" are observed in noisy time-series measured for different applications. In some disciplines, they carry primary information while in other fields such as functional magnetic resonance imaging (fMRI) they carry nuisance effects. In all cases, however, it is necessary to estimate them accurately. In this paper, a method for estimating trend in the presence of fractal noise is proposed and applied to fMRI time-series. To this end, a partly linear model (PLM) is fitted to each time-series. The parametric and nonparametric parts of PLM are considered as contributions of hemodynamic response and trend, respectively. Using the whitening property of wavelet transform, the unknown components of the model are estimated in the wavelet domain. The results of the proposed method are compared to those of other parametric trend-removal approaches such as spline and polynomial models. It is shown that the proposed method improves activation detection and decreases variance of the estimated parameters relative to the other methods.
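For context, the polynomial baseline that the proposed wavelet-domain method is compared against can be sketched in a few lines (the trend, signal, and noise below are illustrative choices of ours; this omits the fractal-noise and partly-linear-model aspects entirely):

```python
import numpy as np

rng = np.random.default_rng(2)

# A noisy series: slow nuisance "trend" plus a faster signal of interest.
t = np.linspace(0.0, 1.0, 500)
trend = 2.0 * t**2 - t                       # low-frequency nuisance
signal = 0.5 * np.sin(2 * np.pi * 10 * t)    # component we want to keep
series = trend + signal + 0.1 * rng.standard_normal(t.size)

# Low-order polynomial trend removal (one of the parametric baselines).
coeffs = np.polyfit(t, series, deg=2)
est_trend = np.polyval(coeffs, t)
detrended = series - est_trend

# The fitted polynomial should track the nuisance trend closely while
# leaving the oscillatory signal in the residual.
rmse = np.sqrt(np.mean((est_trend - trend)**2))
print(rmse)
```

The paper's point is that such parametric fits can fail when the trend shape is unknown and the noise is fractal, which motivates the nonparametric, wavelet-whitened estimator.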

  7. A class of multi-period semi-variance portfolio for petroleum exploration and development

    Science.gov (United States)

    Guo, Qiulin; Li, Jianzhong; Zou, Caineng; Guo, Yujuan; Yan, Wei

    2012-10-01

    Variance is substituted by semi-variance in Markowitz's portfolio selection model. For dynamic valuation of exploration and development projects, one-period portfolio selection is extended to multiple periods. In this article, a class of multi-period semi-variance exploration and development portfolio models is formulated for the first time. In addition, a hybrid genetic algorithm, which uses the position displacement strategy of the particle swarm optimiser as a mutation operation, is applied to solve the multi-period semi-variance model. For this class of portfolio model, numerical results show that the model is effective and feasible.
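The semi-variance that replaces variance in the objective penalizes only below-target outcomes. A minimal computation (made-up returns, with the target taken as the mean):

```python
import numpy as np

# Semi-variance penalizes only returns below a target (here the mean),
# replacing the symmetric variance of the Markowitz objective.
returns = np.array([0.05, -0.02, 0.03, -0.07, 0.10, 0.01, -0.01, 0.04])
target = returns.mean()

downside = np.minimum(returns - target, 0.0)   # below-target deviations only
semi_variance = np.mean(downside**2)
variance = returns.var()
print(semi_variance, variance)
```

By construction the downside and upside squared deviations sum to the ordinary variance, so the semi-variance never exceeds it; it is preferred when only downside risk matters, as in project valuation.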

  8. Bayesian evaluation of constrained hypotheses on variances of multiple independent groups

    NARCIS (Netherlands)

    Böing-Messing, F.; van Assen, M.A.L.M.; Hofman, A.D.; Hoijtink, H.; Mulder, J.

    2017-01-01

    Research has shown that independent groups often differ not only in their means, but also in their variances. Comparing and testing variances is therefore of crucial importance to understand the effect of a grouping variable on an outcome variable. Researchers may have specific expectations

  9. Analysis of conditional genetic effects and variance components in developmental genetics.

    Science.gov (United States)

    Zhu, J

    1995-12-01

    A genetic model with additive-dominance effects and genotype x environment interactions is presented for quantitative traits with time-dependent measures. The genetic model for phenotypic means at time t conditional on phenotypic means measured at previous time (t-1) is defined. Statistical methods are proposed for analyzing conditional genetic effects and conditional genetic variance components. Conditional variances can be estimated by minimum norm quadratic unbiased estimation (MINQUE) method. An adjusted unbiased prediction (AUP) procedure is suggested for predicting conditional genetic effects. A worked example from cotton fruiting data is given for comparison of unconditional and conditional genetic variances and additive effects.

  10. Development of a treatability variance guidance document for US DOE mixed-waste streams

    International Nuclear Information System (INIS)

    Scheuer, N.; Spikula, R.; Harms, T.

    1990-03-01

    In response to the US Department of Energy's (DOE's) anticipated need for variances from the Resource Conservation and Recovery Act (RCRA) Land Disposal Restrictions (LDRs), a treatability variance guidance document was prepared. The guidance manual is for use by DOE facilities and operations offices. The manual was prepared as a part of an ongoing effort by DOE-EH to provide guidance for the operations offices and facilities to comply with the RCRA (LDRs). A treatability variance is an alternative treatment standard granted by EPA for a restricted waste. Such a variance is not an exemption from the requirements of the LDRs, but rather is an alternative treatment standard that must be met before land disposal. The manual, Guidance For Obtaining Variance From the Treatment Standards of the RCRA Land Disposal Restrictions (1), leads the reader through the process of evaluating whether a variance from the treatment standard is a viable approach and through the data-gathering and data-evaluation processes required to develop a petition requesting a variance. The DOE review and coordination process is also described and model language for use in petitions for DOE radioactive mixed waste (RMW) is provided. The guidance manual focuses on RMW streams, however the manual also is applicable to nonmixed, hazardous waste streams. 4 refs

  11. [Trends in smoking in an urban population over recent decades].

    Science.gov (United States)

    Villalbí, Joan R; Bartoll, Xavier; Rodríguez-Sanz, Maica; Borrell, Carme

    2016-05-06

    The objective of this study is to describe the distribution of smoking in the population and to assess changes and trends over recent decades. Cross-sectional study in a sample of the non-institutionalized resident population (n=3,509) in Barcelona (Catalonia, Spain) using data from persons over 14 years of age from the health survey of 2011, and assessing trends for 1983-2011 using previous surveys. Dependent variables are having ever been a smoker, having quit, being a current smoker, and smoking daily. Independent variables include sex, age, and time. Prevalence and proportions are estimated, stratifying or adjusting for age. The prevalence of daily smokers is 18.8% in 2011: 22.2% for men and 15.9% for women. The age groups with the highest smoking prevalence are 25-34 years for men and 15-24 for women. From 1983 to 2011 the reduction among men has been intense, and for women the prevalence has been decreasing since the survey of 2000. Among smokers, the proportion of both genders who do not smoke daily has increased. The smoking epidemic shows promising trends over recent years. The data do not lend support to the hardening hypothesis for current smokers. Smokers are a shrinking minority, although to improve public health it would be desirable to speed up the process of change. Copyright © 2015 Elsevier España, S.L.U. All rights reserved.

  12. On the noise variance of a digital mammography system

    International Nuclear Information System (INIS)

    Burgess, Arthur

    2004-01-01

    A recent paper by Cooper et al. [Med. Phys. 30, 2614-2621 (2003)] contains some apparently anomalous results concerning the relationship between pixel variance and x-ray exposure for a digital mammography system. They found an unexpected peak in a display domain pixel variance plot as a function of 1/mAs (their Fig. 5) with a decrease in the range corresponding to high display data values, corresponding to low x-ray exposures. As they pointed out, if the detector response is linear in exposure and the transformation from raw to display data scales is logarithmic, then pixel variance should be a monotonically increasing function in the figure. They concluded that the total system transfer curve, between input exposure and display image data values, is not logarithmic over the full exposure range. They separated data analysis into two regions and plotted the logarithm of display image pixel variance as a function of the logarithm of the mAs used to produce the phantom images. They found a slope of minus one for high mAs values and concluded that the transfer function is logarithmic in this region. They found a slope of 0.6 for the low mAs region and concluded that the transfer curve was neither linear nor logarithmic for low exposure values. It is known that the digital mammography system investigated by Cooper et al. has a linear relationship between exposure and raw data values [Vedantham et al., Med. Phys. 27, 558-567 (2000)]. The purpose of this paper is to show that the variance effect found by Cooper et al. (their Fig. 5) arises because the transformation from the raw data scale (14 bits) to the display scale (12 bits), for the digital mammography system they investigated, is not logarithmic for raw data values less than about 300 (display data values greater than about 3300). At low raw data values the transformation is linear and prevents over-ranging of the display data scale. Parametric models for the two transformations will be presented. Results of pixel...
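A delta-method sketch (our reconstruction, not the paper's parametric models) of why a logarithmic display transform yields the slope of minus one in the high-mAs region: for a linear detector with quantum-limited noise, the raw value q is proportional to mAs and Var(q) is proportional to mAs, so for display data d = k ln q + c,

```latex
\operatorname{Var}(d) \;\approx\; \left(\frac{\partial d}{\partial q}\right)^{2}\operatorname{Var}(q)
\;=\; \frac{k^{2}\,\operatorname{Var}(q)}{q^{2}}
\;\propto\; \frac{\mathrm{mAs}}{\mathrm{mAs}^{2}} \;=\; \frac{1}{\mathrm{mAs}},
```

i.e. slope minus one on a log-log plot. Where the raw-to-display mapping is instead linear (low raw values), Var(d) is proportional to Var(q) and hence to mAs, so the minus-one relationship breaks down, consistent with the anomaly described.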

  13. Variance of a product with application to uranium estimation

    International Nuclear Information System (INIS)

    Lowe, V.W.; Waterman, M.S.

    1976-01-01

    The U in a container can be determined either directly by NDA or by estimating the weight of material in the container and the concentration of U in this material. It is important to examine the statistical properties of estimating the amount of U by multiplying the estimates of weight and concentration. The variance of the product determines the accuracy of the estimate of the amount of uranium. This paper examines the properties of estimates of the variance of the product of two random variables.
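For independent random variables, the variance of the product has a closed form, Var(WC) = Var(W)Var(C) + Var(W)E[C]² + Var(C)E[W]², which is easy to verify by simulation (the weight and concentration figures below are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative figures: container net weight W and U concentration C,
# modeled as independent random variables.
mu_w, sd_w = 100.0, 2.0
mu_c, sd_c = 0.30, 0.01

# Exact variance of the product of independent W and C.
exact = sd_w**2 * sd_c**2 + sd_w**2 * mu_c**2 + sd_c**2 * mu_w**2

# Monte Carlo check.
n = 1_000_000
w = mu_w + sd_w * rng.standard_normal(n)
c = mu_c + sd_c * rng.standard_normal(n)
mc = (w * c).var()
print(exact, mc)
```

Note the cross term Var(W)Var(C) is usually negligible next to the two mean-weighted terms, which is why the familiar relative-error propagation rule works well in practice.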

  14. Secular trends in hip fractures worldwide: opposing trends East versus West.

    Science.gov (United States)

    Ballane, Ghada; Cauley, Jane A; Luckey, Marjorie M; Fuleihan, Ghada El-Hajj

    2014-08-01

    Despite wide variations in hip fracture rates worldwide, the reasons for such differences are not clear. Furthermore, secular trends in age-specific hip fracture rates are changing the world map of this devastating disease, with the highest rise projected to occur in developing countries. The aim of our investigation is to systematically characterize secular trends in hip fractures worldwide, examine new data for various ethnic groups in the United States and evidence for divergent temporal patterns, and investigate potential contributing factors for the observed change in their epidemiology. All studies retrieved through a complex Medline Ovid search between 1966 and 2013 were examined. For each selected study, we calculated the percent annual change in age-standardized hip fracture rates de novo. Although occurring at different time points, trend breaks in hip fracture incidence occurred in most Western countries and Oceania. After a steep rise in age-adjusted rates in these regions, a decrease became evident sometime between the mid-seventies and the nineties, depending on the country. Conversely, data are scarce in Asia and South America, with evidence for a continuous rise in hip fracture rates, with the exception of Hong Kong and Taiwan, which seem to follow Western trends. The etiologies of these secular patterns in both the developed and the developing countries have not been fully elucidated, but the impact of urbanization is at least one plausible explanation. Data presented here show close parallels between rising rates of urbanization and hip fractures across disparate geographic locations and cultures. Once the proportion of the urban population stabilized, hip fracture rates also stabilized or began to decrease, perhaps due to the influence of other factors such as birth cohort effects, changes in bone mineral density and BMI, osteoporosis medication use and/or lifestyle interventions such as smoking cessation, improvement in nutritional status and fall

  15. Accounting for non-stationary variance in geostatistical mapping of soil properties

    NARCIS (Netherlands)

    Wadoux, Alexandre M.J.C.; Brus, Dick J.; Heuvelink, Gerard B.M.

    2018-01-01

    Simple and ordinary kriging assume a constant mean and variance of the soil variable of interest. This assumption is often implausible because the mean and/or variance are linked to terrain attributes, parent material or other soil forming factors. In kriging with external drift (KED)

  16. Mix Proportion Design of Asphalt Concrete

    Science.gov (United States)

    Wu, Xianhu; Gao, Lingling; Du, Shoujun

    2017-12-01

    Based on the gradations of AC and SMA, this paper designs a new type of anti-skid mixture that combines the advantages of both. It covers the selection of materials, the calculation of the aggregate mix proportions, and the tests used to determine the optimum asphalt content, and thereby presents the mix proportion design of the asphalt concrete. The paper introduces this new approach to mix proportion design.

  17. Ulnar variance: its relationship to ulnar foveal morphology and forearm kinematics.

    Science.gov (United States)

    Kataoka, Toshiyuki; Moritomo, Hisao; Omokawa, Shohei; Iida, Akio; Murase, Tsuyoshi; Sugamoto, Kazuomi

    2012-04-01

    It is unclear how individual differences in the anatomy of the distal ulna affect kinematics and pathology of the distal radioulnar joint. This study evaluated how ulnar variance relates to ulnar foveal morphology and the pronosupination axis of the forearm. We performed 3-dimensional computed tomography studies in vivo on 28 forearms in maximum supination and pronation to determine the anatomical center of the ulnar distal pole and the forearm pronosupination axis. We calculated the forearm pronosupination axis using a markerless bone registration technique, which determined the pronosupination center as the point where the axis emerges on the distal ulnar surface. We measured the depth of the anatomical center and classified it into 2 types: concave, with a depth of 0.8 mm or more, and flat, with a depth less than 0.8 mm. We examined whether ulnar variance correlated with foveal type and the distance between anatomical and pronosupination centers. A total of 18 cases had a concave-type fovea surrounded by the C-shaped articular facet of the distal pole, and 10 had a flat-type fovea with a flat surface without evident central depression. Ulnar variance of the flat type was 3.5 ± 1.2 mm, which was significantly greater than the 1.2 ± 1.1 mm of the concave type. Ulnar variance positively correlated with distance between the anatomical and pronosupination centers. Flat-type ulnar heads have a significantly greater ulnar variance than concave types. The pronosupination axis passes through the ulnar head more medially and farther from the anatomical center with increasing ulnar variance. This study suggests that ulnar variance is related in part to foveal morphology and pronosupination axis. This information provides a starting point for future studies investigating how foveal morphology relates to distal ulnar problems. Copyright © 2012 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.

  18. The efficiency of the crude oil markets: Evidence from variance ratio tests

    Energy Technology Data Exchange (ETDEWEB)

    Charles, Amelie, E-mail: acharles@audencia.co [Audencia Nantes, School of Management, 8 route de la Joneliere, 44312 Nantes (France); Darne, Olivier, E-mail: olivier.darne@univ-nantes.f [LEMNA, University of Nantes, IEMN-IAE, Chemin de la Censive du Tertre, 44322 Nantes (France)

    2009-11-15

    This study examines the random walk hypothesis for the crude oil markets, using daily data over the period 1982-2008. The weak-form efficient market hypothesis for two crude oil markets (UK Brent and US West Texas Intermediate) is tested with the non-parametric variance ratio tests developed by [Wright J.H., 2000. Alternative variance-ratio tests using ranks and signs. Journal of Business and Economic Statistics, 18, 1-9] and [Belaire-Franch J. and Contreras D., 2004. Ranks and signs-based multiple variance ratio tests. Working paper, Department of Economic Analysis, University of Valencia] as well as the wild-bootstrap variance ratio tests suggested by [Kim, J.H., 2006. Wild bootstrapping variance ratio tests. Economics Letters, 92, 38-43]. We find that the Brent crude oil market is weak-form efficient while the WTI crude oil market seems to be inefficient over the 1994-2008 sub-period, suggesting that deregulation has not improved the efficiency of the WTI crude oil market in the sense of making returns less predictable.
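The variance ratio statistic at the heart of these tests compares the variance of q-period returns with q times the variance of 1-period returns, and equals 1 under a random walk. A minimal sketch (simulated series, not oil-price data; the cited rank/sign and wild-bootstrap refinements are omitted):

```python
import numpy as np

rng = np.random.default_rng(11)

def variance_ratio(log_prices, q):
    """VR(q) = Var(q-period returns) / (q * Var(1-period returns)); ~1 under a random walk."""
    r1 = np.diff(log_prices)
    rq = log_prices[q:] - log_prices[:-q]     # overlapping q-period returns
    return rq.var() / (q * r1.var())

n = 20_000

# A pure random walk should give VR close to 1 ...
rw = np.cumsum(0.01 * rng.standard_normal(n))
vr_rw = variance_ratio(rw, 5)

# ... while a mean-reverting AR(1) series gives VR well below 1.
ar = np.zeros(n)
for t in range(1, n):
    ar[t] = 0.5 * ar[t - 1] + 0.01 * rng.standard_normal()
vr_ar = variance_ratio(ar, 5)
print(vr_rw, vr_ar)
```

Values below 1 indicate negative serial correlation (mean reversion) and above 1 positive serial correlation; either pattern makes returns predictable, contradicting weak-form efficiency.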

  19. The efficiency of the crude oil markets. Evidence from variance ratio tests

    International Nuclear Information System (INIS)

    Charles, Amelie; Darne, Olivier

    2009-01-01

    This study examines the random walk hypothesis for the crude oil markets, using daily data over the period 1982-2008. The weak-form efficient market hypothesis for two crude oil markets (UK Brent and US West Texas Intermediate) is tested with the non-parametric variance ratio tests developed by [Wright J.H., 2000. Alternative variance-ratio tests using ranks and signs. Journal of Business and Economic Statistics, 18, 1-9] and [Belaire-Franch J. and Contreras D., 2004. Ranks and signs-based multiple variance ratio tests. Working paper, Department of Economic Analysis, University of Valencia] as well as the wild-bootstrap variance ratio tests suggested by [Kim, J.H., 2006. Wild bootstrapping variance ratio tests. Economics Letters, 92, 38-43]. We find that the Brent crude oil market is weak-form efficient while the WTI crude oil market seems to be inefficient over the 1994-2008 sub-period, suggesting that deregulation has not improved the efficiency of the WTI crude oil market in the sense of making returns less predictable. (author)

  20. The efficiency of the crude oil markets. Evidence from variance ratio tests

    Energy Technology Data Exchange (ETDEWEB)

    Charles, Amelie [Audencia Nantes, School of Management, 8 route de la Joneliere, 44312 Nantes (France); Darne, Olivier [LEMNA, University of Nantes, IEMN-IAE, Chemin de la Censive du Tertre, 44322 Nantes (France)

    2009-11-15

    This study examines the random walk hypothesis for the crude oil markets, using daily data over the period 1982-2008. The weak-form efficient market hypothesis for two crude oil markets (UK Brent and US West Texas Intermediate) is tested with non-parametric variance ratio tests developed by [Wright J.H., 2000. Alternative variance-ratio tests using ranks and signs. Journal of Business and Economic Statistics, 18, 1-9] and [Belaire-Franch J. and Contreras D., 2004. Ranks and signs-based multiple variance ratio tests. Working paper, Department of Economic Analysis, University of Valencia] as well as the wild-bootstrap variance ratio tests suggested by [Kim, J.H., 2006. Wild bootstrapping variance ratio tests. Economics Letters, 92, 38-43]. We find that the Brent crude oil market is weak-form efficient, while the WTI crude oil market appears to be inefficient over the 1994-2008 sub-period, suggesting that deregulation has not improved the efficiency of the WTI crude oil market in the sense of making returns less predictable. (author)
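As a hedged illustration of the statistic underlying these records, the sketch below computes the classical Lo-MacKinlay-style variance ratio, not the rank/sign or wild-bootstrap refinements the papers actually apply: under a random walk, the variance of overlapping k-period returns divided by k times the one-period variance should be near 1, while predictable (autocorrelated) returns push it away from 1. The data here are synthetic.

```python
import numpy as np

def variance_ratio(returns, k):
    """Variance of overlapping k-period returns divided by k times the
    one-period variance; approximately 1 under a random walk."""
    r = np.asarray(returns, dtype=float)
    r = r - r.mean()
    var1 = np.var(r, ddof=1)
    rk = np.convolve(r, np.ones(k), mode="valid")  # overlapping k-sums
    return np.var(rk, ddof=1) / (k * var1)

rng = np.random.default_rng(0)
white = rng.standard_normal(5000)   # i.i.d. increments: a random walk
ar = np.empty(5000)                 # positively autocorrelated returns
ar[0] = white[0]
for t in range(1, 5000):
    ar[t] = 0.3 * ar[t - 1] + white[t]

vr = variance_ratio(white, 5)
print(round(vr, 2))                  # close to 1 for the random walk
print(variance_ratio(ar, 5) > vr)   # predictability raises the ratio
```

The rank- and sign-based variants replace the returns by their ranks or signs before this computation, which robustifies the test against non-normality.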

  1. Output Power Control of Wind Turbine Generator by Pitch Angle Control using Minimum Variance Control

    Science.gov (United States)

    Senjyu, Tomonobu; Sakamoto, Ryosei; Urasaki, Naomitsu; Higa, Hiroki; Uezato, Katsumi; Funabashi, Toshihisa

    In recent years, there have been problems such as the exhaustion of fossil fuels, e.g., coal and oil, and environmental pollution resulting from their consumption. Effective utilization of renewable energies such as wind energy is expected instead of fossil fuels. Wind energy is not constant, and windmill output is proportional to the cube of wind speed, which causes the generated power of wind turbine generators (WTGs) to fluctuate. In order to reduce the fluctuating components, one method is to control the pitch angle of the windmill blades. In this paper, output power leveling of a wind turbine generator by pitch angle control using adaptive control is proposed. A self-tuning regulator is used in the adaptive control, and the control input is determined by minimum variance control. The proposed controller can compensate the control input to alleviate generated-power fluctuation. Simulation results using an actual detailed model of a wind power system show the effectiveness of the proposed controller.

  2. Computational approaches to analogical reasoning current trends

    CERN Document Server

    Richard, Gilles

    2014-01-01

    Analogical reasoning is known as a powerful mode for drawing plausible conclusions and solving problems. It has been the topic of a huge number of works by philosophers, anthropologists, linguists, psychologists, and computer scientists. As such, it was studied early in artificial intelligence, with a particular renewal of interest in the last decade. The present volume provides a structured view of current research trends on computational approaches to analogical reasoning. It starts with an overview of the field, with an extensive bibliography. The 14 collected contributions cover a large scope of issues. First, the use of analogical proportions and analogies is explained and discussed in various natural language processing problems, as well as in automated deduction. Then, different formal frameworks for handling analogies are presented, dealing with case-based reasoning, heuristic-driven theory projection, commonsense reasoning about incomplete rule bases, logical proportions induced by similarity an...

  3. Why do card issuers charge proportional fees?

    OpenAIRE

    Oz Shy; Zhu Wang

    2008-01-01

    This paper explains why payment card companies charge consumers and merchants fees which are proportional to the transaction values instead of charging a fixed per-transaction fee. Our theory shows that, even in the absence of any cost considerations, card companies earn much higher profit when they charge proportional fees. It is also shown that competition among merchants reduces card companies' gains from using proportional fees relative to a fixed per-transaction fee. Merchants are found ...

  4. Multi-scale analysis of teleconnection indices: climate noise and nonlinear trend analysis

    Directory of Open Access Journals (Sweden)

    C. Franzke

    2009-02-01

    Full Text Available The multi-scale nature and climate noise properties of teleconnection indices are examined by using the Empirical Mode Decomposition (EMD) procedure. The EMD procedure allows for the analysis of non-stationary time series to extract physically meaningful intrinsic mode functions (IMFs) and nonlinear trends. The climatologically relevant monthly mean teleconnection indices of the North Atlantic Oscillation (NAO), the North Pacific index (NP) and the Southern Annular Mode (SAM) are analyzed.

    The significance of the IMFs and trends is tested against the null hypothesis of climate noise. The analysis of surrogate monthly mean time series from a red noise process shows that the EMD procedure is effectively a dyadic filter bank and that the IMFs (except the first IMF) are nearly Gaussian distributed. The distribution of the variance contained in the IMFs of an ensemble of AR(1) simulations is nearly χ² distributed. To test the statistical significance of the IMFs of the teleconnection indices and their nonlinear trends, we utilize an ensemble of corresponding monthly averaged AR(1) processes, which we refer to as climate noise. Our results indicate that most of the interannual and decadal variability of the analysed teleconnection indices cannot be distinguished from climate noise. The NP and SAM indices have significant nonlinear trends, while the NAO has no significant trend when tested against the climate noise hypothesis.
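The climate-noise null described above can be sketched with a simple AR(1) surrogate ensemble. This is a hedged illustration only: the paper's actual test operates on EMD-derived IMFs and trends, whereas this sketch compares the linear trend of a synthetic red-noise index against trends in surrogates that match its lag-1 autocorrelation and variance.

```python
import numpy as np

def ar1_surrogates(x, n_surr=200, seed=None):
    """AR(1) surrogate ensemble matching the lag-1 autocorrelation and
    variance of x (the 'climate noise' null hypothesis)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float) - np.mean(x)
    phi = np.corrcoef(x[:-1], x[1:])[0, 1]      # lag-1 autocorrelation
    sigma = np.std(x) * np.sqrt(1.0 - phi**2)   # innovation std
    surr = np.empty((n_surr, len(x)))
    surr[:, 0] = rng.standard_normal(n_surr) * np.std(x)  # stationary start
    for t in range(1, len(x)):
        surr[:, t] = phi * surr[:, t - 1] + sigma * rng.standard_normal(n_surr)
    return surr

# a synthetic red-noise 'index' with no deterministic trend
rng = np.random.default_rng(1)
idx = np.empty(600)
idx[0] = 0.0
for t in range(1, 600):
    idx[t] = 0.6 * idx[t - 1] + rng.standard_normal()

surr = ar1_surrogates(idx, n_surr=200, seed=2)

# significance of the observed linear trend against the climate-noise null
tt = np.arange(len(idx))
obs = abs(np.polyfit(tt, idx, 1)[0])
null = np.abs([np.polyfit(tt, row, 1)[0] for row in surr])
p_value = np.mean(null >= obs)  # fraction of surrogates with a larger trend
```

A small p_value would indicate a trend that climate noise alone is unlikely to produce; for this trend-free synthetic index it should usually be unremarkable.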

  5. Variance reduction methods applied to deep-penetration problems

    International Nuclear Information System (INIS)

    Cramer, S.N.

    1984-01-01

    All deep-penetration Monte Carlo calculations require variance reduction methods. Before beginning with a detailed approach to these methods, several general comments concerning deep-penetration calculations by Monte Carlo, the associated variance reduction, and the similarities and differences of these with regard to non-deep-penetration problems will be addressed. The experienced practitioner of Monte Carlo methods will easily find exceptions to any of these generalities, but it is felt that these comments will aid the novice in understanding some of the basic ideas and nomenclature. Also, from a practical point of view, the discussions and developments presented are oriented toward use of the computer codes which are presented in segments of this Monte Carlo course

  6. Global Drought Proportional Economic Loss Risk Deciles

    Data.gov (United States)

    National Aeronautics and Space Administration — Global Drought Proportional Economic Loss Risk Deciles is a 2.5 minute grid of drought hazard economic loss as proportions of Gross Domestic Product (GDP) per...

  7. Cumulative prospect theory and mean variance analysis. A rigorous comparison

    OpenAIRE

    Hens, Thorsten; Mayer, Janos

    2012-01-01

    We compare asset allocations derived for cumulative prospect theory(CPT) based on two different methods: Maximizing CPT along the mean–variance efficient frontier and maximizing it without that restriction. We find that with normally distributed returns the difference is negligible. However, using standard asset allocation data of pension funds the difference is considerable. Moreover, with derivatives like call options the restriction to the mean-variance efficient frontier results in a siza...

  8. Variance in exposed perturbations impairs retention of visuomotor adaptation.

    Science.gov (United States)

    Canaveral, Cesar Augusto; Danion, Frédéric; Berrigan, Félix; Bernier, Pierre-Michel

    2017-11-01

    Sensorimotor control requires an accurate estimate of the state of the body. The brain optimizes state estimation by combining sensory signals with predictions of the sensory consequences of motor commands using a forward model. Given that both sensory signals and predictions are uncertain (i.e., noisy), the brain optimally weights the relative reliance on each source of information during adaptation. In support, it is known that uncertainty in the sensory predictions influences the rate and generalization of visuomotor adaptation. We investigated whether uncertainty in the sensory predictions affects the retention of a new visuomotor relationship. This was done by exposing three separate groups to a visuomotor rotation whose mean was common at 15° counterclockwise but whose variance around the mean differed (i.e., SD of 0°, 3.2°, or 4.5°). Retention was assessed by measuring the persistence of the adapted behavior in a no-vision phase. Results revealed that mean reach direction late in adaptation was similar across groups, suggesting it depended mainly on the mean of exposed rotations and was robust to differences in variance. However, retention differed across groups, with higher levels of variance being associated with a more rapid reversion toward nonadapted behavior. A control experiment ruled out the possibility that differences in retention were accounted for by differences in success rates. Exposure to variable rotations may have increased the uncertainty in sensory predictions, making the adapted forward model more labile and susceptible to change or decay. NEW & NOTEWORTHY The brain predicts the sensory consequences of motor commands through a forward model. These predictions are subject to uncertainty. We use visuomotor adaptation and modulate uncertainty in the sensory predictions by manipulating the variance in exposed rotations. Results reveal that variance does not influence the final extent of adaptation but selectively impairs the retention of

  9. Dietary trends in the Middle East and North Africa: an ecological study (1961 to 2007).

    Science.gov (United States)

    Golzarand, Mahdieh; Mirmiran, Parvin; Jessri, Mahsa; Toolabi, Karamollah; Mojarrad, Mehdi; Azizi, Fereidoun

    2012-10-01

    Middle Eastern and North African countries are undergoing nutrition transition, a transition which is associated with an increased burden of non-communicable diseases. This necessitates the evaluation of dietary patterns in these regions. The present study aimed to assess changes in dietary patterns in Middle Eastern and North African countries between 1961 and 2007. Availability of energy and fifteen main food items during 1961-2007 was examined using FAO food balance sheets from the FAOSTAT database. Fifteen countries including nine in the Middle East and six in North Africa were selected and the average availability of total energy and different food items in these regions were compared. Over the 47 years studied, energy and food availability (apart from animal fats and alcoholic beverages) has increased in the Middle East and North Africa. In both regions the proportion of energy derived from meat and vegetable oils has increased significantly while that from cereals decreased significantly. In addition, the proportion of energy from milk and dairy products and vegetables has shown an ascending trend in North Africa while the proportion of energy from fruits has shown a descending trend in the Middle East. The study results reveal an unfavourable trend towards a Westernized diet in the Middle East and, to a certain extent, in North Africa. Tailored nutritional education encouraging healthy eating for prevention of the burden of chronic diseases in these countries seems essential.

  10. Variance Reduction Techniques in Monte Carlo Methods

    NARCIS (Netherlands)

    Kleijnen, Jack P.C.; Ridder, A.A.N.; Rubinstein, R.Y.

    2010-01-01

    Monte Carlo methods are simulation algorithms to estimate a numerical quantity in a statistical model of a real system. These algorithms are executed by computer programs. Variance reduction techniques (VRT) are needed, even though computer speed has been increasing dramatically, ever since the
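As a hedged, generic example of one such technique (antithetic variates, a standard VRT rather than anything specific to this record): pairing each uniform draw u with 1 - u and averaging the pair induces negative correlation for monotone integrands, cutting estimator variance at no extra sampling cost.

```python
import math
import random

def mc_plain(f, n, rng):
    """Crude Monte Carlo samples of f(U), U ~ Uniform(0, 1)."""
    return [f(rng.random()) for _ in range(n)]

def mc_antithetic(f, n, rng):
    """Antithetic variates: each u is paired with 1 - u and the pair is
    averaged, inducing negative correlation for monotone f."""
    out = []
    for _ in range(n // 2):
        u = rng.random()
        out.append(0.5 * (f(u) + f(1.0 - u)))
    return out

def mean_var(xs):
    m = sum(xs) / len(xs)
    return m, sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# estimate the integral of e^x over [0, 1]; true value is e - 1 ~ 1.718
m_plain, v_plain = mean_var(mc_plain(math.exp, 20000, random.Random(0)))
m_anti, v_anti = mean_var(mc_antithetic(math.exp, 20000, random.Random(1)))

print(abs(m_anti - (math.e - 1)) < 0.01)  # still an unbiased estimate
print(v_anti < v_plain)                   # but with much lower variance
```

Both estimators use the same number of function evaluations; the variance reduction comes entirely from the negative correlation within each antithetic pair.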

  11. Decomposition of variance for spatial Cox processes

    DEFF Research Database (Denmark)

    Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus

    Spatial Cox point processes are a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models...

  12. Decomposition of variance for spatial Cox processes

    DEFF Research Database (Denmark)

    Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus

    2013-01-01

    Spatial Cox point processes are a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models...

  13. Which Mixed-Member Proportional Electoral Formula Fits You Best? Assessing the Proportionality Principle of Positive Vote Transfer Systems

    DEFF Research Database (Denmark)

    Bochsler, Daniel

    2014-01-01

    Mixed-member proportional systems (MMP) are a family of electoral systems which combine district-based elections with a proportional seat allocation. Positive vote transfer systems belong to this family. This article explains why they might be better than their siblings, and examines under which ...

  14. Hydroxyl layer: trend of number density and intra-annual variability

    Science.gov (United States)

    Sonnemann, G. R.; Hartogh, P.; Berger, U.; Grygalashvyly, M.

    2015-06-01

    The layer of vibrationally excited hydroxyl (OH*) near the mesopause in Earth's atmosphere is widely used to derive the temperature at this height and to observe dynamical processes such as gravity waves. The concentration of OH* is controlled by the product of atomic hydrogen with ozone, creating a layer of enhanced concentration in the mesopause region. However, the basic influences on the OH* layer are atomic oxygen and temperature. The long-term monitoring of this layer provides information on a changing atmosphere. It is important to know which proportion of a trend results from anthropogenic impacts on the atmosphere and which proportion reflects natural variations. In a previous paper (Grygalashvyly et al., 2014), the trend of the height of the layer and the trend in temperature were investigated, particularly in midlatitudes, on the basis of our coupled dynamic and chemical transport model LIMA (Leibniz Institute Middle Atmosphere). In this paper we consider the trend for the number density between the years 1961 and 2009 and analyze the reasons for the trends on a global scale. Further, we consider intra-annual variations. Temperature and wind have the strongest impacts on the trend. Surprisingly, the increase in greenhouse gases (GHGs) has no clear influence on the chemistry of OH*. The main reason for this lies in the fact that, in the production term of OH*, if atomic hydrogen increases due to increasing humidity of the middle atmosphere by methane oxidation, ozone decreases. The maximum of the OH* layer is found in the mesopause region and is very variable. The mesopause region is a very intricate domain marked by changeable dynamics and strong gradients of all chemically active minor constituents determining the OH* chemistry. The OH* concentration responds, in part, very sensitively to small changes in these parameters. The cause for this behavior is given by the nonlinear reactions of the photochemical system, which is a nonlinearly enforced chemical oscillator.

  15. Hydroxyl layer: trend of number density and intra-annual variability

    Directory of Open Access Journals (Sweden)

    G. R. Sonnemann

    2015-06-01

    Full Text Available The layer of vibrationally excited hydroxyl (OH*) near the mesopause in Earth's atmosphere is widely used to derive the temperature at this height and to observe dynamical processes such as gravity waves. The concentration of OH* is controlled by the product of atomic hydrogen with ozone, creating a layer of enhanced concentration in the mesopause region. However, the basic influences on the OH* layer are atomic oxygen and temperature. The long-term monitoring of this layer provides information on a changing atmosphere. It is important to know which proportion of a trend results from anthropogenic impacts on the atmosphere and which proportion reflects natural variations. In a previous paper (Grygalashvyly et al., 2014), the trend of the height of the layer and the trend in temperature were investigated, particularly in midlatitudes, on the basis of our coupled dynamic and chemical transport model LIMA (Leibniz Institute Middle Atmosphere). In this paper we consider the trend for the number density between the years 1961 and 2009 and analyze the reasons for the trends on a global scale. Further, we consider intra-annual variations. Temperature and wind have the strongest impacts on the trend. Surprisingly, the increase in greenhouse gases (GHGs) has no clear influence on the chemistry of OH*. The main reason for this lies in the fact that, in the production term of OH*, if atomic hydrogen increases due to increasing humidity of the middle atmosphere by methane oxidation, ozone decreases. The maximum of the OH* layer is found in the mesopause region and is very variable. The mesopause region is a very intricate domain marked by changeable dynamics and strong gradients of all chemically active minor constituents determining the OH* chemistry. The OH* concentration responds, in part, very sensitively to small changes in these parameters. The cause for this behavior is given by the nonlinear reactions of the photochemical system, which is a nonlinearly enforced chemical oscillator.

  16. Variance risk premia in CO_2 markets: A political perspective

    International Nuclear Information System (INIS)

    Reckling, Dennis

    2016-01-01

    The European Commission discusses the change of free allocation plans to guarantee a stable market equilibrium. Selling over-allocated contracts effectively depreciates prices and negates the effect intended by the regulator to establish a stable price mechanism for CO_2 assets. Our paper investigates mispricing and allocation issues by quantitatively analyzing variance risk premia of CO_2 markets over the course of changing regimes (Phases I-III) for three different assets (European Union Allowances, Certified Emissions Reductions and European Reduction Units). The research paper gives recommendations to regulatory bodies in order to most effectively cap the overall carbon dioxide emissions. The analysis of an enriched dataset, comprising not only additional CO_2 assets but also data from the European Energy Exchange, shows that variance risk premia are equal to a sample average of 0.69 for European Union Allowances (EUA), 0.17 for Certified Emissions Reductions (CER) and 0.81 for European Reduction Units (ERU). We identify the existence of a common risk factor across different assets that justifies the presence of risk premia. Various policy implications with regard to gaining investors' confidence in the market are reviewed. Consequently, we recommend the implementation of a price collar approach to support stable prices for emission allowances. - Highlights: •Enriched dataset covering all three political phases of the CO_2 markets. •Clear policy implications for regulators to most effectively cap the overall CO_2 emissions pool. •Applying a cross-asset benchmark index for variance beta estimation. •CER contracts have been analyzed with respect to variance risk premia for the first time. •Increased forecasting accuracy for CO_2 asset returns by using variance risk premia.

  17. Gravity interpretation of dipping faults using the variance analysis method

    International Nuclear Information System (INIS)

    Essa, Khalid S

    2013-01-01

    A new algorithm is developed to estimate simultaneously the depth and the dip angle of a buried fault from the normalized gravity gradient data. This algorithm utilizes numerical first horizontal derivatives computed from the observed gravity anomaly, using filters of successive window lengths to estimate the depth and the dip angle of a buried dipping fault structure. For a fixed window length, the depth is estimated using a least-squares sense for each dip angle. The method is based on computing the variance of the depths determined from all horizontal gradient anomaly profiles using the least-squares method for each dip angle. The minimum variance is used as a criterion for determining the correct dip angle and depth of the buried structure. When the correct dip angle is used, the variance of the depths is always less than the variances computed using wrong dip angles. The technique can be applied not only to the true residuals, but also to the measured Bouguer gravity data. The method is applied to synthetic data with and without random errors and two field examples from Egypt and Scotland. In all cases examined, the estimated depths and other model parameters are found to be in good agreement with the actual values. (paper)
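The minimum-variance criterion can be sketched as follows. The forward model below is a deliberately toy stand-in (depth estimates from different window lengths agree only at the true dip), not the paper's least-squares inversion of horizontal gravity gradients; it only illustrates picking the dip that minimizes the spread of window-wise depth estimates.

```python
import numpy as np

# Toy stand-in for the windowed depth inversion: under a trial dip angle,
# each window length yields a depth estimate, and the estimates agree
# (zero spread) only when the trial dip equals the true dip.
def depth_estimate(window, dip_deg, true_depth=10.0, true_dip=30.0):
    return true_depth + 0.05 * (dip_deg - true_dip) * window

windows = np.array([2.0, 4.0, 6.0, 8.0, 10.0])  # successive window lengths
dips = np.arange(0, 91)                         # candidate dip angles

# minimum-variance criterion: variance of window-wise depths for each dip
variances = [np.var([depth_estimate(w, d) for w in windows]) for d in dips]
best_dip = int(dips[int(np.argmin(variances))])
best_depth = float(np.mean([depth_estimate(w, best_dip) for w in windows]))

print(best_dip)    # 30: the only dip at which all windows agree
print(best_depth)  # 10.0
```

The wrong dip angles inflate the spread of the depth estimates, so the variance curve has a sharp minimum at the correct dip, mirroring the selection rule described in the abstract.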

  18. Perspective projection for variance pose face recognition from camera calibration

    Science.gov (United States)

    Fakhir, M. M.; Woo, W. L.; Chambers, J. A.; Dlay, S. S.

    2016-04-01

    Variance pose is an important research topic in face recognition. The alteration of distance parameters across variance-pose face features is challenging. We provide a solution to this problem using perspective projection for variance-pose face recognition. Our method infers the intrinsic camera parameters of the image, which enables the projection of the image plane into 3D. After this, face-box tracking and centre-of-eyes detection are performed using our novel technique to verify the virtual face feature measurements. The coordinate system of the perspective projection for face tracking allows the holistic dimensions of the face to be fixed in different orientations. Training on frontal images and the remaining poses from the FERET database determines the distance from the centre of the eyes to the corner of the face box. The recognition system compares the gallery of images against different poses. The system initially utilises information on the position of both eyes and then focuses principally on the closest eye in order to gather data with greater reliability. Differentiation between the distances and positions of the right and left eyes is a unique feature of our work, with our algorithm outperforming other state-of-the-art algorithms, thus enabling stable measurement in variance pose for each individual.

  19. Variance-to-mean method generalized by linear difference filter technique

    International Nuclear Information System (INIS)

    Hashimoto, Kengo; Ohsaki, Hiroshi; Horiguchi, Tetsuo; Yamane, Yoshihiro; Shiroya, Seiji

    1998-01-01

    The conventional variance-to-mean method (Feynman-α method) suffers seriously from divergence of the variance under transient conditions such as a reactor power drift. Strictly speaking, then, the use of the Feynman-α method is restricted to a steady state. To apply the method to more practical uses, it is desirable to overcome this kind of difficulty. For this purpose, we propose the use of a higher-order difference filter technique to reduce the effect of the reactor power drift, and derive several new formulae taking account of the filtering. The capability of the proposed formulae was demonstrated through experiments in the Kyoto University Critical Assembly. The experimental results indicate that the divergence of the variance can be effectively suppressed by the filtering technique, and that the higher-order filter becomes necessary with increasing variation rate in power.
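A hedged sketch of the idea, assuming uncorrelated gated counts and only a first-order difference filter (the paper derives higher-order formulae for correlated reactor noise): the plain variance-to-mean ratio is inflated by a power drift, while differencing the gated counts suppresses the drift component. The toy source here is Poisson, so the correlated-chain excess that a real Feynman-α measurement targets is absent and Y should sit near zero.

```python
import numpy as np

def feynman_y(counts):
    """Variance-to-mean ratio minus one (Feynman-Y) of gated counts."""
    c = np.asarray(counts, dtype=float)
    return np.var(c, ddof=1) / np.mean(c) - 1.0

def feynman_y_diff(counts):
    """Feynman-Y from first differences of the gated counts; a slow
    linear drift in the count rate cancels in c[i+1] - c[i]."""
    c = np.asarray(counts, dtype=float)
    d = np.diff(c)
    # Var(c[i+1] - c[i]) = 2 Var(c) for uncorrelated gates, hence the 0.5
    return 0.5 * np.var(d, ddof=1) / np.mean(c) - 1.0

rng = np.random.default_rng(0)
steady = rng.poisson(100.0, size=20000)               # steady Poisson source
drift = rng.poisson(np.linspace(80.0, 120.0, 20000))  # linear power drift

print(abs(feynman_y(steady)) < 0.05)      # Y ~ 0 for uncorrelated counts
print(feynman_y(drift) > 0.5)             # the drift falsely inflates Y
print(abs(feynman_y_diff(drift)) < 0.05)  # differencing suppresses it
```

Higher-order filters extend the same cancellation to faster, nonlinear drifts, at the cost of the more elaborate correction formulae the abstract refers to.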

  20. Estimation of (co)variances for genomic regions of flexible sizes

    DEFF Research Database (Denmark)

    Sørensen, Lars P; Janss, Luc; Madsen, Per

    2012-01-01

    BACKGROUND: Multi-trait genomic models in a Bayesian context can be used to estimate genomic (co)variances, either for a complete genome or for genomic regions (e.g. per chromosome) for the purpose of multi-trait genomic selection or to gain further insight into the genomic architecture of related... with a common prior distribution for the marker allele substitution effects and estimation of the hyperparameters in this prior distribution from the progeny means data. From the Markov chain Monte Carlo samples of the allele substitution effects, genomic (co)variances were calculated on a whole-genome level... was used. There was a clear difference in the region-wise patterns of genomic correlation among combinations of traits, with distinctive peaks indicating the presence of pleiotropic QTL. CONCLUSIONS: The results show that it is possible to estimate, genome-wide and region-wise genomic (co)variances...

  1. Is residual memory variance a valid method for quantifying cognitive reserve? A longitudinal application

    Science.gov (United States)

    Zahodne, Laura B.; Manly, Jennifer J.; Brickman, Adam M.; Narkhede, Atul; Griffith, Erica Y.; Guzman, Vanessa A.; Schupf, Nicole; Stern, Yaakov

    2016-01-01

    Cognitive reserve describes the mismatch between brain integrity and cognitive performance. Older adults with high cognitive reserve are more resilient to age-related brain pathology. Traditionally, cognitive reserve is indexed indirectly via static proxy variables (e.g., years of education). More recently, cross-sectional studies have suggested that reserve can be expressed as residual variance in episodic memory performance that remains after accounting for demographic factors and brain pathology (whole brain, hippocampal, and white matter hyperintensity volumes). The present study extends these methods to a longitudinal framework in a community-based cohort of 244 older adults who underwent two comprehensive neuropsychological and structural magnetic resonance imaging sessions over 4.6 years. On average, residual memory variance decreased over time, consistent with the idea that cognitive reserve is depleted over time. Individual differences in change in residual memory variance predicted incident dementia, independent of baseline residual memory variance. Multiple-group latent difference score models revealed tighter coupling between brain and language changes among individuals with decreasing residual memory variance. These results suggest that changes in residual memory variance may capture a dynamic aspect of cognitive reserve and could be a useful way to summarize individual cognitive responses to brain changes. Change in residual memory variance among initially non-demented older adults was a better predictor of incident dementia than residual memory variance measured at one time-point. PMID:26348002
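The residual approach can be sketched with ordinary least squares on synthetic data. All variable names, effect sizes, and distributions below are hypothetical, purely for illustration of the idea that reserve is operationalized as the part of memory performance left unexplained by demographics and brain measures.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 244  # cohort size from the abstract; everything else is synthetic

# hypothetical proxies: years of education and three brain measures
# (whole-brain, hippocampal, white matter hyperintensity volumes, z-scored)
edu = rng.normal(12.0, 3.0, n)
brain = rng.normal(0.0, 1.0, (n, 3))
memory = 0.1 * edu + brain @ np.array([0.4, 0.5, -0.3]) \
    + rng.normal(0.0, 1.0, n)  # the unexplained part plays the reserve role

# residual memory variance: what regression on demographics and brain
# pathology cannot explain is taken as the cognitive reserve estimate
X = np.column_stack([np.ones(n), edu, brain])
beta, *_ = np.linalg.lstsq(X, memory, rcond=None)
reserve = memory - X @ beta

print(abs(reserve.mean()) < 1e-8)    # OLS residuals are mean-zero
print(reserve.var() < memory.var())  # the predictors absorb some variance
```

In the longitudinal extension described above, this residual is computed at each time point and its change, rather than its level, is related to incident dementia.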

  2. A study of heterogeneity of environmental variance for slaughter weight in pigs

    DEFF Research Database (Denmark)

    Ibánez-Escriche, N; Varona, L; Sorensen, D

    2008-01-01

    This work presents an analysis of heterogeneity of environmental variance for slaughter weight (175 days) in pigs. This heterogeneity is associated with systematic and additive genetic effects. The model also postulates the presence of additive genetic effects affecting the mean and environmental...... variance. The study reveals the presence of genetic variation at the level of the mean and the variance, but an absence of correlation, or a small negative correlation, between both types of additive genetic effects. In addition, we show that both, the additive genetic effects on the mean and those...... on environmental variance have an important influence upon the future economic performance of selected individuals...

  3. Biological Variance in Agricultural Products. Theoretical Considerations

    NARCIS (Netherlands)

    Tijskens, L.M.M.; Konopacki, P.

    2003-01-01

    The food that we eat is uniform neither in shape or appearance nor in internal composition or content. Since technology became increasingly important, the presence of biological variance in our food became more and more of a nuisance. Techniques and procedures (statistical, technical) were

  4. Decomposition of variance for spatial Cox processes

    DEFF Research Database (Denmark)

    Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus

    Spatial Cox point processes are a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models with additive...

  5. Regime shifts in mean-variance efficient frontiers: some international evidence

    OpenAIRE

    Massimo Guidolin; Federica Ria

    2010-01-01

    Regime switching models have been assuming a central role in financial applications because of their well-known ability to capture the presence of rich non-linear patterns in the joint distribution of asset returns. This paper examines how the presence of regimes in means, variances, and correlations of asset returns translates into explicit dynamics of the Markowitz mean-variance frontier. In particular, the paper shows both theoretically and through an application to international equity po...

  6. Proportion congruency effects: Instructions may be enough

    Directory of Open Access Journals (Sweden)

    Olga eEntel

    2014-10-01

    Full Text Available Learning takes time: one needs to be exposed to contingency relations between stimulus dimensions in order to learn, whereas intentional control can be recruited through task demands. Therefore, showing that control can be recruited as a function of experimental instructions alone, that is, that processing adapts to the instructions before exposure to the task, can be taken as evidence for the existence of control recruitment in the absence of learning. This was done by manipulating the information given at the outset of the experiment. In the first experiment, we manipulated list-level congruency proportion. Half of the participants were informed that most of the stimuli would be congruent, whereas the other half were informed that most of the stimuli would be incongruent. This held true for the stimuli in the second part of each experiment; in the first part, however, the proportion of the two stimulus types was equal. A proportion congruent effect was found in both parts of the experiment, but it was larger in the second part. In our second experiment, we manipulated the proportion of the stimuli within participants by applying an item-specific design. This was done by presenting some color words most often in their congruent color, and other color words in incongruent colors. Participants were informed about the exact word-color pairings in advance. As in Experiment 1, this held true only for the second experimental part. In contrast to our first experiment, informing participants in advance did not result in an item-specific proportion effect, which was observed only in the second part. Thus, our results support the hypothesis that instructions may be enough to trigger list-level control, yet learning does contribute to the proportion congruent effect under such conditions. The item-level proportion effect is apparently caused by learning, or is at least moderated by it.

  7. The pricing of long and short run variance and correlation risk in stock returns

    NARCIS (Netherlands)

    Cosemans, M.

    2011-01-01

    This paper studies the pricing of long and short run variance and correlation risk. The predictive power of the market variance risk premium for returns is driven by the correlation risk premium and the systematic part of individual variance premia. Furthermore, I find that aggregate volatility risk

  8. A Bias and Variance Analysis for Multistep-Ahead Time Series Forecasting.

    Science.gov (United States)

    Ben Taieb, Souhaib; Atiya, Amir F

    2016-01-01

    Multistep-ahead forecasts can either be produced recursively by iterating a one-step-ahead time series model or directly by estimating a separate model for each forecast horizon. In addition, there are other strategies; some of them combine aspects of both aforementioned concepts. In this paper, we present a comprehensive investigation into the bias and variance behavior of multistep-ahead forecasting strategies. We provide a detailed review of the different multistep-ahead strategies. Subsequently, we perform a theoretical study that derives the bias and variance for a number of forecasting strategies. Finally, we conduct a Monte Carlo experimental study that compares and evaluates the bias and variance performance of the different strategies. From the theoretical and the simulation studies, we analyze the effect of different factors, such as the forecast horizon and the time series length, on the bias and variance components, and on the different multistep-ahead strategies. Several lessons are learned, and recommendations are given concerning the advantages, disadvantages, and best conditions of use of each strategy.
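
    The two baseline strategies the abstract contrasts, recursive and direct, can be sketched as follows. The AR(1) data and the linear one-step model fitted with `np.polyfit` are illustrative assumptions, not the models studied in the paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # Hypothetical AR(1) series used purely for illustration
    series = np.empty(300)
    series[0] = 0.0
    for t in range(1, 300):
        series[t] = 0.8 * series[t - 1] + rng.normal()

    def fit_one_step(s):
        """Least-squares fit of the one-step model y[t] = a*y[t-1] + b."""
        a, b = np.polyfit(s[:-1], s[1:], 1)
        return a, b

    def recursive_forecast(s, horizon):
        """Iterate a single one-step model, feeding each forecast back in."""
        a, b = fit_one_step(s)
        preds, last = [], s[-1]
        for _ in range(horizon):
            last = a * last + b
            preds.append(last)
        return preds

    def direct_forecast(s, horizon):
        """Fit a separate model for each horizon h: y[t+h] regressed on y[t]."""
        preds = []
        for h in range(1, horizon + 1):
            a, b = np.polyfit(s[:-h], s[h:], 1)
            preds.append(a * s[-1] + b)
        return preds

    print(recursive_forecast(series, 3))
    print(direct_forecast(series, 3))
    ```

    The recursive strategy reuses one model and accumulates its bias across iterations, while the direct strategy trades that for higher estimation variance per horizon, which is the trade-off the paper quantifies.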

  9. Multiwire proportional chamber development

    Science.gov (United States)

    Doolittle, R. F.; Pollvogt, U.; Eskovitz, A. J.

    1973-01-01

    The development of large-area multiwire proportional chambers (MWPCs), to be used as high-resolution spatial detectors in cosmic ray experiments, is described. A readout system was developed which uses a directly coupled, lumped-element delay line whose characteristics are independent of the MWPC design. A complete analysis of the delay line and the readout electronics shows that a spatial resolution of about 0.1 mm can be reached with the MWPC operating in the strictly proportional region. This was confirmed by measurements with a small MWPC and Fe-55 X-rays. A simplified analysis was carried out to estimate the theoretical limit of spatial resolution due to delta rays, spread of the discharge along the anode wire, and inclined trajectories. To calculate the gas gain of MWPCs of different geometrical configurations, a method was developed which is based on knowledge of the first Townsend coefficient of the chamber gas.

  10. Variance inflation in high dimensional Support Vector Machines

    DEFF Research Database (Denmark)

    Abrahamsen, Trine Julie; Hansen, Lars Kai

    2013-01-01

    Many important machine learning models, supervised and unsupervised, are based on simple Euclidean distance or orthogonal projection in a high dimensional feature space. When estimating such models from small training sets we face the problem that the span of the training data set input vectors ... follow a different probability law with less variance. While the problem and basic means to reconstruct and deflate are well understood in unsupervised learning, the case of supervised learning is less well understood. We here investigate the effect of variance inflation in supervised learning including ... the case of Support Vector Machines (SVMs) and we propose a non-parametric scheme to restore proper generalizability. We illustrate the algorithm and its ability to restore performance on a wide range of benchmark data sets....

  11. Studying Variance in the Galactic Ultra-compact Binary Population

    Science.gov (United States)

    Larson, Shane; Breivik, Katelyn

    2017-01-01

    In the years preceding LISA, Milky Way compact binary population simulations can be used to inform the science capabilities of the mission. Galactic population simulation efforts generally focus on high fidelity models that require extensive computational power to produce a single simulated population for each model. Each simulated population represents an incomplete sample of the functions governing compact binary evolution, thus introducing variance from one simulation to another. We present a rapid Monte Carlo population simulation technique that can simulate thousands of populations on week-long timescales, thus allowing a full exploration of the variance associated with a binary stellar evolution model.

  12. Variance estimates for transport in stochastic media by means of the master equation

    International Nuclear Information System (INIS)

    Pautz, S. D.; Franke, B. C.; Prinja, A. K.

    2013-01-01

    The master equation has been used to examine properties of transport in stochastic media. It has been shown previously that not only may the Levermore-Pomraning (LP) model be derived from the master equation for a description of ensemble-averaged transport quantities, but also that equations describing higher-order statistical moments may be obtained. We examine in greater detail the equations governing the second moments of the distribution of the angular fluxes, from which variances may be computed. We introduce a simple closure for these equations, as well as several models for estimating the variances of derived transport quantities. We revisit previous benchmarks for transport in stochastic media in order to examine the error of these new variance models. We find, not surprisingly, that the errors in these variance estimates are at least as large as the corresponding estimates of the average, and sometimes much larger. We also identify patterns in these variance estimates that may help guide the construction of more accurate models. (authors)

  13. Statistical indicators and trends in juvenile delinquency in modern Russia

    Directory of Open Access Journals (Sweden)

    Yuzikhanova E.G.

    2014-12-01

    Full Text Available Statistics on juvenile delinquency in Russia over ten years are presented, allowing its current trends to be determined. It is noted that the proportion of juveniles among all offenders was previously about 11-12%; during the period from 2003 to 2013, the proportion of juveniles in the total number of identified offenders decreased to 6%. Despite the reduction in the number of crimes committed by this category of persons, for several years the greatest criminal activity has been maintained in the 16-17 age group (70%). The 14-15 age group accounts for a smaller proportion, and the number of crimes it commits has fallen from 49,300 in 2000 to 19,700 in 2013. Over the same period, the number of reported crimes committed by minors or with their complicity decreased almost threefold. With all the ambiguity of attitudes toward the problem under consideration, the author defines the role of the state's criminal law policy in responding to trends in juvenile crime, taking into account its specificity, caused by a complex of interrelated factors related to the age, social, and psychological characteristics of juveniles as a special social group and the distinctiveness of their social status. A legislative novelty is considered: punishment in the form of arrest is not imposed on persons under the age of eighteen at the time of the court verdict. It is concluded that the problems of juvenile delinquency are only partly solved by the humanization of the state's criminal law policy aimed at restoring social justice, correcting the convict, and preventing the commission of new crimes.

  14. Markov switching mean-variance frontier dynamics: theory and international evidence

    OpenAIRE

    M. Guidolin; F. Ria

    2010-01-01

    It is well-known that regime switching models are able to capture the presence of rich non-linear patterns in the joint distribution of asset returns. After reviewing key concepts and technical issues related to specifying, estimating, and using multivariate Markov switching models in financial applications, in this paper we map the presence of regimes in means, variances, and covariances of asset returns into explicit dynamics of the Markowitz mean-variance frontier. In particular, we show b...

  15. Visual SLAM Using Variance Grid Maps

    Science.gov (United States)

    Howard, Andrew B.; Marks, Tim K.

    2011-01-01

    An algorithm denoted Gamma-SLAM performs further processing, in real time, of preprocessed digitized images acquired by a stereoscopic pair of electronic cameras aboard an off-road robotic ground vehicle to build accurate maps of the terrain and determine the location of the vehicle with respect to the maps. Part of the name of the algorithm reflects the fact that the process of building the maps and determining the location with respect to them is denoted simultaneous localization and mapping (SLAM). Most prior real-time SLAM algorithms have been limited in applicability to (1) systems equipped with scanning laser range finders as the primary sensors in (2) indoor environments (or relatively simply structured outdoor environments). The few prior vision-based SLAM algorithms have been feature-based and not suitable for real-time applications and, hence, not suitable for autonomous navigation on irregularly structured terrain. The Gamma-SLAM algorithm incorporates two key innovations: Visual odometry (in contradistinction to wheel odometry) is used to estimate the motion of the vehicle. An elevation variance map (in contradistinction to an occupancy or an elevation map) is used to represent the terrain. The Gamma-SLAM algorithm makes use of a Rao-Blackwellized particle filter (RBPF) from Bayesian estimation theory for maintaining a distribution over poses and maps. The core idea of the RBPF approach is that the SLAM problem can be factored into two parts: (1) finding the distribution over robot trajectories, and (2) finding the map conditioned on any given trajectory. The factorization involves the use of a particle filter in which each particle encodes both a possible trajectory and a map conditioned on that trajectory. The base estimate of the trajectory is derived from visual odometry, and the map conditioned on that trajectory is a Cartesian grid of elevation variances. In comparison with traditional occupancy or elevation grid maps, the grid elevation variance

  16. Temporal variance reverses the impact of high mean intensity of stress in climate change experiments.

    Science.gov (United States)

    Benedetti-Cecchi, Lisandro; Bertocci, Iacopo; Vaselli, Stefano; Maggi, Elena

    2006-10-01

    Extreme climate events produce simultaneous changes to the mean and to the variance of climatic variables over ecological time scales. While several studies have investigated how ecological systems respond to changes in mean values of climate variables, the combined effects of mean and variance are poorly understood. We examined the response of low-shore assemblages of algae and invertebrates of rocky seashores in the northwest Mediterranean to factorial manipulations of mean intensity and temporal variance of aerial exposure, a type of disturbance whose intensity and temporal patterning of occurrence are predicted to change with changing climate conditions. Effects of variance were often in the opposite direction of those elicited by changes in the mean. Increasing aerial exposure at regular intervals had negative effects both on diversity of assemblages and on percent cover of filamentous and coarsely branched algae, but greater temporal variance drastically reduced these effects. The opposite was observed for the abundance of barnacles and encrusting coralline algae, where high temporal variance of aerial exposure either reversed a positive effect of mean intensity (barnacles) or caused a negative effect that did not occur under low temporal variance (encrusting algae). These results provide the first experimental evidence that changes in mean intensity and temporal variance of climatic variables affect natural assemblages of species interactively, suggesting that high temporal variance may mitigate the ecological impacts of ongoing and predicted climate changes.

  17. Genetic and environmental variance in content dimensions of the MMPI.

    Science.gov (United States)

    Rose, R J

    1988-08-01

    To evaluate genetic and environmental variance in the Minnesota Multiphasic Personality Inventory (MMPI), I studied nine factor scales identified in the first item factor analysis of normal adult MMPIs in a sample of 820 adolescent and young adult co-twins. Conventional twin comparisons documented heritable variance in six of the nine MMPI factors (Neuroticism, Psychoticism, Extraversion, Somatic Complaints, Inadequacy, and Cynicism), whereas significant influence from shared environmental experience was found for four factors (Masculinity versus Femininity, Extraversion, Religious Orthodoxy, and Intellectual Interests). Genetic variance in the nine factors was more evident in results from twin sisters than in those from twin brothers, and a developmental-genetic analysis, using hierarchical multiple regressions of double-entry matrices of the twins' raw data, revealed that in four MMPI factor scales, genetic effects were significantly modulated by age, gender, or their interaction during the developmental period from early adolescence to early adulthood.

  18. Is residual memory variance a valid method for quantifying cognitive reserve? A longitudinal application.

    Science.gov (United States)

    Zahodne, Laura B; Manly, Jennifer J; Brickman, Adam M; Narkhede, Atul; Griffith, Erica Y; Guzman, Vanessa A; Schupf, Nicole; Stern, Yaakov

    2015-10-01

    Cognitive reserve describes the mismatch between brain integrity and cognitive performance. Older adults with high cognitive reserve are more resilient to age-related brain pathology. Traditionally, cognitive reserve is indexed indirectly via static proxy variables (e.g., years of education). More recently, cross-sectional studies have suggested that reserve can be expressed as residual variance in episodic memory performance that remains after accounting for demographic factors and brain pathology (whole brain, hippocampal, and white matter hyperintensity volumes). The present study extends these methods to a longitudinal framework in a community-based cohort of 244 older adults who underwent two comprehensive neuropsychological and structural magnetic resonance imaging sessions over 4.6 years. On average, residual memory variance decreased over time, consistent with the idea that cognitive reserve is depleted over time. Individual differences in change in residual memory variance predicted incident dementia, independent of baseline residual memory variance. Multiple-group latent difference score models revealed tighter coupling between brain and language changes among individuals with decreasing residual memory variance. These results suggest that changes in residual memory variance may capture a dynamic aspect of cognitive reserve and could be a useful way to summarize individual cognitive responses to brain changes. Change in residual memory variance among initially non-demented older adults was a better predictor of incident dementia than residual memory variance measured at one time-point. Copyright © 2015. Published by Elsevier Ltd.
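
    The residual approach described in this abstract can be sketched with synthetic data. The predictors, coefficients, and noise level below are invented stand-ins for the demographic and volumetric measures; only the mechanics (regress memory on covariates, keep the residual) mirror the method:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 244  # cohort size from the abstract; the data themselves are simulated

    # Hypothetical standardized predictors (e.g. age, education,
    # hippocampal volume, white matter hyperintensity volume)
    X = rng.normal(size=(n, 4))
    memory = X @ np.array([0.5, 0.3, -0.4, -0.2]) + rng.normal(scale=0.8, size=n)

    # Regress memory on the predictors; the residual is the "reserve" estimate
    Xd = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(Xd, memory, rcond=None)
    residual_reserve = memory - Xd @ beta

    print(residual_reserve.mean())  # ~0 by construction of least squares
    ```

    In the longitudinal extension, this residual would be computed at each visit and its change over time used as the predictor of incident dementia.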

  19. Heritability, variance components and genetic advance of some ...

    African Journals Online (AJOL)

    Heritability, variance components and genetic advance of some yield and yield related traits in Ethiopian ... African Journal of Biotechnology ... randomized complete block design at Adet Agricultural Research Station in 2008 cropping season.

  20. The Origins of Scintillator Non-Proportionality

    Science.gov (United States)

    Moses, W. W.; Bizarri, G. A.; Williams, R. T.; Payne, S. A.; Vasil'ev, A. N.; Singh, J.; Li, Q.; Grim, J. Q.; Choong, W.-S.

    2012-10-01

    Recent years have seen significant advances in both theoretically understanding and mathematically modeling the underlying causes of scintillator non-proportionality. The core cause is that the interaction of radiation with matter invariably leads to a non-uniform ionization density in the scintillator, coupled with the fact that the light yield depends on the ionization density. The mechanisms that lead to the luminescence dependence on ionization density are incompletely understood, but several important features have been identified, notably Auger-like processes (where two carriers of excitation interact with each other, causing one to de-excite non-radiatively), the inability of excitation carriers to recombine (caused either by trapping or physical separation), and the carrier mobility. This paper reviews the present understanding of the fundamental origins of scintillator non-proportionality, specifically the various theories that have been used to explain non-proportionality.

  1. The variance of the locally measured Hubble parameter explained with different estimators

    DEFF Research Database (Denmark)

    Odderskov, Io Sandberg Hess; Hannestad, Steen; Brandbyge, Jacob

    2017-01-01

    We study the expected variance of measurements of the Hubble constant, H0, as calculated in either linear perturbation theory or using non-linear velocity power spectra derived from N-body simulations. We compare the variance with that obtained by carrying out mock observations in the N-body simulations, and show that the estimator typically used for the local Hubble constant in studies based on perturbation theory is different from the one used in studies based on N-body simulations. The latter gives larger weight to distant sources, which explains why studies based on N-body simulations tend to obtain a smaller variance than that found from studies based on the power spectrum. Although both approaches result in a variance too small to explain the discrepancy between the value of H0 from CMB measurements and the value measured in the local universe, these considerations are important in light...

  2. Variance Risk Premia on Stocks and Bonds

    DEFF Research Database (Denmark)

    Mueller, Philippe; Sabtchevsky, Petar; Vedolin, Andrea

    Investors in fixed income markets are willing to pay a very large premium to be hedged against shocks in expected volatility, and the size of this premium can be studied through variance swaps. Using thirty years of option and high-frequency data, we document the following novel stylized facts...

  3. A longitudinal study on dual-tasking effects on gait: cognitive change predicts gait variance in the elderly.

    Directory of Open Access Journals (Sweden)

    Rebecca K MacAulay

    Full Text Available Neuropsychological abilities have been found to explain a large proportion of variance in objective measures of walking gait that predict both dementia and falls in the elderly. However, to date there has been little research on the interplay between changes in these neuropsychological processes and walking gait over time. To our knowledge, the present study is the first to investigate intra-individual changes in neurocognitive test performance and gait step time at two time points across a one-year span. Neuropsychological test scores from 440 elderly individuals deemed cognitively normal at Year One were analyzed via repeated-measures t-tests to assess for decline in cognitive performance at Year Two. The neuropsychological test performance of 34 of these 440 individuals declined significantly at Year Two, whereas the "non-decliners" displayed improved memory, working memory, and attention/processing speed test performance. Neuropsychological test scores were also submitted to factor analysis at both time points for data reduction purposes and to assess factor stability over time. Results at Year One yielded a three-factor solution: Language/Memory, Executive Attention/Processing Speed, and Working Memory. Year Two's test scores also generated a three-factor solution (Working Memory, Language/Executive Attention/Processing Speed, and Memory). Notably, language measures loaded on Executive Attention/Processing Speed rather than on the Memory factor at Year Two. Hierarchical multiple regression revealed that both Executive Attention/Processing Speed and sex significantly predicted variance in dual-task step time at both time points. Remarkably, in the "decliners", the magnitude of the contribution of the neuropsychological characteristics to gait variance significantly increased at Year Two. In summary, this study provides longitudinal evidence of the dynamic relationship between intra-individual cognitive change and its influence on dual-task gait

  4. On Mean-Variance Hedging of Bond Options with Stochastic Risk Premium Factor

    NARCIS (Netherlands)

    Aihara, ShinIchi; Bagchi, Arunabha; Kumar, Suresh K.

    2014-01-01

    We consider the mean-variance hedging problem for pricing bond options using the yield curve as the observation. The model considered contains infinite-dimensional noise sources with a stochastically varying risk premium. Hence our model is incomplete. We consider mean-variance hedging under the

  5. Problems of variance reduction in the simulation of random variables

    International Nuclear Information System (INIS)

    Lessi, O.

    1987-01-01

    The definition of the uniform linear generator is given, and some of the most commonly used tests for evaluating the uniformity and independence of the obtained determinations are listed. The problem of calculating, through simulation, some moment W of a function of a random variable is considered. The Monte Carlo method enables the moment W to be estimated and the estimator variance to be obtained. Some techniques for constructing other estimators of W with reduced variance are introduced
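
    A standard example of such a reduced-variance estimator (chosen here for illustration; the abstract does not name specific techniques) is antithetic variates, which pairs each uniform draw u with 1 - u:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n = 100_000

    # Plain Monte Carlo estimate of W = E[exp(U)], U ~ Uniform(0, 1); W = e - 1
    u = rng.random(n)
    plain = np.exp(u)

    # Antithetic variates: average each draw with its mirrored counterpart.
    # exp is monotone, so the pair members are negatively correlated.
    u_half = rng.random(n // 2)
    anti = 0.5 * (np.exp(u_half) + np.exp(1.0 - u_half))

    print(plain.mean(), anti.mean())  # both estimate e - 1 ~ 1.71828
    print(plain.var(), anti.var())    # per-sample variance is far smaller for the antithetic pairs
    ```

    Both estimators are unbiased; the antithetic one achieves the same accuracy with far fewer function evaluations because the negative correlation within each pair cancels much of the noise.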

  6. Mean-variance portfolio allocation with a value at risk constraint

    OpenAIRE

    Enrique Sentana

    2001-01-01

    In this Paper, I first provide a simple unifying approach to static Mean-Variance analysis and Value at Risk, which highlights their similarities and differences. Then I use it to explain how fund managers can take investment decisions that satisfy the VaR restrictions imposed on them by regulators, within the well-known Mean-Variance allocation framework. I do so by introducing a new type of line to the usual mean-standard deviation diagram, called IsoVaR,which represents all the portfolios ...

  7. Variance-based sensitivity analysis for wastewater treatment plant modelling.

    Science.gov (United States)

    Cosenza, Alida; Mannina, Giorgio; Vanrolleghem, Peter A; Neumann, Marc B

    2014-02-01

    Global sensitivity analysis (GSA) is a valuable tool to support the use of mathematical models that characterise technical or natural systems. In the field of wastewater modelling, most of the recent applications of GSA use either regression-based methods, which require close to linear relationships between the model outputs and model factors, or screening methods, which only yield qualitative results. However, due to the characteristics of membrane bioreactors (MBR) (non-linear kinetics, complexity, etc.) there is an interest to adequately quantify the effects of non-linearity and interactions. This can be achieved with variance-based sensitivity analysis methods. In this paper, the Extended Fourier Amplitude Sensitivity Testing (Extended-FAST) method is applied to an integrated activated sludge model (ASM2d) for an MBR system including microbial product formation and physical separation processes. Twenty-one model outputs located throughout the different sections of the bioreactor and 79 model factors are considered. Significant interactions among the model factors are found. Contrary to previous GSA studies for ASM models, we find the relationship between variables and factors to be non-linear and non-additive. By analysing the pattern of the variance decomposition along the plant, the model factors having the highest variance contributions were identified. This study demonstrates the usefulness of variance-based methods in membrane bioreactor modelling where, due to the presence of membranes and different operating conditions than those typically found in conventional activated sludge systems, several highly non-linear effects are present. Further, the obtained results highlight the relevant role played by the modelling approach for MBR taking into account simultaneously biological and physical processes. © 2013.
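
    As a minimal illustration of what a variance-based first-order index measures — this is a crude binning estimator on a toy model, not the Extended-FAST method applied in the paper — one can estimate S_i = Var(E[Y|X_i]) / Var(Y) directly:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 200_000
    x1 = rng.random(n)
    x2 = rng.random(n)
    y = x1 + 0.5 * x2**2 + 0.2 * x1 * x2  # hypothetical non-linear, non-additive model

    def first_order_index(x, y, bins=50):
        """Estimate S_i = Var(E[Y | X_i]) / Var(Y) by binning X_i into quantile bins."""
        edges = np.quantile(x, np.linspace(0.0, 1.0, bins + 1))
        idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, bins - 1)
        cond_means = np.array([y[idx == b].mean() for b in range(bins)])
        return cond_means.var() / y.var()

    print(first_order_index(x1, y), first_order_index(x2, y))
    ```

    The gap between the sum of first-order indices and 1 is due to interaction terms (here the 0.2*x1*x2 term), which is exactly the non-additivity that screening and regression-based methods fail to quantify.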

  8. 16 CFR 240.9 - Proportionally equal terms.

    Science.gov (United States)

    2010-01-01

    ... 16 Commercial Practices 1 2010-01-01 2010-01-01 false Proportionally equal terms. 240.9 Section 240.9 Commercial Practices FEDERAL TRADE COMMISSION GUIDES AND TRADE PRACTICE RULES GUIDES FOR ADVERTISING ALLOWANCES AND OTHER MERCHANDISING PAYMENTS AND SERVICES § 240.9 Proportionally equal terms. (a...

  9. Fundamentals of exploratory analysis of variance

    CERN Document Server

    Hoaglin, David C; Tukey, John W

    2009-01-01

    The analysis of variance is presented as an exploratory component of data analysis, while retaining the customary least squares fitting methods. Balanced data layouts are used to reveal key ideas and techniques for exploration. The approach emphasizes both the individual observations and the separate parts that the analysis produces. Most chapters include exercises, and the appendices give selected percentage points of the Gaussian, t, F, chi-squared and studentized range distributions.

  10. Mean-Variance portfolio optimization by using non constant mean and volatility based on the negative exponential utility function

    Science.gov (United States)

    Soeryana, Endang; Halim, Nurfadhlina Bt Abdul; Sukono, Rusyaman, Endang; Supian, Sudradjat

    2017-03-01

    In stock investments, investors also face risk, since daily stock prices fluctuate. To minimize the level of risk, investors usually form an investment portfolio; a portfolio consisting of several stocks is intended to achieve an optimal composition of the investment. This paper discusses Mean-Variance optimization of a stock portfolio using a non-constant mean and volatility, based on the Negative Exponential Utility Function. The non-constant mean is analyzed using Autoregressive Moving Average (ARMA) models, while the non-constant volatility is analyzed using Generalized Autoregressive Conditional Heteroscedastic (GARCH) models. The optimization process is performed using the Lagrangian multiplier technique. As a numerical illustration, the method is applied to several stocks in Indonesia. The expected result is the proportion of investment in each stock analyzed
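
    The Lagrangian step of such a mean-variance problem has a closed form once the mean vector and covariance matrix are in hand. The numbers below are invented placeholders; in the paper the mean would come from the fitted ARMA model and the variances from GARCH:

    ```python
    import numpy as np

    mu = np.array([0.10, 0.07, 0.05])            # hypothetical expected returns
    Sigma = np.array([[0.04, 0.01, 0.00],
                      [0.01, 0.03, 0.01],
                      [0.00, 0.01, 0.02]])       # hypothetical return covariance matrix
    a = 3.0                                       # risk aversion of the negative exponential utility

    # Maximize mu'w - (a/2) w'Sigma w subject to 1'w = 1.
    # Stationarity gives w = Sigma^{-1}(mu - lambda*1)/a; the budget constraint fixes
    # the multiplier (lam below absorbs the factor 1/a).
    inv = np.linalg.inv(Sigma)
    ones = np.ones_like(mu)
    w_free = inv @ mu / a                         # unconstrained optimum
    lam = (ones @ w_free - 1.0) / (ones @ inv @ ones)
    w = w_free - lam * (inv @ ones)

    print(w, w.sum())                             # portfolio proportions summing to 1
    ```

    For exponential (CARA) utility with Gaussian returns, maximizing expected utility is equivalent to this mean-variance objective, which is why the closed form applies.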

  11. On the Likely Utility of Hybrid Weights Optimized for Variances in Hybrid Error Covariance Models

    Science.gov (United States)

    Satterfield, E.; Hodyss, D.; Kuhl, D.; Bishop, C. H.

    2017-12-01

    Because of imperfections in ensemble data assimilation schemes, one cannot assume that the ensemble covariance is equal to the true error covariance of a forecast. Previous work demonstrated how information about the distribution of true error variances given an ensemble sample variance can be revealed from an archive of (observation-minus-forecast, ensemble-variance) data pairs. Here, we derive a simple and intuitively compelling formula to obtain the mean of this distribution of true error variances given an ensemble sample variance from (observation-minus-forecast, ensemble-variance) data pairs produced by a single run of a data assimilation system. This formula takes the form of a Hybrid weighted average of the climatological forecast error variance and the ensemble sample variance. Here, we test the extent to which these readily obtainable weights can be used to rapidly optimize the covariance weights used in Hybrid data assimilation systems that employ weighted averages of static covariance models and flow-dependent ensemble based covariance models. Univariate data assimilation and multi-variate cycling ensemble data assimilation are considered. In both cases, it is found that our computationally efficient formula gives Hybrid weights that closely approximate the optimal weights found through the simple but computationally expensive process of testing every plausible combination of weights.
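
    The idea of fitting a single hybrid weight from (observation-minus-forecast, ensemble-variance) pairs can be sketched on synthetic data. The gamma distributions below are invented stand-ins for the archive described in the abstract, and the weight is found by a brute-force scan rather than the paper's formula:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n = 50_000

    # Synthetic archive: true error variances, noisy ensemble sample variances,
    # and squared innovations (o - f)^2 drawn consistently with the true variances
    true_var = rng.gamma(shape=2.0, scale=0.5, size=n)
    ens_var = true_var * rng.gamma(shape=5.0, scale=0.2, size=n)
    err_sq = rng.normal(scale=np.sqrt(true_var)) ** 2

    clim_var = err_sq.mean()  # climatological (static) error variance

    def hybrid_var(s2, w):
        """Hybrid estimate: weighted average of flow-dependent and static variances."""
        return w * s2 + (1.0 - w) * clim_var

    # Choose the weight whose hybrid variance best predicts the squared errors
    ws = np.linspace(0.0, 1.0, 101)
    mse = np.array([np.mean((hybrid_var(ens_var, w) - err_sq) ** 2) for w in ws])
    w_best = ws[np.argmin(mse)]
    print(w_best)
    ```

    Because the ensemble variance carries real information about the true variance, the optimal weight lands strictly between the purely static (w = 0) and purely flow-dependent (w = 1) extremes, which is the regime where hybrid covariance models pay off.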

  12. Australian trampoline injury patterns and trends.

    Science.gov (United States)

    Ashby, Karen; Pointer, Sophie; Eager, David; Day, Lesley

    2015-10-01

    To examine national trampoline injury patterns and trends in the context of improved product safety standards and trampoline design modifications. Review of National Hospital Morbidity data. There were an average 1,737 trampoline injuries reported nationally each year from 2002 to 2011. Both injury frequency and rate grew. Statistically significant rate increases were observed among all age groups, although both are highest among children aged 5-9 years. From 2008/09 there is a possible decreasing trend among the 5-9 age group. Falls predominate and 81% of falls result in fracture. Non-fall injuries increased annually as a proportion of all hospitalised injury although they did not comprise more than 2.4% in any one year. History provides no evidence of an observable effect of voluntary Australian Standards for trampoline safety on population rates for trampoline injury. The major design modification--netted enclosures--could contribute to the risk of injury by leading parents to falsely believe that a netted enclosure eradicates the risk of injury. © 2015 Public Health Association of Australia.

  13. A new variance stabilizing transformation for gene expression data analysis.

    Science.gov (United States)

    Kelmansky, Diana M; Martínez, Elena J; Leiva, Víctor

    2013-12-01

    In this paper, we introduce a new family of power transformations, which has the generalized logarithm as one of its members, in the same manner as the usual logarithm belongs to the family of Box-Cox power transformations. Although the new family has been developed for analyzing gene expression data, it allows a wider scope of mean-variance related data to be reached. We study the analytical properties of the new family of transformations, as well as the mean-variance relationships that are stabilized by using its members. We propose a methodology based on this new family, which includes a simple strategy for selecting the family member adequate for a data set. We evaluate the finite sample behavior of different classical and robust estimators based on this strategy by Monte Carlo simulations. We analyze real genomic data by using the proposed transformation to empirically show how the new methodology allows the variance of these data to be stabilized.
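
    The generalized logarithm mentioned as a member of the proposed family has a simple closed form. This sketch shows that member only, with an illustrative tuning constant a; the paper's selection strategy for choosing among family members is not reproduced here:

    ```python
    import numpy as np

    def glog(x, a=1.0):
        """Generalized logarithm: log((x + sqrt(x**2 + a**2)) / 2).

        Behaves like log(x) for x >> a but remains finite at x = 0 (and for
        moderately negative x), which is what stabilizes the variance of
        intensity-like data near zero.
        """
        x = np.asarray(x, dtype=float)
        return np.log((x + np.sqrt(x**2 + a**2)) / 2.0)

    print(glog(1000.0, 1.0))  # close to log(1000)
    print(glog(0.0, 1.0))     # finite, equals log(1/2)
    ```

    The constant a plays the role of the family-selection parameter: larger values damp the transformation near zero, trading bias at low intensities for variance stabilization.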

  14. Pricing perpetual American options under multiscale stochastic elasticity of variance

    International Nuclear Information System (INIS)

    Yoon, Ji-Hun

    2015-01-01

    Highlights: • We study the effects of the stochastic elasticity of variance on perpetual American options. • Our SEV model consists of a fast mean-reverting factor and a slow mean-reverting factor. • The slow-scale factor has a very significant impact on the option price. • We analyze option price structures through the market prices of elasticity risk. - Abstract: This paper studies the pricing of perpetual American options under a constant elasticity of variance type of underlying asset price model, where the constant elasticity is replaced by a fast mean-reverting Ornstein–Uhlenbeck process and a slowly varying diffusion process. By using a multiscale asymptotic analysis, we find the impact of the stochastic elasticity of variance on the option prices and the optimal exercise prices with respect to model parameters. Our results enhance the existing option price structures in terms of flexibility and applicability through the market prices of elasticity risk

  15. Continuous-Time Mean-Variance Portfolio Selection with Random Horizon

    International Nuclear Information System (INIS)

    Yu, Zhiyong

    2013-01-01

    This paper examines the continuous-time mean-variance optimal portfolio selection problem with random market parameters and random time horizon. Treating this problem as a linearly constrained stochastic linear-quadratic optimal control problem, I explicitly derive the efficient portfolios and efficient frontier in closed forms based on the solutions of two backward stochastic differential equations. Some related issues such as a minimum variance portfolio and a mutual fund theorem are also addressed. All the results are markedly different from those in the problem with deterministic exit time. A key part of my analysis involves proving the global solvability of a stochastic Riccati equation, which is interesting in its own right.

  16. Continuous-Time Mean-Variance Portfolio Selection with Random Horizon

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Zhiyong, E-mail: yuzhiyong@sdu.edu.cn [Shandong University, School of Mathematics (China)

    2013-12-15

    This paper examines the continuous-time mean-variance optimal portfolio selection problem with random market parameters and random time horizon. Treating this problem as a linearly constrained stochastic linear-quadratic optimal control problem, I explicitly derive the efficient portfolios and efficient frontier in closed forms based on the solutions of two backward stochastic differential equations. Some related issues such as a minimum variance portfolio and a mutual fund theorem are also addressed. All the results are markedly different from those in the problem with deterministic exit time. A key part of my analysis involves proving the global solvability of a stochastic Riccati equation, which is interesting in its own right.

  17. Secular trends in parent-reported television viewing among children in the United States, 2001-2012.

    Science.gov (United States)

    Loprinzi, P D; Davis, R E

    2016-03-01

    Examine trends in parent-reported television (TV) viewing among preschoolers (2-5 years) and children (6-11 years) between 2001 and 2012. Data from the 2001-2012 National Health and Nutrition Examination Survey (NHANES) were used. The analytic sample included 5724 preschoolers and 7104 children. Parent-proxy reports of TV viewing were assessed at each of the six 2-year cycles. Statistically significant decreases in mean TV viewing between 2001 and 2012 were observed for preschoolers of nearly all gender, race-ethnicity and poverty combinations (with the exception of Mexican American boys), with the largest decrease occurring among non-Hispanic white boys (29% decrease; 2.24 h/day in 2001-2002 to 1.59 h/day in 2011-2012; P = .01). There was evidence of a progressive decrease in mean TV viewing among children, but not to the extent that occurred among the preschool population. Across the six respective cycles for the entire preschool sample, the proportion watching <2 h/day of TV was: 34.9%, 34.2%, 43.9%, 43.4%, 39.1% and 49.2% (P(trend) < .001). For children, the respective proportions were: 32.9%, 25.2%, 38.2%, 36.5%, 38.1% and 36.6% (P(trend) = .01). Statistically significant decreases in mean TV viewing between 2001 and 2012 were observed for preschoolers and children. However, a relatively large proportion of parents report their children watching 2 or more hours/day of TV. © 2015 John Wiley & Sons Ltd.

  18. Portfolios Dominating Indices: Optimization with Second-Order Stochastic Dominance Constraints vs. Minimum and Mean Variance Portfolios

    Directory of Open Access Journals (Sweden)

    Neslihan Fidan Keçeci

    2016-10-01

    Full Text Available The paper compares portfolio optimization with Second-Order Stochastic Dominance (SSD) constraints with mean-variance and minimum variance portfolio optimization. As a distribution-free decision rule, stochastic dominance takes into account the entire distribution of return rather than some specific characteristic, such as variance. The paper is focused on practical applications of portfolio optimization and uses the Portfolio Safeguard (PSG) package, which has precoded modules for optimization with SSD constraints, mean-variance and minimum variance portfolio optimization. We have done in-sample and out-of-sample simulations for portfolios of stocks from the Dow Jones, S&P 100 and DAX indices. The considered portfolios SSD-dominate the Dow Jones, S&P 100 and DAX indices. Simulation demonstrated a superior performance of portfolios with SSD constraints versus mean-variance and minimum variance portfolios.
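
For context, the minimum variance benchmark the paper compares against has a well-known closed form, w = Σ⁻¹1 / (1′Σ⁻¹1). A minimal sketch with made-up covariance numbers (not data from the paper, and not the PSG package):

```python
import numpy as np

def min_variance_weights(cov):
    """Closed-form global minimum-variance portfolio:
    w = Sigma^{-1} 1 / (1' Sigma^{-1} 1); short positions allowed."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

# Toy 3-asset covariance matrix (illustrative numbers only).
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])
w = min_variance_weights(cov)
print(w.sum())       # weights sum to 1
print(w @ cov @ w)   # portfolio variance, no larger than any single asset's
```

SSD-constrained optimization, by contrast, has no such closed form and requires the full return distribution, which is why the paper relies on a dedicated solver.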

  19. Analysis of a genetically structured variance heterogeneity model using the Box-Cox transformation.

    Science.gov (United States)

    Yang, Ye; Christensen, Ole F; Sorensen, Daniel

    2011-02-01

    Over recent years, statistical support for the presence of genetic factors operating at the level of the environmental variance has come from fitting a genetically structured heterogeneous variance model to field or experimental data in various species. Misleading results may arise due to skewness of the marginal distribution of the data. To investigate how the scale of measurement affects inferences, the genetically structured heterogeneous variance model is extended to accommodate the family of Box-Cox transformations. Litter size data in rabbits and pigs that had previously been analysed in the untransformed scale were reanalysed in a scale equal to the mode of the marginal posterior distribution of the Box-Cox parameter. In the rabbit data, the statistical evidence for a genetic component at the level of the environmental variance is considerably weaker than that resulting from an analysis in the original metric. In the pig data, the statistical evidence is stronger, but the coefficient of correlation between additive genetic effects affecting mean and variance changes sign, compared to the results in the untransformed scale. The study confirms that inferences on variances can be strongly affected by the presence of asymmetry in the distribution of data. We recommend that to avoid one important source of spurious inferences, future work seeking support for a genetic component acting on environmental variation using a parametric approach based on normality assumptions confirms that these are met.
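The Box-Cox family the reanalysis relies on is simple to state. A minimal sketch for illustration (the data vector is hypothetical, not the rabbit or pig litter data):

```python
import numpy as np

def box_cox(x, lam):
    """Box-Cox power transform: (x**lam - 1)/lam, with log(x) at lam = 0."""
    x = np.asarray(x, dtype=float)
    if lam == 0:
        return np.log(x)
    return (x**lam - 1.0) / lam

x = np.array([0.5, 1.0, 2.0, 4.0])
print(np.allclose(box_cox(x, 1.0), x - 1.0))                 # True: lam = 1 only shifts
# Continuity at lam -> 0: a tiny lam approximates the log.
print(np.allclose(box_cox(x, 1e-8), np.log(x), atol=1e-6))   # True
```

In the study above, lambda is not fixed in advance; the analysis is carried out at the mode of its marginal posterior distribution.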

  20. Methods to estimate the between‐study variance and its uncertainty in meta‐analysis†

    Science.gov (United States)

    Jackson, Dan; Viechtbauer, Wolfgang; Bender, Ralf; Bowden, Jack; Knapp, Guido; Kuss, Oliver; Higgins, Julian PT; Langan, Dean; Salanti, Georgia

    2015-01-01

    Meta‐analyses are typically used to estimate the overall mean of an outcome of interest. However, inference about between‐study variability, which is typically modelled using a between‐study variance parameter, is usually an additional aim. The DerSimonian and Laird method, currently widely used by default to estimate the between‐study variance, has long been challenged. Our aim is to identify known methods for estimation of the between‐study variance and its corresponding uncertainty, and to summarise the simulation and empirical evidence that compares them. We identified 16 estimators for the between‐study variance, seven methods to calculate confidence intervals, and several comparative studies. Simulation studies suggest that, for both dichotomous and continuous data, the estimator proposed by Paule and Mandel and, for continuous data, the restricted maximum likelihood estimator are better alternatives for estimating the between‐study variance. Based on the scenarios and results presented in the published studies, we recommend the Q‐profile method and the alternative approach based on a ‘generalised Cochran between‐study variance statistic’ to compute corresponding confidence intervals around the resulting estimates. Our recommendations are based on a qualitative evaluation of the existing literature and expert consensus. Evidence‐based recommendations require an extensive simulation study where all methods would be compared under the same scenarios. © 2015 The Authors. Research Synthesis Methods published by John Wiley & Sons Ltd. PMID:26332144
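
The DerSimonian and Laird moment estimator discussed above can be sketched in a few lines (the inputs are toy numbers, not data from the review):

```python
import numpy as np

def dersimonian_laird_tau2(y, v):
    """Method-of-moments (DerSimonian-Laird) estimate of the
    between-study variance tau^2, from study effects y and
    within-study variances v."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / v                        # fixed-effect (inverse-variance) weights
    ybar = np.sum(w * y) / np.sum(w)   # weighted mean effect
    Q = np.sum(w * (y - ybar)**2)      # Cochran's Q statistic
    k = len(y)
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    return max(0.0, (Q - (k - 1)) / c)  # truncate at zero

# Identical study effects imply no between-study heterogeneity.
print(dersimonian_laird_tau2([0.3, 0.3, 0.3], [0.1, 0.2, 0.15]))  # 0.0
```

The truncation at zero is one of the features that motivates the alternative estimators (Paule-Mandel, REML) recommended above.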

  1. Genetic variance in micro-environmental sensitivity for milk and milk quality in Walloon Holstein cattle.

    Science.gov (United States)

    Vandenplas, J; Bastin, C; Gengler, N; Mulder, H A

    2013-09-01

    Animals that are robust to environmental changes are desirable in the current dairy industry. Genetic differences in micro-environmental sensitivity can be studied through heterogeneity of residual variance between animals. However, residual variance between animals is usually assumed to be homogeneous in traditional genetic evaluations. The aim of this study was to investigate genetic heterogeneity of residual variance by estimating variance components in residual variance for milk yield, somatic cell score, the contents in milk (g/dL) of 2 groups of milk fatty acids (i.e., saturated and unsaturated fatty acids), and the content in milk of one individual fatty acid (i.e., oleic acid, C18:1 cis-9), for first-parity Holstein cows in the Walloon Region of Belgium. A total of 146,027 test-day records from 26,887 cows in 747 herds were available. All cows had at least 3 records and a known sire. These sires had at least 10 cows with records, and each herd × test-day had at least 5 cows. The 5 traits were analyzed separately based on fixed lactation curve and random regression test-day models for the mean. Estimation of variance components was performed by iteratively running an expectation-maximization REML algorithm implemented via double hierarchical generalized linear models. Based on fixed lactation curve test-day mean models, heritability for residual variance ranged between 1.01×10^-3 and 4.17×10^-3 for all traits. The genetic standard deviation in residual variance (i.e., approximately the genetic coefficient of variation of residual variance) ranged between 0.12 and 0.17. Therefore, some genetic variance in micro-environmental sensitivity existed in the Walloon Holstein dairy cattle for the 5 studied traits. The standard deviations due to herd × test-day and permanent environment in residual variance ranged between 0.36 and 0.45 for the herd × test-day effect and between 0.55 and 0.97 for the permanent environmental effect. Therefore, nongenetic effects also

  2. Variance estimation for complex indicators of poverty and inequality using linearization techniques

    Directory of Open Access Journals (Sweden)

    Guillaume Osier

    2009-12-01

    Full Text Available The paper presents the Eurostat experience in calculating measures of precision, including standard errors, confidence intervals and design effect coefficients (the ratio of the variance of a statistic under the actual sample design to the variance of that statistic under a simple random sample of the same size) for the "Laeken" indicators, that is, a set of complex indicators of poverty and inequality which had been set out in the framework of the EU-SILC project (European Statistics on Income and Living Conditions). The Taylor linearization method (Tepping, 1968; Woodruff, 1971; Wolter, 1985; Tillé, 2000) is a well-established method for obtaining variance estimators for nonlinear statistics such as ratios, correlation or regression coefficients. It consists of approximating a nonlinear statistic with a linear function of the observations by using first-order Taylor series expansions. Then, an easily found variance estimator of the linear approximation is used as an estimator of the variance of the nonlinear statistic. Although the Taylor linearization method handles all the nonlinear statistics that can be expressed as a smooth function of estimated totals, the approach fails to encompass the "Laeken" indicators, since the latter have more complex mathematical expressions. Consequently, a generalized linearization method (Deville, 1999), which relies on the concept of influence function (Hampel, Ronchetti, Rousseeuw and Stahel, 1986), has been implemented. After presenting the EU-SILC instrument and the main target indicators for which variance estimates are needed, the paper elaborates on the main features of the linearization approach based on influence functions. Ultimately, estimated standard errors, confidence intervals and design effect coefficients obtained from this approach are presented and discussed.
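
To illustrate the linearization idea in the simplest nonlinear case, consider a ratio of means under simple random sampling (a far simpler setting than the Laeken indicators; the influence-value formula below is the textbook delta-method one, not Eurostat's code):

```python
import numpy as np

def linearized_ratio_variance(y, x):
    """First-order Taylor (delta-method) variance estimate of the
    ratio R = ybar/xbar under simple random sampling: each unit's
    linearized value is z_i = (y_i - R*x_i)/xbar, and
    var(R) is approximated by var(zbar)."""
    y, x = np.asarray(y, float), np.asarray(x, float)
    n = len(y)
    R = y.mean() / x.mean()
    z = (y - R * x) / x.mean()   # linearized (influence) values
    return z.var(ddof=1) / n

y = np.array([2.0, 4.0, 6.0, 8.0])
x = np.array([1.0, 2.0, 3.0, 4.0])
# y is exactly proportional to x, so the estimated variance of the
# ratio collapses to zero.
print(linearized_ratio_variance(y, x))  # 0.0
```

The generalized (influence-function) approach in the paper extends this recipe to statistics, such as quantile-based poverty indicators, that are not smooth functions of totals.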

  3. The role of respondents’ comfort for variance in stated choice surveys

    DEFF Research Database (Denmark)

    Emang, Diana; Lundhede, Thomas; Thorsen, Bo Jellesmark

    2017-01-01

    Preference elicitation among outdoor recreational users is subject to measurement errors that depend, in part, on survey planning. This study uses data from a choice experiment survey on recreational SCUBA diving to investigate whether self-reported information on respondents’ comfort when they complete surveys correlates with the error variance in stated choice models of their responses. Comfort-related variables are included in the scale functions of the scaled multinomial logit models. The hypothesis was that higher comfort reduces error variance in answers, as revealed by a higher scale parameter, and vice versa. Information on, e.g., sleep and time since eating (higher comfort) correlated with scale heterogeneity, and produced lower error variance when controlled for in the model. That respondents’ comfort may influence choice behavior suggests that knowledge of the respondents’ activity …

  4. Trends in single women with malignancy of the uterine cervix in United States.

    Science.gov (United States)

    Machida, Hiroko; Blake, Erin A; Eckhardt, Sarah E; Takiuchi, Tsuyoshi; Grubbs, Brendan H; Mikami, Mikio; Roman, Lynda D; Matsuo, Koji

    2018-03-01

    To examine trends and characteristics of single women with malignancy of the uterine cervix. This is a retrospective observational study examining the United States population-based tumor registry (the Surveillance, Epidemiology, and End Results program). Time-specific trends in single marital status were examined in 3,294,208 women among 12 common female malignancies, including 87,151 women with uterine cervical malignancy, between 1973 and 2013. While the proportion of single women in the majority of malignancies increased during the study time, the proportion of single women with cervical malignancy increased significantly more than in other malignancies (29.3% in 2013 from 6.3% in 1973). There was a surge in the proportion of single women with cervical malignancy starting in the early 1990s, exhibiting the largest annual percentage rate change (APC) among all examined malignancies (1.8%; 95% confidence interval [CI]=1.6, 2.0; p < […]). […] The proportion of single women aged ≥40 years increased significantly during the time (APC, 2.7%; 95% CI=2.3, 3.2; p < […]). The proportion of single women with malignancy of the uterine cervix has significantly increased in the past 4 decades. This increase was most dramatic in single women aged ≥40 years. Improving screening strategies in single women aged ≥40 years may help reduce the incidence of this malignancy. Copyright © 2018. Asian Society of Gynecologic Oncology, Korean Society of Gynecologic Oncology

  5. Contingency proportion systematically influences contingency learning.

    Science.gov (United States)

    Forrin, Noah D; MacLeod, Colin M

    2018-01-01

    In the color-word contingency learning paradigm, each word appears more often in one color (high contingency) than in the other colors (low contingency). Shortly after beginning the task, color identification responses become faster on the high-contingency trials than on the low-contingency trials: the contingency learning effect. Across five groups, we varied the high-contingency proportion in 10% steps, from 80% to 40%. The size of the contingency learning effect was positively related to high-contingency proportion, with the effect disappearing when high contingency was reduced to 40%. At the two highest contingency proportions, the magnitude of the effect increased over trials, a pattern suggesting that there was an increasing cost for the low-contingency trials rather than an increasing benefit for the high-contingency trials. Overall, the results fit a modified version of Schmidt's (2013, Acta Psychologica, 142, 119-126) parallel episodic processing account, in which prior trial instances are routinely retrieved from memory and influence current trial performance.

  6. Fluctuations in atomic collision cascades - variance and correlations in sputtering and defect distributions

    International Nuclear Information System (INIS)

    Chakarova, R.; Pazsit, I.

    1997-01-01

    Fluctuation phenomena are investigated in various collision processes, i.e. ion-bombardment-induced sputtering and defect creation. The means and variances of the sputter yield and of the numbers of vacancies and interstitials are calculated as functions of the ion energy and the ion-target mass ratio. It is found that the relative variance of the defects in half-spaces and the relative variance of the sputter yield are not monotonic functions of the mass ratio. Two-point correlation functions in the depth variable, as well as in the sputtered energy, are also calculated. These functions help in interpreting the behaviour of the relative variances of the integrated quantities, as well as in understanding the cascade dynamics. All calculations are based on Lindhard power-law cross sections and use a binary collision Monte Carlo algorithm. 30 refs, 25 figs

  7. Fluctuations in atomic collision cascades - variance and correlations in sputtering and defect distributions

    Energy Technology Data Exchange (ETDEWEB)

    Chakarova, R.; Pazsit, I.

    1997-01-01

    Fluctuation phenomena are investigated in various collision processes, i.e. ion-bombardment-induced sputtering and defect creation. The means and variances of the sputter yield and of the numbers of vacancies and interstitials are calculated as functions of the ion energy and the ion-target mass ratio. It is found that the relative variance of the defects in half-spaces and the relative variance of the sputter yield are not monotonic functions of the mass ratio. Two-point correlation functions in the depth variable, as well as in the sputtered energy, are also calculated. These functions help in interpreting the behaviour of the relative variances of the integrated quantities, as well as in understanding the cascade dynamics. All calculations are based on Lindhard power-law cross sections and use a binary collision Monte Carlo algorithm. 30 refs, 25 figs.

  8. On discrete stochastic processes with long-lasting time dependence in the variance

    Science.gov (United States)

    Queirós, S. M. D.

    2008-11-01

    In this manuscript, we analytically and numerically study the statistical properties of a heteroskedastic process based on the celebrated ARCH generator of random variables, whose variance is defined by a memory of q_m-exponential form (the q_m-exponential reduces to the ordinary exponential, e_{q_m}^x = e^x, when q_m = 1). Specifically, we inspect the self-correlation function of the squared random variables as well as the kurtosis. In addition, by numerical procedures, we infer the stationary probability density function of both the heteroskedastic random variables and the variance, the multiscaling properties, the first-passage time distribution, and the dependence degree. Finally, we introduce an asymmetric variance version of the model that enables us to reproduce the so-called leverage effect in financial markets.
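
A plain ARCH(1) generator, the simplest member of the family this abstract builds on, shows the two signature properties (fat tails, time-dependent variance). Parameter values below are arbitrary; this is not the q_m-exponential memory variant studied in the paper:

```python
import numpy as np

def simulate_arch1(n, a0=0.5, a1=0.5, seed=0):
    """ARCH(1): x_t = sigma_t * eps_t with sigma_t^2 = a0 + a1 * x_{t-1}^2.
    The unconditional variance is a0 / (1 - a1); squared values are
    autocorrelated even though the x_t themselves are not."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    var_prev = a0 / (1.0 - a1)   # start at the stationary variance
    for t in range(n):
        sigma2 = a0 + a1 * (x[t - 1]**2 if t > 0 else var_prev)
        x[t] = np.sqrt(sigma2) * rng.standard_normal()
    return x

x = simulate_arch1(200_000)
print(x.var())   # close to the stationary value a0/(1-a1) = 1.0
excess_kurt = np.mean(x**4) / x.var()**2 - 3.0
print(excess_kurt)   # clearly positive: ARCH produces fat tails
```

The heavy tails arise even though each conditional draw is Gaussian, which is exactly the mechanism the manuscript generalizes through its memory kernel.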

  9. A Cure for Variance Inflation in High Dimensional Kernel Principal Component Analysis

    DEFF Research Database (Denmark)

    Abrahamsen, Trine Julie; Hansen, Lars Kai

    2011-01-01

    Small sample high-dimensional principal component analysis (PCA) suffers from variance inflation and lack of generalizability. It has earlier been pointed out that a simple leave-one-out variance renormalization scheme can cure the problem. In this paper we generalize the cure in two directions: first, we propose a computationally less intensive approximate leave-one-out estimator; secondly, we show that variance inflation is also present in kernel principal component analysis (kPCA) and we provide a non-parametric renormalization scheme which can quite efficiently restore generalizability in kPCA. As for PCA, our analysis also suggests a simplified approximate expression. © 2011 Trine J. Abrahamsen and Lars K. Hansen.

  10. Handling nonnormality and variance heterogeneity for quantitative sublethal toxicity tests.

    Science.gov (United States)

    Ritz, Christian; Van der Vliet, Leana

    2009-09-01

    The advantages of using regression-based techniques to derive endpoints from environmental toxicity data are clear, and slowly, this superior analytical technique is gaining acceptance. As use of regression-based analysis becomes more widespread, some of the associated nuances and potential problems come into sharper focus. Looking at data sets that cover a broad spectrum of standard test species, we noticed that some model fits to data failed to meet two key assumptions, variance homogeneity and normality, that are necessary for correct statistical analysis via regression-based techniques. Failure to meet these assumptions often is caused by reduced variance at the concentrations showing severe adverse effects. Although commonly used with linear regression analysis, transformation of the response variable alone is not appropriate when fitting data using nonlinear regression techniques. Through analysis of sample data sets, including Lemna minor, Eisenia andrei (terrestrial earthworm), and algae, we show that both the so-called Box-Cox transformation and use of the Poisson distribution can help to correct variance heterogeneity and nonnormality and so allow nonlinear regression analysis to be implemented. Both the Box-Cox transformation and the Poisson distribution can be readily implemented into existing protocols for statistical analysis. By correcting for nonnormality and variance heterogeneity, these two statistical tools can be used to encourage the transition to regression-based analysis and the deprecation of less-desirable and less-flexible analytical techniques, such as linear interpolation.

  11. Analysis of force variance for a continuous miner drum using the Design of Experiments method

    Energy Technology Data Exchange (ETDEWEB)

    S. Somanchi; V.J. Kecojevic; C.J. Bise [Pennsylvania State University, University Park, PA (United States)

    2006-06-15

    Continuous miners (CMs) are excavating machines designed to extract a variety of minerals by underground mining. The variance in force experienced by the cutting drum is a very important aspect that must be considered during drum design. A uniform variance essentially means that an equal load is applied on the individual cutting bits and this, in turn, enables better cutting action, greater efficiency, and longer bit and machine life. There are certain input parameters used in the drum design whose exact relationships with force variance are not clearly understood. This paper determines (1) the factors that have a significant effect on the force variance of the drum and (2) the values that can be assigned to these factors to minimize the force variance. A computer program, Continuous Miner Drum (CMD), was developed in collaboration with Kennametal, Inc. to facilitate the mechanical design of CM drums. CMD also facilitated data collection for determining significant factors affecting force variance. Six input parameters, including centre pitch, outer pitch, balance angle, shift angle, set angle and relative angle, were tested at two levels. Trials were configured using the Design of Experiments (DoE) method, where a 2^6 full-factorial experimental design was selected to investigate the effect of these factors on force variance. Results from the analysis show that all parameters except balance angle, as well as their interactions, significantly affect the force variance.
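
The 2^6 full-factorial layout used in the study is easy to reproduce. A sketch using the six factor names listed above, with the conventional coded ±1 levels (the coding and enumeration are generic DoE practice, not the CMD program's internals):

```python
from itertools import product

# The six drum-design factors named in the paper.
factors = ["centre pitch", "outer pitch", "balance angle",
           "shift angle", "set angle", "relative angle"]

# 2^6 full-factorial design: every combination of the two coded
# levels (-1 / +1) across all six factors.
design = list(product([-1, 1], repeat=len(factors)))

print(len(design))  # 64 runs
# Each column is balanced: every factor spends half the runs at each level.
print(all(sum(run[i] for run in design) == 0 for i in range(6)))  # True
```

Running all 64 combinations is what lets the analysis separate main effects from the factor interactions mentioned in the results.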

  12. 78 FR 14122 - Revocation of Permanent Variances

    Science.gov (United States)

    2013-03-04

    ... Douglas Fir planking had to have at least a 1,900 fiber stress and 1,900,000 modulus of elasticity, while the Yellow Pine planking had to have at least 2,500 fiber stress and 2,000,000 modulus of elasticity... the permanent variances, and affected employees, to submit written data, views, and arguments...

  13. Against proportional shortfall as a priority-setting principle.

    Science.gov (United States)

    Altmann, Samuel

    2018-05-01

    As the demand for healthcare rises, so does the need for priority setting in healthcare. In this paper, I consider a prominent priority-setting principle: proportional shortfall. My purpose is to argue that proportional shortfall, as a principle, should not be adopted. My key criticism is that proportional shortfall fails to consider past health. Proportional shortfall is justified as it supposedly balances concern for prospective health while still accounting for lifetime health, even though past health is deemed irrelevant. Accounting for this lifetime perspective means that the principle may indirectly consider past health by accounting for how far an individual is from achieving a complete, healthy life. I argue that proportional shortfall does not account for this lifetime perspective as it fails to incorporate the fair innings argument as originally claimed, undermining its purported justification. I go on to demonstrate that the case for ignoring past health is weak, and argue that past health is at least sometimes relevant for priority-setting decisions. Specifically, when an individual's past health has a direct impact on current or future health, and when one individual has enjoyed significantly more healthy life years than another. Finally, I demonstrate that by ignoring past illnesses, even those entirely unrelated to their current illness, proportional shortfall can lead to instances of double jeopardy, a highly problematic implication. These arguments give us reason to reject proportional shortfall. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  14. Optimal control of LQG problem with an explicit trade-off between mean and variance

    Science.gov (United States)

    Qian, Fucai; Xie, Guo; Liu, Ding; Xie, Wenfang

    2011-12-01

    For discrete-time linear-quadratic Gaussian (LQG) control problems, a utility function on the expectation and the variance of the conventional performance index is considered. The utility function is viewed as an overall objective of the system and can perform the optimal trade-off between the mean and the variance of the performance index. The nonlinear utility function is first converted into an auxiliary parameter optimisation problem over the expectation and the variance. Then an optimal closed-loop feedback controller for the nonseparable mean-variance minimisation problem is designed by nonlinear mathematical programming. Finally, simulation results are given to verify the effectiveness of the algorithm developed in this article.

  15. Variance estimation in the analysis of microarray data

    KAUST Repository

    Wang, Yuedong; Ma, Yanyuan; Carroll, Raymond J.

    2009-01-01

    Microarrays are one of the most widely used high throughput technologies. One of the main problems in the area is that conventional estimates of the variances that are required in the t-statistic and other statistics are unreliable owing

  16. AN ADAPTIVE OPTIMAL KALMAN FILTER FOR STOCHASTIC VIBRATION CONTROL SYSTEM WITH UNKNOWN NOISE VARIANCES

    Institute of Scientific and Technical Information of China (English)

    Li Shu; Zhuo Jiashou; Ren Qingwen

    2000-01-01

    In this paper, an optimal criterion is presented for an adaptive Kalman filter in a control system with unknown variances of stochastic vibration, by constructing a function of the noise variances and minimizing that function. We solve for the model and measurement variances by using the DFP optimization method to guarantee that the results of the Kalman filter are optimal. Finally, the control of vibration can be implemented by the LQG method.
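
For orientation, here is a scalar Kalman filter in which the two noise variances q and r appear explicitly; these are the quantities the adaptive scheme above tunes. This is a generic textbook filter with assumed toy data, not the DFP-based method of the paper:

```python
import numpy as np

def kalman_1d(measurements, q, r, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a random-walk state.
    q = process-noise variance, r = measurement-noise variance."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                # predict: state uncertainty grows by q
        k = p / (p + r)          # Kalman gain
        x = x + k * (z - x)      # update with the innovation z - x
        p = (1.0 - k) * p
        estimates.append(x)
    return np.array(estimates)

rng = np.random.default_rng(1)
true_value = 5.0
z = true_value + rng.normal(0.0, 1.0, size=500)  # noisy measurements
est = kalman_1d(z, q=1e-5, r=1.0)
print(est[-1])   # close to 5.0: the filter averages out the noise
```

If q and r are misspecified, the gain k is wrong and the filter is no longer optimal, which is precisely the problem the adaptive criterion addresses.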

  17. The divine proportion

    CERN Document Server

    Huntley, H E

    1970-01-01

    Using simple mathematical formulas, most as basic as Pythagoras's theorem and requiring only a very limited knowledge of mathematics, Professor Huntley explores the fascinating relationship between geometry and aesthetics. Poetry, patterns like Pascal's triangle, philosophy, psychology, music, and dozens of simple mathematical figures are enlisted to show that the "divine proportion" or "golden ratio" is a feature of geometry and analysis which awakes answering echoes in the human psyche. When we judge a work of art aesthetically satisfying, according to his formulation, we are making it c

  18. Pressure control valve using proportional electro-magnetic solenoid actuator

    International Nuclear Information System (INIS)

    Yun, So Nam; Ham, Young Bog; Park, Pyoung Won

    2006-01-01

    This paper presents the experimental characteristics of an electro-hydraulic proportional pressure control valve. In this study, the poppet and valve body, which are assembled with the proportional solenoid, were designed and manufactured. The constant-force characteristic of the proportional solenoid actuator in the control region should be independent of the plunger position in order for it to be used to control the valve position in a fluid flow control system. The stroke-force characteristic of the proportional solenoid actuator is determined by the shape (or parameters) of the control cone. In this paper, the steady-state and transient characteristics of the solenoid actuator for the electro-hydraulic proportional valve are analyzed using the finite element method, and it is confirmed that the proportional solenoid actuator has a constant attraction force in the control region, independently of the stroke position. The effects of parameters such as the control cone length, thickness and taper length are also discussed.

  19. Temporal and Spatial Trend of Climate Variability in Vietnam

    OpenAIRE

    Duc Luong Nguyen

    2014-01-01

    Vietnam’s long coastline, geographic location, and diverse topography and climates contribute to its being one of the most hazard-prone countries of the Asia-Pacific region. Given that a high proportion of the country’s population and economic assets are located in coastal lowlands and deltas, Vietnam has been ranked among the five countries likely to be most affected by global climate change. This paper aims at providing a short overview of the temporal and spatial trends of climate variability …

  20. The Effects of Diet on the Proportion of Intramuscular Fat in Human Muscle: A Systematic Review and Meta-analysis

    Directory of Open Access Journals (Sweden)

    Sara Ahmed

    2018-02-01

    Full Text Available Background: There is an increasing trend in the consumption of poor-quality diets worldwide, contributing to the increase of non-communicable diseases. Diet directly influences physiological composition and subsequently physical health. Studies have shown that dietary macronutrient and energy content can influence the proportion of intramuscular fat (IMF), which mediates various metabolic and endocrine dysfunctions. The purpose of this systematic review was to identify evidence in the literature assessing the association between different dietary interventions and the proportion of IMF in humans. Methods: Three medical databases (Medline, EMBASE, and Cochrane) were searched to identify studies assessing changes in IMF after dietary interventions. The primary outcome measure was the change in IMF proportions after a dietary intervention. The effects of high-fat, high-carbohydrate, low-calorie, and starvation diets were assessed qualitatively. A meta-analysis assessing the effect of high-fat diets was conducted. Follow-up sensitivity and subgroup analyses were also conducted. Results: One thousand eight hundred and sixty-six articles were identified for review. Of these articles, 13 were eligible for inclusion after a full screening. High-fat diets increased IMF proportions, standardized mean difference = 1.24 (95% confidence interval, 0.43–2.05), with a significant overall effect size (P = 0.003). Diets with an increased proportion of carbohydrates decreased IMF proportions; however, increasing caloric intake with carbohydrates increased IMF. Starvation diets increased IMF stores, and hypocaloric diets did not result in any IMF proportion changes. Conclusion: This systematic review suggests that high-fat diets, diets with caloric intake increased above the amount required to maintain BMI with carbohydrates, and short-term starvation diets are associated with increases in IMF content. Further studies are needed to assess the effects of macronutrient

  1. Decomposing variation in male reproductive success: age-specific variances and covariances through extra-pair and within-pair reproduction.

    Science.gov (United States)

    Lebigre, Christophe; Arcese, Peter; Reid, Jane M

    2013-07-01

    Age-specific variances and covariances in reproductive success shape the total variance in lifetime reproductive success (LRS), age-specific opportunities for selection, and population demographic variance and effective size. Age-specific (co)variances in reproductive success achieved through different reproductive routes must therefore be quantified to predict population, phenotypic and evolutionary dynamics in age-structured populations. While numerous studies have quantified age-specific variation in mean reproductive success, age-specific variances and covariances in reproductive success, and the contributions of different reproductive routes to these (co)variances, have not been comprehensively quantified in natural populations. We applied 'additive' and 'independent' methods of variance decomposition to complete data describing apparent (social) and realised (genetic) age-specific reproductive success across 11 cohorts of socially monogamous but genetically polygynandrous song sparrows (Melospiza melodia). We thereby quantified age-specific (co)variances in male within-pair and extra-pair reproductive success (WPRS and EPRS) and the contributions of these (co)variances to the total variances in age-specific reproductive success and LRS. 'Additive' decomposition showed that within-age and among-age (co)variances in WPRS across males aged 2-4 years contributed most to the total variance in LRS. Age-specific (co)variances in EPRS contributed relatively little. However, extra-pair reproduction altered age-specific variances in reproductive success relative to the social mating system, and hence altered the relative contributions of age-specific reproductive success to the total variance in LRS. 'Independent' decomposition showed that the (co)variances in age-specific WPRS, EPRS and total reproductive success, and the resulting opportunities for selection, varied substantially across males that survived to each age. 
Furthermore, extra-pair reproduction increased
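    The 'additive' decomposition used in this study rests on the identity that the variance of LRS equals the sum of within-age variances plus all among-age covariances of age-specific reproductive success. A minimal numerical sketch with invented Poisson rates, not song sparrow estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
n_males, ages = 500, 3
# hypothetical age-specific within-pair and extra-pair success counts
wprs = rng.poisson(lam=[1.0, 2.0, 1.5], size=(n_males, ages))
eprs = rng.poisson(lam=[0.2, 0.5, 0.3], size=(n_males, ages))
total = wprs + eprs                # age-specific reproductive success
lrs = total.sum(axis=1)            # lifetime reproductive success

# 'additive' decomposition: var(LRS) = sum of within-age variances
# plus the sum of all among-age covariances
cov = np.cov(total, rowvar=False)  # 3 x 3 age-specific (co)variance matrix
within_age = np.trace(cov)         # diagonal: within-age variances
among_age = cov.sum() - within_age # off-diagonal: among-age covariances
```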

  2. Some asymptotic theory for variance function smoothing | Kibua ...

    African Journals Online (AJOL)

    Simple selection of the smoothing parameter is suggested. Both homoscedastic and heteroscedastic regression models are considered. Keywords: Asymptotic, Smoothing, Kernel, Bandwidth, Bias, Variance, Mean squared error, Homoscedastic, Heteroscedastic. > East African Journal of Statistics Vol. 1 (1) 2005: pp. 9-22 ...
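    Variance function smoothing of the kind studied here can be sketched by kernel-smoothing squared residuals around a kernel estimate of the mean; the bandwidths and heteroscedastic data below are invented for illustration:

```python
import numpy as np

def nw_smooth(x, y, grid, h):
    # Nadaraya-Watson kernel regression with a Gaussian kernel
    w = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h) ** 2)
    return (w * y).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, 400)
# heteroscedastic noise: standard deviation grows with x
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.1 + 0.4 * x)

grid = np.linspace(0.05, 0.95, 19)
m_hat = nw_smooth(x, y, x, h=0.05)                    # mean function at the data
v_hat = nw_smooth(x, (y - m_hat) ** 2, grid, h=0.1)   # smoothed squared residuals
# v_hat estimates sigma^2(x) = (0.1 + 0.4x)^2, increasing in x
```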

  3. Properties of realized variance under alternative sampling schemes

    NARCIS (Netherlands)

    Oomen, R.C.A.

    2006-01-01

    This paper investigates the statistical properties of the realized variance estimator in the presence of market microstructure noise. Different from the existing literature, the analysis relies on a pure jump process for high frequency security prices and explicitly distinguishes among alternative

  4. Trends and patterns of sexual behaviors among adolescents and adults aged 14 to 59 years, United States.

    Science.gov (United States)

    Liu, Gui; Hariri, Susan; Bradley, Heather; Gottlieb, Sami L; Leichliter, Jami S; Markowitz, Lauri E

    2015-01-01

    Evaluation of sexual behaviors is essential to better understand the epidemiology of sexually transmitted infections and their sequelae. The National Health and Nutrition Examination Surveys (NHANES) is an ongoing probability sample survey of the US population. Using NHANES sexual behavior data from 1999 to 2012, we performed the following: (1) trend analyses among adults aged 25 to 59 years by 10-year birth cohorts and (2) descriptive analyses among participants aged 14 to 24 years. Sex was defined as vaginal, anal, or oral sex. Among adults aged 25 to 59 years, median age at sexual initiation decreased between the 1940-1949 and 1980-1989 cohorts from 17.9 to 16.2 among females (P trend < 0.001) and from 17.1 to 16.1 among males (P trend < 0.001). Median lifetime partners increased between the 1940-1949 and 1970-1979 cohorts, from 2.6 to 5.3 among females (P trend < 0.001) and from 6.7 to 8.8 among males (P trend < 0.001). The percentage of females reporting ever having a same-sex partner increased from 5.2% to 9.3% between the 1940-1949 and 1970-1979 cohorts (P trend < 0.001). Among participants aged 14 to 24 years, the percentage having had sex increased with age, from 12.5% among females and 13.1% among males at age 14 years to more than 75% at age 19 years for both sexes. Among sexually experienced 14- to 19-year-olds, 45.2% of females and 55.0% of males had at least 3 lifetime partners; 39.4% of females and 48.6% of males had at least 2 partners in the past year. The proportion of females aged 20 to 24 years who reported ever having a same-sex partner was 14.9%. The proportion of participants aged 14-19 or 20-24 years reporting ever having sex did not differ by survey year from 1999 to 2012 for either males or females. Sexual behaviors changed with successive birth cohorts, with more pronounced changes among females. A substantial proportion of adolescents are sexually active and have multiple partners. 
These data reinforce existing recommendations for sexual

  5. India's Conditional Cash Transfer Programme (the JSY to Promote Institutional Birth: Is There an Association between Institutional Birth Proportion and Maternal Mortality?

    Directory of Open Access Journals (Sweden)

    Bharat Randive

    Full Text Available India accounts for 19% of global maternal deaths, three-quarters of which come from nine states. In 2005, India launched a conditional cash transfer (CCT) programme, Janani Suraksha Yojana (JSY), to reduce the maternal mortality ratio (MMR) through promotion of institutional births. JSY is the largest CCT in the world. In the nine states with relatively lower socioeconomic levels, JSY provides a cash incentive to all women who give birth in a health institution. The cash incentive is intended to reduce financial barriers to accessing institutional care for delivery. Increased institutional births are expected to reduce MMR. Thus, JSY is expected to (a) increase institutional births and (b) reduce MMR in states with high proportions of institutional births. We examine the association between (a) service uptake, i.e., institutional birth proportions and (b) health outcome, i.e., MMR. Data from the Sample Registration Survey of India were analysed to describe trends in the proportion of institutional births before (2005) and during (2006-2010) the implementation of the JSY. Data from the Annual Health Survey (2010-2011) for all 284 districts in the above-mentioned nine states were analysed to assess the relationship between MMR and institutional births. The proportion of institutional births increased from a pre-programme average of 20% to 49% in 5 years (p < 0.05). In bivariate analysis, the proportion of institutional births had a small negative correlation with district MMR (r = -0.11). The multivariate regression model did not establish a significant association between institutional birth proportions and MMR [CI: -0.10, 0.68]. Our analysis confirmed that JSY succeeded in raising institutional births significantly. However, we were unable to detect a significant association between institutional birth proportion and MMR. This indicates that the high institutional birth proportions that JSY has achieved are of themselves inadequate to reduce MMR. Other factors including improved quality of care at

  6. PEP quark search proportional chambers

    Energy Technology Data Exchange (ETDEWEB)

    Parker, S I; Harris, F; Karliner, I; Yount, D [Hawaii Univ., Honolulu (USA); Ely, R; Hamilton, R; Pun, T [California Univ., Berkeley (USA). Lawrence Berkeley Lab.; Guryn, W; Miller, D; Fries, R [Northwestern Univ., Evanston, IL (USA)

    1981-04-01

    Proportional chambers are used in the PEP Free Quark Search to identify and remove possible background sources such as particles traversing the edges of counters, to permit geometric corrections to the dE/dx and TOF information from the scintillator and Cerenkov counters, and to look for possible high cross section quarks. The present beam pipe has a thickness of 0.007 interaction lengths (λ_i) and is followed in both arms, each with 45° ≤ θ ≤ 135°, Δφ = 90°, by 5 proportional chambers, each 0.0008 λ_i thick with 32 channels of pulse height readout, and by 3 thin scintillator planes, each 0.003 λ_i thick. Following this thin front end, each arm of the detector has 8 layers of scintillator (one with scintillating light pipes) interspersed with 4 proportional chambers and a layer of lucite Cerenkov counters. Both the calculated ion statistics and measurements using He-CH4 gas in a test chamber indicate that the chamber efficiencies should be >98% for q = 1/3. The Landau spread measured in the test was equal to that observed for normal q = 1 traversals. One scintillator plane and thin chamber in each arm will have an extra set of ADCs with a wide gate bracketing the normal one so timing errors and tails of earlier pulses should not produce fake quarks.

  7. Long-term trends in alcohol policy attitudes in Norway.

    Science.gov (United States)

    Rossow, Ingeborg; Storvoll, Elisabet E

    2014-05-01

    The aim of this study was to describe trends in attitudes to alcohol control policies in Norway over a period of 50 years and to discuss how these trends relate to developments in alcohol policy. Survey data from 17 national population surveys, national statistics and previous publications were applied to describe trends in attitudes to alcohol control policies (access to alcohol and price) and changes in these policies over the period 1962 to 2012. From 1962 to 1999, an increasing proportion of the population reported that regulations on availability of alcohol were too strict and that alcohol prices were too high, whereas in the 2000s this trend was reversed and support for existing control policies increased. Although the pillars of Norwegian alcohol policy (high prices, restricted access and a state monopoly on retail sales) remained, control policies were gradually relaxed throughout the entire period. Relaxation of strict alcohol control policies in Norway in the first four decades was probably, in part, the result of increasingly liberal public opinion. The subsequent reversed trend in opinions with increasing support for control policies may be due to several factors, for example, consumer-oriented changes in the monopoly system, increased availability and affordability, increased awareness of alcohol-related harm and the effectiveness of control policies. Thus, the dynamics of policies and attitudes may well change over time. © 2013 Australasian Professional Society on Alcohol and other Drugs.

  8. Analysis of a genetically structured variance heterogeneity model using the Box-Cox transformation

    DEFF Research Database (Denmark)

    Yang, Ye; Christensen, Ole Fredslund; Sorensen, Daniel

    2011-01-01

    of the marginal distribution of the data. To investigate how the scale of measurement affects inferences, the genetically structured heterogeneous variance model is extended to accommodate the family of Box–Cox transformations. Litter size data in rabbits and pigs that had previously been analysed in the untransformed scale were reanalysed in a scale equal to the mode of the marginal posterior distribution of the Box–Cox parameter. In the rabbit data, the statistical evidence for a genetic component at the level of the environmental variance is considerably weaker than that resulting from an analysis in the original metric. In the pig data, the statistical evidence is stronger, but the coefficient of correlation between additive genetic effects affecting mean and variance changes sign, compared to the results in the untransformed scale. The study confirms that inferences on variances can be strongly affected...

  9. Large-Scale Analysis of Art Proportions

    DEFF Research Database (Denmark)

    Jensen, Karl Kristoffer

    2014-01-01

    While literature often tries to impute mathematical constants into art, this large-scale study (11 databases of paintings and photos, around 200.000 items) shows a different truth. The analysis, consisting of the width/height proportions, shows a value of rarely if ever one (square) and with majo...

  10. Right on Target, or Is it? The Role of Distributional Shape in Variance Targeting

    Directory of Open Access Journals (Sweden)

    Stanislav Anatolyev

    2015-08-01

    Full Text Available Estimation of GARCH models can be simplified by augmenting quasi-maximum likelihood (QML) estimation with variance targeting, which reduces the degree of parameterization and facilitates estimation. We compare the two approaches and investigate, via simulations, how non-normality features of the return distribution affect the quality of estimation of the volatility equation and corresponding value-at-risk predictions. We find that most GARCH coefficients and associated predictions are more precisely estimated when no variance targeting is employed. Bias properties are exacerbated for a heavier-tailed distribution of standardized returns, while distributional asymmetry has little or moderate impact; these phenomena tend to be more pronounced under variance targeting. Some effects further intensify if one uses ML based on a leptokurtic distribution in place of normal QML. The sample size also has a more favorable effect on estimation precision when no variance targeting is used. Thus, if computational costs are not prohibitive, variance targeting should probably be avoided.
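    Variance targeting as described here ties the GARCH intercept to the sample variance so that only the dynamic parameters remain to be estimated. A minimal GARCH(1,1) sketch with invented parameters, showing the targeting step only (not the paper's estimation experiments):

```python
import numpy as np

rng = np.random.default_rng(2)
# simulate a GARCH(1,1): sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}
# (parameter values are invented for illustration)
omega, alpha, beta = 0.1, 0.1, 0.8
T = 20_000
r = np.empty(T)
s2 = omega / (1 - alpha - beta)          # start at the unconditional variance
for t in range(T):
    r[t] = np.sqrt(s2) * rng.standard_normal()
    s2 = omega + alpha * r[t] ** 2 + beta * s2

# variance targeting: fix omega at sample_variance * (1 - alpha - beta),
# so QML only needs to estimate (alpha, beta)
omega_vt = np.var(r) * (1 - alpha - beta)
```

Under correct specification omega_vt recovers the true intercept in large samples; the paper's point is that this convenience can cost estimation precision.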

  11. Variance analysis refines overhead cost control.

    Science.gov (United States)

    Cooper, J C; Suver, J D

    1992-02-01

    Many healthcare organizations may not fully realize the benefits of standard cost accounting techniques because they fail to routinely report volume variances in their internal reports. If overhead allocation is routinely reported on internal reports, managers can determine whether billing remains current or lost charges occur. Healthcare organizations' use of standard costing techniques can lead to more realistic performance measurements and information system improvements that alert management to losses from unrecovered overhead in time for corrective action.
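    A volume variance of the kind described flags overhead that budgeted volume failed to absorb. A worked example with invented figures:

```python
# worked example of a fixed-overhead volume variance (hypothetical numbers)
budgeted_overhead = 120_000.0   # fixed overhead budget for the period
budgeted_volume = 10_000        # expected units (allocation base)
actual_volume = 9_200           # units actually produced

rate = budgeted_overhead / budgeted_volume     # $12 applied per unit
applied = rate * actual_volume                 # overhead absorbed by output
volume_variance = budgeted_overhead - applied  # unrecovered overhead
print(volume_variance)  # 9600.0, unfavorable: $9,600 of overhead not absorbed
```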

  12. Geometric representation of the mean-variance-skewness portfolio frontier based upon the shortage function

    OpenAIRE

    Kerstens, Kristiaan; Mounier, Amine; Van de Woestyne, Ignace

    2008-01-01

    The literature suggests that investors prefer portfolios based on mean, variance and skewness rather than portfolios based on mean-variance (MV) criteria solely. Furthermore, a small variety of methods have been proposed to determine mean-variance-skewness (MVS) optimal portfolios. Recently, the shortage function has been introduced as a measure of efficiency, allowing one to characterize MVS optimal portfolios using non-parametric mathematical programming tools. While tracing the MV portfolio fro...

  13. Is fMRI "noise" really noise? Resting state nuisance regressors remove variance with network structure.

    Science.gov (United States)

    Bright, Molly G; Murphy, Kevin

    2015-07-01

    Noise correction is a critical step towards accurate mapping of resting state BOLD fMRI connectivity. Noise sources related to head motion or physiology are typically modelled by nuisance regressors, and a generalised linear model is applied to regress out the associated signal variance. In this study, we use independent component analysis (ICA) to characterise the data variance typically discarded in this pre-processing stage in a cohort of 12 healthy volunteers. The signal variance removed by 24, 12, 6, or only 3 head motion parameters demonstrated network structure typically associated with functional connectivity, and certain networks were discernable in the variance extracted by as few as 2 physiologic regressors. Simulated nuisance regressors, unrelated to the true data noise, also removed variance with network structure, indicating that any group of regressors that randomly sample variance may remove highly structured "signal" as well as "noise." Furthermore, to support this we demonstrate that random sampling of the original data variance continues to exhibit robust network structure, even when as few as 10% of the original volumes are considered. Finally, we examine the diminishing returns of increasing the number of nuisance regressors used in pre-processing, showing that excessive use of motion regressors may do little better than chance in removing variance within a functional network. It remains an open challenge to understand the balance between the benefits and confounds of noise correction using nuisance regressors. Copyright © 2015. Published by Elsevier Inc.
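    The study's central observation, that even random nuisance regressors remove variance, follows from least-squares projection: any p regressors absorb roughly p/T of the variance by chance. A minimal sketch on synthetic data (not fMRI):

```python
import numpy as np

rng = np.random.default_rng(3)
T, V = 200, 50                       # timepoints x voxels
data = rng.standard_normal((T, V))   # pure noise "data"

# random "nuisance" regressors, unrelated to the data by construction
R = np.column_stack([np.ones(T), rng.standard_normal((T, 6))])

# GLM step: regress out the nuisance signal from every voxel
beta, *_ = np.linalg.lstsq(R, data, rcond=None)
resid = data - R @ beta

removed = data.var(axis=0) - resid.var(axis=0)   # variance removed per voxel
frac = removed.mean() / data.var(axis=0).mean()  # expected ~ p/T = 7/200
```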

  14. Improved estimation of the variance in Monte Carlo criticality calculations

    International Nuclear Information System (INIS)

    Hoogenboom, J. Eduard

    2008-01-01

    Results for the effective multiplication factor in Monte Carlo criticality calculations are often obtained from averages over a number of cycles or batches after convergence of the fission source distribution to the fundamental mode. Then the standard deviation of the effective multiplication factor is also obtained from the k_eff results over these cycles. As the number of cycles will be rather small, the estimate of the variance or standard deviation in k_eff will not be very reliable, certainly not for the first few cycles after source convergence. In this paper the statistics for k_eff are based on the generation of new fission neutron weights during each history in a cycle. It is shown that this gives much more reliable results for the standard deviation even after a small number of cycles. Attention is also paid to the variance of the variance (VoV) and the standard deviation of the standard deviation. A derivation is given of how to obtain an unbiased estimate of the VoV, even for a small number of samples. (authors)

  15. Improved estimation of the variance in Monte Carlo criticality calculations

    Energy Technology Data Exchange (ETDEWEB)

    Hoogenboom, J. Eduard [Delft University of Technology, Delft (Netherlands)

    2008-07-01

    Results for the effective multiplication factor in Monte Carlo criticality calculations are often obtained from averages over a number of cycles or batches after convergence of the fission source distribution to the fundamental mode. Then the standard deviation of the effective multiplication factor is also obtained from the k_eff results over these cycles. As the number of cycles will be rather small, the estimate of the variance or standard deviation in k_eff will not be very reliable, certainly not for the first few cycles after source convergence. In this paper the statistics for k_eff are based on the generation of new fission neutron weights during each history in a cycle. It is shown that this gives much more reliable results for the standard deviation even after a small number of cycles. Attention is also paid to the variance of the variance (VoV) and the standard deviation of the standard deviation. A derivation is given of how to obtain an unbiased estimate of the VoV, even for a small number of samples. (authors)

  16. A general transform for variance reduction in Monte Carlo simulations

    International Nuclear Information System (INIS)

    Becker, T.L.; Larsen, E.W.

    2011-01-01

    This paper describes a general transform to reduce the variance of the Monte Carlo estimate of some desired solution, such as flux or biological dose. This transform implicitly includes many standard variance reduction techniques, including source biasing, collision biasing, the exponential transform for path-length stretching, and weight windows. Rather than optimizing each of these techniques separately or choosing semi-empirical biasing parameters based on the experience of a seasoned Monte Carlo practitioner, this General Transform unites all these variance reduction techniques to achieve one objective: a distribution of Monte Carlo particles that attempts to optimize the desired solution. Specifically, this transform allows Monte Carlo particles to be distributed according to the user's specification by using information obtained from a computationally inexpensive deterministic simulation of the problem. For this reason, we consider the General Transform to be a hybrid Monte Carlo/Deterministic method. The numerical results confirm that the General Transform distributes particles according to the user-specified distribution and generally provides reasonable results for shielding applications. (author)
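    Source biasing with likelihood-ratio weights, one of the standard techniques the General Transform subsumes, can be illustrated on a scalar tail-probability problem (a toy stand-in, not the paper's transport setting):

```python
import numpy as np

rng = np.random.default_rng(8)
n = 100_000
# estimate the small tail probability P(X > 3) for X ~ N(0, 1)
x = rng.standard_normal(n)
naive_terms = (x > 3).astype(float)

# source biasing: sample from N(mu, 1) shifted toward the tail, and
# correct each sample with the likelihood ratio N(0,1)/N(mu,1)
mu = 3.0
y = rng.standard_normal(n) + mu
w = np.exp(-mu * y + 0.5 * mu**2)        # exact likelihood-ratio weight
is_terms = (y > 3) * w

naive_est, is_est = naive_terms.mean(), is_terms.mean()
naive_var, is_var = naive_terms.var() / n, is_terms.var() / n
# both estimators are unbiased for ~0.00135; the biased source
# gives a far smaller variance
```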

  17. Value for money or making the healthy choice: the impact of proportional pricing on consumers' portion size choices.

    Science.gov (United States)

    Vermeer, Willemijn M; Alting, Esther; Steenhuis, Ingrid H M; Seidell, Jacob C

    2010-02-01

    Large food portion sizes are determinants of a high caloric intake, especially if they have been made attractive through value size pricing (i.e. lower unit prices for large than for small portion sizes). The purpose of the two questionnaire studies reported in this article was to assess the impact of proportional pricing (i.e. removing beneficial prices for large sizes) on people's portion size choices of high caloric food and drink items. Both studies employed an experimental design with a proportional pricing condition and a value size pricing condition. Study 1 was conducted in a fast food restaurant (N = 150) and study 2 in a worksite cafeteria (N = 141). Three different food products (i.e. soft drink and chicken nuggets in study 1 and a hot meal in study 2) with corresponding prices were displayed in pictures in the questionnaire. Outcome measures were consumers' intended portion size choices. No main effects of pricing were found. However, under proportional pricing there was a trend for overweight fast food restaurant visitors to be more likely to choose small portion sizes of chicken nuggets (OR = 4.31, P = 0.07) and less likely to choose large soft drink sizes (OR = 0.07, P = 0.04). Among the general public, proportional pricing did not reduce consumers' size choices. However, pricing strategies can help overweight and obese consumers select appropriate portion sizes of soft drink and high caloric snacks. More research in realistic settings with actual behaviour as the outcome measure is required.

  18. Restrictions and Proportionality

    DEFF Research Database (Denmark)

    Werlauff, Erik

    2009-01-01

    The article discusses three central aspects of the freedoms under European Community law, namely 1) the prohibition against restrictions as an important extension of the prohibition against discrimination, 2) a prohibition against exit restrictions which is just as important as the prohibition against host country restrictions, but which is often not recognised to the same extent by national law, and 3) the importance of also identifying and recognising an exit restriction, so that it is possible to achieve the required test of appropriateness and proportionality in relation to the rule...

  19. Multivariate Variance Targeting in the BEKK-GARCH Model

    DEFF Research Database (Denmark)

    Pedersen, Rasmus Søndergaard; Rahbek, Anders

    2014-01-01

    This paper considers asymptotic inference in the multivariate BEKK model based on (co-)variance targeting (VT). By definition the VT estimator is a two-step estimator and the theory presented is based on expansions of the modified likelihood function, or estimating function, corresponding...

  20. An entropy approach to size and variance heterogeneity

    NARCIS (Netherlands)

    Balasubramanyan, L.; Stefanou, S.E.; Stokes, J.R.

    2012-01-01

    In this paper, we investigate the effect of bank size differences on cost efficiency heterogeneity using a heteroskedastic stochastic frontier model. This model is implemented by using an information theoretic maximum entropy approach. We explicitly model both bank size and variance heterogeneity

  1. Disease proportions attributable to environment

    Directory of Open Access Journals (Sweden)

    Vineis Paolo

    2007-11-01

    Full Text Available Population disease proportions attributable to various causal agents are popular as they present a simplified view of the contribution of each agent to the disease load. However they are only summary figures that may be easily misinterpreted or over-interpreted even when the causal link between an exposure and an effect is well established. This commentary discusses several issues surrounding the estimation of attributable proportions, particularly with reference to environmental causes of cancers, and critically examines two recently published papers. These issues encompass potential biases as well as the very definition of environment and of environmental agent. The latter aspect is not just a semantic question but carries implications for the focus of preventive actions, whether centred on the material and social environment or on single individuals.
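    Attributable proportions of this kind are commonly computed with Levin's formula, which assumes an unconfounded, well-established relative risk, exactly the assumptions the commentary questions. A worked example with invented prevalence and risk values:

```python
# population attributable fraction (Levin's formula):
# PAF = p(RR - 1) / (1 + p(RR - 1)),
# where p = exposure prevalence and RR = relative risk (hypothetical values)
p, rr = 0.3, 2.0
paf = p * (rr - 1) / (1 + p * (rr - 1))
print(round(paf, 3))  # 0.231: ~23% of cases attributable to the exposure
```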

  2. Understanding the Degrees of Freedom of Sample Variance by Using Microsoft Excel

    Science.gov (United States)

    Ding, Jian-Hua; Jin, Xian-Wen; Shuai, Ling-Ying

    2017-01-01

    In this article, the degrees of freedom of the sample variance are simulated by using the Visual Basic for Applications of Microsoft Excel 2010. The simulation file dynamically displays why the sample variance should be calculated by dividing the sum of squared deviations by n-1 rather than n, which is helpful for students to grasp the meaning of…
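    The same simulation idea transfers directly from Excel VBA to any language; a minimal NumPy version showing why dividing by n-1 rather than n gives an unbiased variance estimate:

```python
import numpy as np

rng = np.random.default_rng(4)
true_var = 4.0
n, reps = 5, 100_000
samples = rng.normal(0.0, np.sqrt(true_var), size=(reps, n))

biased = samples.var(axis=1, ddof=0).mean()    # divide by n
unbiased = samples.var(axis=1, ddof=1).mean()  # divide by n - 1
# averaging over many samples: biased ~ (n-1)/n * 4 = 3.2, unbiased ~ 4.0
```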

  3. Proton-recoil proportional counter tests at TREAT

    International Nuclear Information System (INIS)

    Fink, C.L.; Eichholz, J.J.; Burrows, D.R.; DeVolpi, A.

    1979-01-01

    A methane-filled proton-recoil proportional counter will be used as a fission neutron detector in the fast-neutron hodoscope. To provide meaningful fuel-motion information the proportional counter should have: a linear response over a wide range of reactor powers; a high signal-to-background ratio (the number of high-energy neutrons detected must be maximized relative to low-energy neutrons, and gamma-ray sensitivity must be kept small); and a detector efficiency for fission neutrons above 1 MeV of approximately 1%. In addition, it is desirable that the detector and the associated amplifier/discriminator be capable of operating at counting rates in excess of 500 kHz. This paper reports on tests that were conducted on several proportional counters at the TREAT reactor

  4. Numerical experiment on variance biases and Monte Carlo neutronics analysis with thermal hydraulic feedback

    International Nuclear Information System (INIS)

    Hyung, Jin Shim; Beom, Seok Han; Chang, Hyo Kim

    2003-01-01

    The Monte Carlo (MC) power method based on a fixed number of fission sites at the beginning of each cycle is known to cause biases in the variances of the k-eigenvalue (keff) and the fission reaction rate estimates. Because of the biases, the apparent variances of keff and the fission reaction rate estimates from a single MC run tend to be smaller or larger than the real variances of the corresponding quantities, depending on the degree of the inter-generational correlation of the sample. We demonstrate this through a numerical experiment involving 100 independent MC runs for the neutronics analysis of a 17 x 17 fuel assembly of a pressurized water reactor (PWR). We also demonstrate through the numerical experiment that Gelbard and Prael's batch method and Ueki et al.'s covariance estimation method enable one to estimate the approximate real variances of keff and the fission reaction rate estimates from a single MC run. We then show that the use of the approximate real variances from the two bias-predicting methods instead of the apparent variances provides an efficient MC power iteration scheme that is required in the MC neutronics analysis of a real system to determine the pin power distribution consistent with the thermal hydraulic (TH) conditions of individual pins of the system. (authors)
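    The gap between apparent and real variance under inter-generational correlation can be sketched with an AR(1) stand-in for the cycle-to-cycle keff estimates (the correlation value is invented; this is batching in the spirit of the Gelbard-Prael method, not the paper's experiment):

```python
import numpy as np

rng = np.random.default_rng(5)
# AR(1) series mimics positively correlated cycle estimates of keff
rho, n = 0.8, 10_000
k = np.empty(n)
k[0] = rng.standard_normal()
for t in range(1, n):
    k[t] = rho * k[t - 1] + np.sqrt(1 - rho**2) * rng.standard_normal()

# apparent variance of the mean treats cycles as independent: far too small
apparent = k.var(ddof=1) / n
# batch method: estimate the variance of the mean from batch means
batch_means = k.reshape(100, 100).mean(axis=1)   # 100 batches of 100 cycles
batched = batch_means.var(ddof=1) / 100
# ratio batched/apparent approaches (1 + rho)/(1 - rho) = 9 for rho = 0.8
```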

  5. Response variance in functional maps: neural darwinism revisited.

    Directory of Open Access Journals (Sweden)

    Hirokazu Takahashi

    Full Text Available The mechanisms by which functional maps and map plasticity contribute to cortical computation remain controversial. Recent studies have revisited the theory of neural Darwinism to interpret the learning-induced map plasticity and neuronal heterogeneity observed in the cortex. Here, we hypothesize that the Darwinian principle provides a substrate to explain the relationship between neuron heterogeneity and cortical functional maps. We demonstrate in the rat auditory cortex that the degree of response variance is closely correlated with the size of its representational area. Further, we show that the response variance within a given population is altered through training. These results suggest that larger representational areas may help to accommodate heterogeneous populations of neurons. Thus, functional maps and map plasticity are likely to play essential roles in Darwinian computation, serving as effective, but not absolutely necessary, structures to generate diverse response properties within a neural population.

  6. Response variance in functional maps: neural darwinism revisited.

    Science.gov (United States)

    Takahashi, Hirokazu; Yokota, Ryo; Kanzaki, Ryohei

    2013-01-01

    The mechanisms by which functional maps and map plasticity contribute to cortical computation remain controversial. Recent studies have revisited the theory of neural Darwinism to interpret the learning-induced map plasticity and neuronal heterogeneity observed in the cortex. Here, we hypothesize that the Darwinian principle provides a substrate to explain the relationship between neuron heterogeneity and cortical functional maps. We demonstrate in the rat auditory cortex that the degree of response variance is closely correlated with the size of its representational area. Further, we show that the response variance within a given population is altered through training. These results suggest that larger representational areas may help to accommodate heterogeneous populations of neurons. Thus, functional maps and map plasticity are likely to play essential roles in Darwinian computation, serving as effective, but not absolutely necessary, structures to generate diverse response properties within a neural population.

  7. Variability of indoor and outdoor VOC measurements: An analysis using variance components

    International Nuclear Information System (INIS)

    Jia, Chunrong; Batterman, Stuart A.; Relyea, George E.

    2012-01-01

    This study examines concentrations of volatile organic compounds (VOCs) measured inside and outside of 162 residences in southeast Michigan, U.S.A. Nested analyses apportioned four sources of variation: city, residence, season, and measurement uncertainty. Indoor measurements were dominated by seasonal and residence effects, accounting for 50 and 31%, respectively, of the total variance. Contributions from measurement uncertainty (<20%) and city effects (<10%) were small. For outdoor measurements, season, city and measurement variation accounted for 43, 29 and 27% of variance, respectively, while residence location had negligible impact (<2%). These results show that, to obtain representative estimates of indoor concentrations, measurements in multiple seasons are required. In contrast, outdoor VOC concentrations can use multi-seasonal measurements at centralized locations. Error models showed that uncertainties at low concentrations might obscure effects of other factors. Variance component analyses can be used to interpret existing measurements, design effective exposure studies, and determine whether the instrumentation and protocols are satisfactory.
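    The apportionment idea, splitting total variability into season, residence, and measurement components, can be sketched with a crude method-of-moments decomposition on synthetic data (component standard deviations are invented, and a full nested ANOVA would refine these estimates):

```python
import numpy as np

rng = np.random.default_rng(6)
seasons, homes, reps = 4, 30, 3
# hypothetical VOC log-concentrations built from season, residence and
# measurement-error components
s_eff = rng.normal(0.0, 1.0, size=(seasons, 1, 1))   # seasonal component
h_eff = rng.normal(0.0, 0.8, size=(1, homes, 1))     # residence component
noise = rng.normal(0.0, 0.5, size=(seasons, homes, reps))
y = s_eff + h_eff + noise

# crude components from marginal means and within-cell residuals
var_season = y.mean(axis=(1, 2)).var(ddof=1)           # seasonal spread
var_home = y.mean(axis=(0, 2)).var(ddof=1)             # residence spread
var_error = (y - y.mean(axis=2, keepdims=True)).var()  # replicate scatter
```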

  8. Within-category variance and lexical tone discrimination in native and non-native speakers

    NARCIS (Netherlands)

    Hoffmann, C.W.G.; Sadakata, M.; Chen, A.; Desain, P.W.M.; McQueen, J.M.; Gussenhove, C.; Chen, Y.; Dediu, D.

    2014-01-01

    In this paper, we show how acoustic variance within lexical tones in disyllabic Mandarin Chinese pseudowords affects discrimination abilities in both native and non-native speakers of Mandarin Chinese. Within-category acoustic variance did not hinder native speakers in discriminating between lexical

  9. Proportional counter end effects eliminator

    International Nuclear Information System (INIS)

    Meekins, J.F.

    1976-01-01

    An improved gas-filled proportional counter which includes a resistor network connected between the anode and cathode at the ends of the counter in order to eliminate "end effects" is described. 3 Claims, 2 Drawing Figures

  10. Global Distributions of Temperature Variances At Different Stratospheric Altitudes From Gps/met Data

    Science.gov (United States)

    Gavrilov, N. M.; Karpova, N. V.; Jacobi, Ch.

    The GPS/MET measurements at altitudes 5 - 35 km are used to obtain global distributions of small-scale temperature variances at different stratospheric altitudes. Individual temperature profiles are smoothed using second-order polynomial approximations in 5 - 7 km thick layers centered at 10, 20 and 30 km. Temperature deviations from the averaged values and their variances obtained for each profile are averaged for each month of the year during the GPS/MET experiment. Global distributions of temperature variances have an inhomogeneous structure. Locations and latitude distributions of the maxima and minima of the variances depend on altitude and season. One of the reasons for the small-scale temperature perturbations in the stratosphere could be internal gravity waves (IGWs). Some assumptions are made about peculiarities of IGW generation and propagation in the tropo-stratosphere based on the results of the GPS/MET data analysis.

  11. Variance estimation for sensitivity analysis of poverty and inequality measures

    Directory of Open Access Journals (Sweden)

    Christian Dudel

    2017-04-01

    Estimates of poverty and inequality are often based on application of a single equivalence scale, despite the fact that a large number of different equivalence scales can be found in the literature. This paper describes a framework for sensitivity analysis which can be used to account for the variability of equivalence scales, and which allows variance estimates of the results of sensitivity analysis to be derived. Simulations show that this method yields reliable estimates. An empirical application reveals that accounting for both variability of equivalence scales and sampling variance leads to wide confidence intervals.

  12. VARIANCE COMPONENTS AND SELECTION FOR FEATHER PECKING BEHAVIOR IN LAYING HENS

    OpenAIRE

    Su, Guosheng; Kjaer, Jørgen B.; Sørensen, Poul

    2005-01-01

    Variance components and selection response for feather pecking behaviour were studied by analysing the data from a divergent selection experiment. An investigation showed that a Box-Cox transformation with power λ = -0.2 made the data approximately normally distributed and best fitted by the given model. Variance components and selection response were estimated using Bayesian analysis with the Gibbs sampling technique. The total variation was rather large for the two traits in both low feather peckin...
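The Box-Cox transformation with power λ = -0.2 mentioned in this record has a simple closed form, y' = (y^λ − 1)/λ; a minimal sketch with illustrative values (not the study's feather-pecking counts):

```python
import math

def box_cox(y, lam):
    """Box-Cox power transform; y must be strictly positive."""
    if lam == 0:
        return math.log(y)
    return (y ** lam - 1.0) / lam

# a small negative power such as -0.2 compresses the right tail
# of skewed, count-like data toward normality
vals = [1.0, 2.0, 5.0, 20.0]
transformed = [box_cox(v, -0.2) for v in vals]
```

As λ → 0 the transform approaches log(y), which is why small powers behave much like a log transform while remaining tunable.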

  13. Asymptotics of variance of the lattice point count

    Czech Academy of Sciences Publication Activity Database

    Janáček, Jiří

    2008-01-01

    Roč. 58, č. 3 (2008), s. 751-758 ISSN 0011-4642 R&D Projects: GA AV ČR(CZ) IAA100110502 Institutional research plan: CEZ:AV0Z50110509 Keywords : point lattice * variance Subject RIV: BA - General Mathematics Impact factor: 0.210, year: 2008

  14. Bounds for Tail Probabilities of the Sample Variance

    Directory of Open Access Journals (Sweden)

    Van Zuijlen M

    2009-01-01

    Full Text Available We provide bounds for tail probabilities of the sample variance. The bounds are expressed in terms of Hoeffding functions and are the sharpest known. They are designed having in mind applications in auditing as well as in processing data related to environment.

  15. Multivariate Variance Targeting in the BEKK-GARCH Model

    DEFF Research Database (Denmark)

    Pedersen, Rasmus Søndergaard; Rahbek, Anders

    This paper considers asymptotic inference in the multivariate BEKK model based on (co-)variance targeting (VT). By definition the VT estimator is a two-step estimator and the theory presented is based on expansions of the modified likelihood function, or estimating function, corresponding...

  16. Age Differences in the Variance of Personality Characteristics

    Czech Academy of Sciences Publication Activity Database

    Mottus, R.; Allik, J.; Hřebíčková, Martina; Kööts-Ausmees, L.; Realo, A.

    2016-01-01

    Roč. 30, č. 1 (2016), s. 4-11 ISSN 0890-2070 R&D Projects: GA ČR GA13-25656S Institutional support: RVO:68081740 Keywords : variance * individual differences * personality * five-factor model Subject RIV: AN - Psychology Impact factor: 3.707, year: 2016

  17. Adaptation to Variance of Stimuli in Drosophila Larva Navigation

    Science.gov (United States)

    Wolk, Jason; Gepner, Ruben; Gershow, Marc

    In order to respond to stimuli that vary over orders of magnitude while also being capable of sensing very small changes, neural systems must be capable of rapidly adapting to the variance of stimuli. We study this adaptation in Drosophila larvae responding to varying visual signals and optogenetically induced fictitious odors, using an infrared-illuminated arena and custom computer vision software. Larval navigational decisions (when to turn) are modeled as the output of a linear-nonlinear Poisson process. The development of the nonlinear turn rate in response to changes in variance is tracked using an adaptive point process filter determining the rate of adaptation to different stimulus profiles. Supported by NIH Grant 1DP2EB022359 and NSF Grant PHY-1455015.

  18. PORTFOLIO COMPOSITION WITH MINIMUM VARIANCE: COMPARISON WITH MARKET BENCHMARKS

    Directory of Open Access Journals (Sweden)

    Daniel Menezes Cavalcante

    2016-07-01

    Portfolio optimization strategies are advocated as being able to allow the composition of stock portfolios that provide returns above market benchmarks. This study aims to determine whether, in fact, portfolios based on the minimum variance strategy, optimized by Modern Portfolio Theory, are able to achieve earnings above market benchmarks in Brazil. Time series of 36 securities traded on the BM&FBOVESPA were analyzed over a long period of time (1999-2012), with sample windows of 12, 36, 60 and 120 monthly observations. The results indicated that the minimum variance portfolio performance is superior to market benchmarks (CDI and IBOVESPA) in terms of return and risk-adjusted return, especially in medium- and long-term investment horizons.
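For two assets, the global minimum-variance weights of Modern Portfolio Theory have a closed form, w1 = (σ2² − σ12)/(σ1² + σ2² − 2σ12); a sketch with made-up covariance numbers (not the BM&FBOVESPA data used in the study):

```python
def min_variance_weights(var1, var2, cov12):
    """Closed-form global minimum-variance weights for two assets."""
    w1 = (var2 - cov12) / (var1 + var2 - 2.0 * cov12)
    return w1, 1.0 - w1

def portfolio_variance(w1, w2, var1, var2, cov12):
    """Variance of a two-asset portfolio with weights (w1, w2)."""
    return w1 * w1 * var1 + w2 * w2 * var2 + 2.0 * w1 * w2 * cov12

# hypothetical annualized variances 0.04 and 0.09, covariance 0.006
w1, w2 = min_variance_weights(0.04, 0.09, 0.006)
pv_min = portfolio_variance(w1, w2, 0.04, 0.09, 0.006)
```

Because the assets are imperfectly correlated, `pv_min` falls below the variance of either asset held alone, which is the diversification effect the minimum variance strategy exploits.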

  19. Motor equivalence and structure of variance: multi-muscle postural synergies in Parkinson's disease.

    Science.gov (United States)

    Falaki, Ali; Huang, Xuemei; Lewis, Mechelle M; Latash, Mark L

    2017-07-01

    We explored posture-stabilizing multi-muscle synergies with two methods of analysis of multi-element, abundant systems: (1) Analysis of inter-cycle variance; and (2) Analysis of motor equivalence, both quantified within the framework of the uncontrolled manifold (UCM) hypothesis. Data collected in two earlier studies of patients with Parkinson's disease (PD) were re-analyzed. One study compared synergies in the space of muscle modes (muscle groups with parallel scaling of activation) during tasks performed by early-stage PD patients and controls. The other study explored the effects of dopaminergic medication on multi-muscle-mode synergies. Inter-cycle variance and absolute magnitude of the center of pressure displacement across consecutive cycles were quantified during voluntary whole-body sway within the UCM and orthogonal to the UCM space. The patients showed smaller indices of variance within the UCM and motor equivalence compared to controls. The indices were also smaller in the off-drug compared to on-drug condition. There were strong across-subject correlations between the inter-cycle variance within/orthogonal to the UCM and motor equivalent/non-motor equivalent displacements. This study has shown that, at least for cyclical tasks, analysis of variance and analysis of motor equivalence lead to metrics of stability that correlate with each other and show similar effects of disease and medication. These results show, for the first time, intimate links between indices of variance and motor equivalence. They suggest that analysis of motor equivalence, which requires only a handful of trials, could be used broadly in the field of motor disorders to analyze problems with action stability.

  20. Autonomous estimation of Allan variance coefficients of onboard fiber optic gyro

    International Nuclear Information System (INIS)

    Song Ningfang; Yuan Rui; Jin Jing

    2011-01-01

    Satellite motion included in gyro output disturbs the estimation of Allan variance coefficients of a fiber optic gyro on board. Moreover, as a standard method for noise analysis of fiber optic gyros, the Allan variance requires too much offline computation and data storage to be applied to online estimation. In addition, with the development of deep space exploration, satellites increasingly require more autonomy, including autonomous fault diagnosis and reconfiguration. To overcome these barriers and meet satellite autonomy, we present a new autonomous method for estimating Allan variance coefficients, including the rate ramp, rate random walk, bias instability, angular random walk and quantization noise coefficients. In the method, we calculate differences between the angle increments of the star sensor and the gyro to remove satellite motion from the gyro output, and propose a state-space model using a nonlinear adaptive filter technique for quantities previously obtained from offline data techniques such as the Allan variance method. Simulations show the method correctly estimates the Allan variance coefficients, R = 2.7965×10^-4 °/h^2, K = 1.1714×10^-3 °/h^1.5, B = 1.3185×10^-3 °/h, N = 5.982×10^-4 °/h^0.5 and Q = 5.197×10^-7 °, in real time, and tracks the degradation of gyro performance from initial values, R = 0.651 °/h^2, K = 0.801 °/h^1.5, B = 0.385 °/h, N = 0.0874 °/h^0.5 and Q = 8.085×10^-5 °, to final estimates, R = 9.548 °/h^2, K = 9.524 °/h^1.5, B = 2.234 °/h, N = 0.5594 °/h^0.5 and Q = 5.113×10^-4 °, due to gamma radiation in space. The technique proposed here effectively isolates satellite motion and requires no data storage or support from the ground.
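The offline Allan variance computation that this record's method replaces is itself simple to state; the sketch below shows the standard non-overlapped estimator on simulated white rate noise (purely illustrative, not the paper's onboard filter, which avoids exactly this batch computation).

```python
import random

def allan_variance(rates, m):
    """Non-overlapped Allan variance for cluster size m samples:
    average the data in consecutive bins of m samples, then take half
    the mean squared difference of adjacent bin averages."""
    K = len(rates) // m                       # number of full clusters
    bins = [sum(rates[i * m:(i + 1) * m]) / m for i in range(K)]
    return sum((bins[k + 1] - bins[k]) ** 2
               for k in range(K - 1)) / (2.0 * (K - 1))

random.seed(0)
# simulated white rate noise (angular random walk), unit variance
white = [random.gauss(0.0, 1.0) for _ in range(20000)]
a1 = allan_variance(white, 1)      # ~1 for unit-variance white noise
a100 = allan_variance(white, 100)  # white noise: AVAR scales as 1/m
```

The characteristic slopes of AVAR versus averaging time are what allow the five noise coefficients (R, K, B, N, Q) named in the abstract to be separated.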

  1. The value of travel time variance

    OpenAIRE

    Fosgerau, Mogens; Engelson, Leonid

    2010-01-01

    This paper considers the value of travel time variability under scheduling preferences that are defined in terms of linearly time-varying utility rates associated with being at the origin and at the destination. The main result is a simple expression for the value of travel time variability that does not depend on the shape of the travel time distribution. The related measure of travel time variability is the variance of travel time. These conclusions apply equally to travellers who can free...

  2. Trends of teenage pregnancy in Brazil, 2000-2011.

    Science.gov (United States)

    Vaz, Raquel Ferreira; Monteiro, Denise Leite Maia; Rodrigues, Nádia Cristina Pinheiro

    2016-07-01

    To evaluate the frequency of teenage pregnancy in Brazil, from 2000 to 2011, in all five Brazilian macroregions and age groups (10-14 and 15-19 years), correlating it with the human development index (HDI). Descriptive epidemiological study, with cross-sectional design, performed by searching the database of the National Health System (Datasus), using information from the Live Birth Information System (Sinasc). There was a decrease in the percentage of live births (LB) from teenage mothers (10-19 years) in Brazil (23.5% in 2000 to 19.2% in 2011). This reduction was observed in all Brazilian macroregions in the group of mothers aged 15 to 19 years. The number of LB increased by 5.0% among mothers aged 10-14 years (increase in the North and Northeast and decline in the other macroregions). The proportion of LB shows a trend inversely proportional to HDI score, with the Southeast having the highest HDI and the lowest proportion of LB to teenage mothers in the country. Brazil shows a decline in the percentage of LB to adolescent mothers, tending to be inversely related to HDI score. It is important to empower strategies to address the problem, so that teenage pregnancy is seen as a personal decision rather than the result of a lack of policies targeting adolescent health.

  3. Prediction of breeding values and selection responses with genetic heterogeneity of environmental variance

    NARCIS (Netherlands)

    Mulder, H.A.; Bijma, P.; Hill, W.G.

    2007-01-01

    There is empirical evidence that genotypes differ not only in mean, but also in environmental variance of the traits they affect. Genetic heterogeneity of environmental variance may indicate genetic differences in environmental sensitivity. The aim of this study was to develop a general framework

  4. The proportionate value of proportionality in palliative sedation.

    Science.gov (United States)

    Berger, Jeffrey T

    2014-01-01

    Proportionality, as it pertains to palliative sedation, is the notion that sedation should be induced at the lowest degree effective for symptom control, so that the patient's consciousness may be preserved. The pursuit of proportionality in palliative sedation is a widely accepted imperative advocated in position statements and guidelines on this treatment. The priority assigned to the pursuit of proportionality, and the extent to which it is relevant for patients who qualify for palliative sedation, have been overstated. Copyright 2014 The Journal of Clinical Ethics. All rights reserved.

  5. Trends in Mortality from Cerebrovascular and Hypertensive Diseases in Brazil Between 1980 and 2012

    Directory of Open Access Journals (Sweden)

    Paolo Blanco Villela

    2016-01-01

    Background: Cerebrovascular and hypertensive diseases are among the main causes of death worldwide. However, there are limited data about the trends of these diseases over the years. Objective: To evaluate the temporal trends in mortality rates and proportional mortality from cerebrovascular and hypertensive diseases according to sex and age in Brazil between 1980 and 2012. Methods: We evaluated the underlying causes of death between 1980 and 2012 in both sexes and by age groups for circulatory diseases (CD), cerebrovascular diseases (CBVD), and hypertensive diseases (HD). We also evaluated death due to all causes (AC), external causes (EC), and ill-defined causes of death (IDCD). Data on deaths and population were obtained from the Department of Information Technology of the Unified Health System (Departamento de Informática do Sistema Único de Saúde, DATASUS/MS). We estimated crude and standardized annual mortality rates per 100,000 inhabitants and percentages of proportional mortality rates. Results: With the exception of EC, the mortality rates per 100,000 inhabitants of all other diseases increased with age. The proportional mortality of CD, CBVD, and HD increased up to the age range of 60-69 years in men and 70-79 years in women, and reached a plateau in both sexes after that. The standardized rates of CD and CBVD declined in both sexes. However, the HD rates showed the opposite trend and increased mildly during the study period. Conclusion: Despite the decline in standardized mortality rates due to CD and CBVD, there was an increase in deaths due to HD, which could be related to factors associated with the completion of the death certificates, decline in IDCD rates, and increase in the prevalence of hypertension.

  6. Proportional hazards models of infrastructure system recovery

    International Nuclear Information System (INIS)

    Barker, Kash; Baroud, Hiba

    2014-01-01

    As emphasis is being placed on a system's ability to withstand and to recover from a disruptive event, collectively referred to as dynamic resilience, there exists a need to quantify a system's ability to bounce back after a disruptive event. This work applies a statistical technique from biostatistics, the proportional hazards model, to describe (i) the instantaneous rate of recovery of an infrastructure system and (ii) the likelihood that recovery occurs prior to a given point in time. A major benefit of the proportional hazards model is its ability to describe a recovery event as a function of time as well as covariates describing the infrastructure system or disruptive event, among others, which can also vary with time. The proportional hazards approach is illustrated with a publicly available electric power outage data set
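The recovery likelihood described in (ii) can be sketched under the simplest proportional hazards assumption, a constant (exponential) baseline hazard with time-fixed covariates; the rate and coefficient values below are hypothetical, not fitted to any outage data set.

```python
import math

def recovery_prob(t, lam0, beta, x):
    """P(recovery occurs by time t) under the proportional hazards model
    h(t|x) = lam0 * exp(beta . x), with constant baseline hazard lam0.
    The survival function is S(t|x) = exp(-lam0 * exp(beta . x) * t)."""
    hazard = lam0 * math.exp(sum(b * xi for b, xi in zip(beta, x)))
    return 1.0 - math.exp(-hazard * t)

# hypothetical: baseline 0.5 recoveries/day, one covariate (disruption severity)
# with a negative coefficient, so greater severity slows recovery
p_mild = recovery_prob(2.0, 0.5, [-0.4], [0.0])
p_severe = recovery_prob(2.0, 0.5, [-0.4], [1.0])
```

The multiplicative effect of covariates on the hazard is the "proportional" property the abstract refers to: each unit of the covariate scales the instantaneous recovery rate by exp(beta).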

  7. Vertical velocity variances and Reynold stresses at Brookhaven

    DEFF Research Database (Denmark)

    Busch, Niels E.; Brown, R.M.; Frizzola, J.A.

    1970-01-01

    Results of wind tunnel tests of the Brookhaven annular bivane are presented. The energy transfer functions describing the instrument response and the numerical filter employed in the data reduction process have been used to obtain corrected values of the normalized variance of the vertical wind v...

  9. Estimating a population cumulative incidence under calendar time trends

    DEFF Research Database (Denmark)

    Hansen, Stefan N; Overgaard, Morten; Andersen, Per K

    2017-01-01

    BACKGROUND: The risk of a disease or psychiatric disorder is frequently measured by the age-specific cumulative incidence. Cumulative incidence estimates are often derived in cohort studies with individuals recruited over calendar time and with the end of follow-up governed by a specific date...... by calendar time trends, the total sample Kaplan-Meier and Aalen-Johansen estimators do not provide useful estimates of the general risk in the target population. We present some alternatives to this type of analysis. RESULTS: We show how a proportional hazards model may be used to extrapolate disease risk...... estimates if proportionality is a reasonable assumption. If not reasonable, we instead advocate that a more useful description of the disease risk lies in the age-specific cumulative incidence curves across strata given by time of entry or perhaps just the end of follow-up estimates across all strata...

  10. India's Conditional Cash Transfer Programme (the JSY) to Promote Institutional Birth: Is There an Association between Institutional Birth Proportion and Maternal Mortality?

    Science.gov (United States)

    Randive, Bharat; Diwan, Vishal; De Costa, Ayesha

    2013-01-01

    India accounts for 19% of global maternal deaths, three-quarters of which come from nine states. In 2005, India launched a conditional cash transfer (CCT) programme, Janani Suraksha Yojana (JSY), to reduce the maternal mortality ratio (MMR) through promotion of institutional births. JSY is the largest CCT in the world. In the nine states with relatively lower socioeconomic levels, JSY provides a cash incentive to all women on birthing in a health institution. The cash incentive is intended to reduce financial barriers to accessing institutional care for delivery. Increased institutional births are expected to reduce MMR. Thus, JSY is expected to (a) increase institutional births and (b) reduce MMR in states with high proportions of institutional births. We examine the association between (a) service uptake, i.e., institutional birth proportions and (b) health outcome, i.e., MMR. Data from the Sample Registration Survey of India were analysed to describe trends in the proportion of institutional births before (2005) and during (2006-2010) the implementation of the JSY. Data from the Annual Health Survey (2010-2011) for all 284 districts in the above-mentioned nine states were analysed to assess the relationship between MMR and institutional births. The proportion of institutional births increased from a pre-programme average of 20% to 49% in 5 years. However, the high institutional birth proportions that JSY has achieved are of themselves inadequate to reduce MMR. Other factors, including improved quality of care at institutions, are required for the intended effect.

  11. Modelling and analysis of piezoelectric cantilever energy harvester for different proof mass and material proportion

    Science.gov (United States)

    Shashank, R.; Harisha, S. K., Dr; Abhishek, M. C.

    2018-02-01

    Energy harvesting from ambient energy sources is one of the fastest-growing trends in the world; research and development in energy harvesting aim to extract maximum power output from existing resources. Ambient energy sources available in nature include solar, wind, thermal and vibrational energy. Among these, harvesting from vibrational sources has gained importance because it is not influenced by environmental parameters and is freely available anytime and anywhere. This project validates the voltage and electrical power output of an experimentally tested energy harvester, varies the parameters of the harvester, analyses their effect on its performance, and compares the results. The cantilever beam was designed, analysed and simulated using the COMSOL Multiphysics software. The energy harvester gives an electrical output voltage of 2.75 V at a natural frequency of 37.2 Hz and an electrical power of 29 μW. Decreasing the proportion of piezoelectric material while increasing the proportion of polymer material (so that the total proportion remains the same) increases the output voltage and decreases the natural frequency of the beam linearly, up to 3.9 V and 28.847 Hz at a proportion of 24% piezoelectric and 76% polymer; when the proportion is changed to 26% and 74%, the natural frequency decreases further but the voltage suddenly drops to 2.8 V. The voltage generated by the energy harvester increases proportionally, reaching 3.7 V, until the proof mass reaches 4 grams; further increases in the weight of the proof mass decrease the generated voltage. Thus the investigation shows that the weight of the proof mass and the length of the cantilever beam should be optimised to obtain maximum

  12. Trends in U.S. adult chronic disability rates over time.

    Science.gov (United States)

    Iezzoni, Lisa I; Kurtz, Stephen G; Rao, Sowmya R

    2014-10-01

    Trends in the patterns and prevalence of chronic disability among U.S. residents carry important implications for public health and public policies across multiple societal sectors. To examine trends in U.S. adult population rates of chronic disability from 1998 to 2011 using 7 different disability measures and examining the implications of trends in population age, race and ethnicity, and body mass index (BMI). We used National Health Interview Survey data on civilian, non-institutionalized U.S. residents ages ≥ 18 from selected years between 1998 and 2011. We used self-reported information on functional impairments, activity/participation limitations, and expected duration to create 7 chronic disability measures. We used direct standardization to account for changes in age, race/ethnicity, and BMI distributions over time. Multivariable logistic regression models identified associations of disability with sociodemographic characteristics. Without adjustment, population rates of all 7 disabilities increased significantly; after standardization, disability types continued to show increased rates over time. If these trends continue, the numbers and proportions of U.S. residents with various disabilities will continue rising in coming years. In particular, the prevalence of movement difficulties and work limitations will increase. Furthermore, disability will remain strongly associated with low levels of education, employment, and income. Copyright © 2014 Elsevier Inc. All rights reserved.

  13. Time Reversal Migration for Passive Sources Using a Maximum Variance Imaging Condition

    KAUST Repository

    Wang, H.; Alkhalifah, Tariq Ali

    2017-01-01

    The conventional time-reversal imaging approach for micro-seismic or passive source location is based on focusing the back-propagated wavefields from each recorded trace in a source image. It suffers from strong background noise and limited acquisition aperture, which may create unexpected artifacts and cause error in the source location. To overcome this problem, we propose a new imaging condition for microseismic imaging that is based on comparing the amplitude variance in certain windows, and use it to suppress the artifacts as well as to find the right location of passive sources. Instead of simply searching for the maximum energy point in the back-propagated wavefield, we calculate the amplitude variances over a window moving along both the space and time axes to create a highly resolved passive-event image. The variance operation has negligible cost compared with the forward/backward modeling operations, which shows that the maximum variance imaging condition is efficient and effective. We test our approach numerically on a simple three-layer model and on a piece of the Marmousi model as well, both of which have shown reasonably good results.
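A one-dimensional toy version of the moving-window variance measure is enough to see why it localizes an event: the window variance peaks where the back-propagated amplitudes oscillate. The trace below is synthetic and purely illustrative, not seismic data.

```python
def window_variance(signal, w):
    """Sliding-window (population) variance along a 1-D trace."""
    out = []
    for i in range(len(signal) - w + 1):
        seg = signal[i:i + w]
        m = sum(seg) / w
        out.append(sum((s - m) ** 2 for s in seg) / w)
    return out

# a flat trace with a short oscillatory burst standing in for the
# focused passive event; the variance peak localizes the burst
trace = [0.0] * 50 + [1.0, -1.0, 1.0, -1.0] + [0.0] * 50
v = window_variance(trace, 5)
peak = max(range(len(v)), key=v.__getitem__)   # near index 50
```

A single bright-but-constant sample would barely raise the window variance, which is how this condition suppresses smooth background energy that a plain maximum-energy search would mistake for the source.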

  14. Toward a more robust variance-based global sensitivity analysis of model outputs

    Energy Technology Data Exchange (ETDEWEB)

    Tong, C

    2007-10-15

    Global sensitivity analysis (GSA) measures the variation of a model output as a function of the variations of the model inputs given their ranges. In this paper we consider variance-based GSA methods that do not rely on certain assumptions about the model structure such as linearity or monotonicity. These variance-based methods decompose the output variance into terms of increasing dimensionality called 'sensitivity indices', first introduced by Sobol' [25]. Sobol' developed a method of estimating these sensitivity indices using Monte Carlo simulations. McKay [13] proposed an efficient method using replicated Latin hypercube sampling to compute the 'correlation ratios' or 'main effects', which have been shown to be equivalent to Sobol's first-order sensitivity indices. Practical issues with using these variance estimators are how to choose adequate sample sizes and how to assess the accuracy of the results. This paper proposes a modified McKay main effect method featuring an adaptive procedure for accuracy assessment and improvement. We also extend our adaptive technique to the computation of second-order sensitivity indices. Details of the proposed adaptive procedure as well as numerical results are included in this paper.
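The first-order Sobol index S_i = Var(E[Y|X_i]) / Var(Y) can be estimated with a simple pick-freeze Monte Carlo scheme (related to, but simpler than, the replicated Latin hypercube main-effect estimator discussed in this record). The sketch assumes independent standard-normal inputs and a toy linear model whose analytic index S_0 = 9/10 is known.

```python
import random

def sobol_first_order(f, dim, i, n, rng):
    """Pick-freeze Monte Carlo estimate of the first-order Sobol index S_i:
    S_i = Cov(f(X), f(X')) / Var(f(X)), where X' shares coordinate i with X
    and has all other coordinates independently resampled."""
    y, y_i = [], []
    for _ in range(n):
        x = [rng.gauss(0.0, 1.0) for _ in range(dim)]
        x2 = [rng.gauss(0.0, 1.0) for _ in range(dim)]
        x2[i] = x[i]                       # freeze coordinate i
        y.append(f(x))
        y_i.append(f(x2))
    my, myi = sum(y) / n, sum(y_i) / n
    cov = sum((a - my) * (b - myi) for a, b in zip(y, y_i)) / n
    var = sum((a - my) ** 2 for a in y) / n
    return cov / var

rng = random.Random(42)
f = lambda x: 3.0 * x[0] + 1.0 * x[1]      # Var(Y) = 10, analytic S_0 = 0.9
s0 = sobol_first_order(f, 2, 0, 20000, rng)
```

Because the model is linear with independent inputs, the first-order indices sum to 1 here; for models with interactions the shortfall from 1 is carried by the higher-order indices the paper's adaptive technique targets.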

  16. Common cycles and common trends in the stock and oil markets: Evidence from more than 150 years of data

    International Nuclear Information System (INIS)

    Balcilar, Mehmet; Gupta, Rangan; Wohar, Mark E.

    2017-01-01

    This paper investigates the role of permanent and transitory shocks, within the framework of common cycles and common trends, in explaining stock and oil prices. We perform a multivariate variance decomposition analysis of monthly data on the West Texas Intermediate (WTI) oil price and the S&P 500. The dataset used in the study spans a long period of 150 years and therefore contains a rich history to examine both the short- and long-run comovement properties of oil and stock prices. Given that the oil and stock markets might comove both in the short- and long-run, it is of interest to see the relative impacts of transitory and permanent shocks on both variables. We find that (log) oil price and (log) S&P 500 share a common stochastic trend for our full sample of September 1859 to July 2015, but a common cycle only exists during the post-WW II period. Full and post-WW II samples have quite different common feature estimates in terms of the impact of permanent and transitory shocks as measured by the impulse responses and forecast error variance decompositions. We also find that in the short-run oil is driven mostly by cycles (transitory shocks) and stock market is mostly driven by permanent shocks. But, permanent shocks dominate in the long-run. - Highlights: • Role of permanent and transitory shocks analyzed for oil and stock markets • The framework of common cycles and common trends used over 1859 to 2015 • Common stochastic trend for full-sample and common cycle post-World War II • Stock market driven by permanent shock in short- and long-runs • Oil market driven by temporary (permanent) shocks in short-run (long-run)

  17. Updating trends in cutaneous cancers in south-east Belgium.

    Science.gov (United States)

    Uhoda, Isabelle; Quatresooz, Pascale; Fumal, Isabelle; Nikkels, Arjen F; Piérard-Franchimont, Claudine; Piérard, Gerald E

    2004-07-01

    From data collected in a dermatopathology laboratory, the ratios between the numbers of specific cancers represent good markers for identifying any epidemiological shift in their prevalence and incidence among the reference population. The objective of the present study was to assess the ratios of the annual incidence of skin cancers in the Mosan region and Ardennes of Belgium over the past 6 years, and to compare the data with previous similar evaluations. A total of 7,640 skin cancers were collected and compared with regard to age and gender. Changes over time show that the trend of the increase in incidence of malignant melanoma (MM) is more impressive than that of squamous cell carcinoma (SCC) and basal cell carcinoma (BCC). The age distribution of BCC and SCC confirms the increasing risk with ageing. By contrast, there has been a steady decrease over the past decade in the mean age for MM; teenagers and young adults now form an expanding proportion of MM patients. There is an ongoing trend in diagnosing an increased number of skin cancers in our laboratory. This trend is particularly obvious for MM affecting young adults.

  18. Trends in diabetes mellitus mortality in Puerto Rico: 1980-1997.

    Science.gov (United States)

    Pérez-Perdomo, R; Pérez-Cardona, C M; Suárez-Pérez, E L

    2001-03-01

    To determine the characteristics and trends of diabetes mortality among the Puerto Rican population from 1980 through 1997. Death certificates for Puerto Rican residents whose underlying cause of death was diabetes mellitus (ICD-9 250.0) were reviewed, and sociodemographic information was abstracted. The proportional mortality ratio (PMR) and 95% confidence intervals were calculated by gender, age group, educational level, and time period. Trend analysis of mortality was performed using a Poisson regression model. A total of 26,193 deaths (5.8%) were primarily attributed to diabetes mellitus in the study period. Females accounted for 55.8% of all diabetes-related deaths. Diabetes accounted for a higher proportion of deaths among persons aged 60-64 years (8.14%), persons aged 65-74 (8.12%), females (7.73%) and those with 1-6 years of education (7.08%). The PMR steadily increased from 4.55% in the 1980-85 period to 6.91% in the 1992-97 period. There was a higher mortality in male diabetic subjects aged under 75 years. When the oldest age group (75 years or older) was examined, males had a higher mortality between 1986 and 1997, whereas females had a slightly higher rate between 1980 and 1985. Our results indicate that diabetes mortality has been markedly increasing in the Puerto Rican population, primarily in persons aged 65 years or more. Further analysis is needed to evaluate the determinants of mortality in diabetes.

  19. Excluded-Mean-Variance Neural Decision Analyzer for Qualitative Group Decision Making

    Directory of Open Access Journals (Sweden)

    Ki-Young Song

    2012-01-01

    Full Text Available Many qualitative group decisions in professional fields such as law, engineering, economics, psychology, and medicine that appear to be crisp and certain are in reality shrouded in fuzziness as a result of uncertain environments and the nature of human cognition within which the group decisions are made. In this paper we introduce an innovative approach to group decision making in uncertain situations by using a mean-variance neural approach. The key idea of this proposed approach is to compute the excluded mean of individual evaluations and weight it by applying a variance influence function (VIF; this process of weighting the excluded mean by VIF provides an improved result in the group decision making. In this paper, a case study with the proposed excluded-mean-variance approach is also presented. The results of this case study indicate that this proposed approach can improve the effectiveness of qualitative decision making by providing the decision maker with a new cognitive tool to assist in the reasoning process.
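The abstract does not give the exact exclusion rule or variance influence function (VIF), so both are assumptions in this rough sketch. One plausible reading is a trimmed mean of the individual evaluations scaled by a decreasing function of their remaining variance:

```python
import numpy as np

def excluded_mean_variance_score(evals, vif=lambda v: 1.0 / (1.0 + v)):
    """Hypothetical sketch: exclude the extreme evaluations, then weight
    the trimmed mean by a variance influence function (VIF). The paper's
    actual exclusion rule and VIF may differ."""
    evals = np.sort(np.asarray(evals, dtype=float))
    trimmed = evals[1:-1]              # exclude the min and max evaluation
    ex_mean = trimmed.mean()           # the "excluded mean"
    variance = trimmed.var(ddof=1)     # disagreement among remaining raters
    return ex_mean * vif(variance)     # variance-weighted group score

scores = [7, 8, 8, 9, 3]               # one strongly dissenting evaluator
print(excluded_mean_variance_score(scores))
```

The intent of any such scheme is that high residual disagreement (variance) damps the group score, while a consensual panel passes its mean through almost unchanged.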

  20. Estimation variance bounds of importance sampling simulations in digital communication systems

    Science.gov (United States)

    Lu, D.; Yao, K.

    1991-01-01

    In practical applications of importance sampling (IS) simulation, two basic problems are encountered, that of determining the estimation variance and that of evaluating the proper IS parameters needed in the simulations. The authors derive new upper and lower bounds on the estimation variance which are applicable to IS techniques. The upper bound is simple to evaluate and may be minimized by the proper selection of the IS parameter. Thus, lower and upper bounds on the improvement ratio of various IS techniques relative to the direct Monte Carlo simulation are also available. These bounds are shown to be useful and computationally simple to obtain. Based on the proposed technique, one can readily find practical suboptimum IS parameters. Numerical results indicate that these bounding techniques are useful for IS simulations of linear and nonlinear communication systems with intersymbol interference in which bit error rate and IS estimation variances cannot be obtained readily using prior techniques.
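The variance reduction that makes importance sampling attractive for rare-event simulation (such as bit error rates) can be illustrated on a toy Gaussian tail probability. This is a generic IS sketch with a mean-shifted proposal, not the authors' bounding technique.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
threshold = 4.0              # rare event: P(Z > 4) for standard normal Z

def phi(x):                  # standard normal pdf
    return np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)

# Direct Monte Carlo: almost no samples land in the rare region.
z = rng.standard_normal(n)
mc = (z > threshold).astype(float)

# Importance sampling: draw from N(threshold, 1) instead, and reweight
# each sample by the likelihood ratio phi(x) / q(x).
x = rng.standard_normal(n) + threshold
w = phi(x) / phi(x - threshold)
is_est = (x > threshold) * w

print(mc.mean(), mc.var(ddof=1) / n)       # direct estimate, estimator variance
print(is_est.mean(), is_est.var(ddof=1) / n)  # IS estimate, estimator variance
```

The empirical per-sample variance of the IS estimator is the quantity the paper's upper and lower bounds are designed to characterize without running the full simulation.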

  1. Empirical single sample quantification of bias and variance in Q-ball imaging.

    Science.gov (United States)

    Hainline, Allison E; Nath, Vishwesh; Parvathaneni, Prasanna; Blaber, Justin A; Schilling, Kurt G; Anderson, Adam W; Kang, Hakmook; Landman, Bennett A

    2018-02-06

    The bias and variance of high angular resolution diffusion imaging methods have not been thoroughly explored in the literature and may benefit from the simulation extrapolation (SIMEX) and bootstrap techniques to estimate bias and variance of high angular resolution diffusion imaging metrics. The SIMEX approach is well established in the statistics literature and uses simulation of increasingly noisy data to extrapolate back to a hypothetical case with no noise. The bias of calculated metrics can then be computed by subtracting the SIMEX estimate from the original pointwise measurement. The SIMEX technique has been studied in the context of diffusion imaging to accurately capture the bias in fractional anisotropy measurements in DTI. Herein, we extend the application of SIMEX and bootstrap approaches to characterize bias and variance in metrics obtained from a Q-ball imaging reconstruction of high angular resolution diffusion imaging data. The results demonstrate that SIMEX and bootstrap approaches provide consistent estimates of the bias and variance of generalized fractional anisotropy, respectively. The RMSE for the generalized fractional anisotropy estimates shows a 7% decrease in white matter and an 8% decrease in gray matter when compared with the observed generalized fractional anisotropy estimates. On average, the bootstrap technique results in SD estimates that are approximately 97% of the true variation in white matter, and 86% in gray matter. Both SIMEX and bootstrap methods are flexible, estimate population characteristics based on single scans, and may be extended for bias and variance estimation on a variety of high angular resolution diffusion imaging metrics. © 2018 International Society for Magnetic Resonance in Medicine.
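The single-scan bootstrap idea can be sketched generically: resample one observed set of measurements with replacement and read off the spread of the recomputed metric. The data here are synthetic scalars standing in for a diffusion metric, not Q-ball reconstructions.

```python
import numpy as np

rng = np.random.default_rng(3)

# One observed "scan": noisy repeated measurements of a scalar metric.
sample = rng.normal(loc=0.5, scale=0.1, size=60)

def bootstrap_sd(data, stat=np.mean, n_boot=2000, rng=rng):
    """Nonparametric bootstrap SD of a statistic from a single sample."""
    idx = rng.integers(0, data.size, size=(n_boot, data.size))
    return np.std([stat(data[i]) for i in idx], ddof=1)

sd_hat = bootstrap_sd(sample)
print(sd_hat)   # should approximate sigma/sqrt(n) = 0.1/sqrt(60)
```

SIMEX complements this by deliberately adding extra noise at several levels and extrapolating the metric back to the zero-noise case, giving a bias estimate from the same single scan.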

  2. On the multiplicity of option prices under CEV with positive elasticity of variance

    NARCIS (Netherlands)

    Veestraeten, D.

    2017-01-01

    The discounted stock price under the Constant Elasticity of Variance model is not a martingale when the elasticity of variance is positive. Two expressions for the European call price then arise, namely the price for which put-call parity holds and the price that represents the lowest cost of

  3. On the multiplicity of option prices under CEV with positive elasticity of variance

    NARCIS (Netherlands)

    Veestraeten, D.

    2014-01-01

    The discounted stock price under the Constant Elasticity of Variance (CEV) model is a strict local martingale when the elasticity of variance is positive. Two expressions for the European call price then arise, namely the risk-neutral call price and an alternative price that is linked to the unique

  4. Changing trends of indications and rate of cesarean section

    International Nuclear Information System (INIS)

    Ambreen, A.; Intsar, A.; Khurshid, S.

    2013-01-01

    Background: There is a trend of rising caesarean section rates over the past decade, affecting the economy of the country. This continually rising caesarean section rate is of increasing concern to health professionals and the public. This study was designed to assess the indications and trends of caesarean sections done over a five-year period from 2007 to 2011. Methods: This was a retrospective observational study done over a five-year period in the Department of Obstetrics and Gynaecology, Fatima Memorial Hospital, Lahore, from 2007 to 2011. Results: The total number of deliveries from 2007 to 2011 was 30,741, of which 13,820 were caesarean sections. The caesarean birth rate increased from 41% to 48%. The indications varied little in cases of malpresentation and eclampsia. The proportions for APH and IUGR rose slightly (from 2.56% to 2.6% and from 1.83% to 2.34%, respectively), but the proportion of repeat caesarean sections increased from 25.99% to 31.45% and that of presumed fetal distress from 8% to 15%. Recently, the indication of maternal choice has emerged, with an incidence of 0.8% in our study. The proportion has fallen for prolonged labour due to cervical dystocia from 17% to 14% and for obstructed labour from 4.6% to 3%. Conclusion: Individualization of every case, meticulous clinical examination, and use of intrapartum fetomaternal surveillance along with regular use of the partogram would limit the practice of undue caesarean sections. (author)

  5. Subjective Proportions: 18th-Century Interpretations of Paestum’s ‘Disproportion’

    Directory of Open Access Journals (Sweden)

    Sigrid de Jong

    2016-02-01

    Full Text Available When 18th-century travellers saw the Doric temples of Paestum in Southern Italy with their own eyes, they observed for the first time true examples of the proportions of archaic Greek architecture. Unlike the Roman proportional systems, the Greek ones had been largely unavailable to architects until then. With the rediscovery of Paestum, conveniently located south of Naples and not in faraway Greece, the secret of Greek proportions was no more. Architects were able to precisely measure the temples and wrote many accounts about their primitive forms and proportions. But what did architects mean exactly when describing the proportions as primitive? What kinds of reflections did these proportions provoke? This article treats proportions as aesthetics, or as visible proportions, not as a numerical system. The discourse on proportions changed in this period, giving more weight to their cultural and historical meaning. The writings by such architects as Soane, Wilkins, and Labrouste demonstrate how Paestum functioned as a laboratory to unveil the secret of primitive proportions, and how, with the different meanings architects attached to them, it enlarged and renewed the debate on proportions.

  6. The VIX, the Variance Premium, and Expected Returns

    DEFF Research Database (Denmark)

    Osterrieder, Daniela Maria; Ventosa-Santaulària, Daniel; Vera-Valdés, Eduardo

    2018-01-01

    . These problems are eliminated if risk is captured by the variance premium (VP) instead; it is unobservable, however. We propose a 2SLS estimator that produces consistent estimates without observing the VP. Using this method, we find a positive risk–return trade-off and long-run return predictability. Our...

  7. Adaptive Nonparametric Variance Estimation for a Ratio Estimator ...

    African Journals Online (AJOL)

    Kernel estimators for smooth curves require modifications when estimating near end points of the support, both for practical and asymptotic reasons. The construction of such boundary kernels as solutions of variational problem is a difficult exercise. For estimating the error variance of a ratio estimator, we suggest an ...

  8. Why do lifespan variability trends for the young and old diverge? A perturbation analysis

    Directory of Open Access Journals (Sweden)

    Michal Engelman

    2014-05-01

    Full Text Available Background: Variation in lifespan has followed strikingly different trends for the young and old: while overall lifespan variability has decreased as life expectancy at birth has risen, the variability conditional on survival to older ages has increased. These diverging trends reflect changes in the underlying demographic parameters determining age-specific mortality. Objective: We ask why the variation in the adult ages at death has followed a different trend than the variation at younger ages, and aim to explain the diverging patterns in terms of historical changes in the age schedule of mortality. Methods: Using simulations, we show that the empirical trends in lifespan variation are well characterized using the Siler model, which describes the mortality hazard across the full lifespan using functions representing early-life, later-life, and background mortality. We then obtain maximum likelihood estimates of the Siler parameters over time. Finally, we express lifespan variation in terms of a Markov chain model, and apply matrix calculus perturbation analysis to compute the sensitivity of age-specific lifespan variance trends to the changing Siler model parameters. Results: Our analysis produces a detailed quantification of the impact of changing demographic parameters on the pattern of lifespan variability at all ages, highlighting the impact of declining childhood mortality on the reduction of lifespan variability and the impact of improved survival in adulthood on the rising variability of lifespans at older ages. Conclusions: These findings provide insight into the dynamic relationship between the age pattern of survival improvements and time trends in lifespan variability.
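The Siler hazard named above has a standard three-component form: a declining early-life term, a constant background term, and a rising (Gompertz-type) senescent term. A minimal sketch of deriving the mean and variance of ages at death from it, with arbitrary illustrative parameters rather than fitted ones, is:

```python
import numpy as np

def siler_hazard(x, a1=0.1, b1=1.0, c=0.001, a2=1e-4, b2=0.1):
    # Siler model: infant decline + background + exponential senescence.
    return a1 * np.exp(-b1 * x) + c + a2 * np.exp(b2 * x)

dx = 0.1
ages = np.arange(0, 120, dx)
h = siler_hazard(ages)

S = np.exp(-np.cumsum(h) * dx)     # survival curve (crude integration)
f = h * S                           # density of ages at death
f /= f.sum() * dx                   # renormalize for discretization error

mean_age = (ages * f).sum() * dx
var_age = ((ages - mean_age) ** 2 * f).sum() * dx
print(mean_age, var_age)
```

Shrinking the early-life parameter a1 mostly removes deaths far from the mean and so lowers the variance, while changes in the senescent terms reshape the spread of adult ages at death; this is the mechanism behind the diverging variability trends for the young and old.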

  9. Detectability of migrating raptors and its effect on bias and precision of trend estimates

    Directory of Open Access Journals (Sweden)

    Eric G. Nolte

    2016-12-01

    Full Text Available Annual counts of migrating raptors at fixed observation points are a widespread practice, and changes in numbers counted over time, adjusted for survey effort, are commonly used as indices of trends in population size. Unmodeled year-to-year variation in detectability may introduce bias, reduce precision of trend estimates, and reduce power to detect trends. We conducted dependent double-observer surveys at the annual fall raptor migration count at Lucky Peak, Idaho, in 2009 and 2010 and applied Huggins closed-capture removal models and information-theoretic model selection to determine the relative importance of factors affecting detectability. The most parsimonious model included effects of observer team identity, distance, species, and day of the season. We then simulated 30 years of counts with heterogeneous individual detectability, a population decline (λ = 0.964), and unexplained random variation in the number of available birds. Imperfect detectability did not bias trend estimation, and increased the time required to achieve 80% power by less than 11%. Results suggested that availability is a greater source of variance in annual counts than detectability; thus, efforts to account for availability would improve the monitoring value of migration counts. According to our models, long-term trends in observer efficiency or migratory flight distance may introduce substantial bias to trend estimates. Estimating detectability with a novel count protocol like our double-observer method is just one potential means of controlling such effects. The traditional approach of modeling the effects of covariates and adjusting the index may also be effective if ancillary data is collected consistently.

  10. Compounding approach for univariate time series with nonstationary variances

    Science.gov (United States)

    Schäfer, Rudi; Barkhofen, Sonja; Guhr, Thomas; Stöckmann, Hans-Jürgen; Kuhl, Ulrich

    2015-12-01

    A defining feature of nonstationary systems is the time dependence of their statistical parameters. Measured time series may exhibit Gaussian statistics on short time horizons, due to the central limit theorem. The sample statistics for long time horizons, however, averages over the time-dependent variances. To model the long-term statistical behavior, we compound the local distribution with the distribution of its parameters. Here, we consider two concrete, but diverse, examples of such nonstationary systems: the turbulent air flow of a fan and a time series of foreign exchange rates. Our main focus is to empirically determine the appropriate parameter distribution for the compounding approach. To this end, we extract the relevant time scales by decomposing the time signals into windows and determine the distribution function of the thus obtained local variances.
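A minimal numerical sketch of the compounding idea follows, with an assumed lognormal distribution of local standard deviations rather than one estimated from fan-flow or FX data. Locally Gaussian windows with fluctuating variance produce an aggregate distribution with heavier-than-Gaussian tails.

```python
import numpy as np

rng = np.random.default_rng(4)

# Nonstationary toy series: Gaussian within each window, but the local
# standard deviation itself fluctuates slowly over time (assumed lognormal).
n_windows, win = 500, 100
local_sigma = np.exp(rng.normal(0, 0.5, size=n_windows))
series = np.concatenate([rng.normal(0, s, size=win) for s in local_sigma])

# Step 1: extract the local variances window by window.
windows = series.reshape(n_windows, win)
local_var = windows.var(axis=1, ddof=1)

# Step 2: the long-horizon statistics compound the local Gaussian with the
# distribution of these variances, yielding excess kurtosis (fat tails).
z = (series - series.mean()) / series.std()
excess_kurtosis = (z ** 4).mean() - 3
print(excess_kurtosis)
```

A single Gaussian would give excess kurtosis near zero; the clearly positive value here is exactly the long-horizon signature that the compounding approach is built to model.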

  11. Robust LOD scores for variance component-based linkage analysis.

    Science.gov (United States)

    Blangero, J; Williams, J T; Almasy, L

    2000-01-01

    The variance component method is now widely used for linkage analysis of quantitative traits. Although this approach offers many advantages, the importance of the underlying assumption of multivariate normality of the trait distribution within pedigrees has not been studied extensively. Simulation studies have shown that traits with leptokurtic distributions yield linkage test statistics that exhibit excessive Type I error when analyzed naively. We derive analytical formulae relating the deviation from the expected asymptotic distribution of the lod score to the kurtosis and total heritability of the quantitative trait. A simple correction constant yields a robust lod score for any deviation from normality and for any pedigree structure, and effectively eliminates the problem of inflated Type I error due to misspecification of the underlying probability model in variance component-based linkage analysis.

  12. Measuring kinetics of complex single ion channel data using mean-variance histograms.

    Science.gov (United States)

    Patlak, J B

    1993-07-01

    The measurement of single ion channel kinetics is difficult when those channels exhibit subconductance events. When the kinetics are fast, and when the current magnitudes are small, as is the case for Na+, Ca2+, and some K+ channels, these difficulties can lead to serious errors in the estimation of channel kinetics. I present here a method, based on the construction and analysis of mean-variance histograms, that can overcome these problems. A mean-variance histogram is constructed by calculating the mean current and the current variance within a brief "window" (a set of N consecutive data samples) superimposed on the digitized raw channel data. Systematic movement of this window over the data produces large numbers of mean-variance pairs which can be assembled into a two-dimensional histogram. Defined current levels (open, closed, or sublevel) appear in such plots as low variance regions. The total number of events in such low variance regions is estimated by curve fitting and plotted as a function of window width. This function decreases with the same time constants as the original dwell time probability distribution for each of the regions. The method can therefore be used: 1) to present a qualitative summary of the single channel data from which the signal-to-noise ratio, open channel noise, steadiness of the baseline, and number of conductance levels can be quickly determined; 2) to quantify the dwell time distribution in each of the levels exhibited. In this paper I present the analysis of a Na+ channel recording that had a number of complexities. The signal-to-noise ratio was only about 8 for the main open state, open channel noise, and fast flickers to other states were present, as were a substantial number of subconductance states. "Standard" half-amplitude threshold analysis of these data produce open and closed time histograms that were well fitted by the sum of two exponentials, but with apparently erroneous time constants, whereas the mean-variance
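The histogram construction described above is straightforward to sketch on a synthetic two-level channel trace; the current levels, noise amplitude, and window width below are illustrative, not taken from the paper's Na+ data.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

rng = np.random.default_rng(5)

# Toy single-channel record: closed (0 pA) and open (-2 pA) dwells + noise.
levels = np.repeat([0.0, -2.0, 0.0, -2.0, 0.0], [300, 200, 250, 150, 300])
trace = levels + rng.normal(0, 0.2, size=levels.size)

N = 10                                   # window width (samples)
win = sliding_window_view(trace, N)      # every N-sample window in the trace
means = win.mean(axis=1)
variances = win.var(axis=1, ddof=1)

# 2-D mean-variance histogram: defined current levels show up as
# low-variance clusters near 0 and -2 pA; transitions have high variance.
hist, mean_edges, var_edges = np.histogram2d(means, variances, bins=50)
print(hist.sum())                        # one count per window position
```

Repeating the construction for a range of window widths N and counting events in each low-variance region gives the dwell-time decay curves the method uses in place of half-amplitude threshold analysis.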

  13. A Random Parameter Model for Continuous-Time Mean-Variance Asset-Liability Management

    Directory of Open Access Journals (Sweden)

    Hui-qiang Ma

    2015-01-01

    Full Text Available We consider a continuous-time mean-variance asset-liability management problem in a market with random market parameters; that is, the interest rate, appreciation rates, and volatility rates are considered to be stochastic processes. By using the theories of stochastic linear-quadratic (LQ) optimal control and backward stochastic differential equations (BSDEs), we tackle this problem and derive optimal investment strategies as well as the mean-variance efficient frontier analytically in terms of the solution of BSDEs. We find that the efficient frontier is still a parabola in a market with random parameters. Compared with existing results, we also find that the liability does not affect the feasibility of the mean-variance portfolio selection problem. However, in an incomplete market with random parameters, the liability cannot be fully hedged.
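For intuition on why the efficient frontier is a parabola in the (mean, variance) plane, the static single-period Markowitz frontier can be written in closed form. This is not the paper's continuous-time BSDE solution, and the expected returns and covariance matrix below are invented.

```python
import numpy as np

# Assumed expected returns and covariance matrix for three risky assets.
mu = np.array([0.05, 0.08, 0.12])
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])
inv = np.linalg.inv(Sigma)
ones = np.ones(3)

# Standard frontier constants.
A = ones @ inv @ ones
B = ones @ inv @ mu
C = mu @ inv @ mu
D = A * C - B**2

def frontier_variance(m):
    # Minimum portfolio variance achieving target mean m: quadratic in m,
    # i.e. a parabola in the (mean, variance) plane.
    return (A * m**2 - 2 * B * m + C) / D

for m in (0.06, 0.08, 0.10):
    print(m, frontier_variance(m))
```

The vertex of the parabola sits at the global minimum-variance portfolio, with mean B/A and variance 1/A; the continuous-time result in the paper says this parabolic shape survives even when the market parameters are themselves stochastic.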

  14. Multiwire proportional chamber for Moessbauer spectroscopy: development and results

    International Nuclear Information System (INIS)

    Costa, M.S. da.

    1985-12-01

    A new multiwire proportional chamber designed for Moessbauer spectroscopy is presented. This detector allows transmission and backscattering experiments using either photons or electrons. The Moessbauer data acquisition system, partially developed for this work, is described. A simple method for determining the boundary between the true proportional and semi-proportional regions of operation in gaseous detectors is proposed. The study of the ternary gas mixture He-Ar-CH4 leads to a straightforward way of energy calibration of the electron spectra. Moessbauer spectra using an Fe-57 source are presented. In particular, those obtained with backscattered electrons show the feasibility of depth-selective analysis with gaseous proportional counters. (author) [pt

  15. Reduction of treatment delivery variances with a computer-controlled treatment delivery system

    International Nuclear Information System (INIS)

    Fraass, B.A.; Lash, K.L.; Matrone, G.M.; Lichter, A.S.

    1997-01-01

    Purpose: To analyze treatment delivery variances for 3-D conformal therapy performed at various levels of treatment delivery automation, ranging from manual field setup to virtually complete computer-controlled treatment delivery using a computer-controlled conformal radiotherapy system. Materials and Methods: All external beam treatments performed in our department during six months of 1996 were analyzed to study treatment delivery variances versus treatment complexity. Treatments for 505 patients (40,641 individual treatment ports) on four treatment machines were studied. All treatment variances noted by treatment therapists or quality assurance reviews (39 in all) were analyzed. Machines 'M1' (CLinac 6/100) and 'M2' (CLinac 1800) were operated in a standard manual setup mode, with no record and verify system (R/V). Machines 'M3' (CLinac 2100CD/MLC) and 'M4' (MM50 racetrack microtron system with MLC) treated patients under the control of a computer-controlled conformal radiotherapy system (CCRS) which 1) downloads the treatment delivery plan from the planning system, 2) performs some (or all) of the machine set-up and treatment delivery for each field, 3) monitors treatment delivery, 4) records all treatment parameters, and 5) notes exceptions to the electronically-prescribed plan. Complete external computer control is not available on M3, so it uses as many CCRS features as possible, while M4 operates completely under CCRS control and performs semi-automated and automated multi-segment intensity modulated treatments. Analysis of treatment complexity was based on numbers of fields, individual segments (ports), non-axial and non-coplanar plans, multi-segment intensity modulation, and pseudo-isocentric treatments (and other plans with computer-controlled table motions). Treatment delivery time was obtained from the computerized scheduling system (for manual treatments) or from CCRS system logs. 
Treatment therapists rotate among the machines, so this analysis

  16. Testing constancy of unconditional variance in volatility models by misspecification and specification tests

    DEFF Research Database (Denmark)

    Silvennoinen, Annastiina; Terasvirta, Timo

    The topic of this paper is testing the hypothesis of constant unconditional variance in GARCH models against the alternative that the unconditional variance changes deterministically over time. Tests of this hypothesis have previously been performed as misspecification tests after fitting a GARCH...... models. An application to exchange rate returns is included....

  17. Folic acid supplements to prevent neural tube defects: trends in East of Ireland 1996-2002.

    LENUS (Irish Health Repository)

    Ward, M

    2004-10-01

    Promotion of folic acid to prevent neural tube defects (NTD) has been ongoing for ten years in Ireland, without a concomitant reduction in the total birth prevalence of NTD. The effectiveness of folic acid promotion as the sole means of primary prevention of NTD is therefore questionable. We examined trends in folic acid knowledge and peri-conceptional use from 1996-2002 with the aim of assessing the value of this approach. From 1996-2002, 300 women attending ante-natal clinics in Dublin hospitals were surveyed annually regarding their knowledge and use of folic acid. The proportion who had heard of folic acid rose from 54% in 1996 to 94% in 2002 (χ2 test for trend: p<0.001). Knowledge that folic acid can prevent NTD also rose, from 21% to 66% (χ2 test for trend: p<0.001). Although the proportion who took folic acid during pregnancy increased from 14% to 83% between 1996 and 2002 (χ2 test for trend: p<0.001), peri-conceptional intake did not rise above 24% in any year. There is high awareness of folic acid and its relation to NTD, which is not matched by peri-conceptional uptake. The main barrier to peri-conceptional uptake is the lack of pregnancy planning. To date, promotional campaigns appear to have been ineffective in reducing the prevalence of NTD in Ireland. Consequently, fortification of staple foodstuffs is the only practical and reliable means of primary prevention of NTD.

  18. The principle of proportionality and European contract law

    NARCIS (Netherlands)

    Cauffman, C.; Rutgers, J.; Sirena, P.

    2015-01-01

    The paper investigates the role of the principle of proportionality within contract law, in balancing the rights and obligations of the contracting parties. It illustrates that the principle of proportionality is one of the general principles which govern contractual relations, and as such it is an

  19. Estimation of genetic connectedness diagnostics based on prediction errors without the prediction error variance-covariance matrix.

    Science.gov (United States)

    Holmes, John B; Dodds, Ken G; Lee, Michael A

    2017-03-02

    An important issue in genetic evaluation is the comparability of random effects (breeding values), particularly between pairs of animals in different contemporary groups. This is usually referred to as genetic connectedness. While various measures of connectedness have been proposed in the literature, there is general agreement that the most appropriate measure is some function of the prediction error variance-covariance matrix. However, obtaining the prediction error variance-covariance matrix is computationally demanding for large-scale genetic evaluations. Many alternative statistics have been proposed that avoid the computational cost of obtaining the prediction error variance-covariance matrix, such as counts of genetic links between contemporary groups, gene flow matrices, and functions of the variance-covariance matrix of estimated contemporary group fixed effects. In this paper, we show that a correction to the variance-covariance matrix of estimated contemporary group fixed effects will produce the exact prediction error variance-covariance matrix averaged by contemporary group for univariate models in the presence of single or multiple fixed effects and one random effect. We demonstrate the correction for a series of models and show that approximations to the prediction error matrix based solely on the variance-covariance matrix of estimated contemporary group fixed effects are inappropriate in certain circumstances. Our method allows for the calculation of a connectedness measure based on the prediction error variance-covariance matrix by calculating only the variance-covariance matrix of estimated fixed effects. Since the number of fixed effects in genetic evaluation is usually orders of magnitude smaller than the number of random effect levels, the computational requirements for our method should be reduced.

  20. Assessment of the relative merits of a few methods to detect evolutionary trends.

    Science.gov (United States)

    Laurin, Michel

    2010-12-01

    Some of the most basic questions about the history of life concern evolutionary trends. These include determining whether or not metazoans have become more complex over time, whether or not body size tends to increase over time (the Cope-Depéret rule), or whether or not brain size has increased over time in various taxa, such as mammals and birds. Despite the proliferation of studies on such topics, assessment of the reliability of results in this field is hampered by the variability of techniques used and the lack of statistical validation of these methods. To solve this problem, simulations are performed using a variety of evolutionary models (gradual Brownian motion, speciational Brownian motion, and Ornstein-Uhlenbeck), with or without a drift of variable amplitude, with variable variance of tips, and with bounds placed close or far from the starting values and final means of simulated characters. These are used to assess the relative merits (power, Type I error rate, bias, and mean absolute value of error on slope estimate) of several statistical methods that have recently been used to assess the presence of evolutionary trends in comparative data. Results show widely divergent performance of the methods. The simple, nonphylogenetic regression (SR) and variance partitioning using phylogenetic eigenvector regression (PVR) with a broken stick selection procedure have greatly inflated Type I error rate (0.123-0.180 at a 0.05 threshold), which invalidates their use in this context. However, they have the greatest power. Most variants of Felsenstein's independent contrasts (FIC; five of which are presented) have adequate Type I error rate, although two have a slightly inflated Type I error rate with at least one of the two reference trees (0.064-0.090 error rate at a 0.05 threshold). The power of all contrast-based methods is always much lower than that of SR and PVR, except under Brownian motion with a strong trend and distant bounds. 
Mean absolute value of error