Directory of Open Access Journals (Sweden)
Charles I Nkeki
2014-11-01
Full Text Available This paper aims at studying a mean-variance portfolio selection problem with stochastic salary, proportional administrative costs and taxation in the accumulation phase of a defined contribution (DC) pension scheme. The fund process is subjected to taxation while the contribution of the pension plan member (PPM) is tax exempt. It is assumed that the flow of contributions of a PPM is invested into a market that is characterized by a cash account and a stock. The optimal portfolio processes and expected wealth for the PPM are established. The efficient and parabolic frontiers of a PPM's portfolios in mean-variance space are obtained. It was found that the capital market line can be attained when the initial fund and the contribution rate are zero. It was also found that the optimal portfolio process involves an inter-temporal hedging term that will offset any shocks to the stochastic salary of the PPM.
Mäki, K.; Groen, A.F.; Liinamo, A.E.; Ojala, M.
2002-01-01
The aims of this study were to assess genetic variances, trends and mode of inheritance for hip and elbow dysplasia in Finnish dog populations. The influence of time-dependent fixed effects in the model when estimating the genetic trends was also studied. Official hip and elbow dysplasia screening
International Nuclear Information System (INIS)
Qing, Zhou; Weili, Jiao; Tengfei, Long
2014-01-01
The Rational Function Model (RFM) is a new generalized sensor model. It does not need the physical parameters of sensors to achieve an accuracy comparable to that of rigorous sensor models. At present, the main method used to solve for RPCs is least squares estimation. But when the number of coefficients is large or the distribution of the control points is uneven, the classical least squares method loses its superiority due to the ill-conditioning of the design matrix. The Condition Index and Variance Decomposition Proportion (CIVDP) method is a reliable way to diagnose multicollinearity in the design matrix. It can not only detect the multicollinearity, but also locate the involved parameters and show the corresponding columns in the design matrix. In this paper, the CIVDP method is used to diagnose the ill-conditioning of the RFM and to find the multicollinearity in the normal matrix.
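The condition-index and variance-decomposition-proportion diagnostic described above can be sketched via the SVD of the (column-scaled) design matrix: condition indices are ratios of the largest singular value to each singular value, and each coefficient's variance is split into proportions attributed to the singular modes. This is a generic sketch of the standard Belsley-style formulation, with illustrative names, not the authors' code.

```python
import numpy as np

def civdp(X):
    # Scale columns to unit length, then use the SVD: condition indices are
    # s_max / s_j, and the variance-decomposition proportions split each
    # coefficient's variance across the singular modes.
    Xs = X / np.linalg.norm(X, axis=0)
    _, s, Vt = np.linalg.svd(Xs, full_matrices=False)
    cond_idx = s.max() / s
    phi = (Vt.T ** 2) / s ** 2                  # phi[k, j]: var share of coef k from mode j (unnormalized)
    pi = phi / phi.sum(axis=1, keepdims=True)   # normalize rows to proportions
    return cond_idx, pi

# Near-collinear design: the third column is almost the sum of the first two.
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 2))
X = np.column_stack([A, A.sum(axis=1) + 1e-6 * rng.normal(size=50)])
cond_idx, pi = civdp(X)
```

A large condition index together with high variance proportions for several coefficients in the same weak mode is the signature of multicollinearity, and the affected columns are read off directly.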
Campbell, Ruairidh D; Nouvellet, Pierre; Newman, Chris; Macdonald, David W; Rosell, Frank
2012-09-01
Ecologists are increasingly aware of the importance of environmental variability in natural systems. Climate change is affecting both the mean and the variability in weather and, in particular, the effect of changes in variability is poorly understood. Organisms are subject to selection imposed by both the mean and the range of environmental variation experienced by their ancestors. Changes in the variability in a critical environmental factor may therefore have consequences for vital rates and population dynamics. Here, we examine ≥90-year trends in different components of climate (precipitation mean and coefficient of variation (CV); temperature mean, seasonal amplitude and residual variance) and consider the effects of these components on survival and recruitment in a population of Eurasian beavers (n = 242) over 13 recent years. Within the climatic data, no trends in precipitation were detected, but trends in all components of temperature were observed, with mean and residual variance increasing and seasonal amplitude decreasing over time. A higher survival rate was linked (in order of influence based on Akaike weights) to lower precipitation CV (kits, juveniles and dominant adults), lower residual variance of temperature (dominant adults) and lower mean precipitation (kits and juveniles). No significant effects were found on the survival of nondominant adults, although the sample size for this category was low. Greater recruitment was linked (in order of influence) to higher seasonal amplitude of temperature, lower mean precipitation, lower residual variance in temperature and higher precipitation CV. Both climate means and variance thus proved significant to population dynamics, although, overall, components describing variance were more influential than those describing mean values. That environmental variation proves significant to a generalist, wide-ranging species, at the slow end of the slow-fast continuum of life histories, has broad implications for
Dong, Nianbo; Reinke, Wendy M; Herman, Keith C; Bradshaw, Catherine P; Murray, Desiree W
2016-09-30
There is a need for greater guidance regarding design parameters and empirical benchmarks for social and behavioral outcomes to inform assumptions in the design and interpretation of cluster randomized trials (CRTs). We calculated empirical reference values for critical research design parameters associated with statistical power for children's social and behavioral outcomes, including effect sizes, intraclass correlations (ICCs), and proportions of variance explained by a covariate at different levels (R²). Children from kindergarten to Grade 5 in the samples from four large CRTs evaluating the effectiveness of two classroom- and two school-level preventive interventions. Teacher ratings of students' social and behavioral outcomes using the Teacher Observation of Classroom Adaptation-Checklist and the Social Competence Scale-Teacher. Two types of effect size benchmarks were calculated: (1) normative expectations for change and (2) policy-relevant demographic performance gaps. The ICCs and R² were calculated using two-level hierarchical linear modeling (HLM), where students are nested within schools, and three-level HLM, where students were nested within classrooms and classrooms were nested within schools. Comprehensive tables of benchmarks and ICC values are provided to inform prevention researchers in interpreting the effect size of interventions and in conducting power analyses for designing CRTs of children's social and behavioral outcomes. The discussion also demonstrates how to use the parameter reference values provided in this article to calculate the sample size for two- and three-level CRT designs. © The Author(s) 2016.
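The way an ICC of the kind tabulated in such studies feeds into CRT sample size planning can be sketched with the standard design-effect formula; the cluster size and ICC below are illustrative values, not figures from the article's benchmark tables.

```python
def design_effect(m, icc):
    """Variance inflation for cluster-randomized sampling with cluster size m:
    DEFF = 1 + (m - 1) * ICC."""
    return 1.0 + (m - 1) * icc

def effective_n(n_total, m, icc):
    """Number of independent observations the clustered sample is worth."""
    return n_total / design_effect(m, icc)

# 20 schools of 21 students each, with an assumed ICC of 0.05:
deff = design_effect(21, 0.05)
n_eff = effective_n(20 * 21, 21, 0.05)
```

With these illustrative numbers the 420 clustered observations carry the information of about 210 independent ones, which is why even small ICCs matter greatly when choosing the number of clusters.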
DEFF Research Database (Denmark)
Morken, N.H.; Vogel, I.; Kallen, K.
2008-01-01
BACKGROUND: International comparison and time trend surveillance of preterm delivery rates is complex. New techniques that could facilitate interpretation of such rates are needed. METHODS: We studied all live births and stillbirths (≥28 weeks gestation) registered in the medical birth...
Proportionality lost - proportionality regained?
DEFF Research Database (Denmark)
Werlauff, Erik
2010-01-01
In recent years, the European Court of Justice (the ECJ) seems to have accepted restrictions on the freedom of establishment and other basic freedoms, despite the fact that a more thorough proportionality test would have revealed that the restriction in question did not pass the 'rule of reason' ...
Energy Technology Data Exchange (ETDEWEB)
Negash, A. W.; Mwambi, H.; Zewotir, T.; Eweke, G.
2014-06-01
The most common procedure for analyzing multi-environmental trials is based on the assumption that the residual error variance is homogeneous across all locations considered. However, this may often be unrealistic, and therefore limit the accuracy of variety evaluation or the reliability of variety recommendations. The objectives of this study were to show the advantages of mixed models with spatial variance-covariance structures, and the direct implications of model choice for the inference of varietal performance, ranking and testing, based on two multi-environmental data sets from realistic national trials. A model comparison with a χ²-test for the trials in the two data sets (wheat data set BW00RVTI and barley data set BW01RVII) suggested that selected spatial variance-covariance structures fitted the data significantly better than the ANOVA model. The forms of optimally-fitted spatial variance-covariance, ranking and consistency ratio test were not the same from one trial (location) to the other. Linear mixed models with single-stage analysis, including a spatial variance-covariance structure with a group factor of location in the random model, also improved the estimation of genotype effects and their ranking. The model improved varietal performance estimation because of its capacity to handle additional sources of variation, location and genotype-by-location (environment) interaction variation, and its accommodation of local stationary trends. (Author)
De-trending of wind speed variance based on first-order and second-order statistical moments only
DEFF Research Database (Denmark)
Larsen, Gunner Chr.; Hansen, Kurt Schaldemose
2014-01-01
The lack of efficient methods for de-trending of wind speed resource data may lead to erroneous wind turbine fatigue and ultimate load predictions. The present paper presents two models, which quantify the effect of an assumed linear trend on wind speed standard deviations as based on available...... statistical data only. The first model is a pure time series analysis approach, which quantifies the effect of non-stationary characteristics of ensemble mean wind speeds on the estimated wind speed standard deviations as based on mean wind speed statistics only. This model is applicable to statistics...... of arbitrary types of time series. The second model uses the full set of information and includes thus additionally observed wind speed standard deviations to estimate the effect of ensemble mean non-stationarities on wind speed standard deviations. This model takes advantage of a simple physical relationship...
DEFF Research Database (Denmark)
Dole, Shelley; Hilton, Annette; Hilton, Geoff
2015-01-01
Proportional reasoning is widely acknowledged as a key to success in school mathematics, yet students’ continual difficulties with proportion-related tasks are well documented. This paper draws on a large research study that aimed to support 4th to 9th grade teachers to design and implement tasks...
Influence of Family Structure on Variance Decomposition
DEFF Research Database (Denmark)
Edwards, Stefan McKinnon; Sarup, Pernille Merete; Sørensen, Peter
Partitioning genetic variance by sets of randomly sampled genes for complex traits in D. melanogaster and B. taurus, has revealed that population structure can affect variance decomposition. In fruit flies, we found that a high likelihood ratio is correlated with a high proportion of explained ge...... capturing pure noise. Therefore it is necessary to use both criteria, high likelihood ratio in favor of a more complex genetic model and proportion of genetic variance explained, to identify biologically important gene groups...
Downside Variance Risk Premium
Feunou, Bruno; Jahan-Parvar, Mohammad; Okou, Cedric
2015-01-01
We propose a new decomposition of the variance risk premium in terms of upside and downside variance risk premia. The difference between upside and downside variance risk premia is a measure of skewness risk premium. We establish that the downside variance risk premium is the main component of the variance risk premium, and that the skewness risk premium is a priced factor with significant prediction power for aggregate excess returns. Our empirical investigation highlights the positive and s...
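The upside/downside split at the heart of this decomposition can be illustrated with realized semivariances computed from a return series: realized variance is the sum of squared returns, attributed to the up or down side by the sign of each return. This is a generic sketch of that idea, not the paper's risk-premium estimator.

```python
import numpy as np

def semivariances(returns):
    # Split realized variance (sum of squared returns) into the part coming
    # from positive returns and the part coming from negative returns.
    r = np.asarray(returns, dtype=float)
    rv_up = np.sum(r[r > 0] ** 2)
    rv_down = np.sum(r[r < 0] ** 2)
    return rv_up, rv_down

r = np.array([0.01, -0.02, 0.005, -0.01, 0.015])
up, down = semivariances(r)
```

The two pieces add back up to the total realized variance, and their difference captures the asymmetry (skewness) the abstract associates with a separately priced risk premium.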
MCNP variance reduction overview
International Nuclear Information System (INIS)
Hendricks, J.S.; Booth, T.E.
1985-01-01
The MCNP code is rich in variance reduction features. Standard variance reduction methods found in most Monte Carlo codes are available as well as a number of methods unique to MCNP. We discuss the variance reduction features presently in MCNP as well as new ones under study for possible inclusion in future versions of the code
Directory of Open Access Journals (Sweden)
João Yunes
1974-06-01
physiographical regions. During the last 30 years the reduction of general mortality in Brazil was 47.5%, and the biggest fall was noticed in the West-Center Region. In the last 10 years a rise of the rate in all Regions was observed, starting in different years; this fact is due to the increase of infant mortality. When general mortality in Brazil is compared with that of developed countries, it can be considered high, since 42% of the population is under 14 years of age, showing an unsatisfactory health level. During the period of 30 years there was a reduction of the infant mortality rate of 46.2%. In the last 10 years a rising rate is observed, showing that the health level is worse, and when compared with other countries the difference is relevant. When proportional mortality (% of the total deaths of children of less than 1 year) is evaluated, a remarkable increase of 16.3% from 1940 to 1970 is observed. In the last 10 years it was higher in the West-Center Region (57.7%) and Southwest (36.1%). When the data of Brazil are compared with the most developed State and City of Brazil (São Paulo), these health indicators are always higher in the country as a whole, reflecting a worse health level. Among the principal conditioning reasons for the worsening health level in Brazil during the last ten years is the economic one: the concentration of income distribution increased and real minimum wages fell by 20%, so the worker's possibility of acquiring wealth decreased. In addition, the population without environmental sanitation is growing.
Estimation of measurement variances
International Nuclear Information System (INIS)
Anon.
1981-01-01
In the previous two sessions, it was assumed that the measurement error variances were known quantities when the variances of the safeguards indices were calculated. These known quantities are actually estimates based on historical data and on data generated by the measurement program. Session 34 discusses how measurement error parameters are estimated for different situations. The various error types are considered. The purpose of the session is to enable participants to: (1) estimate systematic error variances from standards data; (2) estimate random error variances from replicate measurement data; (3) perform a simple analysis of variance to characterize the measurement error structure when biases vary over time
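Tasks (1) and (2) above can be sketched as follows: random error variance is pooled from replicate measurements of the same items, and systematic error variance is taken from the spread of biases observed when measuring known standards. This is a generic sketch of the two estimation tasks, with illustrative data, not the session's worked procedures.

```python
import numpy as np

def random_error_variance(replicates):
    # Pooled within-item variance: replicate measurements of the same item
    # differ only by random error.
    reps = [np.asarray(r, float) for r in replicates]
    ss = sum(((r - r.mean()) ** 2).sum() for r in reps)
    dof = sum(len(r) - 1 for r in reps)
    return ss / dof

def systematic_error_variance(measured, reference):
    # Spread of the biases seen when measuring known standards.
    d = np.asarray(measured, float) - np.asarray(reference, float)
    return d.var(ddof=1)

v_rand = random_error_variance([[10.1, 10.3, 10.2], [5.0, 5.2]])
v_sys = systematic_error_variance([100.2, 99.8, 100.1], [100.0, 100.0, 100.0])
```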
Validation of consistency of Mendelian sampling variance.
Tyrisevä, A-M; Fikse, W F; Mäntysaari, E A; Jakobsen, J; Aamand, G P; Dürr, J; Lidauer, M H
2018-03-01
Experiences from international sire evaluation indicate that the multiple-trait across-country evaluation method is sensitive to changes in genetic variance over time. Top bulls from birth year classes with inflated genetic variance will benefit, hampering reliable ranking of bulls. However, none of the methods available today enable countries to validate their national evaluation models for heterogeneity of genetic variance. We describe a new validation method to fill this gap comprising the following steps: estimating within-year genetic variances using Mendelian sampling and its prediction error variance, fitting a weighted linear regression between the estimates and the years under study, identifying possible outliers, and defining a 95% empirical confidence interval for a possible trend in the estimates. We tested the specificity and sensitivity of the proposed validation method with simulated data using a real data structure. Moderate (M) and small (S) size populations were simulated under 3 scenarios: a control with homogeneous variance and 2 scenarios with yearly increases in phenotypic variance of 2 and 10%, respectively. Results showed that the new method was able to estimate genetic variance accurately enough to detect bias in genetic variance. Under the control scenario, the trend in genetic variance was practically zero in setting M. Testing cows with an average birth year class size of more than 43,000 in setting M showed that tolerance values are needed for both the trend and the outlier tests to detect only cases with a practical effect in larger data sets. Regardless of the magnitude (yearly increases in phenotypic variance of 2 or 10%) of the generated trend, it deviated statistically significantly from zero in all data replicates for both cows and bulls in setting M. In setting S with a mean of 27 bulls in a year class, the sampling error and thus the probability of a false-positive result clearly increased. Still, overall estimated genetic
Estimation of measurement variances
International Nuclear Information System (INIS)
Jaech, J.L.
1984-01-01
The estimation of measurement error parameters in safeguards systems is discussed. Both systematic and random errors are considered. A simple analysis of variance to characterize the measurement error structure with biases varying over time is presented
International Nuclear Information System (INIS)
Moster, Benjamin P.; Rix, Hans-Walter; Somerville, Rachel S.; Newman, Jeffrey A.
2011-01-01
Deep pencil beam surveys (<1 deg²) are of fundamental importance for studying the high-redshift universe. However, inferences about galaxy population properties (e.g., the abundance of objects) are in practice limited by 'cosmic variance'. This is the uncertainty in observational estimates of the number density of galaxies arising from the underlying large-scale density fluctuations. This source of uncertainty can be significant, especially for surveys which cover only small areas and for massive high-redshift galaxies. Cosmic variance for a given galaxy population can be determined using predictions from cold dark matter theory and the galaxy bias. In this paper, we provide tools for experiment design and interpretation. For a given survey geometry, we present the cosmic variance of dark matter as a function of mean redshift z̄ and redshift bin size Δz. Using a halo occupation model to predict galaxy clustering, we derive the galaxy bias as a function of mean redshift for galaxy samples of a given stellar mass range. In the linear regime, the cosmic variance of these galaxy samples is the product of the galaxy bias and the dark matter cosmic variance. We present a simple recipe using a fitting function to compute cosmic variance as a function of the angular dimensions of the field, z̄, Δz, and stellar mass m_*. We also provide tabulated values and a software tool. The accuracy of the resulting cosmic variance estimates (δσ_v/σ_v) is shown to be better than 20%. We find that for GOODS at z̄ = 2 and with Δz = 0.5, the relative cosmic variance of galaxies with m_* > 10¹¹ M_⊙ is ∼38%, while it is ∼27% for GEMS and ∼12% for COSMOS. For galaxies of m_* ∼ 10¹⁰ M_⊙, the relative cosmic variance is ∼19% for GOODS, ∼13% for GEMS, and ∼6% for COSMOS. This implies that cosmic variance is a significant source of uncertainty at z̄ = 2 for small fields and massive galaxies, while for larger fields and intermediate mass galaxies, cosmic
Proportionality for Military Leaders
National Research Council Canada - National Science Library
Brown, Gary D
2000-01-01
.... Especially lacking is a realization that there are four distinct types of proportionality. In determining whether a particular resort to war is just, national leaders must consider the proportionality of the conflict (i.e...
Restricted Variance Interaction Effects
DEFF Research Database (Denmark)
Cortina, Jose M.; Köhler, Tine; Keeler, Kathleen R.
2018-01-01
Although interaction hypotheses are increasingly common in our field, many recent articles point out that authors often have difficulty justifying them. The purpose of this article is to describe a particular type of interaction: the restricted variance (RV) interaction. The essence of the RV int...
Local variances in biomonitoring
International Nuclear Information System (INIS)
Wolterbeek, H.Th; Verburg, T.G.
2001-01-01
The present study was undertaken to explore possibilities to judge survey quality on the basis of a limited and restricted number of a priori observations. Here, quality is defined as the ratio between survey and local variance (signal-to-noise ratio). The results indicate that the presented surveys do not permit such judgement; the discussion also suggests that the 5-fold local sampling strategies do not merit any sound judgement. As it stands, uncertainties in local determinations may largely obscure possibilities to judge survey quality. The results further imply that surveys will benefit from procedures, controls and approaches in sampling and sample handling that assess the average, the variance and the nature of the distribution of elemental concentrations at local sites. This reasoning is compatible with the idea of the site as a basic homogeneous survey unit, which is implicitly and conceptually underlying any survey performed. (author)
Local variances in biomonitoring
International Nuclear Information System (INIS)
Wolterbeek, H.T.
1999-01-01
The present study deals with the (larger-scale) biomonitoring survey and specifically focuses on the sampling site. In most surveys, the sampling site is simply selected or defined as a spot of (geographical) dimensions which is small relative to the dimensions of the total survey area. Implicitly it is assumed that the sampling site is essentially homogeneous with respect to the investigated variation in survey parameters. As such, the sampling site is mostly regarded as 'the basic unit' of the survey. As a logical consequence, the local (sampling site) variance should also be seen as a basic and important characteristic of the survey. During the study, work was carried out to gain more knowledge of the local variance. Multiple sampling was carried out at specific sites (tree bark, mosses, soils), multi-elemental analyses were carried out by NAA, and local variances were investigated by conventional statistics, factor analytical techniques, and bootstrapping. Consequences of the outcomes are discussed in the context of sampling, sample handling and survey quality. (author)
The Principle of Proportionality
DEFF Research Database (Denmark)
Bennedsen, Morten; Meisner Nielsen, Kasper
2005-01-01
Recent policy initiatives within the harmonization of European company laws have promoted a so-called "principle of proportionality" through proposals that regulate mechanisms opposing a proportional distribution of ownership and control. We scrutinize the foundation for these initiatives...... in relationship to the process of harmonization of the European capital markets. JEL classifications: G30, G32, G34 and G38. Keywords: Ownership Structure, Dual Class Shares, Pyramids, EU company laws....
Spectral Ambiguity of Allan Variance
Greenhall, C. A.
1996-01-01
We study the extent to which knowledge of Allan variance and other finite-difference variances determines the spectrum of a random process. The variance of first differences is known to determine the spectrum. We show that, in general, the Allan variance does not. A complete description of the ambiguity is given.
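The (non-overlapping) Allan variance itself is straightforward to compute from a time series; the abstract's point is that, in general, knowing it does not uniquely determine the spectrum. A minimal sketch of the definition, assuming fractional-frequency data and an integer averaging factor:

```python
import numpy as np

def allan_variance(y, tau=1):
    # Non-overlapping Allan variance at averaging factor tau: half the mean
    # squared difference of adjacent tau-averages of the data.
    y = np.asarray(y, float)
    n = len(y) // tau
    means = y[: n * tau].reshape(n, tau).mean(axis=1)
    d = np.diff(means)
    return 0.5 * np.mean(d ** 2)

avar1 = allan_variance(np.arange(8.0))          # pure linear drift, tau = 1
avar2 = allan_variance(np.arange(8.0), tau=2)
```

For a pure drift the adjacent-average differences are constant, so the Allan variance grows with tau even though a first-difference variance at a single tau would not reveal the full spectral shape.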
Introduction to variance estimation
Wolter, Kirk M
2007-01-01
We live in the information age. Statistical surveys are used every day to determine or evaluate public policy and to make important business decisions. Correct methods for computing the precision of the survey data and for making inferences to the target population are absolutely essential to sound decision making. Now in its second edition, Introduction to Variance Estimation has for more than twenty years provided the definitive account of the theory and methods for correct precision calculations and inference, including examples of modern, complex surveys in which the methods have been used successfully. The book provides instruction on the methods that are vital to data-driven decision making in business, government, and academe. It will appeal to survey statisticians and other scientists engaged in the planning and conduct of survey research, and to those analyzing survey data and charged with extracting compelling information from such data. It will appeal to graduate students and university faculty who...
Multiwire proportional chamber development
Doolittle, R. F.; Pollvogt, U.; Eskovitz, A. J.
1973-01-01
The development of large area multiwire proportional chambers, to be used as high resolution spatial detectors in cosmic ray experiments is described. A readout system was developed which uses a directly coupled, lumped element delay-line whose characteristics are independent of the MWPC design. A complete analysis of the delay-line and the readout electronic system shows that a spatial resolution of about 0.1 mm can be reached with the MWPC operating in the strictly proportional region. This was confirmed by measurements with a small MWPC and Fe-55 X-rays. A simplified analysis was carried out to estimate the theoretical limit of spatial resolution due to delta-rays, spread of the discharge along the anode wire, and inclined trajectories. To calculate the gas gain of MWPC's of different geometrical configurations a method was developed which is based on the knowledge of the first Townsend coefficient of the chamber gas.
Restrictions and Proportionality
DEFF Research Database (Denmark)
Werlauff, Erik
2009-01-01
The article discusses three central aspects of the freedoms under European Community law, namely 1) the prohibition against restrictions as an important extension of the prohibition against discrimination, 2) a prohibition against exit restrictions which is just as important as the prohibition...... against host country restrictions, but which is often not recognised to the same extent by national law, and 3) the importance of also identifying and recognising an exit restriction, so that it is possible to achieve the required test of appropriateness and proportionality in relation to the rule...
Huntley, H E
1970-01-01
Using simple mathematical formulas, most as basic as Pythagoras's theorem and requiring only a very limited knowledge of mathematics, Professor Huntley explores the fascinating relationship between geometry and aesthetics. Poetry, patterns like Pascal's triangle, philosophy, psychology, music, and dozens of simple mathematical figures are enlisted to show that the "divine proportion" or "golden ratio" is a feature of geometry and analysis which awakes answering echoes in the human psyche. When we judge a work of art aesthetically satisfying, according to his formulation, we are making it c
A nuclear proportional counter
International Nuclear Information System (INIS)
1973-01-01
The invention relates to a nuclear proportional counter comprising, in a bulb filled with a low-pressure gas, a wire forming an anode and a cathode, characterized in that said cathode is constituted by two plane plates parallel to each other and to the anode wire, and in that two branches of a circuit are connected to the anode wire end-portions, each branch comprising a pre-amplifier, a measuring circuit consisting of a differentiator-integrator-differentiator amplifier and a zero detector, one of the branches comprising an adjustable delay circuit, both branches jointly attacking a conversion circuit for converting the pulse duration into amplitudes, said conversion circuit being followed by a multi-channel analyzer, optionally provided with a recorder
Load proportional safety brake
Cacciola, M. J.
1979-01-01
This brake is a self-energizing mechanical friction brake and is intended for use in a rotary drive system. It incorporates a torque sensor which cuts power to the power unit on any overload condition. The brake is capable of driving against an opposing load or driving, paying-out, an aiding load in either direction of rotation. The brake also acts as a no-back device when torque is applied to the output shaft. The advantages of using this type of device are: (1) low frictional drag when driving; (2) smooth paying-out of an aiding load with no runaway danger; (3) energy absorption proportional to load; (4) no-back activates within a few degrees of output shaft rotation and resets automatically; and (5) built-in overload protection.
Macroeconomic Proportions and Correlations
Directory of Open Access Journals (Sweden)
Constantin Anghelache
2006-02-01
Full Text Available The work is focusing on the main proportions and correlations which are being set up between the major macroeconomic indicators. This is the general frame for the analysis of the relations between the Gross Domestic Product growth rate and the unemployment rate; the interaction between the inflation rate and the unemployment rate; the connection between the GDP growth rate and the inflation rate. Within the analysis being performed, a particular attention is paid to “the basic relationship of the economic growth” by emphasizing the possibilities as to a factorial analysis of the macroeconomic development, mainly as far as the Gross Domestic Product is concerned. At this point, the authors are introducing the mathematical relations, which are used for modeling the macroeconomic correlations, hence the strictness of the analysis being performed.
Macroeconomic Proportions and Correlations
Directory of Open Access Journals (Sweden)
Constantin Mitrut
2006-04-01
Full Text Available The work is focusing on the main proportions and correlations which are being set up between the major macroeconomic indicators. This is the general frame for the analysis of the relations between the Gross Domestic Product growth rate and the unemployment rate; the interaction between the inflation rate and the unemployment rate; the connection between the GDP growth rate and the inflation rate. Within the analysis being performed, a particular attention is paid to “the basic relationship of the economic growth” by emphasizing the possibilities as to a factorial analysis of the macroeconomic development, mainly as far as the Gross Domestic Product is concerned. At this point, the authors are introducing the mathematical relations, which are used for modeling the macroeconomic correlations, hence the strictness of the analysis being performed.
Estimating the encounter rate variance in distance sampling
Fewster, R.M.; Buckland, S.T.; Burnham, K.P.; Borchers, D.L.; Jupp, P.E.; Laake, J.L.; Thomas, L.
2009-01-01
The dominant source of variance in line transect sampling is usually the encounter rate variance. Systematic survey designs are often used to reduce the true variability among different realizations of the design, but estimating the variance is difficult and estimators typically approximate the variance by treating the design as a simple random sample of lines. We explore the properties of different encounter rate variance estimators under random and systematic designs. We show that a design-based variance estimator improves upon the model-based estimator of Buckland et al. (2001, Introduction to Distance Sampling. Oxford: Oxford University Press, p. 79) when transects are positioned at random. However, if populations exhibit strong spatial trends, both estimators can have substantial positive bias under systematic designs. We show that poststratification is effective in reducing this bias. © 2008, The International Biometric Society.
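A design-based encounter rate variance estimator of the kind discussed above treats the K lines as a simple random sample, weighting each line's encounter rate by its length. The sketch below follows the usual distance-sampling notation (n_k detections on line k of length l_k, total n over total length L); it is an illustrative form, not necessarily the exact estimator the paper recommends.

```python
import numpy as np

def encounter_rate_var(n_k, l_k):
    # Design-based estimator of var(n/L): length-weighted spread of the
    # per-line encounter rates n_k / l_k about the overall rate n / L.
    n_k = np.asarray(n_k, float)
    l_k = np.asarray(l_k, float)
    K, L, n = len(n_k), l_k.sum(), n_k.sum()
    return K / (L ** 2 * (K - 1)) * np.sum(l_k ** 2 * (n_k / l_k - n / L) ** 2)

v_equal = encounter_rate_var([2, 4, 6], [1.0, 2.0, 3.0])  # identical rates on all lines
v_mixed = encounter_rate_var([1, 3], [1.0, 1.0])
```

When every line has the same encounter rate the estimate is zero, so all of the estimated variance comes from between-line heterogeneity, which is exactly what a systematic design tries to suppress.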
Approximation errors during variance propagation
International Nuclear Information System (INIS)
Dinsmore, Stephen
1986-01-01
Risk and reliability analyses are often performed by constructing and quantifying large fault trees. The inputs to these models are component failure events whose probabilities of occurring are best represented as random variables. This paper examines the errors inherent in two approximation techniques used to calculate the top event's variance from the inputs' variances. Two sample fault trees are evaluated, and several three-dimensional plots illustrating the magnitude of the error over a wide range of input means and variances are given.
Phenotypic variance explained by local ancestry in admixed African Americans.
Shriner, Daniel; Bentley, Amy R; Doumatey, Ayo P; Chen, Guanjie; Zhou, Jie; Adeyemo, Adebowale; Rotimi, Charles N
2015-01-01
We surveyed 26 quantitative traits and disease outcomes to understand the proportion of phenotypic variance explained by local ancestry in admixed African Americans. After inferring local ancestry as the number of African-ancestry chromosomes at hundreds of thousands of genotyped loci across all autosomes, we used a linear mixed effects model to estimate the variance explained by local ancestry in two large independent samples of unrelated African Americans. We found that local ancestry at major and polygenic effect genes can explain up to 20% and 8% of phenotypic variance, respectively. These findings provide evidence that most but not all additive genetic variance is explained by genetic markers undifferentiated by ancestry. These results also inform the proportion of health disparities due to genetic risk factors and the magnitude of error in association studies not controlling for local ancestry.
The Genealogical Consequences of Fecundity Variance Polymorphism
Taylor, Jesse E.
2009-01-01
The genealogical consequences of within-generation fecundity variance polymorphism are studied using coalescent processes structured by genetic backgrounds. I show that these processes have three distinctive features. The first is that the coalescent rates within backgrounds are not jointly proportional to the infinitesimal variance, but instead depend only on the frequencies and traits of genotypes containing each allele. Second, the coalescent processes at unlinked loci are correlated with the genealogy at the selected locus; i.e., fecundity variance polymorphism has a genomewide impact on genealogies. Third, in diploid models, there are infinitely many combinations of fecundity distributions that have the same diffusion approximation but distinct coalescent processes; i.e., in this class of models, ancestral processes and allele frequency dynamics are not in one-to-one correspondence. Similar properties are expected to hold in models that allow for heritable variation in other traits that affect the coalescent effective population size, such as sex ratio or fecundity and survival schedules. PMID:19433628
Means and Variances without Calculus
Kinney, John J.
2005-01-01
This article gives a method of finding discrete approximations to continuous probability density functions and shows examples of its use, allowing students without calculus access to the calculation of means and variances.
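The idea can be sketched in a few lines: replace the continuous density with probabilities on a fine grid and compute the mean and variance as weighted sums rather than integrals. The grid limits and resolution below are arbitrary choices, shown for the standard normal density:

```python
import math

# Replace a continuous density with cell probabilities on a fine grid, then
# compute the mean and variance as weighted sums instead of integrals.
# Grid limits and resolution are arbitrary illustrative choices.
def discrete_moments(pdf, lo, hi, n=10001):
    step = (hi - lo) / (n - 1)
    xs = [lo + i * step for i in range(n)]
    ws = [pdf(x) * step for x in xs]              # approximate cell masses
    total = sum(ws)
    ws = [w / total for w in ws]                  # renormalize to sum to 1
    mean = sum(w * x for w, x in zip(ws, xs))
    var = sum(w * (x - mean) ** 2 for w, x in zip(ws, xs))
    return mean, var

normal_pdf = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
mean, var = discrete_moments(normal_pdf, -8.0, 8.0)
print(mean, var)  # close to the exact values 0 and 1
```

The same routine works for any density a student can evaluate pointwise, which is the article's point: no antiderivatives required.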
Schwenk, Nancy E.
1991-01-01
An overall perspective on trends in food consumption is presented. Nutrition awareness is at an all-time high; consumption is influenced by changes in disposable income, availability of convenience foods, smaller household size, and an increasing proportion of ethnic minorities in the population. (18 references) (LB)
Revision: Variance Inflation in Regression
Directory of Open Access Journals (Sweden)
D. R. Jensen
2013-01-01
the intercept; and (iv) variance deflation may occur, where ill-conditioned data yield smaller variances than their orthogonal surrogates. Conventional VIFs have all regressors linked, or none, which is often untenable in practice. Beyond these, our models enable the unlinking of regressors that can be unlinked, while preserving dependence among those intrinsically linked. Moreover, known collinearity indices are extended to encompass angles between subspaces of regressors. To reassess ill-conditioned data, we consider case studies ranging from elementary examples to data from the literature.
Modelling volatility by variance decomposition
DEFF Research Database (Denmark)
Amado, Cristina; Teräsvirta, Timo
In this paper, we propose two parametric alternatives to the standard GARCH model. They allow the variance of the model to have a smooth time-varying structure of either additive or multiplicative type. The suggested parameterisations describe both nonlinearity and structural change in the conditional variance.
Gini estimation under infinite variance
A. Fontanari (Andrea); N.N. Taleb (Nassim Nicholas); P. Cirillo (Pasquale)
2018-01-01
We study the problems related to the estimation of the Gini index in presence of a fat-tailed data generating process, i.e. one in the stable distribution class with finite mean but infinite variance (i.e. with tail index α∈(1,2)). We show that, in such a case, the Gini coefficient
Putative golden proportions as predictors of facial esthetics in adolescents.
Kiekens, Rosemie M A; Kuijpers-Jagtman, Anne Marie; van 't Hof, Martin A; van 't Hof, Bep E; Maltha, Jaap C
2008-10-01
In orthodontics, facial esthetics is assumed to be related to golden proportions apparent in the ideal human face. The aim of the study was to analyze the putative relationship between facial esthetics and golden proportions in white adolescents. Seventy-six adult laypeople evaluated sets of photographs of 64 adolescents on a visual analog scale (VAS) from 0 to 100. The facial esthetic value of each subject was calculated as a mean VAS score. Three observers recorded the position of 13 facial landmarks included in 19 putative golden proportions, based on the golden proportions as defined by Ricketts. The proportions and each proportion's deviation from the golden target (1.618) were calculated. This deviation was then related to the VAS scores. Only 4 of the 19 proportions had a significant negative correlation with the VAS scores, indicating that beautiful faces showed less deviation from the golden standard than less beautiful faces. Together, these variables explained only 16% of the variance. Few golden proportions have a significant relationship with facial esthetics in adolescents. The explained variance of these variables is too small to be of clinical importance.
Proportioning of light weight concrete
DEFF Research Database (Denmark)
Palmus, Lars
1996-01-01
Development of a method to determine the proportions of the raw materials in light weight concrete made with light expanded clay aggregate. The method is based on composite theory.
Variance based OFDM frame synchronization
Directory of Open Access Journals (Sweden)
Z. Fedra
2012-04-01
Full Text Available The paper deals with a new frame synchronization scheme for OFDM systems and calculates the complexity of this scheme. The scheme is based on computing the variance of the detection window. The variance is computed at two delayed times, so a modified Early-Late loop is used for frame position detection. The proposed algorithm handles different variants of OFDM parameters, including the guard interval and cyclic prefix, and has good properties regarding the choice of the algorithm's parameters, since they may be chosen within a wide range without strongly influencing system performance. The functionality of the proposed algorithm has been verified in a development environment using universal software radio peripheral (USRP) hardware.
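As a rough illustration of the idea (not the authors' implementation), the sketch below slides a detection window over a toy burst and flags the frame start where the windowed variance first exceeds a threshold; the window length, threshold, and signal model are all assumptions:

```python
import random

random.seed(1)
# Toy burst: low-power noise followed by a higher-variance OFDM-like frame
# starting at sample 200. The frame start is flagged where the variance of a
# sliding detection window first exceeds a threshold; the window length,
# threshold, and signal model are assumptions, not the paper's parameters.
noise = [random.gauss(0, 0.1) for _ in range(200)]
frame = [random.gauss(0, 1.0) for _ in range(200)]
signal = noise + frame

def window_variance(x, start, width):
    seg = x[start:start + width]
    m = sum(seg) / width
    return sum((s - m) ** 2 for s in seg) / width

WIDTH, THRESHOLD = 32, 0.3
detected = next(i for i in range(len(signal) - WIDTH)
                if window_variance(signal, i, WIDTH) > THRESHOLD)
print(detected)  # near the true boundary at sample 200
```

An Early-Late refinement, as in the paper, would then compare the variance at two delayed positions around this coarse estimate to lock onto the frame position.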
Variance decomposition in stochastic simulators.
Le Maître, O P; Knio, O M; Moraes, A
2015-06-28
This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
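The core idea, that each reaction channel's share of the output variance can be isolated by conditioning on its driving Poisson stream, can be illustrated on a toy one-step birth-death model. The rates and the variance-share computation below are illustrative assumptions, not the paper's algorithm:

```python
import math
import random

random.seed(0)
# Toy "simulator": net change X = births - deaths over one period, driven by
# two independent Poisson reaction channels with rates A and B (illustrative
# values). Conditioning on one channel isolates its variance share, a
# first-order Sobol index; here, exactly, Var(X) = A + B and the births
# channel contributes A / (A + B).
A, B, N = 4.0, 1.0, 200_000

def poisson(lam):
    # Knuth's multiplication method; adequate for small rates.
    limit, k, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= random.random()
        if prod <= limit:
            return k
        k += 1

births = [poisson(A) for _ in range(N)]
deaths = [poisson(B) for _ in range(N)]
x = [b - d for b, d in zip(births, deaths)]

mean_x = sum(x) / N
total_var = sum((v - mean_x) ** 2 for v in x) / N
# E[X | births] = births - B, so Var(E[X | births]) equals Var(births).
mean_b = sum(births) / N
var_births = sum((b - mean_b) ** 2 for b in births) / N
print(var_births / total_var)  # close to A / (A + B) = 0.8
```

In the paper's setting the channels interact through the state-dependent propensities, so the decomposition requires the Sobol-Hoeffding machinery rather than this additive special case.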
Li, Yang; Pirvu, Traian A
2011-01-01
This paper considers the mean-variance portfolio management problem. We examine portfolios that contain both primary and derivative securities. The challenge in this context is due to the portfolio's nonlinearities; the delta-gamma approximation is employed to overcome it. Thus, the optimization problem is reduced to a well-posed quadratic program. The methodology developed in this paper can also be applied to pricing and hedging in incomplete markets.
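A sketch of the delta-gamma idea: the position's P&L is approximated by delta*dS + 0.5*gamma*dS², which has a closed-form mean and variance for a normal move in the underlying, and those two moments are what a mean-variance treatment needs. The Greek values and move size below are hypothetical:

```python
import random
import statistics

random.seed(7)
# Delta-gamma approximation of a position's one-period P&L:
#   dV ≈ delta*dS + 0.5*gamma*dS**2
# For dS ~ N(0, sigma**2), the approximate P&L has mean 0.5*gamma*sigma**2
# and variance delta**2*sigma**2 + 0.5*gamma**2*sigma**4, which a simulation
# confirms. The Greek values below are hypothetical, not from the paper.
delta, gamma, sigma = 0.6, 0.05, 2.0

pnl = [delta * ds + 0.5 * gamma * ds * ds
       for ds in (random.gauss(0, sigma) for _ in range(200_000))]

print(statistics.mean(pnl))      # ≈ 0.5 * gamma * sigma**2 = 0.1
print(statistics.variance(pnl))  # ≈ 1.44 + 0.02 = 1.46
```

The quadratic dS² term is what makes a derivatives portfolio nonlinear in the underlying; replacing the true payoff with this quadratic is what reduces the optimization to a quadratic program.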
Numerical simulation of variance of solar radiation and its influence on wheat growth
Zhang, Xuefen; Wang, Chunyi; Du, Zixuan; Zhai, Wei
2007-09-01
The growth of crops is directly related to solar radiation, whose variations influence crop photosynthesis and growth momentum. This paper takes Zhengzhou, located in the Huanghuai farmland ecological system of China, as an example to analyze the variation patterns of total solar radiation, direct radiation and diffuse radiation. Linear trend fitting shows that total radiation (TR) decreases overall at a rate of 1.6482 J/m2, a decline that has been particularly apparent in recent years, with a period of 7 to 16 years; diffuse radiation (DF) tends to increase at a rate of 15.149 J/m2 with a period of 20 years; and direct radiation (DR) tends to decrease at a rate of 15.843 J/m2 without an apparent period. Total radiation during the wheat growth period has been decreasing since 1980. After modifying the relevant parameters in the Carbon and Nitrogen Biogeochemistry in Agroecosystems Model (DNDC) and simulating the influence of solar radiation variations on the development phase (DP), leaf area index (LAI) and grain weight (GRNWT) during the wheat growth period, it is found that solar radiation is positively correlated with LAI and grain weight but not apparently related to the development phase. The change in total radiation delays the maximization of wheat LAI, reduces LAI before winter, has no apparent effect in winter, and decreases LAI from the jointing period to the filling period; it has no apparent influence on grain formation at the early stage, slows the weight increase of grains during the filling period, and accelerates it at the end of the filling period. Variation of radiation does not much affect the DP of wheat.
Proportional Symbol Mapping in R
Directory of Open Access Journals (Sweden)
Susumu Tanimura
2006-01-01
Full Text Available Visualization of spatial data on a map aids not only in data exploration but also in communication to impart spatial conception or ideas to others. Although recent cartographic functions in R are rapidly becoming richer, proportional symbol mapping, which is one of the common mapping approaches, has not been packaged thus far. Based on the theories of proportional symbol mapping developed in cartography, the authors developed some functions for proportional symbol mapping using R, including mathematical and perceptual scaling. An example of these functions demonstrated the new expressive power and options available in R, particularly for the visualization of conceptual point data.
Optical fusions and proportional syntheses
Albert-Vanel, Michel
2002-06-01
A tragic error is being made in the literature on color when dealing with optical fusions. They are still considered to be additive in nature, whereas experience shows somewhat different results. The goal of this presentation is to show that fusions are, in fact, 'proportional' in nature, tending to be additive or subtractive depending on each individual case. Pointillist paintings done in the manner of Seurat, or the spinning-disc experiment, can highlight this intermediate sector of the proportional. Let us therefore examine more closely what actually occurs, by reviewing additive, subtractive and proportional syntheses.
Simultaneous Monte Carlo zero-variance estimates of several correlated means
International Nuclear Information System (INIS)
Booth, T.E.
1998-01-01
Zero-variance biasing procedures are normally associated with estimating a single mean or tally. In particular, a zero-variance solution occurs when every sampling is made proportional to the product of the true probability multiplied by the expected score (importance) subsequent to the sampling; i.e., the zero-variance sampling is importance weighted. Because every tally has a different importance function, a zero-variance biasing for one tally cannot be a zero-variance biasing for another tally (unless the tallies are perfectly correlated). The way to optimize the situation when the required tallies have positive correlation is shown
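The importance-weighted construction can be demonstrated on a tiny discrete example: sampling each state with probability proportional to the true probability times its score makes every weighted sample return the mean exactly. The states and scores below are invented for illustration:

```python
import random

random.seed(3)
# Zero-variance sampling for a single tally: draw each state with probability
# proportional to p(x) * f(x) (true probability times expected score) and
# weight each draw by p/q. Every sample then returns exactly the mean, so
# the estimator has zero variance. Constructing q requires knowing the
# answer, which is why practical schemes only approximate the importance
# function. States and scores here are invented for illustration.
states = [1, 2, 3, 4]
p = [0.4, 0.3, 0.2, 0.1]        # true probabilities
f = [1.0, 2.0, 5.0, 10.0]       # scores (tallies)
mu = sum(pi * fi for pi, fi in zip(p, f))
q = [pi * fi / mu for pi, fi in zip(p, f)]  # zero-variance biasing

draws = random.choices(states, weights=q, k=5)
estimates = [p[s - 1] * f[s - 1] / q[s - 1] for s in draws]
print(estimates)  # every entry equals mu = 3.0
```

Note that q is built from the particular tally f: a second tally g with a different importance function would need a different biasing, which is the correlated-tallies issue the abstract addresses.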
Proportional counter end effects eliminator
International Nuclear Information System (INIS)
Meekins, J.F.
1976-01-01
An improved gas-filled proportional counter which includes a resistor network connected between the anode and cathode at the ends of the counter in order to eliminate "end effects" is described. 3 Claims, 2 Drawing Figures
Electronics for proportional drift tubes
International Nuclear Information System (INIS)
Fremont, G.; Friend, B.; Mess, K.H.; Schmidt-Parzefall, W.; Tarle, J.C.; Verweij, H.; Geske, K.; Riege, H.; Schuett, J.; Semenov, Y. (CERN-Hamburg-Amsterdam-Rome-Moscow Collaboration)
1980-01-01
An electronic system for the read-out of a large number of proportional drift tubes (16,000) has been designed. This system measures deposited charge and drift-time of the charge of a particle traversing a proportional drift tube. A second event can be accepted during the read-out of the system. Up to 40 typical events can be collected and buffered before a data transfer to a computer is necessary. (orig.)
Analogical proportions: another logical view
Prade, Henri; Richard, Gilles
This paper investigates the logical formalization of a restricted form of analogical reasoning based on analogical proportions, i.e. statements of the form a is to b as c is to d. Starting from a naive set-theoretic interpretation, we highlight the existence of two noticeable companion proportions: one states that a is to b the converse of what c is to d (reverse analogy), while the other, called paralogical proportion, expresses that what a and b have in common, c and d have also. We identify the characteristic postulates of the three types of proportions and examine their consequences from an abstract viewpoint. We further study the properties of the set-theoretic interpretation and of the Boolean logic interpretation, and we shed further light on the role of permutations in the modeling of the three types of proportions. Finally, we address the use of these proportions as a basis for inference in a propositional setting, and relate it to more general schemes of analogical reasoning. The differences between analogy, reverse analogy, and paralogy are further emphasized in a three-valued setting, which is also briefly presented.
Confidence Interval Approximation For Treatment Variance In ...
African Journals Online (AJOL)
In a random effects model with a single factor, variation is partitioned into two as residual error variance and treatment variance. While a confidence interval can be imposed on the residual error variance, it is not possible to construct an exact confidence interval for the treatment variance. This is because the treatment ...
Variance estimation in the analysis of microarray data
Wang, Yuedong
2009-04-01
Microarrays are one of the most widely used high throughput technologies. One of the main problems in the area is that conventional estimates of the variances that are required in the t-statistic and other statistics are unreliable owing to the small number of replications. Various methods have been proposed in the literature to overcome this lack of degrees of freedom problem. In this context, it is commonly observed that the variance increases proportionally with the intensity level, which has led many researchers to assume that the variance is a function of the mean. Here we concentrate on estimation of the variance as a function of an unknown mean in two models: the constant coefficient of variation model and the quadratic variance-mean model. Because the means are unknown and estimated with few degrees of freedom, naive methods that use the sample mean in place of the true mean are generally biased because of the errors-in-variables phenomenon. We propose three methods for overcoming this bias. The first two are variations on the theme of the so-called heteroscedastic simulation-extrapolation estimator, modified to estimate the variance function consistently. The third class of estimators is entirely different, being based on semiparametric information calculations. Simulations show the power of our methods and their lack of bias compared with the naive method that ignores the measurement error. The methodology is illustrated by using microarray data from leukaemia patients.
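The errors-in-variables point can be seen in a small simulation under the constant coefficient-of-variation model: with only a few replicates, plugging the noisy sample mean into the model biases the naive estimate of cv² upward. All settings below are illustrative, and the fix shown is none of the paper's three estimators:

```python
import random
import statistics

random.seed(5)
# Constant coefficient-of-variation model: Var(Y) = (cv * mean)**2. With
# only a few replicates per gene, plugging the sample mean into the model
# gives a biased estimate of cv**2 (the errors-in-variables phenomenon);
# the simulation shows the naive estimate exceeds the true value.
# All settings are illustrative.
true_cv, reps, genes = 0.2, 3, 20_000
ratios = []
for _ in range(genes):
    mu = random.uniform(50, 500)
    ys = [random.gauss(mu, true_cv * mu) for _ in range(reps)]
    xbar = statistics.mean(ys)
    s2 = statistics.variance(ys)       # unbiased for (true_cv * mu)**2
    ratios.append(s2 / xbar ** 2)      # naive per-gene cv**2 estimate
naive_cv2 = statistics.mean(ratios)
print(naive_cv2, true_cv ** 2)  # the naive estimate is biased upward
```

The bias comes from E[1/x̄²] > 1/µ² when x̄ is noisy; the paper's simulation-extrapolation and semiparametric estimators are designed to remove exactly this effect.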
Computer simulation of gain fluctuations in proportional counters
International Nuclear Information System (INIS)
Demir, Nelgun; Tapan, Ilhan
2004-01-01
A computer simulation code has been developed in order to examine the fluctuation in gas amplification in wire proportional counters, which are common in detector applications in particle physics experiments. The magnitude of the variance in the gain dominates the statistical portion of the energy resolution. In order to compare simulation and experimental results, the gain and its variation have been calculated numerically for the well-known Aleph Inner Tracking Detector geometry. The results show that the bias voltage has a strong influence on the variance in the gain. The simulation calculations are in good agreement with experimental results. (authors)
Development of multiwire proportional chambers
Charpak, G
1969-01-01
It has happened quite often in the history of science that theoreticians, confronted with some major difficulty, have successfully gone back thirty years to look at ideas that had then been thrown overboard. But it is rare that experimentalists go back thirty years to look again at equipment which had become out-dated. This is what Charpak and his colleagues did to emerge with the 'multiwire proportional chamber', which has several new features making it a very useful addition to the armoury of particle detectors. In the 1930s, ion chambers, Geiger-Müller counters and proportional counters were vital pieces of equipment in nuclear physics research. Other types of detectors have since largely replaced them, but now the proportional counter, in a new array, is making a comeback.
Bayesian inference on proportional elections.
Directory of Open Access Journals (Sweden)
Gabriel Hideki Vatanabe Brunello
Full Text Available Polls for majoritarian voting systems usually show estimates of the percentage of votes for each candidate. However, proportional vote systems do not necessarily guarantee the candidate with the most percentage of votes will be elected. Thus, traditional methods used in majoritarian elections cannot be applied on proportional elections. In this context, the purpose of this paper was to perform a Bayesian inference on proportional elections considering the Brazilian system of seats distribution. More specifically, a methodology to answer the probability that a given party will have representation on the chamber of deputies was developed. Inferences were made on a Bayesian scenario using the Monte Carlo simulation technique, and the developed methodology was applied on data from the Brazilian elections for Members of the Legislative Assembly and Federal Chamber of Deputies in 2010. A performance rate was also presented to evaluate the efficiency of the methodology. Calculations and simulations were carried out using the free R statistical software.
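A sketch of the kind of Monte Carlo computation described, assuming D'Hondt divisors and a flat Dirichlet prior over vote shares; the poll counts, seat total, and party of interest are invented for illustration, not the Brazilian 2010 data:

```python
import random

random.seed(11)
# Monte Carlo estimate of the probability that a party wins at least one
# seat under D'Hondt proportional allocation. The poll counts, seat total,
# and flat Dirichlet prior are invented for illustration.
poll = {"A": 430, "B": 310, "C": 180, "D": 80}   # hypothetical poll counts
SEATS, SIMS = 10, 5_000

def dhondt(shares, seats):
    won = {p: 0 for p in shares}
    for _ in range(seats):
        best = max(shares, key=lambda p: shares[p] / (won[p] + 1))
        won[best] += 1
    return won

hits = 0
for _ in range(SIMS):
    # Draw vote shares from the Dirichlet(counts + 1) posterior via gammas.
    g = {p: random.gammavariate(c + 1, 1.0) for p, c in poll.items()}
    total = sum(g.values())
    shares = {p: v / total for p, v in g.items()}
    hits += dhondt(shares, SEATS)["D"] >= 1
print(hits / SIMS)  # estimated posterior probability that party D is seated
```

This captures the abstract's point that a share of the vote does not guarantee a seat: the probability of representation must be simulated through the seat-allocation rule, not read off the poll percentages.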
Saving Money Using Proportional Reasoning
de la Cruz, Jessica A.; Garney, Sandra
2016-01-01
It is beneficial for students to discover intuitive strategies, as opposed to the teacher presenting strategies to them. Certain proportional reasoning tasks are more likely to elicit intuitive strategies than other tasks. The strategies that students are apt to use when approaching a task, as well as the likelihood of a student's success or…
Disease proportions attributable to environment
Directory of Open Access Journals (Sweden)
Vineis Paolo
2007-11-01
Full Text Available Population disease proportions attributable to various causal agents are popular as they present a simplified view of the contribution of each agent to the disease load. However they are only summary figures that may be easily misinterpreted or over-interpreted even when the causal link between an exposure and an effect is well established. This commentary discusses several issues surrounding the estimation of attributable proportions, particularly with reference to environmental causes of cancers, and critically examines two recently published papers. These issues encompass potential biases as well as the very definition of environment and of environmental agent. The latter aspect is not just a semantic question but carries implications for the focus of preventive actions, whether centred on the material and social environment or on single individuals.
How does variance in fertility change over the demographic transition?
Hruschka, Daniel J; Burger, Oskar
2016-04-19
Most work on the human fertility transition has focused on declines in mean fertility. However, understanding changes in the variance of reproductive outcomes can be equally important for evolutionary questions about the heritability of fertility, individual determinants of fertility and changing patterns of reproductive skew. Here, we document how variance in completed fertility among women (45-49 years) differs across 200 surveys in 72 low- to middle-income countries where fertility transitions are currently in progress at various stages. Nearly all (91%) of samples exhibit variance consistent with a Poisson process of fertility, which places systematic, and often severe, theoretical upper bounds on the proportion of variance that can be attributed to individual differences. In contrast to the pattern of total variance, these upper bounds increase from high- to mid-fertility samples, then decline again as samples move from mid to low fertility. Notably, the lowest fertility samples often deviate from a Poisson process. This suggests that as populations move to low fertility their reproduction shifts from a rate-based process to a focus on an ideal number of children. We discuss the implications of these findings for predicting completed fertility from individual-level variables. © 2016 The Author(s).
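The Poisson bound mentioned above can be operationalized simply: if completed fertility is Poisson given a woman-specific rate, total variance splits as the mean plus the between-woman variance, so the share attributable to stable individual differences is at most 1 − mean/variance. A sketch with hypothetical counts:

```python
import statistics

# If completed fertility is Poisson given each woman's rate, then
# Var(F) = E[rate] + Var(rate), so the proportion of variance attributable
# to stable individual differences is at most 1 - mean/variance.
# The counts below are a hypothetical sample, not the surveys' data.
completed_fertility = [0, 1, 2, 2, 3, 3, 4, 4, 5, 6, 6, 8]
m = statistics.mean(completed_fertility)
v = statistics.pvariance(completed_fertility)
upper_bound = max(0.0, 1 - m / v)
print(round(m, 2), round(v, 2), round(upper_bound, 2))  # → 3.67 4.89 0.25
```

When the sample variance is at or below the mean (underdispersion), the bound is zero, which is the sense in which a Poisson-like sample leaves little room for individual determinants of fertility.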
PEP quark search proportional chambers
Energy Technology Data Exchange (ETDEWEB)
Parker, S I; Harris, F; Karliner, I; Yount, D [Hawaii Univ., Honolulu (USA); Ely, R; Hamilton, R; Pun, T [California Univ., Berkeley (USA). Lawrence Berkeley Lab.; Guryn, W; Miller, D; Fries, R [Northwestern Univ., Evanston, IL (USA)
1981-04-01
Proportional chambers are used in the PEP Free Quark Search to identify and remove possible background sources such as particles traversing the edges of counters, to permit geometric corrections to the dE/dx and TOF information from the scintillator and Cerenkov counters, and to look for possible high cross section quarks. The present beam pipe has a thickness of 0.007 interaction lengths (λ_i) and is followed in both arms, each with 45° ≤ θ ≤ 135°, Δφ = 90°, by 5 proportional chambers, each 0.0008 λ_i thick with 32 channels of pulse height readout, and by 3 thin scintillator planes, each 0.003 λ_i thick. Following this thin front end, each arm of the detector has 8 layers of scintillator (one with scintillating light pipes) interspersed with 4 proportional chambers and a layer of lucite Cerenkov counters. Both the calculated ion statistics and measurements using He-CH4 gas in a test chamber indicate that the chamber efficiencies should be >98% for q=1/3. The Landau spread measured in the test was equal to that observed for normal q=1 traversals. One scintillator plane and thin chamber in each arm will have an extra set of ADCs with a wide gate bracketing the normal one, so timing errors and tails of earlier pulses should not produce fake quarks.
Speed Variance and Its Influence on Accidents.
Garber, Nicholas J.; Gadirau, Ravi
A study was conducted to investigate the traffic engineering factors that influence speed variance and to determine to what extent speed variance affects accident rates. Detailed analyses were carried out to relate speed variance with posted speed limit, design speeds, and other traffic variables. The major factor identified was the difference…
Variance function estimation for immunoassays
International Nuclear Information System (INIS)
Raab, G.M.; Thompson, R.; McKenzie, I.
1980-01-01
A computer program is described which implements a recently described, modified likelihood method of determining an appropriate weighting function to use when fitting immunoassay dose-response curves. The relationship between the variance of the response and its mean value is assumed to have an exponential form, and the best fit to this model is determined from the within-set variability of many small sets of repeated measurements. The program estimates the parameter of the exponential function with its estimated standard error, and tests the fit of the experimental data to the proposed model. Output options include a list of the actual and fitted standard deviation of the set of responses, a plot of actual and fitted standard deviation against the mean response, and an ordered list of the 10 sets of data with the largest ratios of actual to fitted standard deviation. The program has been designed for a laboratory user without computing or statistical expertise. The test-of-fit has proved valuable for identifying outlying responses, which may be excluded from further analysis by being set to negative values in the input file. (Auth.)
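The program fits the variance-mean relationship by a modified likelihood method; as a simpler sketch of the same task, fitting a power-law variance function to many small replicate sets, ordinary least squares on the log scale conveys the idea. The power-law form, settings, and OLS fit are assumptions for illustration, not the program's algorithm:

```python
import math
import random
import statistics

random.seed(9)
# Illustrative fit of a variance-mean relationship from many small sets of
# repeated measurements: responses are generated with sd = mean**theta and
# theta is recovered by least squares on log(sd) vs log(mean). This OLS
# sketch stands in for the modified likelihood method; all settings are
# assumptions.
theta_true, sets, reps = 0.8, 400, 4
xs, ys = [], []
for _ in range(sets):
    mu = random.uniform(10, 1000)
    obs = [random.gauss(mu, mu ** theta_true) for _ in range(reps)]
    xs.append(math.log(statistics.mean(obs)))
    ys.append(math.log(statistics.stdev(obs)))
xbar, ybar = statistics.mean(xs), statistics.mean(ys)
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
print(slope)  # close to theta_true = 0.8
```

The fitted exponent then defines the weighting function 1/var(mean) used when fitting the dose-response curve, which is the use case the abstract describes.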
Incisors’ proportions in smile esthetics
Alsulaimani, Fahad F; Batwa, Waeil
2013-01-01
Aims: To determine whether alteration of the maxillary central and lateral incisors’ length and width, respectively, would affect perceived smile esthetics and to validate the most esthetic length and width, respectively, for the central and lateral incisors. Materials and Methods: Photographic manipulation was undertaken to produce two sets of photographs, each set of four photographs showing the altered width of the lateral incisor and length of the central length. The eight produced photographs were assessed by laypeople, dentists and orthodontists. Results: Alteration in the incisors’ proportion affected the relative smile attractiveness for laypeople (n=124), dentists (n=115) and orthodontists (n=68); dentists and orthodontists did not accept lateral width reduction of more than 0.5 mm (P<0.01), which suggests that the lateral to central incisor width ratio ranges from 54% to 62%. However, laypeople did not accept lateral width reduction of more than 1 mm (P<0.01), widening the range to be from 48% to 62%. All groups had zero tolerance for changes in central crown length (P<0.01). Conclusion: All participants recognized that the central incisors’ length changes. For lateral incisors, laypeople were more tolerant than dentists and orthodontists. This suggests that changing incisors’ proportions affects the relative smile attractiveness. PMID:24987650
Dominance genetic variance for traits under directional selection in Drosophila serrata.
Sztepanacz, Jacqueline L; Blows, Mark W
2015-05-01
In contrast to our growing understanding of patterns of additive genetic variance in single- and multi-trait combinations, the relative contribution of nonadditive genetic variance, particularly dominance variance, to multivariate phenotypes is largely unknown. While mechanisms for the evolution of dominance genetic variance have been, and to some degree remain, subject to debate, the pervasiveness of dominance is widely recognized and may play a key role in several evolutionary processes. Theoretical and empirical evidence suggests that the contribution of dominance variance to phenotypic variance may increase with the correlation between a trait and fitness; however, direct tests of this hypothesis are few. Using a multigenerational breeding design in an unmanipulated population of Drosophila serrata, we estimated additive and dominance genetic covariance matrices for multivariate wing-shape phenotypes, together with a comprehensive measure of fitness, to determine whether there is an association between directional selection and dominance variance. Fitness, a trait unequivocally under directional selection, had no detectable additive genetic variance, but significant dominance genetic variance contributing 32% of the phenotypic variance. For single and multivariate morphological traits, however, no relationship was observed between trait-fitness correlations and dominance variance. A similar proportion of additive and dominance variance was found to contribute to phenotypic variance for single traits, and double the amount of additive compared to dominance variance was found for the multivariate trait combination under directional selection. These data suggest that for many fitness components a positive association between directional selection and dominance genetic variance may not be expected. Copyright © 2015 by the Genetics Society of America.
Evolution of Genetic Variance during Adaptive Radiation.
Walter, Greg M; Aguirre, J David; Blows, Mark W; Ortiz-Barrientos, Daniel
2018-04-01
Genetic correlations between traits can concentrate genetic variance into fewer phenotypic dimensions that can bias evolutionary trajectories along the axis of greatest genetic variance and away from optimal phenotypes, constraining the rate of evolution. If genetic correlations limit adaptation, rapid adaptive divergence between multiple contrasting environments may be difficult. However, if natural selection increases the frequency of rare alleles after colonization of new environments, an increase in genetic variance in the direction of selection can accelerate adaptive divergence. Here, we explored adaptive divergence of an Australian native wildflower by examining the alignment between divergence in phenotype mean and divergence in genetic variance among four contrasting ecotypes. We found divergence in mean multivariate phenotype along two major axes represented by different combinations of plant architecture and leaf traits. Ecotypes also showed divergence in the level of genetic variance in individual traits and the multivariate distribution of genetic variance among traits. Divergence in multivariate phenotypic mean aligned with divergence in genetic variance, with much of the divergence in phenotype among ecotypes associated with changes in trait combinations containing substantial levels of genetic variance. Overall, our results suggest that natural selection can alter the distribution of genetic variance underlying phenotypic traits, increasing the amount of genetic variance in the direction of natural selection and potentially facilitating rapid adaptive divergence during an adaptive radiation.
Constant Proportion Debt Obligations (CPDOs)
DEFF Research Database (Denmark)
Cont, Rama; Jessen, Cathrine
2012-01-01
Constant Proportion Debt Obligations (CPDOs) are structured credit derivatives that generate high coupon payments by dynamically leveraging a position in an underlying portfolio of investment-grade index default swaps. CPDO coupons and principal notes received high initial credit ratings from the major rating agencies, based on complex models for the joint transition of ratings and spreads for all names in the underlying portfolio. We propose a parsimonious model for analysing the performance of CPDO strategies using a top-down approach that captures the essential risk factors of the CPDO. Our analysis shows that the probability of loss can be made arbitrarily small, and thus the credit rating arbitrarily high, by increasing leverage, but the ratings obtained strongly depend on assumptions on the credit environment (high spread or low spread). More importantly, CPDO loss distributions are found to exhibit a wide range of tail risk measures.
Trend Analysis Using Microcomputers.
Berger, Carl F.
A trend analysis statistical package and additional programs for the Apple microcomputer are presented. They illustrate strategies of data analysis suitable to the graphics and processing capabilities of the microcomputer. The programs analyze data sets using examples of: (1) analysis of variance with multiple linear regression; (2) exponential…
Efficient Cardinality/Mean-Variance Portfolios
Brito, R. Pedro; Vicente, Luís Nunes
2014-01-01
We propose a novel approach to handle cardinality in portfolio selection, by means of a biobjective cardinality/mean-variance problem, allowing the investor to analyze the efficient tradeoff between return-risk and number of active positions. Recent progress in multiobjective optimization without derivatives allows us to robustly compute (in-sample) the whole cardinality/mean-variance efficient frontier, for a variety of data sets and mean-variance models.
The phenotypic variance gradient - a novel concept.
Pertoldi, Cino; Bundgaard, Jørgen; Loeschcke, Volker; Barker, James Stuart Flinton
2014-11-01
Evolutionary ecologists commonly use reaction norms, which show the range of phenotypes produced by a set of genotypes exposed to different environments, to quantify the degree of phenotypic variance and the magnitude of plasticity of morphometric and life-history traits. Significant differences among the values of the slopes of the reaction norms are interpreted as significant differences in phenotypic plasticity, whereas significant differences among phenotypic variances (variance or coefficient of variation) are interpreted as differences in the degree of developmental instability or canalization. We highlight some potential problems with this approach to quantifying phenotypic variance and suggest a novel and more informative way to plot reaction norms: namely "a plot of log (variance) on the y-axis versus log (mean) on the x-axis, with a reference line added". This approach gives an immediate impression of how the degree of phenotypic variance varies across an environmental gradient, taking into account the consequences of the scaling effect of the variance with the mean. The evolutionary implications of the variation in the degree of phenotypic variance, which we call a "phenotypic variance gradient", are discussed together with its potential interactions with variation in the degree of phenotypic plasticity and canalization.
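The suggested plot can be sketched with hypothetical reaction-norm data (all numbers invented): the slope of log(variance) against log(mean) across environments summarizes the phenotypic variance gradient, and a reference slope of 2 corresponds to a constant coefficient of variation.

```python
import numpy as np

# hypothetical reaction-norm data: rows = genotypes, columns = environments
phenotypes = np.array([
    [10.1, 12.3, 15.2, 19.8],
    [ 9.7, 13.0, 14.8, 21.0],
    [10.5, 11.8, 16.1, 18.9],
    [ 9.9, 12.6, 15.5, 20.3],
])

env_mean = phenotypes.mean(axis=0)          # mean phenotype per environment
env_var = phenotypes.var(axis=0, ddof=1)    # phenotypic variance per environment

# phenotypic variance gradient: slope of log(variance) on log(mean);
# a slope of 2 is the reference line for a constant coefficient of variation
slope, intercept = np.polyfit(np.log(env_mean), np.log(env_var), 1)
print(slope)
```

Plotting log(env_var) against log(env_mean) with the fitted and reference lines gives the visual comparison the authors propose.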
Hydrograph variances over different timescales in hydropower production networks
Zmijewski, Nicholas; Wörman, Anders
2016-08-01
The operation of water reservoirs involves a spectrum of timescales based on the distribution of stream flow travel times between reservoirs, as well as the technical, environmental, and social constraints imposed on the operation. In this research, a hydrodynamically based description of the flow between hydropower stations was implemented to study the relative importance of wave diffusion on the spectrum of hydrograph variance in a regulated watershed. Using spectral decomposition of the effluence hydrograph of a watershed, an exact expression of the variance in the outflow response was derived as a function of the trends of hydraulic and geomorphologic dispersion and management of production and reservoirs. We show that the power spectra of the involved time series follow nearly fractal patterns, which facilitates examination of the relative importance of wave diffusion and possible changes in production demand on the outflow spectrum. The exact spectral solution can also identify statistical bounds of future demand patterns due to limitations in storage capacity. The impact of the hydraulic description of the stream flow on the reservoir discharge was examined for a given power demand in the River Dalälven, Sweden, as a function of a stream flow Peclet number. The regulation of hydropower production on the River Dalälven generally increased the short-term variance in the effluence hydrograph, whereas wave diffusion decreased the short-term variance (over short periods, where the spectrum approaches white noise) as a result of current production objectives.
Position-sensitive proportional counter
International Nuclear Information System (INIS)
Kopp, M.K.
1980-01-01
A position-sensitive proportional counter circuit uses a conventional (low-resistance, metal-wire anode) counter for spatial resolution of an ionizing event along the anode, which functions as an RC line. A pair of preamplifiers at the anode ends act as stabilized active-capacitance loads, each comprising a series-feedback, low-noise amplifier and a unity-gain, shunt-feedback amplifier whose output is connected through a feedback capacitor to the series-feedback amplifier input. The stabilized capacitance loading of the anode allows distributed RC-line position encoding and subsequent time-difference decoding by sensing the difference in rise times of pulses at the anode ends, where the difference is primarily in response to the distributed capacitance along the anode. This allows the use of lower-resistance wire anodes for spatial radiation detection, which simplifies the counter construction and handling of the anodes, and stabilizes the anode resistivity at high count rates (>10^6 counts/sec). (author)
Replica approach to mean-variance portfolio optimization
Varga-Haszonits, Istvan; Caccioli, Fabio; Kondor, Imre
2016-12-01
We consider the problem of mean-variance portfolio optimization for a generic covariance matrix subject to the budget constraint and the constraint for the expected return, with the application of the replica method borrowed from the statistical physics of disordered systems. We find that the replica symmetry of the solution does not need to be assumed, but emerges as the unique solution of the optimization problem. We also check the stability of this solution and find that the eigenvalues of the Hessian are positive for r = N/T below a critical value. The optimal in-sample variance is found to vanish at the critical point, inversely proportionally to the divergent estimation error.
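The shrinking of the in-sample variance as the aspect ratio r = N/T grows can be illustrated numerically. A minimal sketch (assumptions: iid unit-variance returns, so the true minimum variance of a fully invested portfolio is 1/N; the asset count of 50 and the chosen aspect ratios are arbitrary). The in-sample minimum variance falls well below the true value as r increases, roughly by a factor (1 − r).

```python
import numpy as np

def in_sample_min_variance(n_assets, n_obs, rng):
    """Minimum in-sample variance of a fully invested portfolio (w·1 = 1),
    computed from the sample covariance of n_obs observations."""
    X = rng.standard_normal((n_obs, n_assets))  # iid returns, true covariance = I
    S = np.cov(X, rowvar=False)
    ones = np.ones(n_assets)
    w = np.linalg.solve(S, ones)
    w /= ones @ w                               # enforce the budget constraint
    return float(w @ S @ w)                     # equals 1 / (1' S^-1 1)

rng = np.random.default_rng(42)
# in-sample variance for increasing aspect ratio r = N/T
variances = {r: in_sample_min_variance(50, int(50 / r), rng) for r in (0.2, 0.5, 0.8)}
print(variances)
```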
Variance components for body weight in Japanese quails (Coturnix japonica)
Directory of Open Access Journals (Sweden)
RO Resende
2005-03-01
The objective of this study was to estimate the variance components for body weight in Japanese quails by Bayesian procedures. The body weight at hatch (BWH) and at 7 (BW07), 14 (BW14), 21 (BW21) and 28 days of age (BW28) of 3,520 quails was recorded from August 2001 to June 2002. A multiple-trait animal model with additive genetic, maternal environment and residual effects was implemented by Gibbs sampling methodology. A single Gibbs chain of 80,000 rounds was generated by the program MTGSAM (Multiple Trait Gibbs Sampling in Animal Model). Normal and inverted Wishart distributions were used as prior distributions for the random effects and the variance components, respectively. Variance components were estimated based on the 500 samples left after discarding 30,000 rounds as burn-in and thinning at intervals of 100 rounds. The posterior means of additive genetic variance components were 0.15; 4.18; 14.62; 27.18 and 32.68; the posterior means of maternal environment variance components were 0.23; 1.29; 2.76; 4.12 and 5.16; and the posterior means of residual variance components were 0.084; 6.43; 22.66; 31.21 and 30.85, at hatch and at 7, 14, 21 and 28 days of age, respectively. The posterior means of heritability were 0.33; 0.35; 0.36; 0.43 and 0.47 at hatch and at 7, 14, 21 and 28 days of age, respectively. These results indicate that heritability increased with age. On the other hand, after hatch there was a marked reduction in the proportion of phenotypic variance explained by the maternal environment, whose estimates were 0.50; 0.11; 0.07; 0.07 and 0.08 for BWH, BW07, BW14, BW21 and BW28, respectively. The genetic correlation between weights at different ages was high, except for the estimates between BWH and weights at other ages. Changes in body weight of quails can therefore be efficiently achieved by selection.
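The reported ratios can be checked directly from the variance components: heritability is Va/(Va+Vm+Ve) and the maternal proportion is Vm/(Va+Vm+Ve). The ratios of posterior means below agree with the reported posterior-mean heritabilities and maternal proportions to within about 0.01 (small discrepancies are expected, since the posterior mean of a ratio is not the ratio of posterior means).

```python
# posterior means of the variance components reported in the abstract,
# in order: hatch, 7, 14, 21 and 28 days of age
additive = [0.15, 4.18, 14.62, 27.18, 32.68]
maternal = [0.23, 1.29, 2.76, 4.12, 5.16]
residual = [0.084, 6.43, 22.66, 31.21, 30.85]

heritability = []
maternal_prop = []
for va, vm, ve in zip(additive, maternal, residual):
    vp = va + vm + ve                  # phenotypic variance
    heritability.append(va / vp)       # narrow-sense heritability
    maternal_prop.append(vm / vp)      # maternal environment proportion

print([round(h, 2) for h in heritability])
print([round(m, 2) for m in maternal_prop])
```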
Least-squares variance component estimation
Teunissen, P.J.G.; Amiri-Simkooei, A.R.
2007-01-01
Least-squares variance component estimation (LS-VCE) is a simple, flexible and attractive method for the estimation of unknown variance and covariance components. LS-VCE is simple because it is based on the well-known principle of LS; it is flexible because it works with a user-defined weight
Expected Stock Returns and Variance Risk Premia
DEFF Research Database (Denmark)
Bollerslev, Tim; Zhou, Hao
Combining the variance risk premium with the P/E ratio results in an R2 for the quarterly returns of more than twenty-five percent. The results depend crucially on the use of "model-free", as opposed to standard Black-Scholes, implied variances, and realized variances constructed from high-frequency intraday, as opposed to daily, data.
Nonlinear Epigenetic Variance: Review and Simulations
Kan, Kees-Jan; Ploeger, Annemie; Raijmakers, Maartje E. J.; Dolan, Conor V.; van Der Maas, Han L. J.
2010-01-01
We present a review of empirical evidence that suggests that a substantial portion of phenotypic variance is due to nonlinear (epigenetic) processes during ontogenesis. The role of such processes as a source of phenotypic variance in human behaviour genetic studies is not fully appreciated. In addition to our review, we present simulation studies…
Variance estimation for generalized Cavalieri estimators
Johanna Ziegel; Eva B. Vedel Jensen; Karl-Anton Dorph-Petersen
2011-01-01
The precision of stereological estimators based on systematic sampling is of great practical importance. This paper presents methods of data-based variance estimation for generalized Cavalieri estimators where errors in sampling positions may occur. Variance estimators are derived under perturbed systematic sampling, systematic sampling with cumulative errors and systematic sampling with random dropouts. Copyright 2011, Oxford University Press.
Portfolio optimization with mean-variance model
Hoe, Lam Weng; Siew, Lam Weng
2016-06-01
Investors wish to achieve the target rate of return at the minimum level of risk in their investment. Portfolio optimization is an investment strategy that can be used to minimize the portfolio risk while achieving the target rate of return. The mean-variance model has been proposed for portfolio optimization; it is an optimization model that aims to minimize the portfolio risk, measured by the portfolio variance. The objective of this study is to construct the optimal portfolio using the mean-variance model. The data of this study consist of weekly returns of 20 component stocks of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI). The results of this study show that the portfolio composition of the stocks is different. Moreover, investors can achieve the target return at the minimum level of risk with the constructed optimal mean-variance portfolio.
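When only the budget and target-return constraints are imposed (short sales allowed), the mean-variance problem has a standard closed-form solution via Lagrange multipliers. A sketch with hypothetical expected returns and covariances (the FBMKLCI data themselves are not reproduced here):

```python
import numpy as np

def mean_variance_weights(mu, Sigma, target):
    """Closed-form minimum-variance weights achieving w·mu = target
    and w·1 = 1 (short sales allowed)."""
    ones = np.ones(len(mu))
    Sinv = np.linalg.inv(Sigma)
    A = ones @ Sinv @ ones
    B = ones @ Sinv @ mu
    C = mu @ Sinv @ mu
    D = A * C - B * B
    lam = (C - B * target) / D          # multiplier for the budget constraint
    gam = (A * target - B) / D          # multiplier for the return constraint
    return Sinv @ (lam * ones + gam * mu)

mu = np.array([0.08, 0.12, 0.10])       # hypothetical expected returns
Sigma = np.array([[0.04, 0.01, 0.00],   # hypothetical covariance matrix
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.06]])
w = mean_variance_weights(mu, Sigma, target=0.10)
print(w, w @ mu, w.sum())
```

Sweeping `target` over a grid and recording the resulting portfolio variance traces out the efficient frontier.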
Mean-variance Optimal Reinsurance-investment Strategy in Continuous Time
Directory of Open Access Journals (Sweden)
Daheng Peng
2017-10-01
In this paper, the Lagrange method is used to solve the continuous-time mean-variance reinsurance-investment problem. Proportional reinsurance, multiple risky assets and a risk-free asset are considered synthetically in the optimal strategy for insurers. By solving the backward stochastic differential equation for the Lagrange multiplier, we obtain the mean-variance optimal reinsurance-investment strategy and its efficient frontier in explicit forms.
Portfolio optimization using median-variance approach
Wan Mohd, Wan Rosanisah; Mohamad, Daud; Mohamed, Zulkifli
2013-04-01
Optimization models have been applied in many decision-making problems, particularly in portfolio selection. Since the introduction of Markowitz's theory of portfolio selection, various approaches based on mathematical programming have been introduced, such as mean-variance, mean-absolute deviation, mean-variance-skewness and conditional value-at-risk (CVaR), mainly to maximize return and minimize risk. However, most of the approaches assume that the distribution of data is normal, which is not generally true. As an alternative, in this paper, we employ the median-variance approach to improve the portfolio optimization. This approach successfully caters for both normal and non-normal distributions of data. With this representation, we analyze and compare the rate of return and risk between the mean-variance and the median-variance based portfolios, which consist of 30 stocks from Bursa Malaysia. The results in this study show that the median-variance approach is capable of producing a lower risk for each return earning as compared to the mean-variance approach.
Sample Size Calculation for Controlling False Discovery Proportion
Directory of Open Access Journals (Sweden)
Shulian Shang
2012-01-01
The false discovery proportion (FDP), the proportion of incorrect rejections among all rejections, is a direct measure of the abundance of false positive findings in multiple testing. Many methods have been proposed to control the FDP, but they are too conservative to be useful for power analysis. Study designs controlling the mean of the FDP, which is the false discovery rate, have been commonly used. However, there has been little attempt to design studies with direct FDP control to achieve a certain level of efficiency. We provide a sample size calculation method using the variance formula of the FDP under weak-dependence assumptions to achieve the desired overall power. The relationship between design parameters and sample size is explored. The adequacy of the procedure is assessed by simulation. We illustrate the method using estimated correlations from a prostate cancer dataset.
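The distinction between the FDP (a random, per-study quantity) and the FDR (its mean) can be illustrated by simulation. A sketch under invented settings (independent tests, 900 true nulls out of 1000, alternatives drawn from a Beta distribution concentrated near zero): each simulated study yields its own FDP, and these vary around the FDR.

```python
import numpy as np

rng = np.random.default_rng(1)
m, m0 = 1000, 900                  # total tests, true nulls
alpha = 0.05                       # per-test rejection threshold

fdps = []
for _ in range(2000):
    p_null = rng.uniform(size=m0)                     # p-values under the null
    p_alt = rng.beta(0.1, 5.0, size=m - m0)           # p-values under alternatives
    reject = np.concatenate([p_null, p_alt]) < alpha
    false_rejections = reject[:m0].sum()
    total_rejections = max(reject.sum(), 1)           # avoid division by zero
    fdps.append(false_rejections / total_rejections)  # FDP of this study

fdps = np.array(fdps)
# the FDR is the mean FDP; the variance shows how far a single study can stray
print(fdps.mean(), fdps.var())
```

Designing for direct FDP control, as in the abstract, means bounding not only the mean of this distribution but also its spread.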
Grammatical and lexical variance in English
Quirk, Randolph
2014-01-01
Written by one of Britain's most distinguished linguists, this book is concerned with the phenomenon of variance in English grammar and vocabulary across regional, social, stylistic and temporal space.
A Mean variance analysis of arbitrage portfolios
Fang, Shuhong
2007-03-01
Based on a careful analysis of the definition of an arbitrage portfolio and its return, the author presents a mean-variance analysis of the return of arbitrage portfolios, which implies that Korkie and Turtle's results (B. Korkie, H.J. Turtle, A mean-variance analysis of self-financing portfolios, Manage. Sci. 48 (2002) 427-443) are misleading. A practical example is given to show the difference between the arbitrage portfolio frontier and the usual portfolio frontier.
Dynamic Mean-Variance Asset Allocation
Basak, Suleyman; Chabakauri, Georgy
2009-01-01
Mean-variance criteria remain prevalent in multi-period problems, and yet not much is known about their dynamically optimal policies. We provide a fully analytical characterization of the optimal dynamic mean-variance portfolios within a general incomplete-market economy, and recover a simple structure that also inherits several conventional properties of static models. We also identify a probability measure that incorporates intertemporal hedging demands and facilitates much tractability.
Genetic variants influencing phenotypic variance heterogeneity.
Ek, Weronica E; Rask-Andersen, Mathias; Karlsson, Torgny; Enroth, Stefan; Gyllensten, Ulf; Johansson, Åsa
2018-03-01
Most genetic studies identify genetic variants associated with disease risk or with the mean value of a quantitative trait. More rarely, genetic variants associated with variance heterogeneity are considered. In this study, we have identified such variance single-nucleotide polymorphisms (vSNPs) and examined whether these represent biological gene × gene or gene × environment interactions or statistical artifacts caused by multiple linked genetic variants influencing the same phenotype. We have performed a genome-wide study to identify vSNPs associated with variance heterogeneity in DNA methylation levels. Genotype data from over 10 million single-nucleotide polymorphisms (SNPs), and DNA methylation levels at over 430 000 CpG sites, were analyzed in 729 individuals. We identified vSNPs for 7195 CpG sites (P below the genome-wide significance threshold), a subset of which were also associated with mean DNA methylation levels. We further showed that variance heterogeneity between genotypes mainly represents additional, often rare, SNPs in linkage disequilibrium (LD) with the respective vSNP and, for some vSNPs, multiple low-frequency variants co-segregating with one of the vSNP alleles. Therefore, our results suggest that variance heterogeneity of DNA methylation mainly represents phenotypic effects by multiple SNPs, rather than biological interactions. Such effects may also be important for interpreting variance heterogeneity of more complex clinical phenotypes.
The Variance Composition of Firm Growth Rates
Directory of Open Access Journals (Sweden)
Luiz Artur Ledur Brito
2009-04-01
Full Text Available Firms exhibit a wide variability in growth rates. This can be seen as another manifestation of the fact that firms are different from one another in several respects. This study investigated this variability using the variance components technique previously used to decompose the variance of financial performance. The main source of variation in growth rates, responsible for more than 40% of total variance, corresponds to individual, idiosyncratic firm aspects and not to industry, country, or macroeconomic conditions prevailing in specific years. Firm growth, similar to financial performance, is mostly unique to specific firms and not an industry or country related phenomenon. This finding also justifies using growth as an alternative outcome of superior firm resources and as a complementary dimension of competitive advantage. This also links this research with the resource-based view of strategy. Country was the second source of variation with around 10% of total variance. The analysis was done using the Compustat Global database with 80,320 observations, comprising 13,221 companies in 47 countries, covering the years of 1994 to 2002. It also compared the variance structure of growth to the variance structure of financial performance in the same sample.
Modality-Driven Classification and Visualization of Ensemble Variance
Energy Technology Data Exchange (ETDEWEB)
Bensema, Kevin; Gosink, Luke; Obermaier, Harald; Joy, Kenneth I.
2016-10-01
Advances in computational power now enable domain scientists to address conceptual and parametric uncertainty by running simulations multiple times in order to sufficiently sample the uncertain input space. While this approach helps address conceptual and parametric uncertainties, the ensemble datasets produced by this technique present a special challenge to visualization researchers as the ensemble dataset records a distribution of possible values for each location in the domain. Contemporary visualization approaches that rely solely on summary statistics (e.g., mean and variance) cannot convey the detailed information encoded in ensemble distributions that are paramount to ensemble analysis; summary statistics provide no information about modality classification and modality persistence. To address this problem, we propose a novel technique that classifies high-variance locations based on the modality of the distribution of ensemble predictions. Additionally, we develop a set of confidence metrics to inform the end-user of the quality of fit between the distribution at a given location and its assigned class. We apply a similar method to time-varying ensembles to illustrate the relationship between peak variance and bimodal or multimodal behavior. These classification schemes enable a deeper understanding of the behavior of the ensemble members by distinguishing between distributions that can be described by a single tendency and distributions which reflect divergent trends in the ensemble.
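A crude version of the modality classification can be sketched as follows (all parameters are invented; a real implementation would use more careful density estimation plus the confidence metrics described above): histogram the ensemble's values at one grid location, smooth the histogram, and count local maxima that rise above a floor fraction of the highest peak.

```python
import numpy as np

def count_modes(samples, bins=15, smooth=5, floor=0.1):
    """Crude modality classifier: histogram the ensemble values at one
    location, smooth the histogram, and count local maxima that rise
    above `floor` times the highest peak."""
    hist, _ = np.histogram(samples, bins=bins, density=True)
    h = np.convolve(hist, np.ones(smooth) / smooth, mode="same")
    return sum(
        1
        for i in range(1, len(h) - 1)
        if h[i - 1] < h[i] >= h[i + 1] and h[i] > floor * h.max()
    )

rng = np.random.default_rng(3)
# synthetic ensemble values at two locations: one single-tendency,
# one reflecting divergent trends in the ensemble
unimodal = rng.normal(0.0, 1.0, 20_000)
bimodal = np.concatenate([rng.normal(-3.0, 0.5, 10_000),
                          rng.normal(3.0, 0.5, 10_000)])
print(count_modes(unimodal), count_modes(bimodal))
```

A summary statistic such as the variance would be similar for both locations; the mode count is what separates them.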
Global Drought Proportional Economic Loss Risk Deciles
National Aeronautics and Space Administration — Global Drought Proportional Economic Loss Risk Deciles is a 2.5 minute grid of drought hazard economic loss as proportions of Gross Domestic Product (GDP) per...
Impossibility Theorem in Proportional Representation Problem
International Nuclear Information System (INIS)
Karpov, Alexander
2010-01-01
The study examines the general axiomatics of Balinski and Young and analyzes existing proportional representation methods using this approach. The second part of the paper provides a new axiomatics based on rational choice models. The new system of axioms is applied to study known proportional representation systems. It is shown that there is no proportional representation method satisfying a minimal set of the axioms (monotonicity and neutrality).
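For concreteness, one widely used proportional representation method can be sketched (vote counts are hypothetical). D'Hondt is a divisor method: it is house-monotone but can violate quota, illustrating the kind of trade-off between desirable axioms that impossibility results of this type formalize.

```python
def dhondt(votes, seats):
    """D'Hondt divisor method: repeatedly award a seat to the party with
    the largest quotient votes / (seats_won + 1)."""
    won = {party: 0 for party in votes}
    for _ in range(seats):
        best = max(votes, key=lambda p: votes[p] / (won[p] + 1))
        won[best] += 1
    return won

# hypothetical election: four parties, seven seats
votes = {"A": 340_000, "B": 280_000, "C": 160_000, "D": 60_000}
print(dhondt(votes, 7))  # A gets 3 seats, B 3, C 1, D 0
```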
Cognitive and Metacognitive Aspects of Proportional Reasoning
Modestou, Modestina; Gagatsis, Athanasios
2010-01-01
In this study we attempt to propose a new model of proportional reasoning based both on bibliographical and research data. This is impelled with the help of three written tests involving analogical, proportional, and non-proportional situations that were administered to pupils from grade 7 to 9. The results suggest the existence of a…
Evaluating Middle Years Students' Proportional Reasoning
Hilton, Annette; Dole, Shelley; Hilton, Geoff; Goos, Merrilyn; O'Brien, Mia
2012-01-01
Proportional reasoning is a key aspect of numeracy that is not always developed naturally by students. Understanding the types of proportional reasoning that students apply to different problem types is a useful first step to identifying ways to support teachers and students to develop proportional reasoning in the classroom. This paper describes…
International Nuclear Information System (INIS)
Cermak, V.; Markvart, M.; Novy, P.; Vanka, M.
1989-01-01
Tests are briefly described for proportioning U3O8 powder of a granulometric grain size range of 0-160 μm using a vertical screw, a horizontal dual screw and a vibration dispenser, with a view to proportioning very fine U3O8 powder fractions produced in the oxidation of UO2 fuel pellets. In the tests, the evenness of proportioning was assessed by the percentage value of the proportioning rate spread measured at one-minute intervals at a proportioning rate of 1-3 kg/h. In feeding the U3O8 in a flame fluorator, it is advantageous to monitor the continuity of the powder column being proportioned and to assess it radiometrically by the value of the proportioning rate spread at very short intervals (0.1 s). (author). 10 figs., 1 tab., 12 refs
DEFF Research Database (Denmark)
Pitkänen, Timo; Mäntysaari, Esa A; Nielsen, Ulrik Sander
2013-01-01
The Nordic Holstein yield evaluation model describes all available milk, protein and fat test-day yields from Denmark, Finland and Sweden. In its current form, all variance components are estimated from observations recorded under conventional milking systems, and the model for heterogeneity of variance correction is developed for the same observations. As automated milking systems are becoming more popular, the current evaluation model needs to be enhanced to account for the different measurement error variances of observations from automated milking systems. In this simulation study, different models and different approaches to account for heterogeneous variance when observations have different measurement error variances were investigated. Based on the results, we propose to upgrade the currently applied models and to calibrate the heterogeneous variance adjustment method to yield the same genetic trends.
Integrating Variances into an Analytical Database
Sanchez, Carlos
2010-01-01
For this project, I enrolled in numerous SATERN courses that taught the basics of database programming. These include: Basic Access 2007 Forms, Introduction to Database Systems, Overview of Database Design, and others. My main job was to create an analytical database that can handle many stored forms and make it easy to interpret and organize. Additionally, I helped improve an existing database and populate it with information. These databases were designed to be used with data from Safety Variances and DCR forms. The research consisted of analyzing the database and comparing the data to find out which entries were repeated the most. If an entry happened to be repeated several times in the database, that would mean that the rule or requirement targeted by that variance has been bypassed many times already and so the requirement may not really be needed, but rather should be changed to allow the variance's conditions permanently. This project did not only restrict itself to the design and development of the database system, but also worked on exporting the data from the database to a different format (e.g. Excel or Word) so it could be analyzed in a simpler fashion. Thanks to the change in format, the data was organized in a spreadsheet that made it possible to sort the data by categories or types and helped speed up searches. Once my work with the database was done, the records of variances could be arranged so that they were displayed in numerical order, or one could search for a specific document targeted by the variances and restrict the search to only include variances that modified a specific requirement. A great part that contributed to my learning was SATERN, NASA's resource for education. Thanks to the SATERN online courses I took over the summer, I was able to learn many new things about computers and databases and also go more in depth into topics I already knew about.
Decomposition of Variance for Spatial Cox Processes.
Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus
2013-03-01
Spatial Cox point processes provide a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models with additive or log-linear random intensity functions. We moreover consider a new and flexible class of pair correlation function models given in terms of normal variance mixture covariance functions. The proposed methodology is applied to point pattern data sets of locations of tropical rain forest trees.
Variance in binary stellar population synthesis
Breivik, Katelyn; Larson, Shane L.
2016-03-01
In the years preceding LISA, Milky Way compact binary population simulations can be used to inform the science capabilities of the mission. Galactic population simulation efforts generally focus on high fidelity models that require extensive computational power to produce a single simulated population for each model. Each simulated population represents an incomplete sample of the functions governing compact binary evolution, thus introducing variance from one simulation to another. We present a rapid Monte Carlo population simulation technique that can simulate thousands of populations in less than a week, thus allowing a full exploration of the variance associated with a binary stellar evolution model.
Estimating quadratic variation using realized variance
DEFF Research Database (Denmark)
Barndorff-Nielsen, Ole Eiler; Shephard, N.
2002-01-01
with a rather general SV model - which is a special case of the semimartingale model. Then QV is integrated variance and we can derive the asymptotic distribution of the RV and its rate of convergence. These results do not require us to specify a model for either the drift or volatility functions, although we have to impose some weak regularity assumptions. We illustrate the use of the limit theory on some exchange rate data and some stock data. We show that even with large values of M the RV is sometimes a quite noisy estimator of integrated variance. Copyright © 2002 John Wiley & Sons, Ltd.
Estimating an Effect Size in One-Way Multivariate Analysis of Variance (MANOVA)
Steyn, H. S., Jr.; Ellis, S. M.
2009-01-01
When two or more univariate population means are compared, the proportion of variation in the dependent variable accounted for by population group membership is eta-squared. This effect size can be generalized by using multivariate measures of association, based on the multivariate analysis of variance (MANOVA) statistics, to establish whether…
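As a toy illustration of the univariate quantity the abstract starts from (not the multivariate generalization the paper develops), eta-squared can be computed directly from the one-way ANOVA sums of squares; the three small groups below are invented.

```python
# Toy one-way layout: three groups of equal size (illustrative data only).
groups = [[1, 2, 3], [2, 3, 4], [6, 7, 8]]

all_obs = [x for g in groups for x in g]
grand_mean = sum(all_obs) / len(all_obs)

# Between-groups and total sums of squares.
ss_between = sum(len(g) * ((sum(g) / len(g)) - grand_mean) ** 2 for g in groups)
ss_total = sum((x - grand_mean) ** 2 for x in all_obs)

# Proportion of variation accounted for by group membership.
eta_squared = ss_between / ss_total
print(round(eta_squared, 3))  # 0.875
```

Here most of the spread comes from the third group's shifted mean, so group membership accounts for 87.5% of the total variation.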
Generalized Forecast Error Variance Decomposition for Linear and Nonlinear Multivariate Models
DEFF Research Database (Denmark)
Lanne, Markku; Nyberg, Henri
We propose a new generalized forecast error variance decomposition with the property that the proportions of the impact accounted for by innovations in each variable sum to unity. Our decomposition is based on the well-established concept of the generalized impulse response function. The use of t...
Why do card issuers charge proportional fees?
Oz Shy; Zhu Wang
2008-01-01
This paper explains why payment card companies charge consumers and merchants fees which are proportional to the transaction values instead of charging a fixed per-transaction fee. Our theory shows that, even in the absence of any cost considerations, card companies earn much higher profit when they charge proportional fees. It is also shown that competition among merchants reduces card companies' gains from using proportional fees relative to a fixed per-transaction fee. Merchants are found ...
Estimation of noise-free variance to measure heterogeneity.
Directory of Open Access Journals (Sweden)
Tilo Winkler
Full Text Available Variance is a statistical parameter used to characterize heterogeneity or variability in data sets. However, measurements commonly include noise, as random errors superimposed on the actual value, which may substantially increase the variance compared to a noise-free data set. Our aim was to develop and validate a method to estimate noise-free spatial heterogeneity of pulmonary perfusion using dynamic positron emission tomography (PET) scans. On theoretical grounds, we demonstrate a linear relationship between the total variance of a data set derived from averages of n multiple measurements and the reciprocal of n. Using multiple measurements with varying n yields estimates of the linear relationship, including the noise-free variance as the constant parameter. In PET images, n is proportional to the number of registered decay events, and the variance of the image is typically normalized by the square of its mean value, yielding a squared coefficient of variation (CV²). The method was evaluated with a Jaszczak phantom as reference spatial heterogeneity (CVr²) for comparison with our estimate of noise-free or 'true' heterogeneity (CVt²). We found that CVt² was only 5.4% higher than CVr². Additional evaluations were conducted on 38 PET scans of pulmonary perfusion using ¹³NN-saline injection. The mean CVt² was 0.10 (range: 0.03-0.30), while the mean CV² including noise was 0.24 (range: 0.10-0.59). CVt² was on average 41.5% of the CV² measured including noise (range: 17.8-71.2%). The reproducibility of CVt² was evaluated using three repeated PET scans from five subjects. Individual CVt² were within 16% of each subject's mean, and paired t-tests revealed no difference among the results from the three consecutive PET scans. In conclusion, our method provides reliable noise-free estimates of CVt² in PET scans, and may be useful for similar statistical problems in experimental data.
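The extrapolation idea (total variance is linear in 1/n, with the noise-free variance as intercept) can be illustrated on synthetic data; the voxel counts, true variance of 1.0, and noise variance of 4.0 below are invented, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "image": 10,000 voxels with true spatial variance 1.0,
# plus measurement noise of variance 4.0 for a single acquisition.
true_values = rng.normal(10.0, 1.0, size=10_000)

ns = np.array([1, 2, 4, 8, 16])  # number of averaged measurements
total_var = []
for n in ns:
    # Averaging n measurements shrinks the noise variance by 1/n.
    noisy = true_values + rng.normal(0.0, 2.0 / np.sqrt(n), size=true_values.size)
    total_var.append(noisy.var())

# Total variance is linear in 1/n; the intercept estimates the
# noise-free variance (true value here: 1.0).
slope, intercept = np.polyfit(1.0 / ns, total_var, 1)
print(round(intercept, 2))
```

The fitted slope estimates the single-measurement noise variance (about 4.0 here), and the intercept recovers the heterogeneity that would remain with infinitely many averaged measurements.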
Relating arithmetical techniques of proportion to geometry
DEFF Research Database (Denmark)
Wijayanti, Dyana
2015-01-01
The purpose of this study is to investigate how textbooks introduce and treat the theme of proportion in geometry (similarity) and arithmetic (ratio and proportion), and how these themes are linked to each other in the books. To pursue this aim, we use the anthropological theory of the didactic. Considering 6 common Indonesian textbooks in use, we describe how proportion is explained and appears in examples and exercises, using an explicit reference model of the mathematical organizations of both themes. We also identify how the proportion themes of the geometry and arithmetic domains are linked. Our...
2010-07-01
...) PROCEDURE FOR VARIATIONS FROM SAFETY AND HEALTH REGULATIONS UNDER THE LONGSHOREMEN'S AND HARBOR WORKERS...) or 6(d) of the Williams-Steiger Occupational Safety and Health Act of 1970 (29 U.S.C. 655). The... under the Williams-Steiger Occupational Safety and Health Act of 1970, and any variance from §§ 1910.13...
78 FR 14122 - Revocation of Permanent Variances
2013-03-04
... Douglas Fir planking had to have at least a 1,900 fiber stress and 1,900,000 modulus of elasticity, while the Yellow Pine planking had to have at least 2,500 fiber stress and 2,000,000 modulus of elasticity... the permanent variances, and affected employees, to submit written data, views, and arguments...
Variance Risk Premia on Stocks and Bonds
DEFF Research Database (Denmark)
Mueller, Philippe; Sabtchevsky, Petar; Vedolin, Andrea
Investors in fixed income markets are willing to pay a very large premium to be hedged against shocks in expected volatility and the size of this premium can be studied through variance swaps. Using thirty years of option and high-frequency data, we document the following novel stylized facts...
Biological Variance in Agricultural Products. Theoretical Considerations
Tijskens, L.M.M.; Konopacki, P.
2003-01-01
The food that we eat is uniform neither in shape or appearance nor in internal composition or content. Since technology became increasingly important, the presence of biological variance in our food became more and more of a nuisance. Techniques and procedures (statistical, technical) were
Decomposition of variance for spatial Cox processes
DEFF Research Database (Denmark)
Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus
2013-01-01
Spatial Cox point processes are a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models...
Variance Swap Replication: Discrete or Continuous?
Directory of Open Access Journals (Sweden)
Fabien Le Floc’h
2018-02-01
Full Text Available The popular replication formula to price variance swaps assumes continuity of traded option strikes. In practice, however, there is only a discrete set of option strikes traded on the market. We present here different discrete replication strategies and explain why the continuous replication price is more relevant.
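A minimal sketch of the discrete replication can be checked against the continuous result: with zero rates and Black-Scholes prices at a flat 20% volatility, the continuous fair variance strike is σ² = 0.04, and a fine discrete strike grid should come close. The strike range and spacing below are arbitrary choices, not the strategies from the paper.

```python
from math import erf, log, sqrt

def norm_cdf(x: float) -> float:
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_price(s, k, t, vol, call=True):
    # Black-Scholes price with zero rates and dividends.
    d1 = (log(s / k) + 0.5 * vol * vol * t) / (vol * sqrt(t))
    d2 = d1 - vol * sqrt(t)
    c = s * norm_cdf(d1) - k * norm_cdf(d2)
    return c if call else c - s + k  # put via put-call parity

def var_swap_strike(s, t, vol, k_min, k_max, dk):
    # Discrete version of K_var = (2/T) * integral of OTM(K) / K^2 dK,
    # using puts below the forward (= spot here) and calls above.
    total, k = 0.0, k_min
    while k <= k_max:
        otm = bs_price(s, k, t, vol, call=(k > s))
        total += dk * otm / (k * k)
        k += dk
    return 2.0 * total / t

k_var = var_swap_strike(100.0, 1.0, 0.2, 10.0, 400.0, 0.5)
print(round(k_var, 4))  # close to 0.04
```

Coarsening the strike grid or truncating the strike range pulls the discrete price away from the continuous one, which is exactly the gap the paper's discrete replication strategies address.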
Zero-intelligence realized variance estimation
Gatheral, J.; Oomen, R.C.A.
2010-01-01
Given a time series of intra-day tick-by-tick price data, how can realized variance be estimated? The obvious estimator—the sum of squared returns between trades—is biased by microstructure effects such as bid-ask bounce and so in the past, practitioners were advised to drop most of the data and
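The bias from bid-ask bounce is easy to reproduce in simulation; the tick count, noise size, and sampling interval below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(42)

# One trading day of tick data: an efficient random-walk log-price plus
# i.i.d. bid-ask bounce noise of +/- 5 bps.
n_ticks = 23_400
true_var = 0.01 ** 2  # daily integrated variance
efficient = np.cumsum(rng.normal(0.0, np.sqrt(true_var / n_ticks), n_ticks))
observed = efficient + 0.0005 * rng.choice([-1.0, 1.0], n_ticks)

def realized_variance(prices):
    returns = np.diff(prices)
    return float(np.sum(returns ** 2))

rv_all = realized_variance(observed)            # badly biased upward
rv_sparse = realized_variance(observed[::300])  # "5-minute"-style sampling

print(rv_all, rv_sparse, true_var)
```

Each tick-level squared return picks up roughly twice the noise variance, so summing all of them overwhelms the true integrated variance; sparse sampling shrinks the bias by discarding most of the data, which is precisely the practice the abstract describes.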
Variance Reduction Techniques in Monte Carlo Methods
Kleijnen, Jack P.C.; Ridder, A.A.N.; Rubinstein, R.Y.
2010-01-01
Monte Carlo methods are simulation algorithms to estimate a numerical quantity in a statistical model of a real system. These algorithms are executed by computer programs. Variance reduction techniques (VRT) are needed, even though computer speed has been increasing dramatically, ever since the
A New Approach for Predicting the Variance of Random Decrement Functions
DEFF Research Database (Denmark)
Asmussen, J. C.; Brincker, Rune
1998-01-01
mean Gaussian distributed processes the RD functions are proportional to the correlation functions of the processes. If a linear structure is loaded by Gaussian white noise, the modal parameters can be extracted from the correlation functions of the response only. One of the weaknesses of the RD technique is that no consistent approach to estimating the variance of the RD functions is known. Only approximate relations are available, which can be used only under special conditions. The variance of the RD functions contains valuable information about the accuracy of the estimates. Furthermore, the variance can be used as a basis for deciding how many time lags from the RD functions should be used in the modal parameter extraction procedure. This paper suggests a new method for estimating the variance of the RD functions. The method is consistent in the sense that the accuracy of the approach...
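The basic RD estimate itself (not the paper's variance method) can be sketched with a level-upcrossing trigger on a synthetic AR(1) response; all parameters below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic zero-mean stationary response: AR(1) with unit variance.
phi, n = 0.95, 200_000
x = np.empty(n)
x[0] = 0.0
noise = rng.normal(0.0, np.sqrt(1.0 - phi ** 2), n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + noise[t]

# Random decrement: average segments starting at upcrossings of level a.
a, lags = 1.0, 50
starts = np.flatnonzero((x[:-1] < a) & (x[1:] >= a)) + 1
starts = starts[starts + lags < n]
rd = np.mean([x[s:s + lags] for s in starts], axis=0)

# For a zero-mean Gaussian process, the RD function is proportional to
# the autocorrelation: rd[0] sits near the trigger level a, and the
# decay roughly follows phi ** tau.
print(rd[0], rd[10])
```

Averaging over many triggered segments cancels the random part of the response and leaves the free decay, which is why modal parameters can then be extracted from `rd` as from a correlation function.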
International Nuclear Information System (INIS)
Wright, T.
1982-01-01
A new sampling procedure is introduced for estimating a population proportion. The procedure combines the ideas of inverse binomial sampling and Bernoulli sampling. An unbiased estimator is given with its variance. The procedure can be viewed as a generalization of inverse binomial sampling
DEFF Research Database (Denmark)
Casas, Isabel; Mao, Xiuping; Veiga, Helena
This study explores the predictive power of new estimators of the equity variance risk premium and conditional variance for future excess stock market returns, economic activity, and financial instability, both during and after the last global financial crisis. These estimators are obtained from...... time-varying coefficient models are the ones showing considerably higher predictive power for stock market returns and financial instability during the financial crisis, suggesting that an extreme volatility period requires models that can adapt quickly to turmoil........ Moreover, a comparison of the overall results reveals that the conditional variance gains predictive power during the global financial crisis period. Furthermore, both the variance risk premium and conditional variance are determined to be predictors of future financial instability, whereas conditional...
Proportional Reasoning and the Visually Impaired
Hilton, Geoff; Hilton, Annette; Dole, Shelley L.; Goos, Merrilyn; O'Brien, Mia
2012-01-01
Proportional reasoning is an important aspect of formal thinking that is acquired during the developmental years that approximate the middle years of schooling. Students who fail to acquire sound proportional reasoning often experience difficulties in subjects that require quantitative thinking, such as science, technology, engineering, and…
Adaptive bayesian analysis for binomial proportions
CSIR Research Space (South Africa)
Das, Sonali
2008-10-01
Full Text Available of testing the proportion of some trait. For example, say we are interested in inferring the effectiveness of a certain intervention teaching strategy by comparing the proportion of 'proficient' teachers before and after the intervention. The number...
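A standard (non-adaptive) Bayesian comparison of the two binomial proportions can be sketched as follows; the counts and the flat Beta(1, 1) prior are illustrative choices, not the paper's adaptive scheme.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical screening: proficient teachers before/after the intervention.
x_before, n_before = 12, 40
x_after, n_after = 28, 40

# Flat Beta(1, 1) priors give Beta posteriors for each proportion.
p_before = rng.beta(1 + x_before, 1 + n_before - x_before, 100_000)
p_after = rng.beta(1 + x_after, 1 + n_after - x_after, 100_000)

# Posterior probability that the intervention raised the proportion.
prob_improved = float(np.mean(p_after > p_before))
print(round(prob_improved, 3))
```

With 12/40 proficient before and 28/40 after, nearly all posterior mass supports an improvement; an adaptive analysis would additionally let the prior respond to the data.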
Mix Proportion Design of Asphalt Concrete
Wu, Xianhu; Gao, Lingling; Du, Shoujun
2017-12-01
Based on the gradations of AC and SMA, this paper designs a new type of anti-skid mixture that combines the advantages of the two types. The paper covers material selection, calculation of the mineral-mixture ratio in the mix design, determination of the optimum asphalt content by testing, and the resulting mix proportion design of the asphalt concrete, introducing this new mix-proportion design technology.
Fringe biasing: A variance reduction technique for optically thick meshes
Energy Technology Data Exchange (ETDEWEB)
Smedley-Stevenson, R. P. [AWE PLC, Aldermaston Reading, Berkshire, RG7 4PR (United Kingdom)
2013-07-01
Fringe biasing is a stratified sampling scheme applicable to Monte Carlo thermal radiation transport codes. The thermal emission source in optically thick cells is partitioned into separate contributions from the cell interiors (where the likelihood of the particles escaping the cells is virtually zero) and the 'fringe' regions close to the cell boundaries. Thermal emission in the cell interiors can now be modelled with fewer particles, the remaining particles being concentrated in the fringes so that they are more likely to contribute to the energy exchange between cells. Unlike other techniques for improving the efficiency in optically thick regions (such as random walk and discrete diffusion treatments), fringe biasing has the benefit of simplicity, as the associated changes are restricted to the sourcing routines with the particle tracking routines being unaffected. This paper presents an analysis of the potential for variance reduction achieved from employing the fringe biasing technique. The aim of this analysis is to guide the implementation of this technique in Monte Carlo thermal radiation codes, specifically in order to aid the choice of the fringe width and the proportion of particles allocated to the fringe (which are interrelated) in multi-dimensional simulations, and to confirm that the significant levels of variance reduction achieved in simulations can be understood by studying the behaviour for simple test cases. The variance reduction properties are studied for a single cell in a slab geometry purely absorbing medium, investigating the accuracy of the scalar flux and current tallies on one of the interfaces with the surrounding medium. (authors)
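The variance-reduction mechanism can be illustrated with a one-dimensional toy version: a single purely absorbing cell with emission positions stratified between fringe and interior. This is a generic stratified-sampling sketch under invented parameters, not the code or geometry from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

sigma, width = 20.0, 1.0              # optically thick cell (tau = 20)
frac_fringe, alloc_fringe = 0.4, 0.8  # fringe = within 0.2 of either face

def escape_score(x):
    # Expected escape probability for emission at depth x (directions +/- 1).
    return 0.5 * (np.exp(-sigma * x) + np.exp(-sigma * (width - x)))

n = 20_000

# Analog sampling: emission positions uniform across the whole cell.
x_analog = rng.uniform(0.0, width, n)
s_analog = escape_score(x_analog)
est_analog = s_analog.mean()
se_analog = s_analog.std() / np.sqrt(n)

# Fringe biasing: put 80% of particles into the 40% of volume near the faces.
n_f = int(alloc_fringe * n)
n_i = n - n_f
half = 0.5 * frac_fringe * width  # fringe depth per face (0.2)
u = rng.uniform(0.0, 2.0 * half, n_f)
x_fringe = np.where(u < half, u, width - (u - half))  # uniform over both faces
x_inner = rng.uniform(half, width - half, n_i)
s_f, s_i = escape_score(x_fringe), escape_score(x_inner)

# Strata recombined with their volume fractions, so the estimator is unbiased.
est_strat = frac_fringe * s_f.mean() + (1.0 - frac_fringe) * s_i.mean()
se_strat = np.sqrt(frac_fringe ** 2 * s_f.var() / n_f
                   + (1.0 - frac_fringe) ** 2 * s_i.var() / n_i)

exact = (1.0 - np.exp(-sigma * width)) / sigma  # analytic mean escape
print(est_analog, est_strat, exact)  # both near 0.05
print(se_analog, se_strat)           # stratified error is smaller
```

Interior emissions contribute almost nothing to the escape tally, so concentrating particles in the fringe spends the samples where the score varies, cutting the standard error without biasing the estimate.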
Fringe biasing: A variance reduction technique for optically thick meshes
International Nuclear Information System (INIS)
Smedley-Stevenson, R. P.
2013-01-01
Fringe biasing is a stratified sampling scheme applicable to Monte Carlo thermal radiation transport codes. The thermal emission source in optically thick cells is partitioned into separate contributions from the cell interiors (where the likelihood of the particles escaping the cells is virtually zero) and the 'fringe' regions close to the cell boundaries. Thermal emission in the cell interiors can now be modelled with fewer particles, the remaining particles being concentrated in the fringes so that they are more likely to contribute to the energy exchange between cells. Unlike other techniques for improving the efficiency in optically thick regions (such as random walk and discrete diffusion treatments), fringe biasing has the benefit of simplicity, as the associated changes are restricted to the sourcing routines with the particle tracking routines being unaffected. This paper presents an analysis of the potential for variance reduction achieved from employing the fringe biasing technique. The aim of this analysis is to guide the implementation of this technique in Monte Carlo thermal radiation codes, specifically in order to aid the choice of the fringe width and the proportion of particles allocated to the fringe (which are interrelated) in multi-dimensional simulations, and to confirm that the significant levels of variance reduction achieved in simulations can be understood by studying the behaviour for simple test cases. The variance reduction properties are studied for a single cell in a slab geometry purely absorbing medium, investigating the accuracy of the scalar flux and current tallies on one of the interfaces with the surrounding medium. (authors)
Proportional gas scintillation detectors and their applications
International Nuclear Information System (INIS)
Petr, I.
1978-01-01
The principle of a gas proportional scintillation detector and its function are described. The dependence of the energy resolution of Si(Li) and xenon proportional detectors on the input window size is given. A typical design is shown of a xenon detector used for X-ray spectrometry at energies of 277 eV to 5.898 keV and at gas pressures of 98 to 270 kPa. Gas proportional scintillation detectors show considerably better energy resolution than common proportional counters, and even better resolution than semiconductor Si(Li) detectors at low X-radiation energies. For detection areas smaller than 25 mm², Si(Li) detectors show better resolution, especially at higher X-radiation energies. For window areas of 25 to 190 mm² both types of detectors are equal; for window areas exceeding 190 mm² the proportional scintillation detector has the higher energy resolution. (B.S.)
Deterministic mean-variance-optimal consumption and investment
DEFF Research Database (Denmark)
Christiansen, Marcus; Steffensen, Mogens
2013-01-01
In dynamic optimal consumption–investment problems one typically aims to find an optimal control from the set of adapted processes. This is also the natural starting point in case of a mean-variance objective. In contrast, we solve the optimization problem with the special feature that the consumption rate and the investment proportion are constrained to be deterministic processes. As a result we get rid of a series of unwanted features of the stochastic solution including diffusive consumption, satisfaction points and consistency problems. Deterministic strategies typically appear in unit-linked life insurance contracts, where the life-cycle investment strategy is age dependent but wealth independent. We explain how optimal deterministic strategies can be found numerically and present an example from life insurance where we compare the optimal solution with suboptimal deterministic strategies...
Argentine Population Genetic Structure: Large Variance in Amerindian Contribution
Seldin, Michael F.; Tian, Chao; Shigeta, Russell; Scherbarth, Hugo R.; Silva, Gabriel; Belmont, John W.; Kittles, Rick; Gamron, Susana; Allevi, Alberto; Palatnik, Simon A.; Alvarellos, Alejandro; Paira, Sergio; Caprarulo, Cesar; Guillerón, Carolina; Catoggio, Luis J.; Prigione, Cristina; Berbotto, Guillermo A.; García, Mercedes A.; Perandones, Carlos E.; Pons-Estel, Bernardo A.; Alarcon-Riquelme, Marta E.
2011-01-01
Argentine population genetic structure was examined using a set of 78 ancestry informative markers (AIMs) to assess the contributions of European, Amerindian, and African ancestry in 94 individual members of this population. Using the Bayesian clustering algorithm STRUCTURE, the mean European contribution was 78%, the Amerindian contribution was 19.4%, and the African contribution was 2.5%. Similar results were found using the weighted least mean square method: European, 80.2%; Amerindian, 18.1%; and African, 1.7%. Consistent with previous studies, the current results showed very few individuals (four of 94) with greater than 10% African admixture. Notably, when individual admixture was examined, the Amerindian and European admixture showed a very large variance, and individual Amerindian contribution ranged from 1.5 to 84.5% in the 94 individual Argentine subjects. These results indicate that admixture must be considered when clinical epidemiology or case-control genetic analyses are studied in this population. Moreover, the current study provides a set of informative SNPs that can be used to ascertain or control for this potentially hidden stratification. In addition, the large variance in admixture proportions in individual Argentine subjects shown by this study suggests that this population is appropriate for future admixture mapping studies. PMID:17177183
R package MVR for Joint Adaptive Mean-Variance Regularization and Variance Stabilization.
Dazard, Jean-Eudes; Xu, Hua; Rao, J Sunil
2011-01-01
We present an implementation in the R language for statistical computing of our recent non-parametric joint adaptive mean-variance regularization and variance stabilization procedure. The method is specifically suited for handling difficult problems posed by high-dimensional multivariate datasets (p ≫ n paradigm), such as 'omics'-type data, in which the variance is often a function of the mean, variable-specific estimators of variances are not reliable, and test statistics have low power due to a lack of degrees of freedom. The implementation offers a complete set of features including: (i) normalization and/or variance stabilization function, (ii) computation of mean-variance-regularized t and F statistics, (iii) generation of diverse diagnostic plots, (iv) synthetic and real 'omics' test datasets, (v) computationally efficient implementation, using C interfacing, and an option for parallel computing, (vi) manual and documentation on how to set up a cluster. To make each feature as user-friendly as possible, only one subroutine per functionality is to be handled by the end-user. It is available as an R package, called MVR ('Mean-Variance Regularization'), downloadable from CRAN.
Icon arrays help younger children's proportional reasoning.
Ruggeri, Azzurra; Vagharchakian, Laurianne; Xu, Fei
2018-06-01
We investigated the effects of two context variables, presentation format (icon arrays or numerical frequencies) and time limitation (limited or unlimited time), on the proportional reasoning abilities of children aged 7 and 10 years, as well as adults. Participants had to select, between two sets of tokens, the one that offered the highest likelihood of drawing a gold token, that is, the set of elements with the greater proportion of gold tokens. Results show that participants performed better in the unlimited time condition. Moreover, besides a general developmental improvement in accuracy, our results show that younger children performed better when proportions were presented as icon arrays, whereas older children and adults were similarly accurate in the two presentation format conditions. Statement of contribution What is already known on this subject? There is a developmental improvement in proportional reasoning accuracy. Icon arrays facilitate reasoning in adults with low numeracy. What does this study add? Participants were more accurate when they were given more time to make the proportional judgement. Younger children's proportional reasoning was more accurate when they were presented with icon arrays. Proportional reasoning abilities correlate with working memory, approximate number system, and subitizing skills. © 2018 The British Psychological Society.
Realized Variance and Market Microstructure Noise
DEFF Research Database (Denmark)
Hansen, Peter R.; Lunde, Asger
2006-01-01
We study market microstructure noise in high-frequency data and analyze its implications for the realized variance (RV) under a general specification for the noise. We show that kernel-based estimators can unearth important characteristics of market microstructure noise and that a simple kernel-based estimator dominates the RV for the estimation of integrated variance (IV). An empirical analysis of the Dow Jones Industrial Average stocks reveals that market microstructure noise is time-dependent and correlated with increments in the efficient price. This has important implications for volatility estimation based on high-frequency data. Finally, we apply cointegration techniques to decompose transaction prices and bid-ask quotes into an estimate of the efficient price and noise. This framework enables us to study the dynamic effects on transaction prices and quotes caused by changes in the efficient...
The Theory of Variances in Equilibrium Reconstruction
International Nuclear Information System (INIS)
Zakharov, Leonid E.; Lewandowski, Jerome; Foley, Elizabeth L.; Levinton, Fred M.; Yuh, Howard Y.; Drozdov, Vladimir; McDonald, Darren
2008-01-01
The theory of variances of equilibrium reconstruction is presented. It complements existing practices with information regarding what kind of plasma profiles can be reconstructed, how accurately, and what remains beyond the abilities of diagnostic systems. The σ-curves, introduced by the present theory, give a quantitative assessment of the effectiveness of diagnostic systems in constraining equilibrium reconstructions. The theory also suggests a method for aligning the accuracy of measurements of different physical nature.
Fundamentals of exploratory analysis of variance
Hoaglin, David C; Tukey, John W
2009-01-01
The analysis of variance is presented as an exploratory component of data analysis, while retaining the customary least squares fitting methods. Balanced data layouts are used to reveal key ideas and techniques for exploration. The approach emphasizes both the individual observations and the separate parts that the analysis produces. Most chapters include exercises, and the appendices give selected percentage points of the Gaussian, t, F, chi-squared, and studentized range distributions.
Variance analysis refines overhead cost control.
Cooper, J C; Suver, J D
1992-02-01
Many healthcare organizations may not fully realize the benefits of standard cost accounting techniques because they fail to routinely report volume variances in their internal reports. If overhead allocation is routinely reported on internal reports, managers can determine whether billing remains current or lost charges occur. Healthcare organizations' use of standard costing techniques can lead to more realistic performance measurements and information system improvements that alert management to losses from unrecovered overhead in time for corrective action.
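The volume variance the authors want routinely reported is simple arithmetic; the figures below are hypothetical.

```python
# Hypothetical flexible-budget figures for one department.
budgeted_overhead = 600_000.0  # fixed overhead budget ($)
budgeted_volume = 10_000.0     # budgeted patient days
actual_volume = 9_000.0        # actual patient days

# Standard overhead rate, set at budget time.
rate = budgeted_overhead / budgeted_volume  # $60 per patient day

# Overhead applied to actual activity vs. the amount budgeted.
applied = rate * actual_volume               # $540,000
volume_variance = budgeted_overhead - applied  # $60,000 unrecovered

print(rate, applied, volume_variance)
```

An unfavorable volume variance like this flags overhead that actual volume did not recover, which is the early-warning signal the article argues internal reports should surface in time for corrective action.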
Mean and variance evolutions of the hot and cold temperatures in Europe
Energy Technology Data Exchange (ETDEWEB)
Parey, Sylvie [EDF/R and D, Chatou Cedex (France); Dacunha-Castelle, D. [Universite Paris 11, Laboratoire de Mathematiques, Orsay (France); Hoang, T.T.H. [Universite Paris 11, Laboratoire de Mathematiques, Orsay (France); EDF/R and D, Chatou Cedex (France)
2010-02-15
In this paper, we examine the trends of temperature series in Europe, for the mean as well as for the variance, in hot and cold seasons. To do so, we use series as long and homogeneous as possible, provided by the European Climate Assessment and Dataset project for different locations in Europe, as well as the European ENSEMBLES project gridded dataset and the ERA40 reanalysis. We provide a definition of trends that we keep as intrinsic as possible and apply non-parametric statistical methods to analyse them. The results show a clear link between trends in the mean and variance of the whole series of hot or cold temperatures: in general, variance increases when the absolute value of temperature increases, i.e. with increasing summer temperature and decreasing winter temperature. This link is reinforced in locations where the winter and summer climate is more variable. In very cold or very warm climates, the variability is lower and the link between the trends is weaker. We performed the same analysis on the outputs of six climate models proposed by European teams for the 1961-2000 period (1950-2000 for one model), available through the PCMDI portal for the IPCC fourth assessment climate model simulations. The models generally perform poorly and have difficulty capturing the relation between the two trends, especially in summer. (orig.)
Discussion on variance reduction technique for shielding
Energy Technology Data Exchange (ETDEWEB)
Maekawa, Fujio [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment
1998-03-01
As part of the engineering design activity of the International Thermonuclear Experimental Reactor (ITER), a shielding experiment on type 316 stainless steel (SS316) and the compound system of SS316 and water was carried out using the D-T neutron source of FNS at the Japan Atomic Energy Research Institute. In the analyses, however, enormous working time and computing time were required to determine the Weight Window parameters, and limitations and complications were encountered when carrying out variance reduction with the Weight Window method of the MCNP code. To avoid this difficulty, the effectiveness of variance reduction by the cell importance method was investigated. The calculation conditions for all cases are shown. As results, the distribution of the fractional standard deviation (FSD) of the neutron and gamma-ray fluxes along the shield depth is reported. There is an optimal importance change: when the importance is increased at the same rate as the attenuation of the neutron or gamma-ray flux, optimal variance reduction can be achieved. (K.I.)
Variance heterogeneity in Saccharomyces cerevisiae expression data: trans-regulation and epistasis.
Nelson, Ronald M; Pettersson, Mats E; Li, Xidan; Carlborg, Örjan
2013-01-01
Here, we describe the results from the first variance heterogeneity Genome Wide Association Study (VGWAS) on yeast expression data. Using this forward genetics approach, we show that the genetic regulation of gene-expression in the budding yeast, Saccharomyces cerevisiae, includes mechanisms that can lead to variance heterogeneity in the expression between genotypes. Additionally, we performed a mean effect association study (GWAS). Comparing the mean and variance heterogeneity analyses, we find that the mean expression level is under genetic regulation from a larger absolute number of loci but that a higher proportion of the variance controlling loci were trans-regulated. Both mean and variance regulating loci cluster in regulatory hotspots that affect a large number of phenotypes; a single variance-controlling locus, mapping close to DIA2, was found to be involved in more than 10% of the significant associations. It has been suggested in the literature that variance-heterogeneity between the genotypes might be due to genetic interactions. We therefore screened the multi-locus genotype-phenotype maps for several traits where multiple associations were found, for indications of epistasis. Several examples of two and three locus genetic interactions were found to involve variance-controlling loci, with reports from the literature corroborating the functional connections between the loci. By using a new analytical approach to re-analyze a powerful existing dataset, we are thus able to both provide novel insights to the genetic mechanisms involved in the regulation of gene-expression in budding yeast and experimentally validate epistasis as an important mechanism underlying genetic variance-heterogeneity between genotypes.
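The core test behind a variance-heterogeneity scan can be sketched with a Brown-Forsythe (median-centered Levene) statistic comparing genotype classes at one locus. The data here are simulated, and a real VGWAS involves much more (thousands of loci, multiple-testing control), so this is only the single-locus building block.

```python
import numpy as np

rng = np.random.default_rng(11)

def brown_forsythe(groups):
    # Levene's test with median centering: an ANOVA F statistic computed
    # on the absolute deviations |x - median(group)|.
    z = [np.abs(g - np.median(g)) for g in groups]
    all_z = np.concatenate(z)
    n_total, k = all_z.size, len(z)
    grand = all_z.mean()
    ss_between = sum(g.size * (g.mean() - grand) ** 2 for g in z)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in z)
    return (n_total - k) / (k - 1) * ss_between / ss_within

# Two genotype classes with equal mean expression but unequal variance:
# a mean-effect GWAS sees nothing here, a VGWAS does.
g0 = rng.normal(0.0, 1.0, 500)
g1 = rng.normal(0.0, 2.0, 500)

w = brown_forsythe([g0, g1])
print(round(w, 1))  # large value => strong variance heterogeneity
```

Because the two groups share a mean, a locus like this is invisible to a standard mean-effect association test yet strongly significant for variance heterogeneity.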
Complementary responses to mean and variance modulations in the perfect integrate-and-fire model.
Pressley, Joanna; Troyer, Todd W
2009-07-01
In the perfect integrate-and-fire model (PIF), the membrane voltage is proportional to the integral of the input current since the time of the previous spike. It has been shown that the firing rate within a noise free ensemble of PIF neurons responds instantaneously to dynamic changes in the input current, whereas in the presence of white noise, model neurons preferentially pass low frequency modulations of the mean current. Here, we prove that when the input variance is perturbed while holding the mean current constant, the PIF responds preferentially to high frequency modulations. Moreover, the linear filters for mean and variance modulations are complementary, adding exactly to one. Since changes in the rate of Poisson distributed inputs lead to proportional changes in the mean and variance, these results imply that an ensemble of PIF neurons transmits a perfect replica of the time-varying input rate for Poisson distributed input. A more general argument shows that this property holds for any signal leading to proportional changes in the mean and variance of the input current.
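A minimal simulation sketch (assumed parameter values, not the authors' code) illustrates the steady-state property underlying these results: the mean firing rate of a noisy PIF ensemble equals mean input over threshold, independent of the noise level, so it is only the time-varying modulations of mean and variance that are filtered differently.

```python
import numpy as np

rng = np.random.default_rng(0)

def pif_rate(mu, sigma, theta=1.0, dt=1e-3, T=20.0, n=100):
    """Steady-state firing rate of an ensemble of perfect integrate-and-fire
    neurons driven by input current mu plus white noise of strength sigma."""
    v = np.zeros(n)
    spikes = 0
    for _ in range(int(T / dt)):
        v += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
        fired = v >= theta
        spikes += int(fired.sum())
        v[fired] -= theta          # reset by subtraction: integration stays exact
    return spikes / (n * T)

r_low, r_high = pif_rate(5.0, 0.5), pif_rate(5.0, 2.0)
print(r_low, r_high)   # both close to mu/theta = 5: the DC rate ignores sigma
```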
Count rate effect in proportional counters
International Nuclear Information System (INIS)
Bednarek, B.
1980-01-01
A critical evaluation is presented of the current state of investigations into, and explanations of, the changes in resolution and pulse height that radiation intensity variations produce in proportional counters. (author)
Minimum variance and variance of outgoing quality limit MDS-1(c1, c2) plans
Raju, C.; Vidya, R.
2016-06-01
In this article, the outgoing quality (OQ) and total inspection (TI) of multiple deferred state sampling plans MDS-1(c1,c2) are studied. It is assumed that the inspection is rejection rectification. Procedures for designing MDS-1(c1,c2) sampling plans with minimum variance of OQ and TI are developed. A procedure for obtaining a plan for a designated upper limit for the variance of the OQ (VOQL) is outlined.
Proportion congruency effects: Instructions may be enough
Directory of Open Access Journals (Sweden)
Olga eEntel
2014-10-01
Full Text Available Learning takes time, namely, one needs to be exposed to contingency relations between stimulus dimensions in order to learn, whereas intentional control can be recruited through task demands. Therefore showing that control can be recruited as a function of experimental instructions alone, that is, adapting the processing according to the instructions before the exposure to the task, can be taken as evidence for existence of control recruitment in the absence of learning. This was done by manipulating the information given at the outset of the experiment. In the first experiment, we manipulated list-level congruency proportion. Half of the participants were informed that most of the stimuli would be congruent, whereas the other half were informed that most of the stimuli would be incongruent. This held true for the stimuli in the second part of each experiment. In the first part, however, the proportion of the two stimulus types was equal. A proportion congruent effect was found in both parts of the experiment, but it was larger in the second part. In our second experiment, we manipulated the proportion of the stimuli within participants by applying an item-specific design. This was done by presenting some color words most often in their congruent color, and other color words in incongruent colors. Participants were informed about the exact word-color pairings in advance. Similar to Experiment 1, this held true only for the second experimental part. In contrast to our first experiment, informing participants in advance did not result in an item-specific proportion effect, which was observed only in the second part. Thus our results support the hypothesis that instructions may be enough to trigger list-level control, yet learning does contribute to the proportion congruent effect under such conditions. The item-level proportion effect is apparently caused by learning or at least it is moderated by it.
Visual SLAM Using Variance Grid Maps
Howard, Andrew B.; Marks, Tim K.
2011-01-01
An algorithm denoted Gamma-SLAM performs further processing, in real time, of preprocessed digitized images acquired by a stereoscopic pair of electronic cameras aboard an off-road robotic ground vehicle to build accurate maps of the terrain and determine the location of the vehicle with respect to the maps. Part of the name of the algorithm reflects the fact that the process of building the maps and determining the location with respect to them is denoted simultaneous localization and mapping (SLAM). Most prior real-time SLAM algorithms have been limited in applicability to (1) systems equipped with scanning laser range finders as the primary sensors in (2) indoor environments (or relatively simply structured outdoor environments). The few prior vision-based SLAM algorithms have been feature-based and not suitable for real-time applications and, hence, not suitable for autonomous navigation on irregularly structured terrain. The Gamma-SLAM algorithm incorporates two key innovations: Visual odometry (in contradistinction to wheel odometry) is used to estimate the motion of the vehicle. An elevation variance map (in contradistinction to an occupancy or an elevation map) is used to represent the terrain. The Gamma-SLAM algorithm makes use of a Rao-Blackwellized particle filter (RBPF) from Bayesian estimation theory for maintaining a distribution over poses and maps. The core idea of the RBPF approach is that the SLAM problem can be factored into two parts: (1) finding the distribution over robot trajectories, and (2) finding the map conditioned on any given trajectory. The factorization involves the use of a particle filter in which each particle encodes both a possible trajectory and a map conditioned on that trajectory. The base estimate of the trajectory is derived from visual odometry, and the map conditioned on that trajectory is a Cartesian grid of elevation variances. In comparison with traditional occupancy or elevation grid maps, the grid elevation variance
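The per-cell statistic such a map maintains can be updated incrementally as elevation measurements arrive. A hedged sketch (illustrative only, not the Gamma-SLAM implementation) using Welford's online algorithm for a single grid cell:

```python
class VarianceCell:
    """Running elevation variance for one map cell via Welford's online update."""
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def add(self, z):
        self.n += 1
        delta = z - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (z - self.mean)     # uses the already-updated mean

    def variance(self):
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0

cell = VarianceCell()
for z in [1.0, 1.2, 0.9, 1.1]:                 # stereo elevation hits on one cell
    cell.add(z)
print(cell.mean, cell.variance())
```

An elevation-variance cell of this kind captures terrain roughness, not just occupancy, which is what distinguishes the variance grid map from traditional occupancy or elevation grids.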
Markov bridges, bisection and variance reduction
DEFF Research Database (Denmark)
Asmussen, Søren; Hobolth, Asger
Time-continuous Markov jump processes are a popular modelling tool in disciplines ranging from computational finance and operations research to human genetics and genomics. The data is often sampled at discrete points in time, and it can be useful to simulate sample paths between the datapoints. In this paper we firstly consider the problem of generating sample paths from a continuous-time Markov chain conditioned on the endpoints using a new algorithm based on the idea of bisection. Secondly we study the potential of the bisection algorithm for variance reduction. In particular, examples are presented…
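The bisection idea is easy to sketch: conditioned on the endpoints, the midpoint state has distribution proportional to the product of the two half-interval transition probabilities, and one can recurse on each half. A minimal illustration for a two-state chain (the generator values are hypothetical, and the paper's algorithm also resolves exact jump times, which this skeleton-only sketch does not):

```python
import numpy as np
from scipy.linalg import expm

Q = np.array([[-1.0, 1.0],
              [ 2.0, -2.0]])      # hypothetical generator of a two-state chain

def midpoint_dist(a, b, T):
    """P(X(T/2) = k | X(0) = a, X(T) = b): the core bisection step."""
    P = expm(Q * (T / 2.0))       # transition probabilities over half the interval
    w = P[a, :] * P[:, b]         # product of the two half-interval probabilities
    return w / w.sum()

def sample_bridge(a, b, T, rng, eps=1e-2):
    """Recursively bisect until intervals are shorter than eps; returns the
    skeleton of states on a dyadic time grid."""
    if T < eps:
        return [a]
    k = int(rng.choice(len(Q), p=midpoint_dist(a, b, T)))
    return sample_bridge(a, k, T / 2.0, rng) + sample_bridge(k, b, T / 2.0, rng)

rng = np.random.default_rng(0)
path = sample_bridge(0, 1, 1.0, rng) + [1]   # bridge from state 0 to state 1
print(len(path), path[0], path[-1])
```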
The value of travel time variance
Fosgerau, Mogens; Engelson, Leonid
2010-01-01
This paper considers the value of travel time variability under scheduling preferences that are defined in terms of linearly time-varying utility rates associated with being at the origin and at the destination. The main result is a simple expression for the value of travel time variability that does not depend on the shape of the travel time distribution. The related measure of travel time variability is the variance of travel time. These conclusions apply equally to travellers who can freely choose departure time and to travellers who use a scheduled service with fixed headway.
Proportional hazards models of infrastructure system recovery
International Nuclear Information System (INIS)
Barker, Kash; Baroud, Hiba
2014-01-01
As emphasis is being placed on a system's ability to withstand and to recover from a disruptive event, collectively referred to as dynamic resilience, there exists a need to quantify a system's ability to bounce back after a disruptive event. This work applies a statistical technique from biostatistics, the proportional hazards model, to describe (i) the instantaneous rate of recovery of an infrastructure system and (ii) the likelihood that recovery occurs prior to a given point in time. A major benefit of the proportional hazards model is its ability to describe a recovery event as a function of time as well as covariates describing the infrastructure system or disruptive event, among others, which can also vary with time. The proportional hazards approach is illustrated with a publicly available electric power outage data set.
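As a concrete sketch of the quantity in (ii): with a constant baseline hazard (an illustrative choice; the paper works with estimated baselines and real covariates), the probability that recovery occurs before time t follows directly from the cumulative hazard.

```python
import math

def recovery_prob(t, x, beta, h0=0.5):
    """P(recovery occurs by time t) under proportional hazards with a constant
    baseline hazard h0 and a single covariate x (all values illustrative)."""
    cum_baseline = h0 * t                       # integral of h0 over [0, t]
    return 1.0 - math.exp(-cum_baseline * math.exp(beta * x))

# a covariate that raises the hazard (e.g. more repair resources) speeds recovery
p_base, p_fast = recovery_prob(2.0, 0.0, 0.8), recovery_prob(2.0, 1.0, 0.8)
print(p_base, p_fast)
```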
The Origins of Scintillator Non-Proportionality
Moses, W. W.; Bizarri, G. A.; Williams, R. T.; Payne, S. A.; Vasil'ev, A. N.; Singh, J.; Li, Q.; Grim, J. Q.; Choong, W.-S.
2012-10-01
Recent years have seen significant advances in both theoretically understanding and mathematically modeling the underlying causes of scintillator non-proportionality. The core cause is that the interaction of radiation with matter invariably leads to a non-uniform ionization density in the scintillator, coupled with the fact that the light yield depends on the ionization density. The mechanisms that lead to the luminescence dependence on ionization density are incompletely understood, but several important features have been identified, notably Auger-like processes (where two carriers of excitation interact with each other, causing one to de-excite non-radiatively), the inability of excitation carriers to recombine (caused either by trapping or physical separation), and the carrier mobility. This paper reviews the present understanding of the fundamental origins of scintillator non-proportionality, specifically the various theories that have been used to explain non-proportionality.
Variance-based Salt Body Reconstruction
Ovcharenko, Oleg
2017-05-26
Seismic inversions of salt bodies are challenging when updating velocity models based on Born approximation- inspired gradient methods. We propose a variance-based method for velocity model reconstruction in regions complicated by massive salt bodies. The novel idea lies in retrieving useful information from simultaneous updates corresponding to different single frequencies. Instead of the commonly used averaging of single-iteration monofrequency gradients, our algorithm iteratively reconstructs salt bodies in an outer loop based on updates from a set of multiple frequencies after a few iterations of full-waveform inversion. The variance among these updates is used to identify areas where considerable cycle-skipping occurs. In such areas, we update velocities by interpolating maximum velocities within a certain region. The result of several recursive interpolations is later used as a new starting model to improve results of conventional full-waveform inversion. An application on part of the BP 2004 model highlights the evolution of the proposed approach and demonstrates its effectiveness.
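The variance-among-updates idea can be sketched in a few lines of NumPy. The arrays below are synthetic stand-ins for monofrequency gradient updates, and the 90th-percentile threshold is illustrative, not the paper's criterion.

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic stand-ins for velocity updates from 5 single-frequency inversions
updates = rng.normal(size=(5, 40, 40))
updates[:, 10:20, 10:20] += rng.normal(scale=4.0, size=(5, 10, 10))  # disagreement zone

var = updates.var(axis=0)               # variance among updates, per model cell
mask = var > np.percentile(var, 90)     # flag cells where frequencies disagree most
print(mask.mean(), mask[12:18, 12:18].mean())
```

In the flagged region, the method described above would then replace velocities by interpolating maximum velocities before restarting full-waveform inversion.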
Increased gender variance in autism spectrum disorders and attention deficit hyperactivity disorder.
Strang, John F; Kenworthy, Lauren; Dominska, Aleksandra; Sokoloff, Jennifer; Kenealy, Laura E; Berl, Madison; Walsh, Karin; Menvielle, Edgardo; Slesaransky-Poe, Graciela; Kim, Kyung-Eun; Luong-Tran, Caroline; Meagher, Haley; Wallace, Gregory L
2014-11-01
Evidence suggests over-representation of autism spectrum disorders (ASDs) and behavioral difficulties among people referred for gender issues, but rates of the wish to be the other gender (gender variance) among different neurodevelopmental disorders are unknown. This chart review study explored rates of gender variance as reported by parents on the Child Behavior Checklist (CBCL) in children with different neurodevelopmental disorders: ASD (N = 147, 24 females and 123 males), attention deficit hyperactivity disorder (ADHD; N = 126, 38 females and 88 males), or a medical neurodevelopmental disorder (N = 116, 57 females and 59 males), were compared with two non-referred groups [control sample (N = 165, 61 females and 104 males) and non-referred participants in the CBCL standardization sample (N = 1,605, 754 females and 851 males)]. Significantly greater proportions of participants with ASD (5.4%) or ADHD (4.8%) had parent reported gender variance than in the combined medical group (1.7%) or non-referred comparison groups (0-0.7%). As compared to non-referred comparisons, participants with ASD were 7.59 times more likely to express gender variance; participants with ADHD were 6.64 times more likely to express gender variance. The medical neurodevelopmental disorder group did not differ from non-referred samples in likelihood to express gender variance. Gender variance was related to elevated emotional symptoms in ADHD, but not in ASD. After accounting for sex ratio differences between the neurodevelopmental disorder and non-referred comparison groups, gender variance occurred equally in females and males.
Large-Scale Analysis of Art Proportions
DEFF Research Database (Denmark)
Jensen, Karl Kristoffer
2014-01-01
While literature often tries to impute mathematical constants into art, this large-scale study (11 databases of paintings and photos, around 200,000 items) shows a different truth. The analysis, consisting of the width/height proportions, shows a value of rarely if ever one (square) and with majo…
Is proportion burned severely related to daily area burned?
International Nuclear Information System (INIS)
Birch, Donovan S; Morgan, Penelope; Smith, Alistair M S; Kolden, Crystal A; Hudak, Andrew T
2014-01-01
The ecological effects of forest fires burning with high severity are long-lived and have the greatest impact on vegetation successional trajectories, as compared to low-to-moderate severity fires. The primary drivers of high severity fire are unclear, but it has been hypothesized that wind-driven, large fire-growth days play a significant role, particularly on large fires in forested ecosystems. Here, we examined the relative proportion of classified burn severity for individual daily areas burned that occurred during 42 large forest fires in central Idaho and western Montana from 2005 to 2007 and 2011. Using infrared perimeter data for wildfires with five or more consecutive days of mapped perimeters, we delineated 2697 individual daily areas burned from which we calculated the proportions of each of three burn severity classes (high, moderate, and low) using the differenced normalized burn ratio as mapped for large fires by the Monitoring Trends in Burn Severity project. We found that the proportion of high burn severity was weakly correlated (Kendall τ = 0.299) with size of daily area burned (DAB). Burn severity was highly variable, even for the largest (95th percentile) in DAB, suggesting that other variables than fire extent influence the ecological effects of fires. We suggest that these results do not support the prioritization of large runs during fire rehabilitation efforts, since the underlying assumption in this prioritization is a positive relationship between severity and area burned in a day. (letters)
A zero-variance-based scheme for variance reduction in Monte Carlo criticality
Energy Technology Data Exchange (ETDEWEB)
Christoforou, S.; Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands)
2006-07-01
A zero-variance scheme is derived and proven theoretically for criticality cases, and a simplified transport model is used for numerical demonstration. It is shown in practice that by appropriate biasing of the transition and collision kernels, a significant reduction in variance can be achieved. This is done using the adjoint forms of the emission and collision densities, obtained from a deterministic calculation, according to the zero-variance scheme. By using an appropriate algorithm, the figure of merit of the simulation increases by up to a factor of 50, with the possibility of an even larger improvement. In addition, it is shown that the biasing speeds up the convergence of the initial source distribution. (authors)
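The principle behind a zero-variance scheme can be seen in a one-dimensional toy: if histories are sampled from a density proportional to their contribution (the role played by the adjoint densities in the paper), every history scores the exact answer and the variance vanishes. A hedged sketch for a simple integral, not a criticality calculation:

```python
import math, random

random.seed(7)
f = lambda x: 3.0 * x * x         # integrand on [0, 1]; exact integral is 1

def analog(n):
    """Plain Monte Carlo: sample x uniformly, score f(x)."""
    s = [f(random.random()) for _ in range(n)]
    m = sum(s) / n
    return m, sum((v - m) ** 2 for v in s) / (n - 1)

def zero_variance(n):
    """Sample x from p(x) = f(x)/I via the inverse CDF; the score f(x)/p(x)
    is then constant, so the sample variance is exactly zero."""
    s = []
    for _ in range(n):
        x = (1.0 - random.random()) ** (1.0 / 3.0)   # in (0, 1], avoids x = 0
        s.append(f(x) / (3.0 * x * x))               # weight == exact answer
    m = sum(s) / n
    return m, sum((v - m) ** 2 for v in s) / (n - 1)

m_analog, v_analog = analog(2000)
m_zv, v_zv = zero_variance(2000)
print(m_analog, v_analog, m_zv, v_zv)
```

In transport problems the exact adjoint is unknown, which is why the paper obtains it approximately from a deterministic calculation and still reaps most of the variance reduction.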
A zero-variance-based scheme for variance reduction in Monte Carlo criticality
International Nuclear Information System (INIS)
Christoforou, S.; Hoogenboom, J. E.
2006-01-01
A zero-variance scheme is derived and proven theoretically for criticality cases, and a simplified transport model is used for numerical demonstration. It is shown in practice that by appropriate biasing of the transition and collision kernels, a significant reduction in variance can be achieved. This is done using the adjoint forms of the emission and collision densities, obtained from a deterministic calculation, according to the zero-variance scheme. By using an appropriate algorithm, the figure of merit of the simulation increases by up to a factor of 50, with the possibility of an even larger improvement. In addition, it is shown that the biasing speeds up the convergence of the initial source distribution. (authors)
Genetic factors explain half of all variance in serum eosinophil cationic protein
DEFF Research Database (Denmark)
Elmose, Camilla; Sverrild, Asger; van der Sluis, Sophie
2014-01-01
… with variation in serum ECP and to determine the relative proportion of the variation in ECP due to genetic and non-genetic factors, in an adult twin sample. METHODS: A sample of 575 twins, selected through a proband with self-reported asthma, had serum ECP, lung function, airway responsiveness to methacholine, exhaled nitric oxide, and skin test reactivity measured. Linear regression analysis and variance component models were used to study factors associated with variation in ECP and the relative genetic influence on ECP levels. RESULTS: Sex (regression coefficient = -0.107, P …) … was statistically non-significant (r = -0.11, P = 0.50). CONCLUSION: Around half of all variance in serum ECP is explained by genetic factors. Serum ECP is influenced by sex, BMI, and airway responsiveness. Serum ECP and airway responsiveness seem not to share genetic variance.
DEFF Research Database (Denmark)
Khan, Nasim Ahmed; Spencer, Horace Jack; Nikiphorou, Elena
2017-01-01
Objective: To assess intercentre variability in the ACR core set measures, DAS28 based on three variables (DAS28v3) and Routine Assessment of Patient Index Data 3 in a multinational study. Methods: Seven thousand and twenty-three patients were recruited (84 centres; 30 countries) using a standard … built to adjust for the remaining ACR core set measures (for each ACR core set measure or each composite index), socio-demographics and medical characteristics. ANOVA and analysis of covariance models yielded similar results, and ANOVA tables were used to present variance attributable to recruiting centre. Results: The proportion of variances attributable to recruiting centre was lower for patient-reported outcomes (PROs: pain, HAQ, patient global) compared with objective measures (joint counts, ESR, physician global) in all models. In the full model, variance in PROs attributable to recruiting…
Investigation of a multiwire proportional chamber
International Nuclear Information System (INIS)
Konijn, J.
1976-01-01
The article discusses some aspects of a prototype multiwire proportional chamber for electron detection located at IKO in Amsterdam, namely voltage, counting rates, noise and gas mixture (argon, ethylene bromide). The efficiency and performance of the chamber have been investigated and an error analysis is given.
Author: IM Rautenbach PROPORTIONALITY AND THE LIMITATION ...
African Journals Online (AJOL)
public law, it is not clear to me that there are any differences at this more abstract ... Bills of Rights entrench "basic principles of rationality and proportionality – of ..... Sweet Europe of Rights 10-11; Van Dijk and Van Hoof Theory and Practice.
Obtaining a Proportional Allocation by Deleting Items
Dorn, B.; de Haan, R.; Schlotter, I.; Röthe, J.
2017-01-01
We consider the following control problem on fair allocation of indivisible goods. Given a set I of items and a set of agents, each having strict linear preference over the items, we ask for a minimum subset of the items whose deletion guarantees the existence of a proportional allocation in the
Proportional green time scheduling for traffic lights
P. Kovacs; Le, T. (Tung); R. Núñez Queija (Rudesindo); Vu, H. (Hai); N. Walton
2016-01-01
textabstractWe consider the decentralized scheduling of a large number of urban traffic lights. We investigate factors determining system performance, in particular, the length of the traffic light cycle and the proportion of green time allocated to each junction. We study the effect of the length
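The proportional green time policy the abstract refers to can be stated in a few lines. This is a schematic sketch with hypothetical arrival rates; the paper analyzes such proportional rules formally rather than this exact code.

```python
def green_splits(arrival_rates, cycle, lost_time):
    """Divide the effective green time of one cycle among approaches in
    proportion to their arrival rates (a schematic proportional policy)."""
    effective = cycle - lost_time          # seconds of usable green per cycle
    total = sum(arrival_rates)
    return [effective * r / total for r in arrival_rates]

splits = green_splits([0.3, 0.1, 0.2], cycle=90, lost_time=12)
print(splits)   # ≈ [39, 13, 26] seconds of green
```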
Triangular tube proportional wire chamber system
Energy Technology Data Exchange (ETDEWEB)
Badtke, D H; Bakken, J A; Barnett, B A; Blumenfeld, B J; Chien, C Y; Madansky, L; Matthews, J A J; Pevsner, A; Spangler, W J [Johns Hopkins Univ., Baltimore, MD (USA); Lee, K L [California Univ., Berkeley (USA). Lawrence Berkeley Lab.
1981-10-15
We report on the characteristics of the proportional tube chamber system which has been constructed for muon identification in the PEP-4 experiment at SLAC. The mechanical and electrical properties of the extruded aluminum triangular tubes allow these detectors to be used as crude drift chambers.
Commanding to 'Nudge' via the Proportionality Principle?
K.P. Purnhagen (Kai); E. van Kleef (Ellen)
2017-01-01
textabstractThis piece assesses whether nudging techniques can be argued to be a less restrictive but equally effective way to regulate diets in EU law. It has been argued that nudging techniques, due to their freedom-preserving nature, might influence the proportionality test in such a way that
Power Estimation in Multivariate Analysis of Variance
Directory of Open Access Journals (Sweden)
Jean François Allaire
2007-09-01
Full Text Available Power is often overlooked in designing multivariate studies for the simple reason that it is believed to be too complicated. In this paper, it is shown that power estimation in multivariate analysis of variance (MANOVA) can be approximated using an F distribution for the three popular statistics (Hotelling-Lawley trace, Pillai-Bartlett trace, Wilks' likelihood ratio). Consequently, the same procedure as in any statistical test can be used: computation of the critical F value, computation of the noncentrality parameter (as a function of the effect size), and finally estimation of power using a noncentral F distribution. Various numerical examples are provided which help to understand and to apply the method. Problems related to post hoc power estimation are discussed.
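The final step of the procedure, power from a noncentral F distribution, looks like this with SciPy. The degrees of freedom and noncentrality below are placeholders; in practice df1, df2 and the noncentrality are derived from the chosen MANOVA statistic and the effect size.

```python
from scipy.stats import f, ncf

def approx_power(df1, df2, lam, alpha=0.05):
    """Power of an approximate F test: reject when F exceeds the critical value,
    with the statistic following a noncentral F (noncentrality lam) under H1."""
    crit = f.ppf(1.0 - alpha, df1, df2)          # critical F value
    return 1.0 - ncf.cdf(crit, df1, df2, lam)    # tail mass of the noncentral F

print(round(approx_power(df1=6, df2=40, lam=15.0), 3))
```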
Analysis of Variance in Statistical Image Processing
Kurz, Ludwik; Hafed Benteftifa, M.
1997-04-01
A key problem in practical image processing is the detection of specific features in a noisy image. Analysis of variance (ANOVA) techniques can be very effective in such situations, and this book gives a detailed account of the use of ANOVA in statistical image processing. The book begins by describing the statistical representation of images in the various ANOVA models. The authors present a number of computationally efficient algorithms and techniques to deal with such problems as line, edge, and object detection, as well as image restoration and enhancement. By describing the basic principles of these techniques, and showing their use in specific situations, the book will facilitate the design of new algorithms for particular applications. It will be of great interest to graduate students and engineers in the field of image processing and pattern recognition.
Variance Risk Premia on Stocks and Bonds
DEFF Research Database (Denmark)
Mueller, Philippe; Sabtchevsky, Petar; Vedolin, Andrea
We study equity (EVRP) and Treasury variance risk premia (TVRP) jointly and document a number of findings: First, relative to their volatility, TVRP are comparable in magnitude to EVRP. Second, while there is mild positive co-movement between EVRP and TVRP unconditionally, time series estimates of correlation display distinct spikes in both directions and have been notably volatile since the financial crisis. Third, (i) short maturity TVRP predict excess returns on short maturity bonds; (ii) long maturity TVRP and EVRP predict excess returns on long maturity bonds; and (iii) while EVRP predict equity returns for horizons up to 6 months, long maturity TVRP contain robust information for long run equity returns. Finally, exploiting the dynamics of real and nominal Treasuries we document that short maturity break-even rates are a powerful determinant of the joint dynamics of EVRP, TVRP and their co-movement.
The value of travel time variance
DEFF Research Database (Denmark)
Fosgerau, Mogens; Engelson, Leonid
2011-01-01
This paper considers the value of travel time variability under scheduling preferences that are defined in terms of linearly time-varying utility rates associated with being at the origin and at the destination. The main result is a simple expression for the value of travel time variability that does not depend on the shape of the travel time distribution. The related measure of travel time variability is the variance of travel time. These conclusions apply equally to travellers who can freely choose departure time and to travellers who use a scheduled service with fixed headway. Depending on parameters, travellers may be risk averse or risk seeking and the value of travel time may increase or decrease in the mean travel time.
Hybrid biasing approaches for global variance reduction
International Nuclear Information System (INIS)
Wu, Zeyun; Abdel-Khalik, Hany S.
2013-01-01
A new variant of Monte Carlo—deterministic (DT) hybrid variance reduction approach based on Gaussian process theory is presented for accelerating convergence of Monte Carlo simulation and compared with Forward-Weighted Consistent Adjoint Driven Importance Sampling (FW-CADIS) approach implemented in the SCALE package from Oak Ridge National Laboratory. The new approach, denoted the Gaussian process approach, treats the responses of interest as normally distributed random processes. The Gaussian process approach improves the selection of the weight windows of simulated particles by identifying a subspace that captures the dominant sources of statistical response variations. Like the FW-CADIS approach, the Gaussian process approach utilizes particle importance maps obtained from deterministic adjoint models to derive weight window biasing. In contrast to the FW-CADIS approach, the Gaussian process approach identifies the response correlations (via a covariance matrix) and employs them to reduce the computational overhead required for global variance reduction (GVR) purpose. The effective rank of the covariance matrix identifies the minimum number of uncorrelated pseudo responses, which are employed to bias simulated particles. Numerical experiments, serving as a proof of principle, are presented to compare the Gaussian process and FW-CADIS approaches in terms of the global reduction in standard deviation of the estimated responses. - Highlights: ► Hybrid Monte Carlo Deterministic Method based on Gaussian Process Model is introduced. ► Method employs deterministic model to calculate responses correlations. ► Method employs correlations to bias Monte Carlo transport. ► Method compared to FW-CADIS methodology in SCALE code. ► An order of magnitude speed up is achieved for a PWR core model.
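The covariance-rank idea, a few uncorrelated pseudo responses capturing most of the response variation, can be sketched with synthetic data (all numbers illustrative, not the SCALE/FW-CADIS comparison):

```python
import numpy as np

rng = np.random.default_rng(3)
latent = rng.normal(size=(3, 500))            # three hidden sources of variation
mix = rng.normal(size=(40, 3))                # forty observed responses
responses = mix @ latent + 0.01 * rng.normal(size=(40, 500))   # small noise

cov = np.cov(responses)                       # response covariance matrix
evals = np.linalg.eigvalsh(cov)[::-1]         # eigenvalues, largest first
rank = int((np.cumsum(evals) / evals.sum() < 0.99).sum()) + 1
print(rank)   # a handful of pseudo responses capture almost all variation
```

The effective rank plays the role described above: it bounds how many uncorrelated pseudo responses must be biased, which is where the computational saving over response-by-response global variance reduction comes from.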
Multistate cohort models with proportional transfer rates
DEFF Research Database (Denmark)
Schoen, Robert; Canudas-Romo, Vladimir
2006-01-01
We present a new, broadly applicable approach to summarizing the behavior of a cohort as it moves through a variety of statuses (or states). The approach is based on the assumption that all rates of transfer maintain a constant ratio to one another over age. We present closed-form expressions … of transfer rates. The two living state case and hierarchical multistate models with any number of living states are analyzed in detail. Applying our approach to 1997 U.S. fertility data, we find that observed rates of parity progression are roughly proportional over age. Our proportional transfer rate approach provides trajectories by parity state and facilitates analyses of the implications of changes in parity rate levels and patterns. More women complete childbearing at parity 2 than at any other parity, and parity 2 would be the modal parity in models with total fertility rates (TFRs) of 1.40 to 2…
Pulse triggering mechanism of air proportional counters
International Nuclear Information System (INIS)
Aoyama, T.; Mori, T.; Watanabe, T.
1983-01-01
This paper describes the pulse triggering mechanism of a cylindrical proportional counter filled with air at atmospheric pressure for the incidence of β-rays. Experimental results indicate that primary electrons created distantly from the anode wire by a β-ray are transformed into negative ions, which then detach electrons close to the anode wire and generate electron avalanches thus triggering pulses, while electrons created near the anode wire by a β-ray directly trigger a pulse. Since a negative ion pulse is triggered by a single electron detached from a negative ion, multiple pulses are generated by a large number of ions produced by the incidence of a single β-ray. It is therefore necessary not to count pulses triggered by negative ions but to count those by primary electrons alone when use is made of air proportional counters for the detection of β-rays. (orig.)
Very large area multiwire spectroscopic proportional counters
International Nuclear Information System (INIS)
Ubertini, P.; Bazzano, A.; Boccaccini, L.; Mastropietro, M.; La Padula, C.D.; Patriarca, R.; Polcaro, V.F.
1981-01-01
As a result of a five year development program, a final prototype of a Very Large Area Spectroscopic Proportional Counter (VLASPC), to be employed in space borne payloads, was produced at the Istituto di Astrofisica Spaziale, Frascati. The instrument is the last version of a new generation of Multiwire Spectroscopic Proportional Counters (MWSPC) successfully employed in many balloon borne flights devoted to hard X-ray astronomy. The sensitive area of this standard unit is 2700 cm² with an efficiency higher than 10% in the range 15-180 keV (80% at 60 keV). The low cost and weight make this new type of VLASPC competitive with NaI arrays, phoswich and GSPC detectors in terms of achievable scientific results. (orig.)
Very large area multiwire spectroscopic proportional counters
Energy Technology Data Exchange (ETDEWEB)
Ubertini, P.; Bazzano, A.; Boccaccini, L.; Mastropietro, M.; La Padula, C.D.; Patriarca, R.; Polcaro, V.F. (Istituto di Astrofisica Spaziale, Frascati (Italy))
1981-07-01
As a result of a five year development program, a final prototype of a Very Large Area Spectroscopic Proportional Counter (VLASPC), to be employed in space borne payloads, was produced at the Istituto di Astrofisica Spaziale, Frascati. The instrument is the last version of a new generation of Multiwire Spectroscopic Proportional Counters (MWSPC) successfully employed in many balloon borne flights devoted to hard X-ray astronomy. The sensitive area of this standard unit is 2700 cm² with an efficiency higher than 10% in the range 15-180 keV (80% at 60 keV). The low cost and weight make this new type of VLASPC competitive with NaI arrays, phoswich and GSPC detectors in terms of achievable scientific results.
Joint Adaptive Mean-Variance Regularization and Variance Stabilization of High Dimensional Data.
Dazard, Jean-Eudes; Rao, J Sunil
2012-07-01
The paper addresses a common problem in the analysis of high-dimensional high-throughput "omics" data, which is parameter estimation across multiple variables in a set of data where the number of variables is much larger than the sample size. Among the problems posed by this type of data are that variable-specific estimators of variances are not reliable and variable-wise test statistics have low power, both due to a lack of degrees of freedom. In addition, it has been observed in this type of data that the variance increases as a function of the mean. We introduce a non-parametric adaptive regularization procedure that is innovative in that: (i) it employs a novel "similarity statistic"-based clustering technique to generate local-pooled or regularized shrinkage estimators of population parameters; (ii) the regularization is done jointly on population moments, benefiting from C. Stein's result on inadmissibility, which implies that the usual sample variance estimator is improved by a shrinkage estimator using information contained in the sample mean. From these joint regularized shrinkage estimators, we derive regularized t-like statistics and show in simulation studies that they offer more statistical power in hypothesis testing than their standard sample counterparts, or regular common value-shrinkage estimators, or when the information contained in the sample mean is simply ignored. Finally, we show that these estimators feature interesting properties of variance stabilization and normalization that can be used for preprocessing high-dimensional multivariate data. The method is available as an R package, called 'MVR' ('Mean-Variance Regularization'), downloadable from the CRAN website.
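The core benefit of pooling information across variables can be seen in a toy shrinkage sketch. The fixed 50/50 weight toward the pooled variance is a deliberately crude stand-in for MVR's adaptive, cluster-based regularization.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(scale=2.0, size=(1000, 6))   # 1000 variables, 6 samples each
s2 = data.var(axis=1, ddof=1)                  # noisy per-variable variances (5 df)

w = 0.5                                        # fixed weight: a crude stand-in
s2_shrunk = w * s2.mean() + (1 - w) * s2       # pool toward the common variance

mse_raw = ((s2 - 4.0) ** 2).mean()             # true variance is 4.0 everywhere
mse_shrunk = ((s2_shrunk - 4.0) ** 2).mean()
print(mse_raw, mse_shrunk)                     # shrinkage cuts the error sharply
```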
76 FR 78698 - Proposed Revocation of Permanent Variances
2011-12-19
... Administration (``OSHA'' or ``the Agency'') granted permanent variances to 24 companies engaged in the... DEPARTMENT OF LABOR Occupational Safety and Health Administration [Docket No. OSHA-2011-0054] Proposed Revocation of Permanent Variances AGENCY: Occupational Safety and Health Administration (OSHA...
Variance components and genetic parameters for live weight
African Journals Online (AJOL)
admin
Against this background the present study estimated the (co)variance .... Starting values for the (co)variance components of two-trait models were ..... Estimates of genetic parameters for weaning weight of beef accounting for direct-maternal.
Cylindrical geometry for proportional and drift chambers
International Nuclear Information System (INIS)
Sadoulet, B.
1975-06-01
For experiments performed around storage rings such as e+e− rings or the ISR pp rings, cylindrical wire chambers are very attractive. They surround the beam pipe completely, without any dead region in azimuth, and fit well with the geometry of events where particles are produced more or less spherically. Unfortunately, cylindrical proportional or drift chambers are difficult to make. The problems are reviewed and two approaches to fabricating the cathodes are discussed. (WHK)
2π gas-flow proportional detector
International Nuclear Information System (INIS)
Guevara, E.A.; Costello, E.D.; Di Carlo, R.O.
1986-01-01
A counting system has been developed in order to measure carbon-14 samples obtained in the course of a study of a plasmapheresis treatment for diabetic children. The system is based on the use of a 2π gas-flow proportional detector especially designed for the stated purpose. The detector is described and experimental results are given, determining the characteristic parameters which set up the working conditions. (Author)
General methods for analyzing bounded proportion data
Hossain, Abu
2017-01-01
This thesis introduces two general classes of models for analyzing a proportion response variable Y that can take values between zero and one, inclusive of zero and/or one. The models are the inflated GAMLSS model and the generalized Tobit GAMLSS model. The inflated GAMLSS model extends the flexibility of beta inflated models by allowing the distribution on (0,1) of the continuous component of the dependent variable to come from any explicit or transformed (i.e. logit or truncated...
Commanding to 'Nudge' via the Proportionality Principle?
Purnhagen, Kai; van Kleef, Ellen
2017-01-01
This piece assesses whether nudging techniques can be argued to be a less restrictive but equally effective way to regulate diets in EU law. It has been argued that nudging techniques, due to their freedom-preserving nature, might influence the proportionality test in such a way that authorities need to give preference to nudging techniques over content-related or information regulation. We will illustrate on the example of EU food law how behavioural sciences have first altered t...
Contingency proportion systematically influences contingency learning.
Forrin, Noah D; MacLeod, Colin M
2018-01-01
In the color-word contingency learning paradigm, each word appears more often in one color (high contingency) than in the other colors (low contingency). Shortly after beginning the task, color identification responses become faster on the high-contingency trials than on the low-contingency trials: the contingency learning effect. Across five groups, we varied the high-contingency proportion in 10% steps, from 80% to 40%. The size of the contingency learning effect was positively related to high-contingency proportion, with the effect disappearing when high contingency was reduced to 40%. At the two highest contingency proportions, the magnitude of the effect increased over trials, the pattern suggesting that there was an increasing cost for the low-contingency trials rather than an increasing benefit for the high-contingency trials. Overall, the results fit a modified version of Schmidt's (2013, Acta Psychologica, 142, 119-126) parallel episodic processing account in which prior trial instances are routinely retrieved from memory and influence current trial performance.
The Distribution of the Sample Minimum-Variance Frontier
Raymond Kan; Daniel R. Smith
2008-01-01
In this paper, we present a finite sample analysis of the sample minimum-variance frontier under the assumption that the returns are independent and multivariate normally distributed. We show that the sample minimum-variance frontier is a highly biased estimator of the population frontier, and we propose an improved estimator of the population frontier. In addition, we provide the exact distribution of the out-of-sample mean and variance of sample minimum-variance portfolios. This allows us t...
Dynamics of Variance Risk Premia, Investors' Sentiment and Return Predictability
DEFF Research Database (Denmark)
Rombouts, Jerome V.K.; Stentoft, Lars; Violante, Francesco
We develop a joint framework linking the physical variance and its risk neutral expectation implying variance risk premia that are persistent, appropriately reacting to changes in level and variability of the variance and naturally satisfying the sign constraint. Using option market data and real...... events and only marginally by the premium associated with normal price fluctuations....
Poplová, Michaela; Sovka, Pavel; Cifra, Michal
2017-01-01
Photonic signals are broadly exploited in communication and sensing and they typically exhibit Poisson-like statistics. In a common scenario where the intensity of the photonic signals is low and one needs to remove a nonstationary trend of the signals for any further analysis, one faces an obstacle: due to the dependence between the mean and variance typical for a Poisson-like process, information about the trend remains in the variance even after the trend has been subtracted, possibly yielding artifactual results in further analyses. Commonly available detrending or normalizing methods cannot cope with this issue. To alleviate this issue we developed a suitable pre-processing method for the signals that originate from a Poisson-like process. In this paper, a Poisson pre-processing method for nonstationary time series with Poisson distribution is developed and tested on computer-generated model data and experimental data of chemiluminescence from human neutrophils and mung seeds. The presented method transforms a nonstationary Poisson signal into a stationary signal with a Poisson distribution while preserving the type of photocount distribution and phase-space structure of the signal. The importance of the suggested pre-processing method is shown in Fano factor and Hurst exponent analysis of both computer-generated model signals and experimental photonic signals. It is demonstrated that our pre-processing method is superior to standard detrending-based methods whenever further signal analysis is sensitive to variance of the signal.
Estimation of the false discovery proportion with unknown dependence.
Fan, Jianqing; Han, Xu
2017-09-01
Large-scale multiple testing with correlated test statistics arises frequently in many areas of scientific research. Incorporating correlation information in approximating the false discovery proportion has attracted increasing attention in recent years. When the covariance matrix of the test statistics is known, Fan, Han & Gu (2012) provided an accurate approximation of the False Discovery Proportion (FDP) under arbitrary dependence structure and some sparsity assumption. However, the covariance matrix is often unknown in many applications, and such dependence information has to be estimated before approximating the FDP. The estimation accuracy can greatly affect the FDP approximation. In the current paper, we aim to theoretically study the impact of unknown dependence on the testing procedure and establish a general framework such that the FDP can be well approximated. Unknown dependence affects the approximation of the FDP in two major ways: through the estimation of eigenvalues/eigenvectors and through the estimation of marginal variances. To address the challenges in these two aspects, we first develop general requirements on estimates of eigenvalues and eigenvectors for a good approximation of the FDP. We then give conditions on the structures of covariance matrices that satisfy such requirements. Such dependence structures include banded/sparse covariance matrices and (conditionally) sparse precision matrices. Within this framework, we also consider a special example to illustrate our method where data are sampled from an approximate factor model, which encompasses most practical situations. We provide a good approximation of the FDP by exploiting this specific dependence structure. The results are further generalized to the situation where the multivariate normality assumption is relaxed. Our results are demonstrated by simulation studies and some real data applications.
Designing an optimally proportional inorganic scintillator
Energy Technology Data Exchange (ETDEWEB)
Singh, Jai, E-mail: jai.singh@cdu.edu.au [School of Engineering and IT, B-Purple-12, Faculty of EHSE, Charles Darwin University, NT 0909 (Australia); Koblov, Alexander [School of Engineering and IT, B-Purple-12, Faculty of EHSE, Charles Darwin University, NT 0909 (Australia)
2012-09-01
The nonproportionality observed in the light yield of inorganic scintillators is studied theoretically as a function of the rates of bimolecular and Auger quenching processes occurring within the electron track initiated by a gamma- or X-ray photon incident on a scintillator. Assuming a cylindrical track, the influence of the track radius and concentration of excitations created within the track on the scintillator light yield is also studied. Analysing the calculated light yield a guideline for inventing an optimally proportional scintillator with optimal energy resolution is presented.
Rate dependent image distortions in proportional counters
International Nuclear Information System (INIS)
Trow, M.W.; Bento, A.C.; Smith, A.
1994-01-01
The positional linearity of imaging proportional counters is affected by the intensity distribution of the incident radiation. A mechanism for this effect is described, in which drifting positive ions in the gas produce a distorting electric field which perturbs the trajectories of the primary electrons. In certain cases, the phenomenon causes an apparent improvement of the position resolution. We demonstrate the effect in a detector filled with a xenon-argon-CO 2 mixture. The images obtained are compared with the results of a simulation. If quantitative predictions for a particular detector are required, accurate values of the absolute detector gain, ion mobility and electron drift velocity are needed. ((orig.))
Hydrogen high pressure proportional drift detector
International Nuclear Information System (INIS)
Arefiev, A.; Balaev, A.
1983-01-01
The design and operating performance of a proportional drift detector (PDD) are described. The high sensitivity of the PDD makes it possible to detect neutron-proton elastic scattering at recoil-proton energies as low as 1 keV. The PDD is filled with hydrogen to a pressure of 40 bar. High purity of the gas is maintained by a continuously operating purification system. The detector has been operating for several years in a neutron beam at the North Area of the CERN SPS
Obesity: A Venusian story of Paleolithic proportions
Directory of Open Access Journals (Sweden)
Krishna G Seshadri
2012-01-01
Full Text Available Art through the ages has been a marker of societal trends and fashion. Obesity is proscribed by physicians and almost reviled by today's society. While Venus (Aphrodite) continues to be the role model for those who aspire to free themselves from the clutches of obesity, Paleolithic humans had a different view of the perfect female form. Whether the Venus of Willendorf was a fashion symbol will never be answered, but she remains testimony to the fact that obesity has been with us for several millennia.
International Nuclear Information System (INIS)
Smith, M.; Jones, D.R.
1991-01-01
The goal of exploration is to find reserves that will earn an adequate rate of return on the capital invested. Neither exploration nor economics is an exact science. The authors must therefore explore in those trends (plays) that have the highest probability of achieving this goal. Trend analysis is a technique for organizing the available data to make these strategic exploration decisions objectively, in conformance with their goals and risk attitudes. Trend analysis differs from resource estimation in its purpose: it seeks to determine the probability of economic success for an exploration program, not the ultimate results of the total industry effort. Thus the recent past is assumed to be the best estimate of the exploration probabilities for the near future. This information is combined with economic forecasts. The computer software tools necessary for trend analysis are: (1) information database - requirements and sources; (2) data conditioning program - assignment to trends, correction of errors, and conversion into usable form; (3) statistical processing program - calculation of the probability of success and the discovery-size probability distribution; (4) analytical processing - Monte Carlo simulation to develop the probability distribution of the economic return/investment ratio for a trend. Limited-capital (short-run) effects are analyzed using the Gambler's Ruin concept in the Monte Carlo simulation and by a short-cut method. Multiple trend analysis is concerned with comparing and ranking trends, allocating funds among acceptable trends, and characterizing program risk by using risk profiles. In summary, trend analysis is a reality check for long-range exploration planning
Viking-Age Sails: Form and Proportion
Bischoff, Vibeke
2017-04-01
Archaeological ship-finds have shed much light on the design and construction of vessels from the Viking Age. However, the exact proportions of their sails remain unknown due to the lack of fully preserved sails, or other definite indicators of their proportions. Key Viking-Age ship-finds from Scandinavia—the Oseberg Ship, the Gokstad Ship and Skuldelev 3—have all revealed traces of rigging. In all three finds, the keelson—with the mast position—is preserved, together with fastenings for the sheets and the tack, indicating the breadth of the sail. The sail area can then be estimated based on practical experience of how large a sail the specific ship can carry, in conjunction with hull form and displacement. This article presents reconstructions of the form and dimensions of rigging and sail based on the archaeological finds, evidence from iconographic and written sources, and ethnographic parallels with traditional Nordic boats. When these sources are analysed, not only do the similarities become apparent, but so too does the relative disparity between the archaeological record and the other sources. Preferential selection in terms of which source is given the greatest merit is therefore required, as it is not possible to afford them all equal value.
Amplifier Design for Proportional Ionization Chambers
Energy Technology Data Exchange (ETDEWEB)
Baker, W. H.
1950-08-24
This paper presents the requirements of a nuclear amplifier of short resolving time, designed to accept pulses of widely varying amplitudes. Data are given which show that a proportional ionization chamber loaded with a 1,000-ohm resistor develops pulses of 0.5 microsecond duration and several volts amplitude. Results indicate that seven basic requirements are imposed on the amplifier when counting soft beta and gamma radiation in the presence of alpha particles, without absorbers. It should (1) have a fast recovery time, (2) have a relatively good low-frequency response, (3) accept pulses of widely varying heights without developing spurious pulses, (4) have a limiting output stage, (5) preserve the inherently short rise time of the chamber, (6) minimize pulse integration, and (7) have sufficient gain to detect the weak pulses well below the chamber voltage at which continuous discharge takes place. The results obtained with an amplifier which meets these requirements are described. A formula is derived which indicates that redesign of the proportional ionization chamber might eliminate the need for an amplifier. This may be possible if the radioactive particles are collimated parallel to the collecting electrode.
Divine proportions in attractive and nonattractive faces.
Pancherz, Hans; Knapp, Verena; Erbe, Christina; Heiss, Anja Melina
2010-01-01
To test Ricketts' 1982 hypothesis that facial beauty is measurable, by comparing attractive and nonattractive faces of females and males with respect to the presence of the divine proportions. The analysis of frontal-view facial photos of 90 cover models (50 females, 40 males) from famous fashion magazines and of 34 attractive (29 females, five males) and 34 nonattractive (13 females, 21 males) persons selected from a group of former orthodontic patients was carried out in this study. Based on Ricketts' method, five transverse and seven vertical facial reference distances were measured and compared with the corresponding calculated divine distances expressed in phi-relationships (φ = 1.618). Furthermore, transverse and vertical facial disproportion indices were created. For both the models and the patients, all the reference distances deviated considerably from the respective divine values. The average deviations ranged from 0.3% to 7.8% in the female groups of models and attractive patients, with no difference between them. In the male groups of models and attractive patients, the average deviations ranged from 0.2% to 11.2%. When comparing attractive and nonattractive female, as well as male, patients, deviations from the divine values for all variables were larger in the nonattractive sample. Attractive individuals have facial proportions closer to the divine values than nonattractive ones. In accordance with the hypothesis of Ricketts, facial beauty is measurable to some degree. COPYRIGHT © 2009 BY QUINTESSENCE PUBLISHING CO, INC.
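Ricketts-style comparisons reduce to simple arithmetic: multiply a reference distance by φ and report the percent deviation of the measured distance from that prediction. A toy sketch with hypothetical measurements:

```python
import math

PHI = (1 + math.sqrt(5)) / 2  # the "divine proportion", ~1.618

def deviation_from_divine(reference, measured):
    # Percent deviation of a measured facial distance from the "divine"
    # value predicted by multiplying a reference distance by phi.
    # (Illustrative arithmetic only; the measurements are hypothetical.)
    divine = reference * PHI
    return 100.0 * abs(measured - divine) / divine

# Hypothetical example: a 36 mm reference distance predicts a "divine"
# companion distance of 36 * phi = 58.2 mm; a measured 60 mm deviates by ~3%.
print(round(deviation_from_divine(36.0, 60.0), 1))
```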
Kalman-predictive-proportional-integral-derivative (KPPID)
International Nuclear Information System (INIS)
Fluerasu, A.; Sutton, M.
2004-01-01
With third-generation synchrotron X-ray sources, it is possible to acquire detailed structural information about the system under study with time resolution orders of magnitude faster than was possible a few years ago. These advances have generated many new challenges for changing and controlling the state of the system on very short time scales, in a uniform and controlled manner. For our particular X-ray experiments on crystallization or order-disorder phase transitions in metallic alloys, we need to change the sample temperature by hundreds of degrees as fast as possible while avoiding over- or undershooting. To achieve this, we designed and implemented a computer-controlled temperature-tracking system which combines standard Proportional-Integral-Derivative (PID) feedback, thermal modeling and finite-difference thermal calculations (feedforward), and Kalman filtering of the temperature readings to reduce the noise. The resulting Kalman-Predictive-Proportional-Integral-Derivative (KPPID) algorithm allows us to obtain accurate control, to minimize the response time, and to avoid over- or undershooting, even in systems with inherently noisy temperature readings and time delays. The KPPID temperature controller was successfully implemented at the Advanced Photon Source at Argonne National Laboratory and was used to perform coherent and time-resolved X-ray diffraction experiments.
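The structure of such a controller can be sketched in miniature (hypothetical gains and noise levels, a first-order thermal plant, and a scalar Kalman filter; an illustration of the idea, not the published KPPID implementation):

```python
import random

# Plant model: first-order lag, dT/dt = (u - T) / tau. A scalar Kalman
# filter denoises the temperature readings and a PID loop acts on the
# filtered estimate. All parameter values are illustrative.

def run_kppid(setpoint=100.0, tau=5.0, dt=0.1, steps=1000, seed=1):
    rng = random.Random(seed)
    kp, ki, kd = 2.0, 0.5, 0.1          # PID gains (hypothetical)
    q, r = 0.01, 0.25                   # Kalman process / measurement noise
    T = 0.0                             # true plant temperature
    x, p = 0.0, 1.0                     # Kalman state estimate and variance
    integ, prev_err = 0.0, setpoint
    u = 0.0
    for _ in range(steps):
        # Kalman predict, using the same plant model as the feedforward.
        x += dt * (u - x) / tau
        p += q
        # Noisy measurement and Kalman update.
        z = T + rng.gauss(0.0, 0.5)
        k = p / (p + r)
        x += k * (z - x)
        p *= (1.0 - k)
        # PID acting on the *filtered* estimate, not the raw reading.
        err = setpoint - x
        integ += err * dt
        deriv = (err - prev_err) / dt
        prev_err = err
        u = kp * err + ki * integ + kd * deriv
        # Advance the true plant.
        T += dt * (u - T) / tau
    return T, x

T, x = run_kppid()
print(round(T, 1), round(x, 1))  # both settle near the 100-degree setpoint
```

Because the filter reuses the plant model already needed for the feedforward prediction, the PID loop sees a denoised estimate, which is what lets the derivative term be used at all with noisy sensors.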
Proportional-Integral-Resonant AC Current Controller
Directory of Open Access Journals (Sweden)
STOJIC, D.
2017-02-01
Full Text Available In this paper an improved stationary-frame AC current controller based on the proportional-integral-resonant (PIR) control action is proposed. Namely, a novel two-parameter PIR controller is applied to stationary-frame AC current control, accompanied by the corresponding parameter-tuning procedure. In this way, the proportional-resonant (PR) controller, common in stationary-frame AC current control, is extended by the integral (I) action in order to enable tracking of the DC component of the AC current and to compensate the DC disturbance caused by voltage source inverter (VSI) nonidealities and by nonlinear loads. The proposed parameter-tuning procedure is based on the three-phase back-EMF-type load, which corresponds to a wide range of AC power converter applications, such as AC motor drives, uninterruptible power supplies, and active filters. While PIR controllers commonly have three parameters, the novel controller has two. Also, the provided parameter-tuning procedure needs only one parameter to be tuned in relation to the load and power converter model parameters, since the second controller parameter is derived directly from the required controller bandwidth. The dynamic performance of the proposed controller is verified by means of simulation and experimental runs.
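The defining feature of the resonant term is its unbounded gain at the resonant frequency, which is what forces zero steady-state error for sinusoidal references. This can be checked numerically on an assumed textbook PIR transfer function C(s) = Kp + Ki/s + Kr*s/(s² + ω0²) (all parameter values hypothetical, not the tuning of the paper):

```python
import math

def pir_gain(omega, kp=1.0, ki=100.0, kr=200.0, omega0=2 * math.pi * 50):
    # Magnitude of an illustrative PIR transfer function
    #   C(s) = Kp + Ki/s + Kr*s/(s^2 + omega0^2)
    # evaluated at s = j*omega. Parameter values are hypothetical.
    s = complex(0.0, omega)
    c = kp + ki / s + kr * s / (s * s + omega0 ** 2)
    return abs(c)

w0 = 2 * math.pi * 50              # 50 Hz resonant frequency
near = pir_gain(w0 * (1 + 1e-6))   # just off resonance: enormous gain
far = pir_gain(2 * w0)             # an octave above: modest gain
print(near > 1e3 * far)            # near-infinite gain at omega0 is what
                                   # drives 50 Hz tracking error to zero
```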
Gene set analysis using variance component tests.
Huang, Yen-Tsung; Lin, Xihong
2013-06-28
Gene set analyses have become increasingly important in genomic research, as many complex diseases are caused jointly by alterations of numerous genes. Genes often coordinate together as a functional repertoire, e.g., a biological pathway/network, and are highly correlated. However, most of the existing gene set analysis methods do not fully account for the correlation among the genes. Here we propose to tackle this important feature of a gene set to improve statistical power in gene set analyses. We propose to model the effects of an independent variable, e.g., exposure/biological status (yes/no), on multiple gene expression values in a gene set using a multivariate linear regression model, where the correlation among the genes is explicitly modeled using a working covariance matrix. We develop TEGS (Test for the Effect of a Gene Set), a variance component test for the gene set effects by assuming a common distribution for regression coefficients in multivariate linear regression models, and calculate the p-values using permutation and a scaled chi-square approximation. We show using simulations that type I error is protected under different choices of working covariance matrices and power is improved as the working covariance approaches the true covariance. The global test is a special case of TEGS when correlation among genes in a gene set is ignored. Using both simulation data and a published diabetes dataset, we show that our test outperforms the commonly used approaches, the global test and gene set enrichment analysis (GSEA). In summary, we develop a gene set analysis method (TEGS) under the multivariate regression framework, which directly models the interdependence of the expression values in a gene set using a working covariance. TEGS outperforms two widely used methods, GSEA and the global test, in both simulations and a diabetes microarray dataset.
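A stripped-down cousin of such tests, using a plain sum-of-squared-mean-differences statistic and permutation p-values (not the TEGS variance-component statistic, which additionally models the gene-gene working covariance), can be sketched as:

```python
import random

def gene_set_stat(data, labels):
    # Sum over genes of squared group-mean differences: a simple gene-set
    # association statistic (illustrative only; TEGS uses a variance
    # component score statistic with a working covariance matrix).
    stat = 0.0
    for gene in data:
        g0 = [x for x, l in zip(gene, labels) if l == 0]
        g1 = [x for x, l in zip(gene, labels) if l == 1]
        stat += (sum(g1) / len(g1) - sum(g0) / len(g0)) ** 2
    return stat

def permutation_pvalue(data, labels, n_perm=199, seed=7):
    rng = random.Random(seed)
    observed = gene_set_stat(data, labels)
    hits = 0
    for _ in range(n_perm):
        perm = labels[:]
        rng.shuffle(perm)  # break the gene/status association
        if gene_set_stat(data, perm) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)

rng = random.Random(0)
labels = [0] * 20 + [1] * 20
# 10 genes in the set; the exposed group (label 1) is shifted by 2 SD.
data = [[rng.gauss(2.0 if l == 1 else 0.0, 1.0) for l in labels]
        for _ in range(10)]
print(permutation_pvalue(data, labels))  # small p-value: the set is associated
```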
BCE selector valves and flow proportional sampler
International Nuclear Information System (INIS)
Rippy, G.L.
1994-01-01
This Acceptance Test Procedure (ATP) has been prepared to demonstrate that the Electrical/Instrumentation systems for the B-Plant Process Condensate Treatment Facility (BCE) function as required by project criteria. Tests will be run to: verify the operation of the solenoid valve and associated limit switches installed for the BCE portion of W-007H; operate the solenoid valve and verify the proper operation of the associated limit switches based on the position of the solenoid valve; and demonstrate the integrity of the Sample Failure Alarm Relay XFA-211BA-BCE-1 and Power Failure Alarm Relay JFA-211BA-BCE-1 located inside the Flow Proportional Sampler in Building 211 BA
Absolute calibration of TFTR helium proportional counters
International Nuclear Information System (INIS)
Strachan, J.D.; Diesso, M.; Jassby, D.; Johnson, L.; McCauley, S.; Munsat, T.; Roquemore, A.L.; Loughlin, M.
1995-06-01
The TFTR helium proportional counters are located in the central five (5) channels of the TFTR multichannel neutron collimator. These detectors were absolutely calibrated using a 14 MeV neutron generator positioned at the horizontal midplane of the TFTR vacuum vessel. The neutron generator position was scanned in centimeter steps to determine the collimator aperture width to 14 MeV neutrons and the absolute sensitivity of each channel. Neutron profiles were measured for TFTR plasmas with time resolution between 5 msec and 50 msec depending upon count rates. The He detectors were used to measure the burnup of 1 MeV tritons in deuterium plasmas, the transport of tritium in trace tritium experiments, and the residual tritium levels in plasmas following 50:50 DT experiments
CWRU multiwire proportional counter readout system
International Nuclear Information System (INIS)
Bevington, P.R.; Leskovec, R.A.
1977-01-01
An electronic system is described which translates pulses from individual wires of multiwire proportional counters into binary addresses indicating the location of the wires in the chambers. The system combines a fast (<100 ns) serial scan of an event buffer with parallel encoding to provide fast transfer of addresses (250 ns per hit). The buffer has provision for disabling the input less than 40 ns after detection of an event to suppress recording of multiple hits caused by individual events. The encoder can digitize the address of every hit encountered or just the first addresses of contiguous hits. The system includes a coincidence trigger for determining whether timing criteria have been satisfied between chambers and with external devices. Events which do not meet the coincidence criteria are typically reset within 400 ns. The addresses are transferred to a computer interface through CAMAC modules. Multiple buffering permits further data acquisition during CAMAC transfer cycles. (Auth.)
Fabrication of preamplifier for proportional counter
International Nuclear Information System (INIS)
Lotfi, Y.; Yazdanpanah, M.; Talebi, B.; Mohammadi, A.; Etaati, Gh.
2002-01-01
We describe techniques for fabricating a preamplifier for a proportional counter. The preamplifier circuit was first analyzed with OrCAD 9.1 and then assembled. The essential parameters of the preamplifier were then measured according to the IEEE standard method (IEEE Std 301-1988) and compared with a foreign-made unit. The specifications of our preamplifier are: 1. Rise time of output pulse: 25 ns. 2. Fall time of output pulse: 50 μs. 3. Charge sensitivity: 46.3 mV/pC. 4. Average noise: 500 ion pairs (rms). 5. Count rate limit: 9.14×10¹⁰ counts/s. 6. Resolution: 1.3%. 7. The spectrum of a BF3 detector exposed to a 300 μCi Am-Be source is the same for this preamplifier as for the foreign-made one. On the whole, comparison with the foreign-made preamplifier shows that their parameters agree to about 95%
Tables of Confidence Limits for Proportions
1990-09-01
[Tabulated upper confidence limits for proportions at various sample sizes and confidence levels; the table values are not reproduced here.]
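Tables of this kind can be approximated in code. The sketch below uses the closed-form Wilson score interval, which approximates but does not exactly reproduce tabulated exact (Clopper-Pearson) limits:

```python
import math

def wilson_interval(successes, n, z=1.959964):
    # Wilson score confidence interval for a binomial proportion.
    # z = 1.96 gives ~95% confidence. This is a closed-form approximation,
    # not the exact Clopper-Pearson limits such tables typically contain.
    p = successes / n
    z2 = z * z
    denom = 1 + z2 / n
    center = (p + z2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z2 / (4 * n * n)) / denom
    return center - half, center + half

lo, hi = wilson_interval(50, 100)
print(round(lo, 3), round(hi, 3))  # ~ (0.404, 0.596)
```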
Proportional chamber with data analog output
International Nuclear Information System (INIS)
Popov, V.E.; Prokof'ev, A.N.
1977-01-01
A proportional multiwire chamber is described. The chamber makes it possible to determine the angles at which a pion strikes a polarized target. A delay line, made of 60-core flat cable, is used for reading signals out of the chamber. From the delay line, signals are amplified and successively injected into shapers and a time-to-amplitude converter. The amplitude of the time-to-amplitude converter output signal unambiguously determines the coordinate of the point at which a particle strikes the chamber plane. Circuits of the amplifiers are also given; each consists of a preamplifier with a gain of 30 and a main amplifier with adjustable gain. Data from testing the chamber with the 450 MeV pion beam are presented. The chamber has an efficiency of about 98 per cent under a load of 2×10⁵ s⁻¹
Proportional representation apportionment methods and their applications
Pukelsheim, Friedrich
2017-01-01
The book offers an in-depth study of the translation of vote counts into seat numbers in proportional representation systems – an approach guided by practical needs. It also provides plenty of empirical instances illustrating the results. It analyzes in detail the 2014 elections to the European Parliament in the 28 member states, as well as the 2009 and 2013 elections to the German Bundestag. This second edition is a complete revision and expanded version of the first edition published in 2014, and many empirical election results that serve as examples have been updated. Further, a final chapter has been added assembling biographical sketches and authoritative quotes from individuals who pioneered the development of apportionment methodology. The mathematical exposition and the interrelations with political science and constitutional jurisprudence make this an apt resource for interdisciplinary courses and seminars on electoral systems and apportionment methods.
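The highest-averages divisor methods treated in such work can be stated very compactly. A sketch of the D'Hondt rule with hypothetical vote counts:

```python
def dhondt(votes, seats):
    # D'Hondt (Jefferson) highest-averages apportionment: repeatedly award
    # the next seat to the party with the largest quotient v / (s + 1),
    # where s is the number of seats the party already holds.
    alloc = {party: 0 for party in votes}
    for _ in range(seats):
        winner = max(votes, key=lambda p: votes[p] / (alloc[p] + 1))
        alloc[winner] += 1
    return alloc

# Hypothetical vote counts for a 7-seat district.
votes = {"A": 340_000, "B": 280_000, "C": 160_000, "D": 60_000}
print(dhondt(votes, 7))  # {'A': 3, 'B': 3, 'C': 1, 'D': 0}
```

D'Hondt is only one member of the divisor family the book analyzes; other rounding rules (Sainte-Laguë, for example) follow from replacing the divisor sequence s + 1 with 2s + 1.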
Quality measurement by proportional counter with B
International Nuclear Information System (INIS)
Onizuka, Yoshihiko; Endo, Satoru; Tanaka, Kenichi
2005-01-01
Dosimetry in air and in a tissue-equivalent phantom made of acrylic was carried out with a tissue-equivalent proportional counter (TEPC) and with a TEPC whose wall contains boron, and the results were compared. The variation of radiation quality with distance from the beam centre was characterized by the frequency-mean lineal energy y_F and the dose-mean lineal energy y_D as quality indicators. Both y_F and y_D were larger for the tissue-equivalent phantom than for air, but no very large change was observed at any distance. The dose rate was determined from y_D, the number of events and the measurement time. The change in dose rate was larger than the change in quality. The maximum dose rate due to γ-rays and the neutron beam occurred at a point 2 cm from the centre. (S.Y.)
Proportional counter system for radiation measurement
Energy Technology Data Exchange (ETDEWEB)
Sugimoto, M; Okudera, S
1970-11-21
A gas such as Xe or Kr employed in counter tubes is charged into the counter tube of a gas-flow proportional counter for radiation measurement and into a vessel having a volume larger than that of the counter tube. The vessel communicates with the counter tube so that the gas circulates via a pump through both the vessel and the tube during measurement. An organic film such as a polyester synthetic resin film is used for the window of the counter tube to measure X-rays in the long-wavelength range. Accordingly, a wide range of X-rays, including both long and short wavelength ranges, can be measured with only one counter tube, thus permitting the gases employed to be used effectively.
Count rate effect in proportional counters
International Nuclear Information System (INIS)
Bednarek, B.
1980-01-01
A new concept is presented to explain the changes in the spectrometric parameters of proportional counters that occur with varying count rate. Its basic feature is that the gas gain of the counter remains constant over a wide range of count rates, and that the observed decrease in pulse amplitude and deterioration of the energy resolution result from changes in the shape of the original current pulses generated in the active volume of the counter. In order to confirm the validity of this statement, measurements of the gas amplification factor have been made over a wide count rate range. It is shown that above a certain critical value the gas gain depends on both the operating voltage and the count rate. (author)
Regional sensitivity analysis using revised mean and variance ratio functions
International Nuclear Information System (INIS)
Wei, Pengfei; Lu, Zhenzhou; Ruan, Wenbin; Song, Jingwen
2014-01-01
The variance ratio function, derived from the contribution to sample variance (CSV) plot, is a regional sensitivity index. It studies how much the output deviates from the original mean of the model output when the distribution range of one input is reduced, and measures the contribution of different distribution ranges of each input to the variance of the model output. In this paper, revised mean and variance ratio functions are developed for quantifying the actual change of the model output mean and variance, respectively, when the range of one input is reduced. The connection between the revised variance ratio function and the original one is derived and discussed. It is shown that, compared with the classical variance ratio function, the revised one is more suitable for evaluating model output variance due to reduced ranges of model inputs. A Monte Carlo procedure, which requires only a single set of samples, is developed for efficiently computing the revised mean and variance ratio functions. The revised mean and variance ratio functions are compared with the classical ones using the Ishigami function. Finally, they are applied to a planar 10-bar structure
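As a rough sketch of the single-sample idea (not the authors' exact estimator), ratio functions can be approximated by conditioning one input of the Ishigami benchmark on a reduced quantile range and comparing the restricted sample mean and variance against the full-sample ones. The quantile range and sample size below are illustrative assumptions:

```python
import numpy as np

def ishigami(x, a=7.0, b=0.1):
    # Ishigami test function, a standard sensitivity-analysis benchmark
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1])**2 + b * x[:, 2]**4 * np.sin(x[:, 0])

rng = np.random.default_rng(0)
n = 100_000
x = rng.uniform(-np.pi, np.pi, size=(n, 3))
y = ishigami(x)  # one set of samples, reused for every input and range

def ratio_functions(y, xi, q_lo, q_hi):
    """Mean and variance ratios when input xi is restricted to its
    [q_lo, q_hi] quantile range (conditioning on the same sample set)."""
    lo, hi = np.quantile(xi, [q_lo, q_hi])
    mask = (xi >= lo) & (xi <= hi)
    mean_ratio = y[mask].mean() / y.mean()
    var_ratio = y[mask].var() / y.var()
    return mean_ratio, var_ratio

for i in range(3):
    m, v = ratio_functions(y, x[:, i], 0.25, 0.75)
    print(f"x{i+1} restricted to middle half: mean ratio {m:.3f}, variance ratio {v:.3f}")
```

Restricting x3, which enters through the fourth-power term, lowers the output variance substantially, while restricting x1 or x2 leaves it nearly unchanged, illustrating how the ratio functions rank regional importance.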
Encephalocoele: epidemiological variance in New Zealand.
Monteith, Stephen J; Heppner, Peter A; Law, Andrew J J
2005-06-01
Considerable variation in the epidemiology of encephalocoeles throughout the world has been described in previous studies. We analysed 46 cases of encephalocoele presenting to Auckland and Starship Children's Hospitals over the last 25 years to determine if our experience differed from that seen in a typical Western population, and to determine if there was variation between the different racial groups within New Zealand. The overall incidence of encephalocoeles in the area serviced by the neurosurgical services of Auckland and Starship Children's Hospitals was 1 in 13,418 births. This rate is at the higher end of the incidence spectrum compared with previous series. Overall, New Zealand appears to demonstrate a typical Western distribution of encephalocoele location. In people of Pacific Island descent, both the rate of encephalocoeles (1 per 8,873 births) and the percentage of sincipital lesions (44%) differed from the rest of the population. Additionally, a higher than expected proportion of sincipital encephalocoeles was seen in male babies (5:1 male to female ratio). In most other regards our population resembles that of Western cohorts published in the literature.
Prochaska, John D; Buschmann, Robert N; Jupiter, Daniel; Mutambudzi, Miriam; Peek, M Kristen
2018-06-01
Research suggests a linkage between perceptions of neighborhood quality and the likelihood of engaging in leisure-time physical activity. Often in these studies, intra-neighborhood variance is viewed as something to be controlled for statistically. However, we hypothesized that intra-neighborhood variance in perceptions of neighborhood quality may be contextually relevant. We examined the relationship between intra-neighborhood variance of subjective neighborhood quality and neighborhood-level reported physical inactivity across 48 neighborhoods within a medium-sized city, Texas City, Texas, using survey data from 2706 residents collected between 2004 and 2006. Neighborhoods where the aggregated perception of neighborhood quality was poor also had a larger proportion of residents reporting being physically inactive. However, higher degrees of disagreement among residents within neighborhoods about their neighborhood quality were significantly associated with a lower proportion of residents reporting being physically inactive (p=0.001). Our results suggest that intra-neighborhood variability may be contextually relevant in studies seeking to better understand the relationship between neighborhood quality and behaviors sensitive to neighborhood environments, like physical activity. Copyright © 2017 Elsevier Inc. All rights reserved.
International Nuclear Information System (INIS)
Bodaghi, M.; Damanpack, A.R.; Aghdam, M.M.; Shakeri, M.
2013-01-01
In this paper, a simple and robust phenomenological model for shape memory alloys (SMAs) is proposed to simulate main features of SMAs under uniaxial as well as biaxial combined axial–torsional proportional/non-proportional loadings. The constitutive model for polycrystalline SMAs is developed within the framework of continuum thermodynamics of irreversible processes. The model nominates the volume fractions of self-accommodated and oriented martensite as scalar internal variables and the preferred direction of oriented martensitic variants as directional internal variable. An algorithm is introduced to develop explicit relationships for the thermo-mechanical behavior of SMAs under uniaxial and biaxial combined axial–torsional proportional/non-proportional loading conditions and also thermal loading. It is shown that the model is able to simulate main aspects of SMAs including self-accommodation, martensitic transformation, orientation and reorientation of martensite, shape memory effect, ferro-elasticity and pseudo-elasticity. A description of the time-discrete counterpart of the proposed SMA model is presented. Experimental results of uniaxial tension and biaxial combined tension–torsion non-proportional tests are simulated and a good qualitative correlation between numerical and experimental responses is achieved. Due to simplicity and accuracy, the model is expected to be used in the future studies dealing with the analysis of SMA devices in which two stress components including one normal and one shear stress are dominant
Variance swap payoffs, risk premia and extreme market conditions
DEFF Research Database (Denmark)
Rombouts, Jeroen V.K.; Stentoft, Lars; Violante, Francesco
This paper estimates the Variance Risk Premium (VRP) directly from synthetic variance swap payoffs. Since variance swap payoffs are highly volatile, we extract the VRP by using signal extraction techniques based on a state-space representation of our model in combination with a simple economic... The latter variables and the VRP generate different return predictability on the major US indices. A factor model is proposed to extract a market VRP which turns out to be priced when considering Fama and French portfolios...
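The payoff being synthesized here can be stated in a few lines: the long side of a variance swap receives the difference between annualized realized variance and the strike. A minimal sketch under the common uncentered-squared-return convention (the notional, strike, and toy price path are made-up values, not data from the paper):

```python
import numpy as np

def variance_swap_payoff(prices, strike_var, notional=1.0, periods_per_year=252):
    """Payoff of a variance swap: notional * (annualized realized variance - strike).

    `strike_var` is quoted in variance units (e.g. 0.04 for a 20% volatility strike).
    The swap convention uses uncentered squared log returns.
    """
    log_ret = np.diff(np.log(prices))
    realized_var = periods_per_year * np.mean(log_ret**2)
    return notional * (realized_var - strike_var)

# A flat price path realizes zero variance, so the long side pays the full strike.
flat = np.full(253, 100.0)
print(variance_swap_payoff(flat, strike_var=0.04))  # -0.04
```

A time series of such payoffs, computed from option-implied strikes and realized returns, is the noisy object from which the paper's state-space filter extracts the VRP.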
Towards a mathematical foundation of minimum-variance theory
Energy Technology Data Exchange (ETDEWEB)
Feng Jianfeng [COGS, Sussex University, Brighton (United Kingdom); Zhang Kewei [SMS, Sussex University, Brighton (United Kingdom); Wei Gang [Mathematical Department, Baptist University, Hong Kong (China)
2002-08-30
The minimum-variance theory which accounts for arm and eye movements with noise signal inputs was proposed by Harris and Wolpert (1998 Nature 394 780-4). Here we present a detailed theoretical analysis of the theory and analytical solutions of the theory are obtained. Furthermore, we propose a new version of the minimum-variance theory, which is more realistic for a biological system. For the new version we show numerically that the variance is considerably reduced. (author)
RR-Interval variance of electrocardiogram for atrial fibrillation detection
Nuryani, N.; Solikhah, M.; Nugoho, A. S.; Afdala, A.; Anzihory, E.
2016-11-01
Atrial fibrillation is a serious heart problem originating in the upper chambers of the heart. The common indication of atrial fibrillation is irregularity of the R-peak-to-R-peak time interval, commonly called the RR interval. The irregularity can be represented using the variance, or spread, of the RR intervals. This article presents a system to detect atrial fibrillation using variances. Using clinical data of patients with atrial fibrillation attacks, it is shown that the variance of the electrocardiographic RR intervals is higher during atrial fibrillation than during normal rhythm. Utilizing a simple detection technique and the variances of RR intervals, we find a good performance of atrial fibrillation detection.
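A minimal sketch of the detection idea described here: compute the variance of an RR-interval segment and flag the segment when it exceeds a threshold. The interval values and the threshold below are illustrative assumptions, not clinical parameters from the study:

```python
import numpy as np

def af_flag(rr_intervals_ms, threshold_ms2=2000.0):
    """Flag a segment as possible atrial fibrillation when the variance of its
    RR intervals (in ms^2) exceeds a threshold (illustrative, not a clinical cutoff)."""
    return float(np.var(rr_intervals_ms)) > threshold_ms2

regular = [800, 810, 795, 805, 800, 798]     # sinus-like rhythm, small spread
irregular = [620, 950, 710, 1100, 540, 880]  # AF-like irregularity, large spread
print(af_flag(regular), af_flag(irregular))  # False True
```

In practice the threshold would be tuned on annotated ECG records; the point of the sketch is only that the variance statistic separates the two rhythm types cleanly.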
Multiperiod Mean-Variance Portfolio Optimization via Market Cloning
Energy Technology Data Exchange (ETDEWEB)
Ankirchner, Stefan, E-mail: ankirchner@hcm.uni-bonn.de [Rheinische Friedrich-Wilhelms-Universitaet Bonn, Institut fuer Angewandte Mathematik, Hausdorff Center for Mathematics (Germany); Dermoune, Azzouz, E-mail: Azzouz.Dermoune@math.univ-lille1.fr [Universite des Sciences et Technologies de Lille, Laboratoire Paul Painleve UMR CNRS 8524 (France)
2011-08-15
The problem of finding the mean-variance optimal portfolio in a multiperiod model cannot be solved directly by means of dynamic programming. In order to find a solution, we therefore first introduce independent market clones having the same distributional properties as the original market, and we replace the portfolio mean and variance by their empirical counterparts. We then use dynamic programming to derive portfolios maximizing a weighted sum of the empirical mean and variance. By letting the number of market clones converge to infinity, we are able to solve the original mean-variance problem.
Network Structure and Biased Variance Estimation in Respondent Driven Sampling.
Verdery, Ashton M; Mouw, Ted; Bauldry, Shawn; Mucha, Peter J
2015-01-01
This paper explores bias in the estimation of sampling variance in Respondent Driven Sampling (RDS). Prior methodological work on RDS has focused on its problematic assumptions and the biases and inefficiencies of its estimators of the population mean. Nonetheless, researchers have given only slight attention to the topic of estimating sampling variance in RDS, despite the importance of variance estimation for the construction of confidence intervals and hypothesis tests. In this paper, we show that the estimators of RDS sampling variance rely on a critical assumption that the network is First Order Markov (FOM) with respect to the dependent variable of interest. We demonstrate, through intuitive examples, mathematical generalizations, and computational experiments that current RDS variance estimators will always underestimate the population sampling variance of RDS in empirical networks that do not conform to the FOM assumption. Analysis of 215 observed university and school networks from Facebook and Add Health indicates that the FOM assumption is violated in every empirical network we analyze, and that these violations lead to substantially biased RDS estimators of sampling variance. We propose and test two alternative variance estimators that show some promise for reducing biases, but which also illustrate the limits of estimating sampling variance with only partial information on the underlying population social network.
Multiperiod Mean-Variance Portfolio Optimization via Market Cloning
International Nuclear Information System (INIS)
Ankirchner, Stefan; Dermoune, Azzouz
2011-01-01
The problem of finding the mean-variance optimal portfolio in a multiperiod model cannot be solved directly by means of dynamic programming. In order to find a solution, we therefore first introduce independent market clones having the same distributional properties as the original market, and we replace the portfolio mean and variance by their empirical counterparts. We then use dynamic programming to derive portfolios maximizing a weighted sum of the empirical mean and variance. By letting the number of market clones converge to infinity, we are able to solve the original mean-variance problem.
Discrete and continuous time dynamic mean-variance analysis
Reiss, Ariane
1999-01-01
Contrary to static mean-variance analysis, very few papers have dealt with dynamic mean-variance analysis. Here, the mean-variance efficient self-financing portfolio strategy is derived for n risky assets in discrete and continuous time. In the discrete setting, the resulting portfolio is mean-variance efficient in a dynamic sense. It is shown that the optimal strategy for n risky assets may be dominated if the expected terminal wealth is constrained to exactly attain a certain goal instead o...
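The static single-period building block behind this analysis has a closed form: for n risky assets, the fully invested mean-variance efficient weights for a target expected return follow from the Lagrangian solution. A sketch with made-up inputs (the paper's dynamic, self-financing strategy is more involved; this is only the static case it contrasts with):

```python
import numpy as np

def frontier_weights(mu, cov, target):
    """Closed-form mean-variance efficient weights (fully invested, shorting
    allowed) for a target expected return, via the standard Lagrangian solution."""
    inv = np.linalg.inv(cov)
    ones = np.ones(len(mu))
    A = ones @ inv @ ones
    B = ones @ inv @ mu
    C = mu @ inv @ mu
    D = A * C - B**2
    lam = (C - B * target) / D
    gam = (A * target - B) / D
    return lam * (inv @ ones) + gam * (inv @ mu)

# Hypothetical expected returns and covariance matrix for three assets
mu = np.array([0.05, 0.08, 0.12])
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])
w = frontier_weights(mu, cov, target=0.08)
print(w.sum(), w @ mu)  # both constraints hold: weights sum to 1, return hits target
```

Sweeping `target` over a grid and plotting the resulting portfolio variances traces out the efficient frontier.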
Discrete time and continuous time dynamic mean-variance analysis
Reiss, Ariane
1999-01-01
Contrary to static mean-variance analysis, very few papers have dealt with dynamic mean-variance analysis. Here, the mean-variance efficient self-financing portfolio strategy is derived for n risky assets in discrete and continuous time. In the discrete setting, the resulting portfolio is mean-variance efficient in a dynamic sense. It is shown that the optimal strategy for n risky assets may be dominated if the expected terminal wealth is constrained to exactly attain a certain goal instead o...
Calibration of proportional counters in microdosimetry
International Nuclear Information System (INIS)
Varma, M.N.
1982-01-01
Many microdosimetric spectra for low-LET as well as high-LET radiations are measured using commercially available (similar to EG&G) Rossi proportional counters. This paper discusses the corrections to be applied to data when calibration of the counter is made using one type of radiation, and then the counter is used in a different radiation field. The principal correction factor is due to differences in W-value of the radiation used for calibration and the radiation for which microdosimetric measurements are made. Both propane and methane base tissue-equivalent (TE) gases are used in these counters. When calibrating the detectors, it is important to use the correct stopping power value for that gas. Deviations in ȳF and ȳD are calculated for 60Co using different extrapolation procedures from 0.15 keV/μm to zero event size. These deviations can be as large as 30%. Advantages of reporting microdosimetric parameters such as ȳF and ȳD above a certain minimum cut-off are discussed
Environmental drivers of sapwood and heartwood proportions
Thurner, Martin; Beer, Christian
2017-04-01
Recent advances combining information on stem volume from remote sensing with allometric relationships derived from forest inventory databases have led to spatially continuous estimates of stem, branch, root and foliage biomass in northern boreal and temperate forests. However, a separation of stem biomass into sapwood and heartwood mass has remained unsolved, despite their important differences in biogeochemical function, for instance concerning their contribution to tree respiratory costs. Although relationships between sapwood cross-sectional area and supported leaf area are well established, less is known about relations between sapwood or heartwood mass and other traits (e.g. stem mass), since these biomass compartments are more difficult to measure in practice. Here we investigate the variability in sapwood and heartwood proportions and determining environmental factors. For this task we explore an available biomass and allometry database (BAAD) and study relative sapwood and heartwood area, volume, mass and density in dependence of tree species, age and climate. First, a theoretical framework on how to estimate sap- and heartwood mass from stem mass is developed. Subsequently, the underlying assumptions and relationships are explored with the help of the BAAD. The established relationships can be used to derive spatially continuous sapwood and heartwood mass estimates by applying them to remote sensing based stem volume products. This would be a fundamental step forward to a data-driven estimate of autotrophic respiration.
Stock, Amanda J; Campitelli, Brandon E; Stinchcombe, John R
2014-08-19
Clinal variation is commonly interpreted as evidence of adaptive differentiation, although clines can also be produced by stochastic forces. Understanding whether clines are adaptive therefore requires comparing clinal variation to background patterns of genetic differentiation at presumably neutral markers. Although this approach has frequently been applied to single traits at a time, we have comparatively fewer examples of how multiple correlated traits vary clinally. Here, we characterize multivariate clines in the Ivyleaf morning glory, examining how suites of traits vary with latitude, with the goal of testing for divergence in trait means that would indicate past evolutionary responses. We couple this with analysis of genetic variance in clinally varying traits in 20 populations to test whether past evolutionary responses have depleted genetic variance, or whether genetic variance declines approaching the range margin. We find evidence of clinal differentiation in five quantitative traits, with little evidence of isolation by distance at neutral loci that would suggest non-adaptive or stochastic mechanisms. Within and across populations, the traits that contribute most to population differentiation and clinal trends in the multivariate phenotype are genetically variable as well, suggesting that a lack of genetic variance will not cause absolute evolutionary constraints. Our data are broadly consistent with theoretical predictions of polygenic clines in response to shallow environmental gradients. Ecologically, our results are consistent with past findings of natural selection on flowering phenology, presumably due to season-length variation across the range. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
Perceptions, perspectives, proportions: Radioactive material transport
International Nuclear Information System (INIS)
1985-01-01
Nearly a hundred years ago, in 1893, when railroads still monopolized land transport, the first set of international rules governing shipments of hazardous materials was issued to cover their movement by rail. Since then, more than a dozen international bodies, and scores of national regulatory agencies, have published regulations directed at the carriage of dangerous goods by road, sea, and air, as well as rail. The regulatory network today covers virtually all kinds of substances and commodities that are used for beneficial purposes, but that under certain conditions are potentially harmful to people and the environment. ('The Problems Encountered by International Road Transport in Multimodal Transport Operations', by M. Marmy, paper presented at the 8th International Symposium on the Transport and Handling of Dangerous Goods by Sea and Associated Modes, Havana, Cuba, 1984.) These include the chemical fertilizers farmers spread on their fields, the nuclear fuel now powering electricity plants in some two dozen countries, the drugs physicians use to diagnose and treat illnesses, and the fossil fuels, such as gasoline, routinely used in transport vehicles. All told, about 21 different international labels are required today to identify separate classes of dangerous goods, among them explosives, corrosives, and flammables. Another separate class, radioactive materials, is the specific subject of feature articles in this issue of the IAEA Bulletin. The evolving regulatory system reflects at once the growth in traffic of hazardous materials, essentially a post-World War II trend. Since the mid-1940s, for example, the transport of all dangerous goods just on the seas has grown 1000%, based on reports at a recent international conference. Overall, the years ahead will see further increases
ANALISIS PORTOFOLIO RESAMPLED EFFICIENT FRONTIER BERDASARKAN OPTIMASI MEAN-VARIANCE
Abdurakhman, Abdurakhman
2008-01-01
The right asset allocation decision in a portfolio investment can maximize return and/or minimize risk. A method frequently used in portfolio optimization is the Markowitz Mean-Variance method. In practice, this method has the weakness of not being very stable: a small change in the estimated input parameters causes a large change in the portfolio composition. For this reason, a portfolio optimization method was developed that can overcome the instability of the Mean-Variance ...
Capturing option anomalies with a variance-dependent pricing kernel
Christoffersen, P.; Heston, S.; Jacobs, K.
2013-01-01
We develop a GARCH option model with a variance premium by combining the Heston-Nandi (2000) dynamic with a new pricing kernel that nests Rubinstein (1976) and Brennan (1979). While the pricing kernel is monotonic in the stock return and in variance, its projection onto the stock return is
Realized range-based estimation of integrated variance
DEFF Research Database (Denmark)
Christensen, Kim; Podolskij, Mark
2007-01-01
We provide a set of probabilistic laws for estimating the quadratic variation of continuous semimartingales with the realized range-based variance, a statistic that replaces every squared return of the realized variance with a normalized squared range. If the entire sample path of the process is a...
Diagnostic checking in linear processes with infinite variance
Krämer, Walter; Runde, Ralf
1998-01-01
We consider empirical autocorrelations of residuals from infinite variance autoregressive processes. Unlike the finite-variance case, it emerges that the limiting distribution, after suitable normalization, is not always more concentrated around zero when residuals rather than true innovations are employed.
Evaluation of Mean and Variance Integrals without Integration
Joarder, A. H.; Omar, M. H.
2007-01-01
The mean and variance of some continuous distributions, in particular the exponentially decreasing probability distribution and the normal distribution, are considered. Since the usual derivations involve integration by parts, many students do not feel comfortable with them. In this note, a technique is demonstrated for deriving mean and variance through differential…
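In the same spirit of avoiding integration by parts, a computer algebra system can derive the mean and variance of the exponential distribution by differentiating its moment generating function. This MGF route is an illustrative stand-in and may or may not match the note's exact technique:

```python
import sympy as sp

t, lam = sp.symbols('t lambda', positive=True)

# Moment generating function of the exponential distribution: M(t) = lambda/(lambda - t)
M = lam / (lam - t)

mean = sp.diff(M, t).subs(t, 0)        # E[X]   = M'(0)
second = sp.diff(M, t, 2).subs(t, 0)   # E[X^2] = M''(0)
variance = sp.simplify(second - mean**2)

print(mean, variance)  # E[X] = 1/lambda, Var[X] = 1/lambda**2
```

No integral is evaluated by hand: the differentiation and the subtraction E[X²] − E[X]² do all the work.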
Adjustment of heterogenous variances and a calving year effect in ...
African Journals Online (AJOL)
Data at the beginning and at the end of the lactation period have higher variances than tests in the middle of the lactation. Furthermore, first lactations have a lower mean and lower variances compared to second and third lactations. This is a deviation from the basic assumptions required for the application of repeatability models.
Direct encoding of orientation variance in the visual system.
Norman, Liam J; Heywood, Charles A; Kentridge, Robert W
2015-01-01
Our perception of regional irregularity, an example of which is orientation variance, seems effortless when we view two patches of texture that differ in this attribute. Little is understood, however, of how the visual system encodes a regional statistic like orientation variance, but there is some evidence to suggest that it is directly encoded by populations of neurons tuned broadly to high or low levels. The present study shows that selective adaptation to low or high levels of variance results in a perceptual aftereffect that shifts the perceived level of variance of a subsequently viewed texture in the direction away from that of the adapting stimulus (Experiments 1 and 2). Importantly, the effect is durable across changes in mean orientation, suggesting that the encoding of orientation variance is independent of global first moment orientation statistics (i.e., mean orientation). In Experiment 3 it was shown that the variance-specific aftereffect did not show signs of being encoded in a spatiotopic reference frame, similar to the equivalent aftereffect of adaptation to the first moment orientation statistic (the tilt aftereffect), which is represented in the primary visual cortex and exists only in retinotopic coordinates. Experiment 4 shows that a neuropsychological patient with damage to ventral areas of the cortex but spared intact early areas retains sensitivity to orientation variance. Together these results suggest that orientation variance is encoded directly by the visual system and possibly at an early cortical stage.
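A small sketch of what "orientation variance" of a texture can mean quantitatively: because orientation is axial (0° and 180° are the same line), a common convention doubles the angles and uses the circular variance of the doubled directions. This parameterization is an assumption for illustration, not the stimulus statistic used in the experiments:

```python
import numpy as np

def orientation_variance(angles_deg):
    """Circular variance of axial orientation data (period 180 degrees).

    Angles are doubled so that, e.g., 0 and 180 degrees count as the same
    orientation. The result is 0 when all orientations coincide and
    approaches 1 when they are uniformly scattered.
    """
    doubled = np.deg2rad(2.0 * np.asarray(angles_deg, dtype=float))
    resultant = np.hypot(np.mean(np.cos(doubled)), np.mean(np.sin(doubled)))
    return 1.0 - resultant

print(orientation_variance([45, 45, 45]))      # identical orientations: 0.0
print(orientation_variance([0, 45, 90, 135]))  # maximally scattered: 1.0
```

Note that the statistic is invariant to adding a constant to all angles, mirroring the paper's finding that the variance aftereffect survives changes in mean orientation.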
Beyond the Mean: Sensitivities of the Variance of Population Growth.
Trotter, Meredith V; Krishna-Kumar, Siddharth; Tuljapurkar, Shripad
2013-03-01
Populations in variable environments are described by both a mean growth rate and a variance of stochastic population growth. Increasing variance will increase the width of confidence bounds around estimates of population size, growth, probability of and time to quasi-extinction. However, traditional sensitivity analyses of stochastic matrix models only consider the sensitivity of the mean growth rate. We derive an exact method for calculating the sensitivity of the variance in population growth to changes in demographic parameters. Sensitivities of the variance also allow a new sensitivity calculation for the cumulative probability of quasi-extinction. We apply this new analysis tool to an empirical dataset on at-risk polar bears to demonstrate its utility in conservation biology. We find that in many cases a change in life history parameters will increase both the mean and variance of population growth of polar bears. This counterintuitive behaviour of the variance complicates predictions about overall population impacts of management interventions. Sensitivity calculations for cumulative extinction risk factor in changes to both mean and variance, providing a highly useful quantitative tool for conservation management. The mean stochastic growth rate and its sensitivities do not fully describe the dynamics of population growth. The use of variance sensitivities gives a more complete understanding of population dynamics and facilitates the calculation of new sensitivities for extinction processes.
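A toy version of the mean/variance distinction: in a scalar model with good and bad years, both moments of the log growth rate have closed forms, and the sensitivity of the variance to a demographic parameter can be checked by finite differences. The model and numbers are illustrative assumptions, far simpler than the stochastic matrix models the paper treats:

```python
import math

def growth_stats(p, r_good, r_bad):
    """Mean and variance of the one-step log growth rate in a two-state
    i.i.d. environment (a good year occurs with probability p)."""
    lg, lb = math.log(r_good), math.log(r_bad)
    mean = p * lg + (1 - p) * lb
    var = p * (1 - p) * (lg - lb) ** 2
    return mean, var

def var_sensitivity(p, r_good, r_bad, h=1e-6):
    """Central finite-difference sensitivity of the growth variance to r_good."""
    _, v_plus = growth_stats(p, r_good + h, r_bad)
    _, v_minus = growth_stats(p, r_good - h, r_bad)
    return (v_plus - v_minus) / (2 * h)

mean, var = growth_stats(p=0.7, r_good=1.1, r_bad=0.8)
sens = var_sensitivity(p=0.7, r_good=1.1, r_bad=0.8)
print(f"mean {mean:.4f}, variance {var:.4f}, d(var)/d(r_good) {sens:.4f}")
```

The sensitivity is positive: improving the good-year growth rate raises the mean but also the variance of growth, the counterintuitive joint response the abstract highlights.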
Genotypic-specific variance in Caenorhabditis elegans lifetime fecundity.
Diaz, S Anaid; Viney, Mark
2014-06-01
Organisms live in heterogeneous environments, so strategies that maximize fitness in such environments will evolve. Variation in traits is important because it is the raw material on which natural selection acts during evolution. Phenotypic variation is usually thought to be due to genetic variation and/or environmentally induced effects. Therefore, genetically identical individuals in a constant environment should have invariant traits. Clearly, genetically identical individuals do differ phenotypically, usually thought to be due to stochastic processes. It is now becoming clear, especially from studies of unicellular species, that phenotypic variance among genetically identical individuals in a constant environment can be genetically controlled and that, therefore, in principle, this can be subject to selection. However, there has been little investigation of these phenomena in multicellular species. Here, we have studied the mean lifetime fecundity (thus a trait likely to be relevant to reproductive success), and variance in lifetime fecundity, in recent wild isolates of the model nematode Caenorhabditis elegans. We found that these genotypes differed in their variance in lifetime fecundity: some had high variance in fecundity, others very low variance. We find that this variance in lifetime fecundity was negatively related to the mean lifetime fecundity of the lines, and that the variance of the lines was positively correlated between environments. We suggest that the variance in lifetime fecundity may be a bet-hedging strategy used by this species.
On the Endogeneity of the Mean-Variance Efficient Frontier.
Somerville, R. A.; O'Connell, Paul G. J.
2002-01-01
Explains that the endogeneity of the efficient frontier in the mean-variance model of portfolio selection is commonly obscured in portfolio selection literature and in widely used textbooks. Demonstrates endogeneity and discusses the impact of parameter changes on the mean-variance efficient frontier and on the beta coefficients of individual…
42 CFR 456.522 - Content of request for variance.
2010-10-01
... 42 Public Health 4 Content of request for variance. 456.522 Section 456.522 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN... perform UR within the time requirements for which the variance is requested and its good faith efforts to...
29 CFR 1905.5 - Effect of variances.
2010-07-01
...-STEIGER OCCUPATIONAL SAFETY AND HEALTH ACT OF 1970 General § 1905.5 Effect of variances. All variances... Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR... concerning a proposed penalty or period of abatement is pending before the Occupational Safety and Health...
29 CFR 1904.38 - Variances from the recordkeeping rule.
2010-07-01
..., DEPARTMENT OF LABOR RECORDING AND REPORTING OCCUPATIONAL INJURIES AND ILLNESSES Other OSHA Injury and Illness... he or she finds appropriate. (iv) If the Assistant Secretary grants your variance petition, OSHA will... Secretary is reviewing your variance petition. (4) If I have already been cited by OSHA for not following...
Gender Variance and Educational Psychology: Implications for Practice
Yavuz, Carrie
2016-01-01
Gender variance has become more visible in both the media and everyday life, yet it remains underrepresented in the educational psychology literature. The positioning of educational psychologists working across the three levels of child and family, school or establishment, and education authority/council means that they are…
Minimum Variance Portfolios in the Brazilian Equity Market
Directory of Open Access Journals (Sweden)
Alexandre Rubesam
2013-03-01
Full Text Available We investigate minimum variance portfolios in the Brazilian equity market using different methods to estimate the covariance matrix, from the simple model of using the sample covariance to multivariate GARCH models. We compare the performance of the minimum variance portfolios to those of the following benchmarks: (i) the IBOVESPA equity index, (ii) an equally-weighted portfolio, (iii) the maximum Sharpe ratio portfolio and (iv) the maximum growth portfolio. Our results show that the minimum variance portfolio has higher returns with lower risk compared to the benchmarks. We also consider long-short 130/30 minimum variance portfolios and obtain similar results. The minimum variance portfolio invests in relatively few stocks with low βs measured with respect to the IBOVESPA index, being easily replicable by individual and institutional investors alike.
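The core computation here is the global minimum-variance portfolio w = Σ⁻¹1 / (1'Σ⁻¹1) built from an estimated covariance matrix. A sketch using the simplest estimator mentioned, the sample covariance, on synthetic returns (the paper additionally considers multivariate GARCH estimators and 130/30 long-short constraints):

```python
import numpy as np

def min_variance_weights(cov):
    """Global minimum-variance weights, fully invested with shorting allowed:
    w = Sigma^{-1} 1 / (1' Sigma^{-1} 1)."""
    inv = np.linalg.inv(cov)
    w = inv @ np.ones(cov.shape[0])
    return w / w.sum()

rng = np.random.default_rng(42)
returns = rng.normal(0.001, 0.02, size=(500, 4))  # toy daily returns for 4 stocks
cov = np.cov(returns, rowvar=False)               # simple sample-covariance estimate
w = min_variance_weights(cov)
print(w, w.sum())
```

By construction this portfolio's variance w'Σw is no larger than that of any other fully invested portfolio under the same covariance estimate, including the equally-weighted benchmark.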
Charles R. Goeldner; Stacy Standley
1980-01-01
A brief historical overview of skiing is presented, followed by a review of factors such as energy, population trends, income, sex, occupation and attitudes which affect the future of skiing. A. C. Neilson's Sports Participation Surveys show that skiing is the second fastest growing sport in the country. Skiing Magazine's study indicates there are...
Indian Academy of Sciences (India)
Billing Trends. Internet access: Bandwidth becoming analogous to electric power. Only maximum capacity (load) is fixed; Charges based on usage (units). Leased line bandwidth: Billing analogous to phone calls. But bandwidth is variable.
Integrating mean and variance heterogeneities to identify differentially expressed genes.
Ouyang, Weiwei; An, Qiang; Zhao, Jinying; Qin, Huaizhen
2016-12-06
In functional genomics studies, tests on mean heterogeneity have been widely employed to identify differentially expressed genes with distinct mean expression levels under different experimental conditions. Variance heterogeneity (aka, the difference between condition-specific variances) of gene expression levels is simply neglected or calibrated for as an impediment. The mean heterogeneity in the expression level of a gene reflects one aspect of its distribution alteration, and variance heterogeneity induced by condition change may reflect another. Change in condition may alter both the mean and some higher-order characteristics of the distributions of expression levels of susceptible genes. In this report, we put forth a conception of mean-variance differentially expressed (MVDE) genes, whose expression means and variances are sensitive to the change in experimental condition. We mathematically proved the null independence of existent mean heterogeneity tests and variance heterogeneity tests. Based on this independence, we proposed an integrative mean-variance test (IMVT) to combine gene-wise mean heterogeneity and variance heterogeneity induced by condition change. The IMVT outperformed its competitors under comprehensive simulations of normality and Laplace settings. For moderate samples, the IMVT well controlled type I error rates, as did the existent mean heterogeneity tests (i.e., the Welch t test (WT) and the moderated Welch t test (MWT)) and the procedure of separate tests on mean and variance heterogeneities (SMVT), but the likelihood ratio test (LRT) severely inflated type I error rates. In the presence of variance heterogeneity, the IMVT appeared noticeably more powerful than all the valid mean heterogeneity tests. Application to the gene profiles of peripheral circulating B raised solid evidence of informative variance heterogeneity. After adjusting for background data structure, the IMVT replicated previous discoveries and identified novel experiment
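A rough stand-in for the integrative idea, not the authors' IMVT statistic: run one mean-heterogeneity test and one variance-heterogeneity test on the same two expression samples and combine the two p-values with Fisher's method, leaning on their approximate null independence. The choice of tests (Welch t and Levene) and the simulated data are assumptions for illustration:

```python
import numpy as np
from scipy import stats

def mean_variance_test(x, y):
    """Combine a mean-heterogeneity test (Welch t) with a variance-heterogeneity
    test (Levene) via Fisher's method, assuming the two p-values are
    approximately independent under the null."""
    p_mean = stats.ttest_ind(x, y, equal_var=False).pvalue
    p_var = stats.levene(x, y).pvalue
    _, p_comb = stats.combine_pvalues([p_mean, p_var], method='fisher')
    return p_mean, p_var, p_comb

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 50)  # expression under condition A
y = rng.normal(0.0, 3.0, 50)  # condition B: same mean, inflated variance
p_mean, p_var, p_comb = mean_variance_test(x, y)
print(f"mean p={p_mean:.3f}, variance p={p_var:.2e}, combined p={p_comb:.2e}")
```

With equal means but unequal variances, the mean test alone is powerless while the combined test still detects the gene, which is exactly the scenario motivating MVDE genes.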
The Trend Odds Model for Ordinal Data
Capuano, Ana W.; Dawson, Jeffrey D.
2013-01-01
Ordinal data appear in a wide variety of scientific fields. These data are often analyzed using ordinal logistic regression models that assume proportional odds. When this assumption is not met, it may be possible to capture the lack of proportionality using a constrained structural relationship between the odds and the cut-points of the ordinal values (Peterson and Harrell, 1990). We consider a trend odds version of this constrained model, where the odds parameter increases or decreases in a monotonic manner across the cut-points. We demonstrate algebraically and graphically how this model is related to latent logistic, normal, and exponential distributions. In particular, we find that scale changes in these potential latent distributions are consistent with the trend odds assumption, with the logistic and exponential distributions having odds that increase in a linear or nearly linear fashion. We show how to fit this model using SAS Proc Nlmixed, and perform simulations under proportional odds and trend odds processes. We find that the added complexity of the trend odds model gives improved power over the proportional odds model when there are moderate to severe departures from proportionality. A hypothetical dataset is used to illustrate the interpretation of the trend odds model, and we apply this model to a Swine Influenza example where the proportional odds assumption appears to be violated. PMID:23225520
The trend odds model for ordinal data.
Capuano, Ana W; Dawson, Jeffrey D
2013-06-15
Ordinal data appear in a wide variety of scientific fields. These data are often analyzed using ordinal logistic regression models that assume proportional odds. When this assumption is not met, it may be possible to capture the lack of proportionality using a constrained structural relationship between the odds and the cut-points of the ordinal values. We consider a trend odds version of this constrained model, wherein the odds parameter increases or decreases in a monotonic manner across the cut-points. We demonstrate algebraically and graphically how this model is related to latent logistic, normal, and exponential distributions. In particular, we find that scale changes in these potential latent distributions are consistent with the trend odds assumption, with the logistic and exponential distributions having odds that increase in a linear or nearly linear fashion. We show how to fit this model using SAS Proc NLMIXED and perform simulations under proportional odds and trend odds processes. We find that the added complexity of the trend odds model gives improved power over the proportional odds model when there are moderate to severe departures from proportionality. A hypothetical data set is used to illustrate the interpretation of the trend odds model, and we apply this model to a swine influenza example wherein the proportional odds assumption appears to be violated. Copyright © 2012 John Wiley & Sons, Ltd.
Comparing estimates of genetic variance across different relationship models.
Legarra, Andres
2016-02-01
Use of relationships between individuals to estimate genetic variances and heritabilities via mixed models is standard practice in human, plant and livestock genetics. Different models or information for relationships may give different estimates of genetic variances. However, comparing these estimates across different relationship models is not straightforward as the implied base populations differ between relationship models. In this work, I present a method to compare estimates of variance components across different relationship models. I suggest referring genetic variances obtained using different relationship models to the same reference population, usually a set of individuals in the population. Expected genetic variance of this population is the estimated variance component from the mixed model times a statistic, Dk, which is the average self-relationship minus the average (self- and across-) relationship. For most typical models of relationships, Dk is close to 1. However, this is not true for very deep pedigrees, for identity-by-state relationships, or for non-parametric kernels, which tend to overestimate the genetic variance and the heritability. Using mice data, I show that heritabilities from identity-by-state and kernel-based relationships are overestimated. Weighting these estimates by Dk scales them to a base comparable to genomic or pedigree relationships, avoiding wrong comparisons, for instance, "missing heritabilities". Copyright © 2015 Elsevier Inc. All rights reserved.
Variance computations for functional of absolute risk estimates.
Pfeiffer, R M; Petracci, E
2011-07-01
We present a simple influence function based approach to compute the variances of estimates of absolute risk and functions of absolute risk. We apply this approach to criteria that assess the impact of changes in the risk factor distribution on absolute risk for an individual and at the population level. As an illustration we use an absolute risk prediction model for breast cancer that includes modifiable risk factors in addition to standard breast cancer risk factors. Influence function based variance estimates for absolute risk and the criteria are compared to bootstrap variance estimates.
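The influence-function idea can be sketched on a deliberately simple estimator. Here the statistic is a sample mean (a stand-in, not the paper's absolute-risk model), its influence-function variance is compared to a nonparametric bootstrap variance, and all data are simulated:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=200)  # stand-in "risk" data

# Influence function of the sample mean: IF(x_i) = x_i - mean,
# so var(mean_hat) is estimated by sum(IF^2) / n^2
inf_var = np.sum((x - x.mean()) ** 2) / len(x) ** 2

# Nonparametric bootstrap variance of the same estimator
boot = np.array([rng.choice(x, size=len(x), replace=True).mean()
                 for _ in range(2000)])
boot_var = boot.var()

print(inf_var, boot_var)  # the two estimates should be close
```

For smooth functionals the influence-function estimate is far cheaper than the bootstrap, which is the practical point of the approach.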
Estimating High-Frequency Based (Co-) Variances: A Unified Approach
DEFF Research Database (Denmark)
Voev, Valeri; Nolte, Ingmar
We propose a unified framework for estimating integrated variances and covariances based on simple OLS regressions, allowing for a general market microstructure noise specification. We show that our estimators can outperform, in terms of the root mean squared error criterion, the most recent and commonly applied estimators, such as the realized kernels of Barndorff-Nielsen, Hansen, Lunde & Shephard (2006), the two-scales realized variance of Zhang, Mykland & Aït-Sahalia (2005), the Hayashi & Yoshida (2005) covariance estimator, and the realized variance and covariance with the optimal sampling …
International Nuclear Information System (INIS)
Anon.
1993-01-01
This section discusses the US energy supply and demand situation including projections for energy use, the clean coal industry (constraints of regulation on investment in new technologies, technology trends, and current pollution control efficiency), opportunities in clean coal technology (Phase 2 requirements of Title 4 of the Clean Air Act, scrubber demand for lime and limestone, and demand for low sulfur coal), and the international market of clean coal technologies
Meta-analysis of SNPs involved in variance heterogeneity using Levene's test for equal variances
Deng, Wei Q; Asma, Senay; Paré, Guillaume
2014-01-01
Meta-analysis is a commonly used approach to increase the sample size for genome-wide association searches when individual studies are otherwise underpowered. Here, we present a meta-analysis procedure to estimate the heterogeneity of the quantitative trait variance attributable to genetic variants using Levene's test without needing to exchange individual-level data. The meta-analysis of Levene's test offers the opportunity to combine the considerable sample size of a genome-wide meta-analysis to identify the genetic basis of phenotypic variability and to prioritize single-nucleotide polymorphisms (SNPs) for gene–gene and gene–environment interactions. The use of Levene's test has several advantages, including robustness to departure from the normality assumption, freedom from the influence of the main effects of SNPs, and no assumption of an additive genetic model. We conducted a meta-analysis of the log-transformed body mass index of 5892 individuals and identified a variant with a highly suggestive Levene's test P-value of 4.28E-06 near the NEGR1 locus known to be associated with extreme obesity. PMID:23921533
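A hedged sketch of such a meta-analysis, assuming each simulated study shares only its Levene p-value and sample size, combined here with Stouffer's weighted-z method (one common choice; the authors' exact weighting scheme may differ):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def study(n, sd_aa, sd_ab, sd_bb):
    """Simulate one study: Levene's test across three genotype groups."""
    groups = [rng.normal(0.0, sd, n) for sd in (sd_aa, sd_ab, sd_bb)]
    w, p = stats.levene(*groups)
    return p, 3 * n

# A variance-heterogeneity SNP: per-genotype SDs differ (illustrative values)
results = [study(n, 1.0, 1.2, 1.5) for n in (100, 150, 80)]

# Stouffer's method with sqrt(sample-size) weights: no individual-level
# data are exchanged, only a (p-value, n) pair per study.
z = np.array([stats.norm.isf(p) for p, _ in results])
w = np.sqrt(np.array([n for _, n in results], dtype=float))
z_meta = (w * z).sum() / np.sqrt((w ** 2).sum())
p_meta = stats.norm.sf(z_meta)
print(p_meta)
```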
Implementation of an approximate zero-variance scheme in the TRIPOLI Monte Carlo code
Energy Technology Data Exchange (ETDEWEB)
Christoforou, S.; Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands); Dumonteil, E.; Petit, O.; Diop, C. [Commissariat a l' Energie Atomique CEA, Gif-sur-Yvette (France)
2006-07-01
In an accompanying paper it is shown that theoretically a zero-variance Monte Carlo scheme can be devised for criticality calculations if the space, energy and direction dependent adjoint function is exactly known. This requires biasing of the transition and collision kernels with the appropriate adjoint function. In this paper it is discussed how an existing general purpose Monte Carlo code like TRIPOLI can be modified to approach the zero-variance scheme. This requires modifications for reading in the adjoint function obtained from a separate deterministic calculation for a number of space intervals, energy groups and discrete directions. Furthermore, a function has to be added to supply the direction dependent and the averaged adjoint function at a specific position in the system by interpolation. The initial particle weights of a certain batch must be set inversely proportional to the averaged adjoint function, and proper normalization of the initial weights must be secured. The sampling of the biased transition kernel requires cumulative integrals of the biased kernel along the flight path until a certain value, which depends on a selected random number, is reached, in order to determine a new collision site. The weight of the particle must be adapted accordingly. The sampling of the biased collision kernel (in a multigroup treatment) is much more like the normal sampling procedure. A numerical example is given for a 3-group calculation with a simplified transport model (two-direction model), demonstrating that the zero-variance scheme can be approximated quite well for this simplified case. (authors)
The Efficiency of Split Panel Designs in an Analysis of Variance Model
Wang, Wei-Guo; Liu, Hai-Jun
2016-01-01
We consider split panel design efficiency in analysis of variance models, that is, the determination of the optimal proportion of cross-section series in the overall sample so as to minimize the variances of best linear unbiased estimators of linear combinations of parameters. An orthogonal matrix is constructed to obtain a manageable expression for the variances. On this basis, we derive a theorem for analyzing split panel design efficiency irrespective of interest and budget parameters. Additionally, the efficiency of an estimator based on the split panel relative to one based on a pure panel or a pure cross-section is presented. The analysis shows that the gains from a split panel can be quite substantial. We further consider the efficiency of split panel design, given a budget, and transform the problem into a constrained nonlinear integer program. Specifically, an efficient algorithm is designed to solve the constrained nonlinear integer program. Moreover, we combine one-at-a-time designs and factorial designs to illustrate the algorithm's efficiency with an empirical example concerning monthly consumer expenditure on food in the Netherlands in 1985, and the efficient ranges of the algorithm parameters are given to ensure a good solution. PMID:27163447
Determinations of dose mean of specific energy for conventional x-rays by variance-measurements
International Nuclear Information System (INIS)
Forsberg, B.; Jensen, M.; Lindborg, L.; Samuelson, G.
1978-05-01
The dose mean value (zeta) of the specific energy of a single-event distribution is related to the variance of a multiple-event distribution in a simple way. It is thus possible to determine zeta from measurements at high dose rates through observations of the variations in the ionization current from, for instance, an ionization chamber, provided other parameters contribute negligibly to the total variance. With this method it has earlier been possible to obtain results down to about 10 nm in a beam of Co-60 γ rays, which is one order of magnitude smaller than the sizes obtainable with the traditional technique. This advantage, together with the suggestion that zeta could be an important parameter in radiobiology, motivates further studies of the applications of the technique. So far, only data from measurements in beams of a radioactive nuclide have been reported. This paper contains results from measurements in a highly stabilized X-ray beam. The preliminary analysis shows that the variance technique has given reasonable results for object sizes in the region of 0.08 μm to 20 μm (100 kV, 1.6 Al, HVL 0.14 mm Cu). The results were obtained with a proportional counter, except for the larger object sizes, where an ionization chamber was used. The measurements were performed at dose rates between 1 Gy/h and 40 Gy/h. (author)
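The relation behind the variance technique can be checked numerically: for a Poisson number of independent single events, Campbell's theorem gives Var(Z)/E(Z) = E[z²]/E[z], the dose mean of the single-event distribution. A simulation sketch with an arbitrary stand-in single-event distribution (not the measured X-ray spectrum):

```python
import numpy as np

rng = np.random.default_rng(3)

# Single-event specific-energy distribution (arbitrary stand-in shape)
def single_event(size):
    return rng.lognormal(mean=0.0, sigma=0.8, size=size)

# Dose mean of the single-event distribution: z_D = E[z^2] / E[z]
z = single_event(1_000_000)
z_d_true = (z ** 2).mean() / z.mean()

# Multiple-event (high dose rate) measurements: N ~ Poisson, Z = sum of N events.
# Campbell's theorem: Var(Z)/E(Z) = E[z^2]/E[z], so the variance of the
# multiple-event distribution recovers the single-event dose mean.
m = 50.0          # mean number of events per measurement
n_meas = 20_000
counts = rng.poisson(m, n_meas)
Z = np.array([single_event(k).sum() for k in counts])  # slow but explicit
z_d_est = Z.var() / Z.mean()
print(z_d_true, z_d_est)
```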
Comparison of variance estimators for metaanalysis of instrumental variable estimates
Schmidt, A. F.; Hingorani, A. D.; Jefferis, B. J.; White, J.; Groenwold, R. H H; Dudbridge, F.; Ben-Shlomo, Y.; Chaturvedi, N.; Engmann, J.; Hughes, A.; Humphries, S.; Hypponen, E.; Kivimaki, M.; Kuh, D.; Kumari, M.; Menon, U.; Morris, R.; Power, C.; Price, J.; Wannamethee, G.; Whincup, P.
2016-01-01
Background: Mendelian randomization studies perform instrumental variable (IV) analysis using genetic IVs. Results of individual Mendelian randomization studies can be pooled through meta-analysis. We explored how different variance estimators influence the meta-analysed IV estimate. Methods: Two
Capturing Option Anomalies with a Variance-Dependent Pricing Kernel
DEFF Research Database (Denmark)
Christoffersen, Peter; Heston, Steven; Jacobs, Kris
2013-01-01
We develop a GARCH option model with a new pricing kernel allowing for a variance premium. While the pricing kernel is monotonic in the stock return and in variance, its projection onto the stock return is nonmonotonic. A negative variance premium makes it U shaped. We present new semiparametric evidence to confirm this U-shaped relationship between the risk-neutral and physical probability densities. The new pricing kernel substantially improves our ability to reconcile the time-series properties of stock returns with the cross-section of option prices. It provides a unified explanation for the implied volatility puzzle, the overreaction of long-term options to changes in short-term variance, and the fat tails of the risk-neutral return distribution relative to the physical distribution.
Allowable variance set on left ventricular function parameter
International Nuclear Information System (INIS)
Zhou Li'na; Qi Zhongzhi; Zeng Yu; Ou Xiaohong; Li Lin
2010-01-01
Purpose: To evaluate the influence of allowable variance settings on left ventricular function parameters in arrhythmia patients during gated myocardial perfusion imaging. Method: 42 patients with evident arrhythmia underwent gated myocardial perfusion SPECT. Three different allowable variances (20%, 60% and 100%) were set before acquisition for every patient, and the acquisitions were performed simultaneously. After reconstruction with Astonish, end-diastolic volume (EDV), end-systolic volume (ESV) and left ventricular ejection fraction (LVEF) were computed with Quantitative Gated SPECT (QGS). The EDV, ESV and LVEF values were compared by analysis of variance using SPSS software. Result: There was no statistically significant difference between the three groups. Conclusion: In arrhythmia patients undergoing gated myocardial perfusion imaging, the allowable variance setting has no statistically significant effect on the EDV, ESV and LVEF values. (authors)
Host nutrition alters the variance in parasite transmission potential.
Vale, Pedro F; Choisy, Marc; Little, Tom J
2013-04-23
The environmental conditions experienced by hosts are known to affect their mean parasite transmission potential. How different conditions may affect the variance of transmission potential has received less attention, but is an important question for disease management, especially if specific ecological contexts are more likely to foster a few extremely infectious hosts. Using the obligate-killing bacterium Pasteuria ramosa and its crustacean host Daphnia magna, we analysed how host nutrition affected the variance of individual parasite loads, and, therefore, transmission potential. Under low food, individual parasite loads showed similar mean and variance, following a Poisson distribution. By contrast, among well-nourished hosts, parasite loads were right-skewed and overdispersed, following a negative binomial distribution. Abundant food may, therefore, yield individuals causing potentially more transmission than the population average. Measuring both the mean and variance of individual parasite loads in controlled experimental infections may offer a useful way of revealing risk factors for potential highly infectious hosts.
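The Poisson versus negative-binomial contrast is easy to reproduce in simulation. The parameter values below are illustrative stand-ins, not the Daphnia data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Low food: parasite loads ~ Poisson (variance ≈ mean)
low_food = rng.poisson(lam=20.0, size=5000)

# High food: same mean load but overdispersed ~ negative binomial.
# Parameterized by mean mu and dispersion k, via p = k / (k + mu).
mu, k = 20.0, 2.0
high_food = rng.negative_binomial(n=k, p=k / (k + mu), size=5000)

# The variance-to-mean ratio reveals the difference in transmission potential:
# a few hosts in the overdispersed group carry extreme parasite loads.
vmr_low = low_food.var() / low_food.mean()
vmr_high = high_food.var() / high_food.mean()
print(vmr_low, vmr_high)  # ≈ 1 versus ≈ 1 + mu/k = 11
```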
Minimum variance Monte Carlo importance sampling with parametric dependence
International Nuclear Information System (INIS)
Ragheb, M.M.H.; Halton, J.; Maynard, C.W.
1981-01-01
An approach for Monte Carlo importance sampling with parametric dependence is proposed. It depends upon obtaining, by proper weighting over a single stage, the overall functional dependence of the variance on the importance function parameter over a broad range of its values. Results corresponding to minimum variance are adopted and other results rejected. Numerical calculations for the estimation of integrals are compared to crude Monte Carlo. The results explain the occurrence of effective biases (even though the theoretical bias is zero) and infinite variances which arise in calculations involving severe biasing and a moderate number of histories. Extension to particle transport applications is briefly discussed. The approach constitutes an extension, to biasing or importance sampling calculations, of a theory on the application of Monte Carlo to the calculation of functional dependences introduced by Frolov and Chentsov; and is a generalization which avoids nonconvergence to the optimal values in some cases of a multistage method for variance reduction introduced by Spanier. (orig.) [de]
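The single-stage idea, scanning the estimator variance over a range of the importance-function parameter and keeping the minimum, can be illustrated on a toy integral where the optimum is known exactly. The Beta(θ, 1) family below is an assumption chosen so that θ = 3 gives zero variance:

```python
import numpy as np

rng = np.random.default_rng(5)

# Estimate I = ∫_0^1 x^2 dx = 1/3 by importance sampling from the
# parametric family g_theta(x) = theta * x^(theta-1) on (0,1), i.e. Beta(theta, 1).
# The single-sample estimator is h(X)/g_theta(X) = X^(3-theta) / theta.
def is_estimates(theta, n=100_000):
    x = rng.beta(theta, 1.0, n)
    return x ** (3.0 - theta) / theta

# Scan the parameter and keep the minimum-variance value;
# theta = 1 corresponds to crude Monte Carlo.
thetas = np.linspace(1.0, 5.0, 17)
variances = [is_estimates(t).var() for t in thetas]
best = thetas[int(np.argmin(variances))]
print(best)  # variance is minimized (exactly zero) at theta = 3
```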
Advanced methods of analysis variance on scenarios of nuclear prospective
International Nuclear Information System (INIS)
Blazquez, J.; Montalvo, C.; Balbas, M.; Garcia-Berrocal, A.
2011-01-01
Traditional techniques for the propagation of variance are not very reliable when uncertainties reach 100% in relative value; for this reason, less conventional methods are used, such as the Beta distribution, fuzzy logic and the Monte Carlo method.
Some variance reduction methods for numerical stochastic homogenization.
Blanc, X; Le Bris, C; Legoll, F
2016-04-28
We give an overview of a series of recent studies devoted to variance reduction techniques for numerical stochastic homogenization. Numerical homogenization requires that a set of problems is solved at the microscale, the so-called corrector problems. In a random environment, these problems are stochastic and therefore need to be repeatedly solved, for several configurations of the medium considered. An empirical average over all configurations is then performed using the Monte Carlo approach, so as to approximate the effective coefficients necessary to determine the macroscopic behaviour. Variance severely affects the accuracy and the cost of such computations. Variance reduction approaches, borrowed from other contexts in the engineering sciences, can be useful. Some of these variance reduction techniques are presented, studied and tested here. © 2016 The Author(s).
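One of the generic variance reduction techniques borrowed from other contexts is antithetic variates. A sketch on a toy monotone integrand, not a homogenization corrector problem:

```python
import numpy as np

rng = np.random.default_rng(11)
n = 100_000

# Toy "output of a stochastic problem": f(U) = exp(U), U ~ Uniform(0,1)
u = rng.random(n)
plain = np.exp(u)

# Antithetic variates: pair each U with 1-U and average the pair.
# Because f is monotone, the pair members are negatively correlated
# and the variance of the averaged estimator drops sharply.
anti = 0.5 * (np.exp(u) + np.exp(1.0 - u))

print(plain.var(), anti.var())
```

Both estimators target E[exp(U)] = e − 1; the antithetic one reaches a given accuracy with far fewer independent draws, which is exactly the saving sought when each "draw" means solving a corrector problem.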
Heritability, variance components and genetic advance of some ...
African Journals Online (AJOL)
Heritability, variance components and genetic advance of some yield and yield related traits in Ethiopian ... African Journal of Biotechnology ... randomized complete block design at Adet Agricultural Research Station in 2008 cropping season.
Variance Function Partially Linear Single-Index Models.
Lian, Heng; Liang, Hua; Carroll, Raymond J
2015-01-01
We consider heteroscedastic regression models where the mean function is a partially linear single index model and the variance function depends upon a generalized partially linear single index model. We do not insist that the variance function depend only upon the mean function, as happens in the classical generalized partially linear single index model. We develop efficient and practical estimation methods for the variance function and for the mean function. Asymptotic theory for the parametric and nonparametric parts of the model is developed. Simulations illustrate the results. An empirical example involving ozone levels is used to further illustrate the results, and is shown to be a case where the variance function does not depend upon the mean function.
Variance estimation in the analysis of microarray data
Wang, Yuedong; Ma, Yanyuan; Carroll, Raymond J.
2009-01-01
Microarrays are one of the most widely used high throughput technologies. One of the main problems in the area is that conventional estimates of the variances that are required in the t-statistic and other statistics are unreliable owing
Röring, Johan
2017-01-01
Volatility is a common risk measure in the field of finance that describes the magnitude of an asset’s up and down movement. From only being a risk measure, volatility has become an asset class of its own and volatility derivatives enable traders to get an isolated exposure to an asset’s volatility. Two kinds of volatility derivatives are volatility swaps and variance swaps. The problem with volatility swaps and variance swaps is that they require estimations of the future variance and volati...
ASYMMETRY OF MARKET RETURNS AND THE MEAN VARIANCE FRONTIER
SENGUPTA, Jati K.; PARK, Hyung S.
1994-01-01
The hypothesis that skewness and asymmetry have no significant impact on the mean-variance frontier is found to be strongly violated by monthly U.S. data over the period January 1965 through December 1974. This result raises serious doubts as to whether common market portfolios such as the SP 500, value-weighted and equal-weighted returns can serve as suitable proxies for mean-variance efficient portfolios in the CAPM framework. A new test for assessing the impact of skewness on the variance fr...
Towards the ultimate variance-conserving convection scheme
International Nuclear Information System (INIS)
Os, J.J.A.M. van; Uittenbogaard, R.E.
2004-01-01
In the past various arguments have been used for applying kinetic energy-conserving advection schemes in numerical simulations of incompressible fluid flows. One argument is obeying the programmed dissipation by viscous stresses or by sub-grid stresses in Direct Numerical Simulation and Large Eddy Simulation, see e.g. [Phys. Fluids A 3 (7) (1991) 1766]. Another argument is that, according to e.g. [J. Comput. Phys. 6 (1970) 392; 1 (1966) 119], energy-conserving convection schemes are more stable, i.e., they prohibit a spurious blow-up of volume-integrated energy in a closed volume without external energy sources. In the above-mentioned references it is stated that nonlinear instability is due to spatial truncation rather than to time truncation, and therefore these papers are mainly concerned with the spatial integration. In this paper we demonstrate that discretized temporal integration of a spatially variance-conserving convection scheme can induce non-energy-conserving solutions. Here, the conservation of the variance of a scalar property is taken as a simple model for the conservation of kinetic energy. In addition, the derivation and testing of a variance-conserving scheme allows for a clear definition of kinetic energy-conserving advection schemes for solving the Navier-Stokes equations. Consequently, we first derive and test a strictly variance-conserving space-time discretization for the convection term in the convection-diffusion equation. Our starting point is the variance-conserving spatial discretization of the convection operator presented by Piacsek and Williams [J. Comput. Phys. 6 (1970) 392]. In terms of its conservation properties, our variance-conserving scheme is compared to other spatially variance-conserving schemes as well as with the non-variance-conserving schemes applied in our shallow-water solver, see e.g. [Direct and Large-eddy Simulation Workshop IV, ERCOFTAC Series, Kluwer Academic Publishers, 2001, pp. 409-287
Problems of variance reduction in the simulation of random variables
International Nuclear Information System (INIS)
Lessi, O.
1987-01-01
The definition of the uniform linear generator is given, and some of the most commonly used tests to evaluate the uniformity and the independence of the obtained determinations are listed. The problem of calculating, through simulation, some moment W of a function of a random variable is considered. The Monte Carlo method enables the moment W to be estimated and the estimator variance to be obtained. Some techniques for the construction of other estimators of W with a reduced variance are introduced.
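A standard reduced-variance estimator of a moment W is the control-variate construction. A sketch for W = E[exp(U)], U uniform on (0,1), using the control g(U) = U, whose mean E[U] = 1/2 is known exactly:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

# Crude estimator of W = E[f(U)] with f(u) = exp(u)
u = rng.random(n)
f = np.exp(u)
g = u  # control variate with known mean 1/2

# Optimal control coefficient beta = Cov(f, g) / Var(g), estimated from the sample
beta = np.cov(f, g)[0, 1] / g.var()
controlled = f - beta * (g - 0.5)

print(f.var(), controlled.var())  # same target W, much smaller variance
```

Because exp(u) and u are highly correlated on (0,1), the controlled estimator needs orders of magnitude fewer samples for the same precision.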
Cumulative prospect theory and mean variance analysis. A rigorous comparison
Hens, Thorsten; Mayer, Janos
2012-01-01
We compare asset allocations derived for cumulative prospect theory (CPT) based on two different methods: maximizing CPT along the mean-variance efficient frontier and maximizing it without that restriction. We find that with normally distributed returns the difference is negligible. However, using standard asset allocation data of pension funds the difference is considerable. Moreover, with derivatives like call options the restriction to the mean-variance efficient frontier results in a siza...
Global Variance Risk Premium and Forex Return Predictability
Aloosh, Arash
2014-01-01
In a long-run risk model with stochastic volatility and frictionless markets, I express expected forex returns as a function of consumption growth variances and stock variance risk premiums (VRPs)—the difference between the risk-neutral and statistical expectations of market return variation. This provides a motivation for using the forward-looking information available in stock market volatility indices to predict forex returns. Empirically, I find that stock VRPs predict forex returns at a ...
Global Gravity Wave Variances from Aura MLS: Characteristics and Interpretation
2008-12-01
slight longitudinal variations, with secondary high-latitude peaks occurring over Greenland and Europe. As the QBO changes to the westerly phase, the ... equatorial GW temperature variances from suborbital data (e.g., Eckermann et al. 1995). The extratropical wave variances are generally larger in the ... emanating from tropopause altitudes, presumably radiated from tropospheric jet stream instabilities associated with baroclinic storm systems
Temperature variance study in Monte-Carlo photon transport theory
International Nuclear Information System (INIS)
Giorla, J.
1985-10-01
We study different Monte-Carlo methods for solving radiative transfer problems, and particularly Fleck's Monte-Carlo method. We first give the different time-discretization schemes and the corresponding stability criteria. Then we write the temperature variance as a function of the variances of temperature and absorbed energy at the previous time step. Finally we obtain some stability criteria for the Monte-Carlo method in the stationary case [fr
Mean-Variance Optimization in Markov Decision Processes
Mannor, Shie; Tsitsiklis, John N.
2011-01-01
We consider finite horizon Markov decision processes under performance measures that involve both the mean and the variance of the cumulative reward. We show that either randomized or history-based policies can improve performance. We prove that the complexity of computing a policy that maximizes the mean reward under a variance constraint is NP-hard for some cases, and strongly NP-hard for others. We finally offer pseudo-polynomial exact and approximation algorithms.
The asymptotic variance of departures in critically loaded queues
Al Hanbali, Ahmad; Mandjes, M.R.H.; Nazarathy, Y.; Whitt, W.
2011-01-01
We consider the asymptotic variance of the departure counting process D(t) of the GI/G/1 queue; D(t) denotes the number of departures up to time t. We focus on the case where the system load ϱ equals 1, and prove that the asymptotic variance rate satisfies lim_{t→∞} Var D(t)/t = λ(1 − 2/π)(c_a² +
Variance and covariance calculations for nuclear materials accounting using "MAVARIC"
International Nuclear Information System (INIS)
Nasseri, K.K.
1987-07-01
Determination of the detection sensitivity of a materials accounting system to the loss of special nuclear material (SNM) requires (1) obtaining a relation for the variance of the materials balance by propagation of the instrument errors for the measured quantities that appear in the materials balance equation and (2) substituting measured values and their error standard deviations into this relation and calculating the variance of the materials balance. MAVARIC (Materials Accounting VARIance Calculations) is a custom spreadsheet, designed using the second release of Lotus 1-2-3, that significantly reduces the effort required to make the necessary variance (and covariance) calculations needed to determine the detection sensitivity of a materials accounting system. Predefined macros within the spreadsheet allow the user to carry out long, tedious procedures with only a few keystrokes. MAVARIC requires that the user enter the following data into one of four data tables, depending on the type of the term in the materials balance equation: the SNM concentration, the bulk mass (or solution volume), the measurement error standard deviations, and the number of measurements made during an accounting period. The user can also specify whether there are correlations between transfer terms. Based on these data entries, MAVARIC can calculate the variance of the materials balance and the square root of this variance, from which the detection sensitivity of the accounting system can be determined
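The variance-propagation step that MAVARIC automates can be sketched by hand for a toy balance. All numbers below are hypothetical, and the first-order product formula assumes independent concentration and mass errors with no correlations between terms:

```python
# Hypothetical materials balance MB = I_begin + T_in - T_out - I_end.
# Each term is concentration * bulk mass; relative error standard deviations
# are propagated to first order assuming independent measurements.
terms = {
    # name: (conc. g/kg, bulk mass kg, rel. sd of conc., rel. sd of mass)
    "I_begin": (2.00, 500.0,  0.01, 0.005),
    "T_in":    (2.10, 1000.0, 0.01, 0.005),
    "T_out":   (2.00, 1000.0, 0.01, 0.005),
    "I_end":   (2.05, 520.0,  0.01, 0.005),
}
signs = {"I_begin": +1, "T_in": +1, "T_out": -1, "I_end": -1}

mb = 0.0
var_mb = 0.0
for name, (c, m, rc, rm) in terms.items():
    amount = c * m / 1000.0                      # kg of SNM in this term
    mb += signs[name] * amount
    # first-order propagation for a product: var = amount^2 * (rc^2 + rm^2);
    # the sign disappears on squaring
    var_mb += amount ** 2 * (rc ** 2 + rm ** 2)

sigma = var_mb ** 0.5
print(mb, sigma)  # detection thresholds are often quoted as a multiple of sigma
```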
Why risk is not variance: an expository note.
Cox, Louis Anthony Tony
2008-08-01
Variance (or standard deviation) of return is widely used as a measure of risk in financial investment risk analysis applications, where mean-variance analysis is applied to calculate efficient frontiers and undominated portfolios. Why, then, do health, safety, and environmental (HS&E) and reliability engineering risk analysts insist on defining risk more flexibly, as being determined by probabilities and consequences, rather than simply by variances? This note suggests an answer by providing a simple proof that mean-variance decision making violates the principle that a rational decisionmaker should prefer higher to lower probabilities of receiving a fixed gain, all else being equal. Indeed, simply hypothesizing a continuous increasing indifference curve for mean-variance combinations at the origin is enough to imply that a decisionmaker must find unacceptable some prospects that offer a positive probability of gain and zero probability of loss. Unlike some previous analyses of limitations of variance as a risk metric, this expository note uses only simple mathematics and does not require the additional framework of von Neumann Morgenstern utility theory.
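The note's argument admits a short numeric check: a prospect with a positive probability of a fixed gain and zero probability of loss can still receive a negative mean-variance score. The coefficient k below is a hypothetical trade-off weight, not one taken from the paper:

```python
# A prospect paying G with probability p and 0 otherwise can never lose,
# so a rational decision maker should weakly prefer it to doing nothing.
# Yet the mean-variance score mu - k*sigma^2 can rank it below the status quo.
k = 1.0            # hypothetical mean-variance trade-off coefficient
G, p = 10.0, 0.05  # "free lottery ticket": win 10 with probability 5%

mu = p * G
var = p * (1.0 - p) * G ** 2
score = mu - k * var
print(mu, var, score)  # score < 0: mean-variance rejects a no-loss gamble
```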
Approximate zero-variance Monte Carlo estimation of Markovian unreliability
International Nuclear Information System (INIS)
Delcoux, J.L.; Labeau, P.E.; Devooght, J.
1997-01-01
Monte Carlo simulation has become an important tool for the estimation of reliability characteristics, since conventional numerical methods are no longer efficient as the size of the system to be solved increases. However, evaluating by simulation the probability of occurrence of very rare events means playing a very large number of histories of the system, which leads to unacceptable computation times. Acceleration and variance reduction techniques have to be worked out. We show in this paper how to write the equations of Markovian reliability as a transport problem, and how the well known zero-variance scheme can be adapted to this application. But such a method is always specific to the estimation of one quantity, while a Monte Carlo simulation allows estimations of diverse quantities to be performed simultaneously. Therefore, the estimation of one of them could be made more accurate while degrading at the same time the variance of other estimations. We propound here a method to reduce simultaneously the variance for several quantities, by using probability laws that would lead to zero variance in the estimation of a mean of these quantities. Just like the zero-variance one, the method we propound is impossible to perform exactly. However, we show that simple approximations of it may be very efficient. (author)
A versatile omnibus test for detecting mean and variance heterogeneity.
Cao, Ying; Wei, Peng; Bailey, Matthew; Kauwe, John S K; Maxwell, Taylor J
2014-01-01
Recent research has revealed loci that display variance heterogeneity through various means such as biological disruption, linkage disequilibrium (LD), gene-by-gene (G × G), or gene-by-environment interaction. We propose a versatile likelihood ratio test that allows joint testing for mean and variance heterogeneity (LRT(MV)) or either effect alone (LRT(M) or LRT(V)) in the presence of covariates. Using extensive simulations for our method and others, we found that all parametric tests were sensitive to nonnormality regardless of any trait transformations. Coupling our test with the parametric bootstrap solves this issue. Using simulations and empirical data from a known mean-only functional variant, we demonstrate how LD can produce variance-heterogeneity loci (vQTL) in a predictable fashion based on differential allele frequencies, high D', and relatively low r² values. We propose that a joint test for mean and variance heterogeneity is more powerful than a variance-only test for detecting vQTL. This takes advantage of loci that also have mean effects without sacrificing much power to detect variance-only effects. We discuss using vQTL as an approach to detect G × G interactions, how vQTL are related to relationship loci, and how both can create prior hypotheses for each other and reveal the relationships between traits and possibly between components of a composite trait.
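The flavor of a joint mean-and-variance likelihood ratio test can be sketched for the simplest two-group normal case (an illustrative toy, not the authors' LRT(MV) implementation, which handles covariates and uses the parametric bootstrap):

```python
import math

def normal_loglik(xs, mu, var):
    """Gaussian log-likelihood of data xs under mean mu and variance var."""
    return sum(-0.5 * (math.log(2 * math.pi * var) + (x - mu) ** 2 / var)
               for x in xs)

def mle(xs):
    """Maximum-likelihood mean and (biased) variance."""
    mu = sum(xs) / len(xs)
    return mu, sum((x - mu) ** 2 for x in xs) / len(xs)

def lrt_mv(g1, g2):
    """2*(loglik with group-specific mean/variance - pooled loglik).
    Under the null of no heterogeneity, asymptotically chi-squared with 2 df."""
    mu0, var0 = mle(g1 + g2)
    ll0 = normal_loglik(g1 + g2, mu0, var0)
    ll1 = sum(normal_loglik(g, *mle(g)) for g in (g1, g2))
    return 2.0 * (ll1 - ll0)

same = [1.0, 2.0, 3.0, 4.0]
shifted = [11.0, 12.0, 13.0, 14.0]
print(round(lrt_mv(same, same), 9))      # ≈ 0: identical groups, no signal
print(round(lrt_mv(same, shifted), 1))   # large: well into the chi2(2) tail
```

The groups and values here are made up; the test statistic structure (full vs. constrained likelihood) is the standard LRT construction.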
Variance-based sensitivity indices for models with dependent inputs
International Nuclear Information System (INIS)
Mara, Thierry A.; Tarantola, Stefano
2012-01-01
Computational models are intensively used in engineering for risk analysis or prediction of future outcomes. Uncertainty and sensitivity analyses are of great help for these purposes. Although several methods exist to perform variance-based sensitivity analysis of model output with independent inputs, only a few are proposed in the literature for the case of dependent inputs. This is explained by the fact that the theoretical framework for the independent case is settled and a univocal set of variance-based sensitivity indices is defined. In the present work, we propose a set of variance-based sensitivity indices to perform sensitivity analysis of models with dependent inputs. These measures allow us to distinguish between the mutual dependent contribution and the independent contribution of an input to the model response variance. Their definition relies on a specific orthogonalisation of the inputs and ANOVA representations of the model output. In the applications, we show the interest of the new sensitivity indices in a model simplification setting. - Highlights: ► Uncertainty and sensitivity analyses are of great help in engineering. ► Several methods exist to perform variance-based sensitivity analysis of model output with independent inputs. ► We define a set of variance-based sensitivity indices for models with dependent inputs. ► Inputs' mutual contributions are distinguished from their independent contributions. ► Analytical and computational tests are performed and discussed.
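For the baseline independent-input case that these indices extend, a first-order variance-based index can be estimated with a standard pick-freeze scheme. The sketch below uses an assumed toy model Y = X1 + 2·X2 with standard normal inputs, for which the analytic first-order indices are 1/5 and 4/5 (this is not an example from the paper):

```python
import random

def model(x1, x2):
    return x1 + 2.0 * x2        # toy model; analytic S1 = 1/5, S2 = 4/5

random.seed(1)
n = 100_000
ya, yb1, yb2 = [], [], []
for _ in range(n):
    x1, x2 = random.gauss(0, 1), random.gauss(0, 1)
    r1, r2 = random.gauss(0, 1), random.gauss(0, 1)
    ya.append(model(x1, x2))
    yb1.append(model(x1, r2))   # freeze x1, resample x2
    yb2.append(model(r1, x2))   # freeze x2, resample x1
mu = sum(ya) / n
var = sum((y - mu) ** 2 for y in ya) / n
# Pick-freeze estimator: S_i = (E[Y * Y_frozen_i] - E[Y]^2) / Var(Y)
s1 = (sum(a * b for a, b in zip(ya, yb1)) / n - mu * mu) / var
s2 = (sum(a * b for a, b in zip(ya, yb2)) / n - mu * mu) / var
print(round(s1, 2), round(s2, 2))   # ≈ 0.20 and 0.80
```

With dependent inputs, as the paper discusses, this recipe no longer yields a univocal decomposition, which motivates the orthogonalisation-based indices.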
Variance and covariance calculations for nuclear materials accounting using 'MAVARIC'
International Nuclear Information System (INIS)
Nasseri, K.K.
1987-01-01
Determination of the detection sensitivity of a materials accounting system to the loss of special nuclear material (SNM) requires (1) obtaining a relation for the variance of the materials balance by propagation of the instrument errors for the measured quantities that appear in the materials balance equation and (2) substituting measured values and their error standard deviations into this relation and calculating the variance of the materials balance. MAVARIC (Materials Accounting VARIance Calculations) is a custom spreadsheet, designed using the second release of Lotus 1-2-3, that significantly reduces the effort required to make the necessary variance (and covariance) calculations needed to determine the detection sensitivity of a materials accounting system. Predefined macros within the spreadsheet allow the user to carry out long, tedious procedures with only a few keystrokes. MAVARIC requires that the user enter the following data into one of four data tables, depending on the type of the term in the materials balance equation: the SNM concentration, the bulk mass (or solution volume), the measurement error standard deviations, and the number of measurements made during an accounting period. The user can also specify whether there are correlations between transfer terms. Based on these data entries, MAVARIC can calculate the variance of the materials balance and the square root of this variance, from which the detection sensitivity of the accounting system can be determined.
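The propagation step described above can be sketched as follows; the numbers and the single-term error model are invented for illustration and do not reproduce MAVARIC's actual macros or covariance handling:

```python
import math

def product_term_variance(c, m, sc, sm):
    """First-order error propagation for an SNM mass term T = c * m
    (concentration times bulk mass), assuming independent errors."""
    return (m * sc) ** 2 + (c * sm) ** 2

# terms: (concentration g U/g, bulk mass g, sigma_c, sigma_m, sign in balance)
terms = [
    (0.90, 1000.0, 0.01, 5.0, +1),   # receipts
    (0.88,  950.0, 0.01, 5.0, -1),   # shipments
    (0.90,   40.0, 0.02, 1.0, -1),   # ending minus beginning inventory
]
balance = sum(sign * c * m for c, m, _, _, sign in terms)
var_mb = sum(product_term_variance(c, m, sc, sm) for c, m, sc, sm, _ in terms)
sigma_mb = math.sqrt(var_mb)
print(balance, sigma_mb)  # a loss alarm threshold is often set near 2*sigma_mb
```

Correlated transfer terms, which MAVARIC supports, would add covariance terms to the sum.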
International Nuclear Information System (INIS)
Kylloenen, J.; Lindborg, L.
2005-01-01
Full text: The determination of both the low- and high-LET components of ambient dose equivalent in mixed fields is possible with microdosimetric methods. With the multiple-event microdosimetric variance-covariance method, the sum of those components is obtained directly, also in pulsed beams. However, if the value of each dose component is needed, a more extended analysis is required. The use of a graphite-walled proportional detector together with a tissue-equivalent proportional counter (TEPC), analysed with the variance-covariance method, was investigated here. MCNP simulations were carried out for relevant energies to investigate the photon and neutron responses of the two detectors. The combined graphite and TEPC system, the Sievert instrument, was used for measurements at IRSN, Cadarache, in the workplace calibration fields of CANEL+, SIGMA, a Cf-252 and a moderated Cf(D2O,Cd) radiation field. The response of the instrument in various monoenergetic neutron fields is also known from measurements at PTB. The instrument took part in the measurement campaigns in workplace fields in the nuclear industry organized within the EVIDOS contract. The results are analyzed, and the method of using a graphite detector is compared with alternative methods of analysis. (author)
SYSTEMATIC SAMPLING FOR NON-LINEAR TREND IN MILK YIELD DATA
Tanuj Kumar Pandey; Vinod Kumar
2014-01-01
The present paper utilizes systematic sampling procedures for milk yield data exhibiting some non-linear trends. The best fitted mathematical forms of non-linear trend present in the milk yield data are obtained and the expressions of average variances of the estimators of population mean under simple random, usual systematic and modified systematic sampling procedures have been derived for populations showing non-linear trend. A comparative study is made among the three sampli...
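The advantage of systematic sampling in the presence of a trend can be illustrated by simulation; the population below is a toy linear trend (not the milk yield data, and linear rather than the paper's non-linear forms):

```python
import random
import statistics

population = [2.0 * i for i in range(100)]   # perfect linear trend, N = 100
n, k = 10, 10                                # sample size and interval k = N/n

def systematic_mean(pop, k):
    """Mean of a 1-in-k systematic sample with a random start."""
    start = random.randrange(k)
    return statistics.mean(pop[start::k])

def srs_mean(pop, n):
    """Mean of a simple random sample of size n without replacement."""
    return statistics.mean(random.sample(pop, n))

random.seed(4)
reps = 5000
var_sys = statistics.pvariance([systematic_mean(population, k) for _ in range(reps)])
var_srs = statistics.pvariance([srs_mean(population, n) for _ in range(reps)])
print(var_sys < var_srs)   # True: systematic sampling beats SRS under a trend
```

Spreading the sample evenly across the trend removes most of the between-sample variability, which is the effect the derived average-variance expressions quantify.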
Genetic selection for increased mean and reduced variance of twinning rate in Belclare ewes.
Cottle, D J; Gilmour, A R; Pabiou, T; Amer, P R; Fahey, A G
2016-04-01
It is sometimes possible to breed for more uniform individuals by selecting animals with a greater tendency to be less variable, that is, those with a smaller environmental variance. This approach has been applied to reproduction traits in various animal species. We have evaluated fecundity in the Irish Belclare sheep breed by analyses of flocks with differing average litter size (number of lambs per ewe per year, NLB) and have estimated the genetic variance in environmental variance of lambing traits using double hierarchical generalized linear models (DHGLM). The data set comprised 9470 litter size records from 4407 ewes collected in 56 flocks. The percentage of pedigreed lambing ewes with singles, twins and triplets was 30, 54 and 14%, respectively, in 2013 and has been relatively constant for the last 15 years. The variance of NLB increases with the mean in these data; the correlation of mean and standard deviation across sires is 0.50. The breeding goal is to increase the mean NLB without unduly increasing the incidence of triplets and higher litter sizes. The heritability estimates for lambing traits were NLB, 0.09; triplet occurrence (TRI), 0.07; and twin occurrence (TWN), 0.02. The highest and lowest twinning flocks differed by 23% (75% versus 52%) in the proportion of ewes lambing twins. Fitting bivariate sire models to NLB and the residual from the NLB model using a DHGLM found a strong genetic correlation (0.88 ± 0.07) between the sire effect for the magnitude of the residual (VE) and sire effects for NLB, confirming the general observation that increased average litter size is associated with increased variability in litter size. We propose a threshold model that may help breeders with low litter size increase the percentage of twin bearers without unduly increasing the percentage of ewes bearing triplets in Belclare sheep. © 2015 Blackwell Verlag GmbH.
CMB-S4 and the hemispherical variance anomaly
O'Dwyer, Márcio; Copi, Craig J.; Knox, Lloyd; Starkman, Glenn D.
2017-09-01
Cosmic microwave background (CMB) full-sky temperature data show a hemispherical asymmetry in power nearly aligned with the Ecliptic. In real space, this anomaly can be quantified by the temperature variance in the Northern and Southern Ecliptic hemispheres, with the Northern hemisphere displaying an anomalously low variance while the Southern hemisphere appears unremarkable [consistent with expectations from the best-fitting theory, Lambda Cold Dark Matter (ΛCDM)]. While this is a well-established result in temperature, the low signal-to-noise ratio in current polarization data prevents a similar comparison. This will change with a proposed ground-based CMB experiment, CMB-S4. With that in mind, we generate realizations of polarization maps constrained by the temperature data and predict the distribution of the hemispherical variance in polarization considering two different sky coverage scenarios possible in CMB-S4: full Ecliptic north coverage and just the portion of the North that can be observed from a ground-based telescope at the high Chilean Atacama plateau. We find that even in the set of realizations constrained by the temperature data, the low Northern hemisphere variance observed in temperature is not expected in polarization. Therefore, observing an anomalously low variance in polarization would make the hypothesis that the temperature anomaly is simply a statistical fluke less likely, and would thus increase the motivation for physical explanations. We show, within ΛCDM, how variance measurements in the two sky coverage scenarios are related. We find that the variance makes for a good statistic in cases where the sky coverage is limited; however, full northern coverage is still preferable.
PROPORTIONS AND HUMAN SCALE IN DAMASCENE COURTYARD HOUSES
Directory of Open Access Journals (Sweden)
M. Salim Ferwati
2008-03-01
Full Text Available Interior designers, architects, landscape architects, and even urban designers agree that the environment, as a form of non-verbal communication, has a symbolic dimension to it. As for its aesthetic dimension, it seems that beauty is related to a certain proportion, partially and as a whole. Suitable proportion leaves a good impression upon beholders, especially when it matches human proportion. That, in fact, was the underlying belief of Le Corbusier, on which he developed his Modulor concept. The study searches for a modular, or proportioning, system that governs the design of the traditional Damascene house. Through geometrical and mathematical examination of 28 traditional houses, it was found that certain proportional relationships existed; however, these proportional relationships were not fixed ones. The study relied on analyzing the Iwan elevation as well as the inner courtyard proportion in relation to the building area. Charts, diagrams and tables were produced to summarize the results.
Genetic Variance in Homophobia: Evidence from Self- and Peer Reports.
Zapko-Willmes, Alexandra; Kandler, Christian
2018-01-01
The present twin study combined self- and peer assessments of twins' general homophobia targeting gay men in order to replicate previous behavior genetic findings across different rater perspectives and to disentangle self-rater-specific variance from common variance in self- and peer-reported homophobia (i.e., rater-consistent variance). We hypothesized rater-consistent variance in homophobia to be attributable to genetic and nonshared environmental effects, and self-rater-specific variance to be partially accounted for by genetic influences. A sample of 869 twins and 1329 peer raters completed a seven-item scale covering cognitive, affective, and discriminatory homophobic tendencies. After correction for age and sex differences, we found most of the genetic contributions (62%) and significant nonshared environmental contributions (16%) to individual differences in self-reports on homophobia to be reflected in peer-reported homophobia as well. A significant genetic component, however, was self-report-specific (38%), suggesting that self-assessments alone produce somewhat inflated heritability estimates. Different explanations are discussed.
Impact of Damping Uncertainty on SEA Model Response Variance
Schiller, Noah; Cabell, Randolph; Grosveld, Ferdinand
2010-01-01
Statistical Energy Analysis (SEA) is commonly used to predict high-frequency vibroacoustic levels. This statistical approach provides the mean response over an ensemble of random subsystems that share the same gross system properties such as density, size, and damping. Recently, techniques have been developed to predict the ensemble variance as well as the mean response. However, these techniques do not account for uncertainties in the system properties. In the present paper, uncertainty in the damping loss factor is propagated through SEA to obtain more realistic prediction bounds that account for both ensemble and damping variance. The analysis is performed on a floor-equipped cylindrical test article that resembles an aircraft fuselage. Realistic bounds on the damping loss factor are determined from measurements acquired on the sidewall of the test article. The analysis demonstrates that uncertainties in damping have the potential to significantly impact the mean and variance of the predicted response.
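The effect of damping uncertainty can be illustrated with the simplest single-subsystem SEA power balance, Π_in = ω·η·E (a toy sketch with an assumed lognormal spread for the loss factor η; the numbers are invented and this is not the paper's cylinder model):

```python
import random
import statistics

PI = 3.141592653589793

def sea_energy(power_in, omega, eta):
    """Single-subsystem SEA steady state: E = Pi_in / (omega * eta)."""
    return power_in / (omega * eta)

random.seed(5)
omega = 2 * PI * 500.0                                 # 500 Hz band center
etas = [random.lognormvariate(-4.0, 0.3) for _ in range(10_000)]  # assumed spread
energies = [sea_energy(1.0, omega, e) for e in etas]
mean_e = statistics.mean(energies)
# Because 1/eta is convex, damping uncertainty raises the mean predicted
# response above the response at the mean damping (Jensen's inequality):
print(mean_e > sea_energy(1.0, omega, statistics.mean(etas)))  # → True
```

This is one reason propagating damping uncertainty, rather than using a nominal loss factor, widens and shifts the prediction bounds.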
Genetic and environmental variance in content dimensions of the MMPI.
Rose, R J
1988-08-01
To evaluate genetic and environmental variance in the Minnesota Multiphasic Personality Inventory (MMPI), I studied nine factor scales identified in the first item factor analysis of normal adult MMPIs in a sample of 820 adolescent and young adult co-twins. Conventional twin comparisons documented heritable variance in six of the nine MMPI factors (Neuroticism, Psychoticism, Extraversion, Somatic Complaints, Inadequacy, and Cynicism), whereas significant influence from shared environmental experience was found for four factors (Masculinity versus Femininity, Extraversion, Religious Orthodoxy, and Intellectual Interests). Genetic variance in the nine factors was more evident in results from twin sisters than from twin brothers, and a developmental-genetic analysis, using hierarchical multiple regressions of double-entry matrices of the twins' raw data, revealed that in four MMPI factor scales, genetic effects were significantly modulated by age or gender or their interaction during the developmental period from early adolescence to early adulthood.
A new variance stabilizing transformation for gene expression data analysis.
Kelmansky, Diana M; Martínez, Elena J; Leiva, Víctor
2013-12-01
In this paper, we introduce a new family of power transformations, which has the generalized logarithm as one of its members, in the same manner as the usual logarithm belongs to the family of Box-Cox power transformations. Although the new family has been developed for analyzing gene expression data, it is applicable to a wider range of data with related mean-variance structure. We study the analytical properties of the new family of transformations, as well as the mean-variance relationships that are stabilized by using its members. We propose a methodology based on this new family, which includes a simple strategy for selecting the family member adequate for a data set. We evaluate the finite sample behavior of different classical and robust estimators based on this strategy by Monte Carlo simulations. We analyze real genomic data by using the proposed transformation to empirically show how the new methodology allows the variance of these data to be stabilized.
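One member of such a family is the generalized logarithm, glog(x) = log((x + sqrt(x² + λ))/2), mentioned in the abstract. A minimal sketch follows; the tuning parameter λ = 1 is an arbitrary choice here, not a value from the paper:

```python
import math

def glog(x, lam=1.0):
    """Generalized logarithm: behaves like log(x) for large x but remains
    finite at (and below) zero, which is useful for noisy intensity data."""
    return math.log((x + math.sqrt(x * x + lam)) / 2.0)

# For large intensities glog is indistinguishable from log:
assert abs(glog(1e6) - math.log(1e6)) < 1e-6
# ...but unlike log it is defined at zero:
print(glog(0.0))   # log(1/2), a finite value
```

In variance-stabilization practice, λ is chosen from the data's estimated mean-variance relationship rather than fixed in advance.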
Pricing perpetual American options under multiscale stochastic elasticity of variance
International Nuclear Information System (INIS)
Yoon, Ji-Hun
2015-01-01
Highlights: • We study the effects of the stochastic elasticity of variance on perpetual American options. • Our SEV model consists of a fast mean-reverting factor and a slow mean-reverting factor. • A slow scale factor has a very significant impact on the option price. • We analyze option price structures through the market prices of elasticity risk. - Abstract: This paper studies pricing of perpetual American options under a constant elasticity of variance type of underlying asset price model, where the constant elasticity is replaced by a fast mean-reverting Ornstein–Uhlenbeck process and a slowly varying diffusion process. By using a multiscale asymptotic analysis, we find the impact of the stochastic elasticity of variance on the option prices and the optimal exercise prices with respect to model parameters. Our results enhance the existing option price structures in view of flexibility and applicability through the market prices of elasticity risk
Monte Carlo variance reduction approaches for non-Boltzmann tallies
International Nuclear Information System (INIS)
Booth, T.E.
1992-12-01
Quantities that depend on the collective effects of groups of particles cannot be obtained from the standard Boltzmann transport equation. Monte Carlo estimates of these quantities are called non-Boltzmann tallies and have become increasingly important recently. Standard Monte Carlo variance reduction techniques were designed for tallies based on individual particles rather than groups of particles. Experience with non-Boltzmann tallies and analog Monte Carlo has demonstrated the severe limitations of analog Monte Carlo for many non-Boltzmann tallies. In fact, many calculations absolutely require variance reduction methods to achieve practical computation times. Three different approaches to variance reduction for non-Boltzmann tallies are described and shown to be unbiased. The advantages and disadvantages of each of the approaches are discussed
The mean and variance of phylogenetic diversity under rarefaction.
Nipperess, David A; Matsen, Frederick A
2013-06-01
Phylogenetic diversity (PD) depends on sampling depth, which complicates the comparison of PD between samples of different depth. One approach to dealing with differing sample depth for a given diversity statistic is to rarefy, which means to take a random subset of a given size of the original sample. Exact analytical formulae for the mean and variance of species richness under rarefaction have existed for some time but no such solution exists for PD. We have derived exact formulae for the mean and variance of PD under rarefaction. We confirm that these formulae are correct by comparing exact solution mean and variance to that calculated by repeated random (Monte Carlo) subsampling of a dataset of stem counts of woody shrubs of Toohey Forest, Queensland, Australia. We also demonstrate the application of the method using two examples: identifying hotspots of mammalian diversity in Australasian ecoregions, and characterising the human vaginal microbiome. There is a very high degree of correspondence between the analytical and random subsampling methods for calculating mean and variance of PD under rarefaction, although the Monte Carlo method requires a large number of random draws to converge on the exact solution for the variance. Rarefaction of mammalian PD of ecoregions in Australasia to a common standard of 25 species reveals very different rank orderings of ecoregions, indicating quite different hotspots of diversity than those obtained for unrarefied PD. The application of these methods to the vaginal microbiome shows that a classical score used to quantify bacterial vaginosis is correlated with the shape of the rarefaction curve. The analytical formulae for the mean and variance of PD under rarefaction are both exact and more efficient than repeated subsampling. Rarefaction of PD allows for many applications where comparison of samples of different depth is required.
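The species-richness analogue mentioned above conveys the idea: the expected richness in a random subsample of size n has an exact combinatorial form, which repeated subsampling only approximates. The sketch below uses invented abundances, and the PD formulae themselves (summing over branches rather than species) are more involved:

```python
import math
import random

def rarefied_richness_mean(counts, n):
    """Exact expected number of species in a random subsample of size n:
    each species is present unless all n draws miss it (Hurlbert's formula)."""
    N = sum(counts)
    return sum(1 - math.comb(N - c, n) / math.comb(N, n) for c in counts)

counts = [50, 30, 15, 4, 1]          # abundances of 5 species, N = 100
exact = rarefied_richness_mean(counts, 10)

# Monte Carlo check by repeated random subsampling
random.seed(2)
pool = [s for s, c in enumerate(counts) for _ in range(c)]
trials = 2000
mc = sum(len(set(random.sample(pool, 10))) for _ in range(trials)) / trials
print(round(exact, 2), round(mc, 2))   # the two values agree closely
```

As the abstract notes for PD, the Monte Carlo route needs many draws (especially for the variance), while the closed form is exact in one pass.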
Variance estimation for sensitivity analysis of poverty and inequality measures
Directory of Open Access Journals (Sweden)
Christian Dudel
2017-04-01
Full Text Available Estimates of poverty and inequality are often based on the application of a single equivalence scale, despite the fact that a large number of different equivalence scales can be found in the literature. This paper describes a framework for sensitivity analysis which can be used to account for the variability of equivalence scales and allows one to derive variance estimates for the results of the sensitivity analysis. Simulations show that this method yields reliable estimates. An empirical application reveals that accounting for both the variability of equivalence scales and sampling variance leads to wide confidence intervals.
Studying Variance in the Galactic Ultra-compact Binary Population
Larson, Shane; Breivik, Katelyn
2017-01-01
In the years preceding LISA, Milky Way compact binary population simulations can be used to inform the science capabilities of the mission. Galactic population simulation efforts generally focus on high fidelity models that require extensive computational power to produce a single simulated population for each model. Each simulated population represents an incomplete sample of the functions governing compact binary evolution, thus introducing variance from one simulation to another. We present a rapid Monte Carlo population simulation technique that can simulate thousands of populations on week-long timescales, thus allowing a full exploration of the variance associated with a binary stellar evolution model.
Variance of a product with application to uranium estimation
International Nuclear Information System (INIS)
Lowe, V.W.; Waterman, M.S.
1976-01-01
The U in a container can be determined either directly by NDA or by estimating the weight of material in the container and the concentration of U in this material. It is important to examine the statistical properties of estimating the amount of U by multiplying the estimates of weight and concentration. The variance of the product determines the accuracy of the estimate of the amount of uranium. This paper examines the properties of estimates of the variance of the product of two random variables
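For independent X and Y, the variance of the product has a closed form (Goodman's formula). The sketch below checks it by simulation with invented weight and concentration figures; independence is an assumption here, and the paper treats the estimation problem more generally:

```python
import random

def product_variance(mu_x, var_x, mu_y, var_y):
    """Exact Var(XY) for independent X and Y:
    mu_x^2 * var_y + mu_y^2 * var_x + var_x * var_y."""
    return mu_x ** 2 * var_y + mu_y ** 2 * var_x + var_x * var_y

# e.g. X = bulk weight (g), Y = uranium concentration (g U / g)
mu_x, var_x = 200.0, 4.0
mu_y, var_y = 0.9, 0.0001

exact = product_variance(mu_x, var_x, mu_y, var_y)

random.seed(3)
n = 100_000
prods = [random.gauss(mu_x, var_x ** 0.5) * random.gauss(mu_y, var_y ** 0.5)
         for _ in range(n)]
m = sum(prods) / n
mc = sum((p - m) ** 2 for p in prods) / n
print(exact, round(mc, 2))   # the simulated variance agrees with the formula
```

Note the cross term var_x * var_y: dropping it gives the familiar first-order propagation formula, which underestimates the variance when both relative errors are large.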
Levine's guide to SPSS for analysis of variance
Braver, Sanford L; Page, Melanie
2003-01-01
A greatly expanded and heavily revised second edition, this popular guide provides instructions and clear examples for running analyses of variance (ANOVA) and several other related statistical tests of significance with SPSS. No other guide offers the program statements required for the more advanced tests in analysis of variance. All of the programs in the book can be run using any version of SPSS, including versions 11 and 11.5. A table at the end of the preface indicates where each type of analysis (e.g., simple comparisons) can be found for each type of design (e.g., mixed two-factor design).
Variance squeezing and entanglement of the XX central spin model
International Nuclear Information System (INIS)
El-Orany, Faisal A A; Abdalla, M Sebawe
2011-01-01
In this paper, we study the quantum properties for a system that consists of a central atom interacting with surrounding spins through the Heisenberg XX couplings of equal strength. Employing the Heisenberg equations of motion we manage to derive an exact solution for the dynamical operators. We consider that the central atom and its surroundings are initially prepared in the excited state and in the coherent spin state, respectively. For this system, we investigate the evolution of variance squeezing and entanglement. The nonclassical effects have been remarked in the behavior of all components of the system. The atomic variance can exhibit revival-collapse phenomenon based on the value of the detuning parameter.
Asymptotic variance of grey-scale surface area estimators
DEFF Research Database (Denmark)
Svane, Anne Marie
Grey-scale local algorithms have been suggested as a fast way of estimating surface area from grey-scale digital images. Their asymptotic mean has already been described. In this paper, the asymptotic behaviour of the variance is studied in isotropic and sufficiently smooth settings, resulting in a general asymptotic bound. For compact convex sets with nowhere vanishing Gaussian curvature, the asymptotics can be described more explicitly. As in the case of volume estimators, the variance is decomposed into a lattice sum and an oscillating term of at most the same magnitude.
Variance squeezing and entanglement of the XX central spin model
Energy Technology Data Exchange (ETDEWEB)
El-Orany, Faisal A A [Department of Mathematics and Computer Science, Faculty of Science, Suez Canal University, Ismailia (Egypt); Abdalla, M Sebawe, E-mail: m.sebaweh@physics.org [Mathematics Department, College of Science, King Saud University PO Box 2455, Riyadh 11451 (Saudi Arabia)
2011-01-21
Pressure control valve using proportional electro-magnetic solenoid actuator
International Nuclear Information System (INIS)
Yun, So Nam; Ham, Young Bog; Park, Pyoung Won
2006-01-01
This paper presents the experimental characteristics of an electro-hydraulic proportional pressure control valve. In this study, the poppet and valve body, which are assembled with the proportional solenoid, were designed and manufactured. The constant-force characteristic of the proportional solenoid actuator in the control region should be independent of the plunger position in order for the actuator to be used to control the valve position in a fluid flow control system. The stroke-force characteristics of the proportional solenoid actuator are determined by the shape (or parameters) of the control cone. In this paper, the steady-state and transient characteristics of the solenoid actuator for the electro-hydraulic proportional valve are analyzed using the finite element method, and it is confirmed that the proportional solenoid actuator has a constant attraction force in the control region, independent of the stroke position. The effects of parameters such as control cone length, thickness and taper length are also discussed
Rijn, M.J. van; Schut, A.F.; Aulchenko, Y.S.; Deinum, J.; Sayed-Tabatabaei, F.A.; Yazdanpanah, M.; Isaacs, A.; Axenovich, T.I.; Zorkoltseva, I.V.; Zillikens, M.C.; Pols, H.A.; Witteman, J.C.; Oostra, B.A.; Duijn, C.M. van
2007-01-01
OBJECTIVE: To study the heritability of four blood pressure traits and the proportion of variance explained by four blood-pressure-related genes. METHODS: All participants are members of an extended pedigree from a Dutch genetically isolated population. Heritability and genetic correlations of
Development of extruded resistive plastic tubes for proportional chamber cathodes
International Nuclear Information System (INIS)
Kondo, K.
1982-01-01
Carbon mixed plastic tubes with resistivity of 10^3 to 10^4 Ωcm have been molded with an extrusion method and used for the d.c. cathode of a proportional counter and a multi-wire proportional chamber. The signal by gas multiplication was picked up from a strip r.f. cathode set outside the tube. The characteristics of the counter in the proportional and limited streamer modes have been studied
Multiwire proportional chamber for Moessbauer spectroscopy: development and results
International Nuclear Information System (INIS)
Costa, M.S. da.
1985-12-01
A new multiwire proportional chamber designed for Moessbauer spectroscopy is presented. This detector allows transmission and backscattering experiments using either photons or electrons. The Moessbauer data acquisition system, partially developed for this work, is described. A simple method for determining the frontier between the true proportional and semi-proportional regions of operation in gaseous detectors is proposed. The study of the ternary gas mixture He-Ar-CH4 leads to a straightforward way of energy calibration of the electron spectra. Moessbauer spectra using a Fe-57 source are presented. In particular, those obtained with backscattered electrons show the feasibility of depth-selective analysis with gaseous proportional counters. (author) [pt
Bjørnerem, Åshild; Bui, Minh; Wang, Xiaofang; Ghasem-Zadeh, Ali; Hopper, John L; Zebaze, Roger; Seeman, Ego
2015-03-01
All genetic and environmental factors contributing to differences in bone structure between individuals mediate their effects through the final common cellular pathway of bone modeling and remodeling. We hypothesized that genetic factors account for most of the population variance of cortical and trabecular microstructure, in particular intracortical porosity and medullary size - void volumes (porosity), which establish the internal bone surface areas or interfaces upon which modeling and remodeling deposit or remove bone to configure bone microarchitecture. Microarchitecture of the distal tibia and distal radius and remodeling markers were measured for 95 monozygotic (MZ) and 66 dizygotic (DZ) white female twin pairs aged 40 to 61 years. Images obtained using high-resolution peripheral quantitative computed tomography were analyzed using StrAx1.0, a nonthreshold-based software that quantifies cortical matrix and porosity. Genetic and environmental components of variance were estimated under the assumptions of the classic twin model. The data were consistent with the proportion of variance accounted for by genetic factors being: 72% to 81% (standard errors ∼18%) for the distal tibial total, cortical, and medullary cross-sectional area (CSA); 67% and 61% for total cortical porosity, before and after adjusting for total CSA, respectively; and 51% for trabecular volumetric bone mineral density (vBMD; all p < 0.05). At the distal radius, genetic factors accounted for 47% to 68% of the variance (all p ≤ 0.001). Cross-twin cross-trait correlations between tibial cortical porosity and medullary CSA were higher for MZ (rMZ = 0.49) than DZ (rDZ = 0.27) pairs before (p = 0.024), but not after (p = 0.258), adjusting for total CSA. For the remodeling markers, the data were consistent with genetic factors accounting for 55% to 62% of the variance. We infer that middle-aged women differ in their bone microarchitecture and remodeling markers more because of differences in their genetic factors than
Energy Technology Data Exchange (ETDEWEB)
Christoforou, Stavros, E-mail: stavros.christoforou@gmail.com [Kirinthou 17, 34100, Chalkida (Greece); Hoogenboom, J. Eduard, E-mail: j.e.hoogenboom@tudelft.nl [Department of Applied Sciences, Delft University of Technology (Netherlands)
2011-07-01
A zero-variance based scheme is implemented and tested in the MCNP5 Monte Carlo code. The scheme is applied to a mini-core reactor using the adjoint function obtained from a deterministic calculation for biasing the transport kernels. It is demonstrated that the variance of the k_eff estimate is halved compared to a standard criticality calculation. In addition, the biasing does not affect source distribution convergence of the system. However, since the code lacked optimisations for speed, we were not able to demonstrate an appropriate increase in the efficiency of the calculation, because of the higher CPU time cost. (author)
International Nuclear Information System (INIS)
Christoforou, Stavros; Hoogenboom, J. Eduard
2011-01-01
A zero-variance based scheme is implemented and tested in the MCNP5 Monte Carlo code. The scheme is applied to a mini-core reactor using the adjoint function obtained from a deterministic calculation for biasing the transport kernels. It is demonstrated that the variance of the k_eff estimate is halved compared to a standard criticality calculation. In addition, the biasing does not affect source distribution convergence of the system. However, since the code lacked optimisations for speed, we were not able to demonstrate an appropriate increase in the efficiency of the calculation, because of the higher CPU time cost. (author)
Multivariate Variance Targeting in the BEKK-GARCH Model
DEFF Research Database (Denmark)
Pedersen, Rasmus Søndergaard; Rahbek, Anders
This paper considers asymptotic inference in the multivariate BEKK model based on (co-)variance targeting (VT). By definition the VT estimator is a two-step estimator and the theory presented is based on expansions of the modified likelihood function, or estimating function, corresponding...
Multivariate Variance Targeting in the BEKK-GARCH Model
DEFF Research Database (Denmark)
Pedersen, Rasmus Søndergaard; Rahbek, Anders
2014-01-01
This paper considers asymptotic inference in the multivariate BEKK model based on (co-)variance targeting (VT). By definition the VT estimator is a two-step estimator and the theory presented is based on expansions of the modified likelihood function, or estimating function, corresponding...
Analysis of Variance: What Is Your Statistical Software Actually Doing?
Li, Jian; Lomax, Richard G.
2011-01-01
Users assume statistical software packages produce accurate results. In this article, the authors systematically examined Statistical Package for the Social Sciences (SPSS) and Statistical Analysis System (SAS) for three analysis of variance (ANOVA) designs: mixed-effects ANOVA, fixed-effects analysis of covariance (ANCOVA), and nested ANOVA. For each…
Genetic variance components for residual feed intake and feed ...
African Journals Online (AJOL)
Feeding costs of animals is a major determinant of profitability in livestock production enterprises. Genetic selection to improve feed efficiency aims to reduce feeding cost in beef cattle and thereby improve profitability. This study estimated genetic (co)variances between weaning weight and other production, reproduction ...
Cumulative Prospect Theory, Option Returns, and the Variance Premium
Baele, Lieven; Driessen, Joost; Ebert, Sebastian; Londono Yarce, J.M.; Spalt, Oliver
The variance premium and the pricing of out-of-the-money (OTM) equity index options are major challenges to standard asset pricing models. We develop a tractable equilibrium model with Cumulative Prospect Theory (CPT) preferences that can overcome both challenges. The key insight is that the
Gravity interpretation of dipping faults using the variance analysis method
International Nuclear Information System (INIS)
Essa, Khalid S
2013-01-01
A new algorithm is developed to estimate simultaneously the depth and the dip angle of a buried fault from the normalized gravity gradient data. This algorithm utilizes numerical first horizontal derivatives computed from the observed gravity anomaly, using filters of successive window lengths to estimate the depth and the dip angle of a buried dipping fault structure. For a fixed window length, the depth is estimated using a least-squares sense for each dip angle. The method is based on computing the variance of the depths determined from all horizontal gradient anomaly profiles using the least-squares method for each dip angle. The minimum variance is used as a criterion for determining the correct dip angle and depth of the buried structure. When the correct dip angle is used, the variance of the depths is always less than the variances computed using wrong dip angles. The technique can be applied not only to the true residuals, but also to the measured Bouguer gravity data. The method is applied to synthetic data with and without random errors and two field examples from Egypt and Scotland. In all cases examined, the estimated depths and other model parameters are found to be in good agreement with the actual values. (paper)
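The selection rule in this abstract, choosing the dip angle whose depth estimates have minimum variance across successive window lengths, can be sketched generically. The depth estimator below is a hypothetical toy stand-in (the real method least-squares-fits horizontal gravity-gradient profiles); only the argmin-of-variance criterion itself is illustrated:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the paper's estimator: depths recovered from profiles of
# successive window lengths agree when the trial dip angle is correct and
# scatter when it is wrong.  This error model is invented purely for
# illustration, not taken from the paper.
TRUE_DIP_DEG = 45.0
TRUE_DEPTH_KM = 10.0

def estimate_depths(dip_deg, n_windows=10):
    spread = 0.01 + 0.1 * abs(dip_deg - TRUE_DIP_DEG)  # hypothetical error model
    return TRUE_DEPTH_KM + rng.normal(0.0, spread, n_windows)

candidate_dips = np.arange(15.0, 80.0, 5.0)
depth_variance = {dip: np.var(estimate_depths(dip)) for dip in candidate_dips}
best_dip = min(depth_variance, key=depth_variance.get)   # minimum-variance pick
best_depth = float(np.mean(estimate_depths(best_dip)))
```

Because depths computed with the wrong dip disagree across window lengths, the variance criterion recovers the correct dip without needing the true depth in advance.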
Bounds for Tail Probabilities of the Sample Variance
Directory of Open Access Journals (Sweden)
Van Zuijlen M
2009-01-01
Full Text Available We provide bounds for tail probabilities of the sample variance. The bounds are expressed in terms of Hoeffding functions and are the sharpest known. They are designed having in mind applications in auditing as well as in processing data related to environment.
Robust estimation of the noise variance from background MR data
Sijbers, J.; Den Dekker, A.J.; Poot, D.; Bos, R.; Verhoye, M.; Van Camp, N.; Van der Linden, A.
2006-01-01
In the literature, many methods are available for estimation of the variance of the noise in magnetic resonance (MR) images. A commonly used method, based on the maximum of the background mode of the histogram, is revisited and a new, robust, and easy to use method is presented based on maximum
Stable limits for sums of dependent infinite variance random variables
DEFF Research Database (Denmark)
Bartkiewicz, Katarzyna; Jakubowski, Adam; Mikosch, Thomas
2011-01-01
The aim of this paper is to provide conditions which ensure that the affinely transformed partial sums of a strictly stationary process converge in distribution to an infinite variance stable distribution. Conditions for this convergence to hold are known in the literature. However, most of these...
Computing the Expected Value and Variance of Geometric Measures
DEFF Research Database (Denmark)
Staals, Frank; Tsirogiannis, Constantinos
2017-01-01
distance (MPD), the squared Euclidean distance from the centroid, and the diameter of the minimum enclosing disk. We also describe an efficient (1-ε)-approximation algorithm for computing the mean and variance of the mean pairwise distance. We implemented three of our algorithms and we show that our...
Estimation of the additive and dominance variances in South African ...
African Journals Online (AJOL)
The objective of this study was to estimate dominance variance for number born alive (NBA), 21- day litter weight (LWT21) and interval between parities (FI) in South African Landrace pigs. A total of 26223 NBA, 21335 LWT21 and 16370 FI records were analysed. Bayesian analysis via Gibbs sampling was used to estimate ...
A note on minimum-variance theory and beyond
Energy Technology Data Exchange (ETDEWEB)
Feng Jianfeng [Department of Informatics, Sussex University, Brighton, BN1 9QH (United Kingdom); Tartaglia, Giangaetano [Physics Department, Rome University 'La Sapienza', Rome 00185 (Italy); Tirozzi, Brunello [Physics Department, Rome University 'La Sapienza', Rome 00185 (Italy)
2004-04-30
We revisit the minimum-variance theory proposed by Harris and Wolpert (1998 Nature 394 780-4), discuss the implications of the theory on modelling the firing patterns of single neurons and analytically find the optimal control signals, trajectories and velocities. Under the rate coding assumption, input control signals employed in the minimum-variance theory should be Fitts processes rather than Poisson processes. Only if information is coded by interspike intervals are Poisson processes in agreement with the inputs employed in the minimum-variance theory. For the integrate-and-fire model with Fitts process inputs, interspike intervals of efferent spike trains are very irregular. We introduce diffusion approximations to approximate neural models with renewal process inputs and present theoretical results on calculating moments of interspike intervals of the integrate-and-fire model. Results in Feng et al (2002 J. Phys. A: Math. Gen. 35 7287-304) are generalized. In conclusion, we present a complete picture on the minimum-variance theory ranging from input control signals, to model outputs, and to its implications on modelling firing patterns of single neurons.
A Visual Model for the Variance and Standard Deviation
Orris, J. B.
2011-01-01
This paper shows how the variance and standard deviation can be represented graphically by looking at each squared deviation as a graphical object--in particular, as a square. A series of displays show how the standard deviation is the size of the average square.
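The squared-deviation-as-square picture reduces to a short computation; the data values below are arbitrary illustrations:

```python
# Each deviation from the mean is drawn as a square whose area is the
# squared deviation; the variance is the area of the average square, and
# the standard deviation is that average square's side length.
data = [2, 4, 4, 4, 5, 5, 7, 9]
mean = sum(data) / len(data)                 # 5.0
areas = [(x - mean) ** 2 for x in data]      # areas of the deviation squares
variance = sum(areas) / len(areas)           # population variance: 4.0
std_dev = variance ** 0.5                    # side of the average square: 2.0
```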
Multidimensional adaptive testing with a minimum error-variance criterion
van der Linden, Willem J.
1997-01-01
The case of adaptive testing under a multidimensional logistic response model is addressed. An adaptive algorithm is proposed that minimizes the (asymptotic) variance of the maximum-likelihood (ML) estimator of a linear combination of abilities of interest. The item selection criterion is a simple
Asymptotics of variance of the lattice point count
Czech Academy of Sciences Publication Activity Database
Janáček, Jiří
2008-01-01
Roč. 58, č. 3 (2008), s. 751-758 ISSN 0011-4642 R&D Projects: GA AV ČR(CZ) IAA100110502 Institutional research plan: CEZ:AV0Z50110509 Keywords : point lattice * variance Subject RIV: BA - General Mathematics Impact factor: 0.210, year: 2008
Vertical velocity variances and Reynold stresses at Brookhaven
DEFF Research Database (Denmark)
Busch, Niels E.; Brown, R.M.; Frizzola, J.A.
1970-01-01
Results of wind tunnel tests of the Brookhaven annular bivane are presented. The energy transfer functions describing the instrument response and the numerical filter employed in the data reduction process have been used to obtain corrected values of the normalized variance of the vertical wind v...
Estimates of variance components for postweaning feed intake and ...
African Journals Online (AJOL)
Mike
2013-03-09
Mar 9, 2013 ... transformation of RFIp and RDGp to z-scores (mean = 0.0, variance = 1.0) and then ... generation pedigree (n = 9 653) used for this analysis. ..... Nkrumah, J.D., Basarab, J.A., Wang, Z., Li, C., Price, M.A., Okine, E.K., Crews Jr., ...
An observation on the variance of a predicted response in ...
African Journals Online (AJOL)
... these properties and computational simplicity. To avoid over fitting, along with the obvious advantage of having a simpler equation, it is shown that the addition of a variable to a regression equation does not reduce the variance of a predicted response. Key words: Linear regression; Partitioned matrix; Predicted response ...
An entropy approach to size and variance heterogeneity
Balasubramanyan, L.; Stefanou, S.E.; Stokes, J.R.
2012-01-01
In this paper, we investigate the effect of bank size differences on cost efficiency heterogeneity using a heteroskedastic stochastic frontier model. This model is implemented by using an information theoretic maximum entropy approach. We explicitly model both bank size and variance heterogeneity
The Threat of Common Method Variance Bias to Theory Building
Reio, Thomas G., Jr.
2010-01-01
The need for more theory building scholarship remains one of the pressing issues in the field of HRD. Researchers can employ quantitative, qualitative, and/or mixed methods to support vital theory-building efforts, understanding however that each approach has its limitations. The purpose of this article is to explore common method variance bias as…
Variance in parametric images: direct estimation from parametric projections
International Nuclear Information System (INIS)
Maguire, R.P.; Leenders, K.L.; Spyrou, N.M.
2000-01-01
Recent work has shown that it is possible to apply linear kinetic models to dynamic projection data in PET in order to calculate parameter projections. These can subsequently be back-projected to form parametric images - maps of parameters of physiological interest. Critical to the application of these maps, to test for significant changes between normal and pathophysiology, is an assessment of the statistical uncertainty. In this context, parametric images also include simple integral images from, e.g., [O-15]-water used to calculate statistical parametric maps (SPMs). This paper revisits the concept of parameter projections and presents a more general formulation of the parameter projection derivation as well as a method to estimate parameter variance in projection space, showing which analysis methods (models) can be used. Using simulated pharmacokinetic image data we show that a method based on an analysis in projection space inherently calculates the mathematically rigorous pixel variance. This results in an estimation which is as accurate as either estimating variance in image space during model fitting, or estimation by comparison across sets of parametric images - as might be done between individuals in a group pharmacokinetic PET study. The method based on projections has, however, a higher computational efficiency, and is also shown to be more precise, as reflected in smooth variance distribution images when compared to the other methods. (author)
40 CFR 268.44 - Variance from a treatment standard.
2010-07-01
... complete petition may be requested as needed to send to affected states and Regional Offices. (e) The... provide an opportunity for public comment. The final decision on a variance from a treatment standard will... than) the concentrations necessary to minimize short- and long-term threats to human health and the...
Application of effective variance method for contamination monitor calibration
International Nuclear Information System (INIS)
Goncalez, O.L.; Freitas, I.S.M. de.
1990-01-01
In this report, the calibration of a thin window Geiger-Muller type monitor for alpha superficial contamination is presented. The calibration curve is obtained by the method of the least-squares fitting with effective variance. The method and the approach for the calculation are briefly discussed. (author)
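For a straight-line calibration curve, the effective-variance weighting the abstract refers to can be sketched as iterated weighted least squares, folding the x-uncertainty into the y-variance through the current slope. The data points and uncertainties below are made up for illustration:

```python
import numpy as np

# Fit y = a + b*x when both x and y carry measurement errors.
# Effective-variance weights: s_eff^2 = s_y^2 + b^2 * s_x^2, iterated
# until the slope stabilises.  (Illustrative calibration data.)
x  = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y  = np.array([2.1, 3.9, 6.2, 7.8, 10.1])
sx = np.full_like(x, 0.1)    # x uncertainties
sy = np.full_like(y, 0.2)    # y uncertainties

b = 0.0
for _ in range(20):
    w = 1.0 / (sy**2 + b**2 * sx**2)          # effective-variance weights
    W, Wx, Wy = w.sum(), (w * x).sum(), (w * y).sum()
    Wxx, Wxy = (w * x * x).sum(), (w * x * y).sum()
    b_new = (W * Wxy - Wx * Wy) / (W * Wxx - Wx**2)  # weighted LS slope
    a = (Wy - b_new * Wx) / W                        # weighted LS intercept
    if abs(b_new - b) < 1e-12:                       # slope converged
        b = b_new
        break
    b = b_new
```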
The VIX, the Variance Premium, and Expected Returns
DEFF Research Database (Denmark)
Osterrieder, Daniela Maria; Ventosa-Santaulària, Daniel; Vera-Valdés, Eduardo
2018-01-01
. These problems are eliminated if risk is captured by the variance premium (VP) instead; it is unobservable, however. We propose a 2SLS estimator that produces consistent estimates without observing the VP. Using this method, we find a positive risk–return trade-off and long-run return predictability. Our...
Some asymptotic theory for variance function smoothing | Kibua ...
African Journals Online (AJOL)
Simple selection of the smoothing parameter is suggested. Both homoscedastic and heteroscedastic regression models are considered. Keywords: Asymptotic, Smoothing, Kernel, Bandwidth, Bias, Variance, Mean squared error, Homoscedastic, Heteroscedastic. > East African Journal of Statistics Vol. 1 (1) 2005: pp. 9-22 ...
Variance-optimal hedging for processes with stationary independent increments
DEFF Research Database (Denmark)
Hubalek, Friedrich; Kallsen, J.; Krawczyk, L.
We determine the variance-optimal hedge when the logarithm of the underlying price follows a process with stationary independent increments in discrete or continuous time. Although the general solution to this problem is known as backward recursion or backward stochastic differential equation, we...
Adaptive Nonparametric Variance Estimation for a Ratio Estimator ...
African Journals Online (AJOL)
Kernel estimators for smooth curves require modifications when estimating near end points of the support, both for practical and asymptotic reasons. The construction of such boundary kernels as solutions of variational problem is a difficult exercise. For estimating the error variance of a ratio estimator, we suggest an ...
A note on minimum-variance theory and beyond
International Nuclear Information System (INIS)
Feng Jianfeng; Tartaglia, Giangaetano; Tirozzi, Brunello
2004-01-01
We revisit the minimum-variance theory proposed by Harris and Wolpert (1998 Nature 394 780-4), discuss the implications of the theory on modelling the firing patterns of single neurons and analytically find the optimal control signals, trajectories and velocities. Under the rate coding assumption, input control signals employed in the minimum-variance theory should be Fitts processes rather than Poisson processes. Only if information is coded by interspike intervals are Poisson processes in agreement with the inputs employed in the minimum-variance theory. For the integrate-and-fire model with Fitts process inputs, interspike intervals of efferent spike trains are very irregular. We introduce diffusion approximations to approximate neural models with renewal process inputs and present theoretical results on calculating moments of interspike intervals of the integrate-and-fire model. Results in Feng et al (2002 J. Phys. A: Math. Gen. 35 7287-304) are generalized. In conclusion, we present a complete picture on the minimum-variance theory ranging from input control signals, to model outputs, and to its implications on modelling firing patterns of single neurons.
Handling nonnormality and variance heterogeneity for quantitative sublethal toxicity tests.
Ritz, Christian; Van der Vliet, Leana
2009-09-01
The advantages of using regression-based techniques to derive endpoints from environmental toxicity data are clear, and slowly, this superior analytical technique is gaining acceptance. As use of regression-based analysis becomes more widespread, some of the associated nuances and potential problems come into sharper focus. Looking at data sets that cover a broad spectrum of standard test species, we noticed that some model fits to data failed to meet two key assumptions, variance homogeneity and normality, that are necessary for correct statistical analysis via regression-based techniques. Failure to meet these assumptions often is caused by reduced variance at the concentrations showing severe adverse effects. Although commonly used with linear regression analysis, transformation of the response variable only is not appropriate when fitting data using nonlinear regression techniques. Through analysis of sample data sets, including Lemna minor, Eisenia andrei (terrestrial earthworm), and algae, we show that both the so-called Box-Cox transformation and use of the Poisson distribution can help to correct variance heterogeneity and nonnormality and so allow nonlinear regression analysis to be implemented. Both the Box-Cox transformation and the Poisson distribution can be readily implemented into existing protocols for statistical analysis. By correcting for nonnormality and variance heterogeneity, these two statistical tools can be used to encourage the transition to regression-based analysis and the deprecation of less-desirable and less-flexible analytical techniques, such as linear interpolation.
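A minimal sketch of the Box-Cox transformation mentioned above, with the power chosen by maximizing the usual normal profile log-likelihood over a grid. The data are simulated, and real workflows can use e.g. scipy.stats.boxcox instead:

```python
import numpy as np

def boxcox(x, lam):
    # Box-Cox power transform of strictly positive data.
    x = np.asarray(x, dtype=float)
    return np.log(x) if lam == 0 else (x ** lam - 1.0) / lam

def profile_loglik(x, lam):
    # Normal profile log-likelihood of the transformed data (up to a constant).
    y = boxcox(x, lam)
    n = len(x)
    return -0.5 * n * np.log(np.var(y)) + (lam - 1.0) * np.log(x).sum()

# Right-skewed (lognormal) toy responses: the selected power should sit
# near 0, i.e. close to a plain log transform.
rng = np.random.default_rng(1)
x = rng.lognormal(mean=1.0, sigma=0.6, size=500)
grid = np.linspace(-2.0, 2.0, 401)
best_lam = grid[np.argmax([profile_loglik(x, lam) for lam in grid])]
```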
Molecular variance of the Tunisian almond germplasm assessed by ...
African Journals Online (AJOL)
The genetic variance analysis of 82 almond (Prunus dulcis Mill.) genotypes was performed using ten genomic simple sequence repeats (SSRs). A total of 50 genotypes from Tunisia including local landraces identified while prospecting the different sites of Bizerte and Sidi Bouzid (Northern and central parts) which are the ...
Starting design for use in variance exchange algorithms | Iwundu ...
African Journals Online (AJOL)
A new method of constructing the initial design for use in variance exchange algorithms is presented. The method chooses support points to go into the design as measures of distances of the support points from the centre of the geometric region and of permutation-invariant sets. The initial design is as close as possible to ...
Decomposition of variance in terms of conditional means
Directory of Open Access Journals (Sweden)
Alessandro Figà Talamanca
2013-05-01
Full Text Available Two different sets of data are used to test an apparently new approach to the analysis of the variance of a numerical variable which depends on qualitative variables. We suggest that this approach be used to complement other existing techniques to study the interdependence of the variables involved. According to our method, the variance is expressed as a sum of orthogonal components, obtained as differences of conditional means, with respect to the qualitative characters. The resulting expression for the variance depends on the ordering in which the characters are considered. We suggest an algorithm which leads to an ordering which is deemed natural. The first set of data concerns the score achieved by a population of students on an entrance examination based on a multiple choice test with 30 questions. In this case the qualitative characters are dyadic and correspond to correct or incorrect answer to each question. The second set of data concerns the delay to obtain the degree for a population of graduates of Italian universities. The variance in this case is analyzed with respect to a set of seven specific qualitative characters of the population studied (gender, previous education, working condition, parent's educational level, field of study, etc.).
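For a single qualitative character, the orthogonal decomposition described above is the familiar split of the variance into a between-group term (built from conditional means) and a within-group remainder. A minimal sketch with invented scores and one dichotomous character:

```python
import statistics
from collections import defaultdict

# Toy data: a score y paired with one dichotomous character x
# (e.g. correct/incorrect answer to a question).  Values are invented.
pairs = [(0, 3.0), (0, 4.0), (0, 5.0), (1, 7.0), (1, 8.0), (1, 9.0)]

ys = [y for _, y in pairs]
grand_mean = statistics.fmean(ys)
total_var = statistics.pvariance(ys)

groups = defaultdict(list)
for x, y in pairs:
    groups[x].append(y)

weights = {x: len(v) / len(pairs) for x, v in groups.items()}
cond_means = {x: statistics.fmean(v) for x, v in groups.items()}
cond_vars = {x: statistics.pvariance(v) for x, v in groups.items()}

# Between-group component: variance of the conditional means.
between = sum(w * (cond_means[x] - grand_mean) ** 2 for x, w in weights.items())
# Within-group component: weighted average of the conditional variances.
within = sum(w * cond_vars[x] for x, w in weights.items())
# Law of total variance: total_var == between + within
```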
A Hold-out method to correct PCA variance inflation
DEFF Research Database (Denmark)
Garcia-Moreno, Pablo; Artes-Rodriguez, Antonio; Hansen, Lars Kai
2012-01-01
In this paper we analyze the problem of variance inflation experienced by the PCA algorithm when working in an ill-posed scenario where the dimensionality of the training set is larger than its sample size. In an earlier article a correction method based on a Leave-One-Out (LOO) procedure...
Heterogeneity of variance and its implications on dairy cattle breeding
African Journals Online (AJOL)
Milk yield data (n = 12307) from 116 Holstein-Friesian herds were grouped into three production environments based on mean and standard deviation of herd 305-day milk yield and evaluated for within herd variation using univariate animal model procedures. Variance components were estimated by derivative free REML ...
Effects of Diversification of Assets on Mean and Variance | Jayeola ...
African Journals Online (AJOL)
Diversification is a means of minimizing risk and maximizing returns by investing in a variety of assets of the portfolio. This paper is written to determine the effects of diversification of three types of Assets; uncorrelated, perfectly correlated and perfectly negatively correlated assets on mean and variance. To go about this, ...
Perspective projection for variance pose face recognition from camera calibration
Fakhir, M. M.; Woo, W. L.; Chambers, J. A.; Dlay, S. S.
2016-04-01
Variance pose is an important research topic in face recognition. The alteration of distance parameters across variance pose face features is challenging. We provide a solution for this problem using perspective projection for variance pose face recognition. Our method infers intrinsic camera parameters of the image, which enable the projection of the image plane into 3D. After this, face box tracking and centre of eyes detection can be identified using our novel technique to verify the virtual face feature measurements. The coordinate system of the perspective projection for face tracking allows the holistic dimensions for the face to be fixed in different orientations. The training of frontal images and the rest of the poses on the FERET database determines the distance from the centre of eyes to the corner of the box face. The recognition system compares the gallery of images against different poses. The system initially utilises information on the position of both eyes, then focuses principally on the closest eye in order to gather data with greater reliability. Differentiation between the distances and position of the right and left eyes is a unique feature of our work, with our algorithm outperforming other state-of-the-art algorithms, thus enabling stable measurement in variance pose for each individual.
On zero variance Monte Carlo path-stretching schemes
International Nuclear Information System (INIS)
Lux, I.
1983-01-01
A zero variance path-stretching biasing scheme proposed for a special case by Dwivedi is derived in full generality. The procedure turns out to be the generalization of the exponential transform. It is shown that the biased game can be interpreted as an analog simulation procedure, thus saving some computational effort in comparison with the corresponding nonanalog game
A mean-variance frontier in discrete and continuous time
Bekker, Paul A.
2004-01-01
The paper presents a mean-variance frontier based on dynamic frictionless investment strategies in continuous time. The result applies to a finite number of risky assets whose price process is given by multivariate geometric Brownian motion with deterministically varying coefficients. The derivation
Hedging with stock index futures: downside risk versus the variance
Brouwer, F.; Nat, van der M.
1995-01-01
In this paper we investigate hedging a stock portfolio with stock index futures. Instead of defining the hedge ratio as the minimum variance hedge ratio, we consider several measures of downside risk: the semivariance according to Markowitz [1959] and the various lower partial moments according to
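The contrast between the variance and Markowitz's semivariance as risk criteria comes down to which deviations are penalized; the return series below is invented for illustration:

```python
# Variance penalizes deviations on both sides of the mean; the semivariance
# (a downside-risk measure) counts only below-mean outcomes.
returns = [0.04, -0.02, 0.03, -0.05, 0.01, 0.02, -0.01, 0.06]
mean = sum(returns) / len(returns)
variance = sum((r - mean) ** 2 for r in returns) / len(returns)
semivariance = sum(min(r - mean, 0.0) ** 2 for r in returns) / len(returns)
```

A hedge ratio chosen to minimize the semivariance ignores above-mean outcomes entirely, which is why it can differ from the minimum-variance hedge ratio.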
The variance quadtree algorithm: use for spatial sampling design
Minasny, B.; McBratney, A.B.; Walvoort, D.J.J.
2007-01-01
Spatial sampling schemes are mainly developed to determine sampling locations that can cover the variation of environmental properties in the area of interest. Here we proposed the variance quadtree algorithm for sampling in an area with prior information represented as ancillary or secondary
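The splitting rule can be sketched as a recursion that quarters any block whose variance exceeds a threshold, so that heterogeneous regions end up with more (hence smaller) sampling cells. The grid and threshold below are invented:

```python
import numpy as np

def variance_quadtree(grid, r0, r1, c0, c1, threshold, leaves):
    """Recursively split the block grid[r0:r1, c0:c1] into quadrants
    while its variance exceeds `threshold`; collect leaf blocks."""
    block = grid[r0:r1, c0:c1]
    if block.size <= 1 or np.var(block) <= threshold:
        leaves.append((r0, r1, c0, c1))
        return
    rm, cm = (r0 + r1) // 2, (c0 + c1) // 2
    variance_quadtree(grid, r0, rm, c0, cm, threshold, leaves)
    variance_quadtree(grid, r0, rm, cm, c1, threshold, leaves)
    variance_quadtree(grid, rm, r1, c0, cm, threshold, leaves)
    variance_quadtree(grid, rm, r1, cm, c1, threshold, leaves)

# Smooth left half, noisy right half: the noisy half gets split finer,
# i.e. receives more sampling cells.
rng = np.random.default_rng(0)
grid = np.zeros((8, 8))
grid[:, 4:] = rng.normal(0.0, 1.0, (8, 4))
leaves = []
variance_quadtree(grid, 0, 8, 0, 8, threshold=0.1, leaves=leaves)
```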
Properties of realized variance under alternative sampling schemes
Oomen, R.C.A.
2006-01-01
This paper investigates the statistical properties of the realized variance estimator in the presence of market microstructure noise. Different from the existing literature, the analysis relies on a pure jump process for high frequency security prices and explicitly distinguishes among alternative
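Under any given sampling scheme, the realized variance estimator itself is just the sum of squared high-frequency log returns; the prices below are illustrative (and, as the abstract implies, microstructure noise biases this sum upward as sampling becomes finer):

```python
import math

# Realized variance over one interval: sum of squared intraday log returns.
prices = [100.0, 100.5, 99.8, 100.2, 100.9, 100.4]
log_returns = [math.log(p1 / p0) for p0, p1 in zip(prices, prices[1:])]
realized_var = sum(r * r for r in log_returns)
realized_vol = math.sqrt(realized_var)
```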
Variance component and heritability estimates of early growth traits ...
African Journals Online (AJOL)
as selection criteria for meat production in sheep (Anon., 1970; Olson et al., 1976; Lasslo et al., 1985; Badenhorst et al., 1991). If these traits are to be included in a breeding programme, accurate estimates of breeding values will be needed to optimize selection programmes. This requires a knowledge of variance and co-.
Variances in consumer prices of selected food items among ...
African Journals Online (AJOL)
The study focused on the determination of variances among consumer prices of rice (local white), beans (white) and garri (yellow) in Watts, Okurikang and 8 Miles markets in southern zone of Cross River State. Completely randomized design was used to test the research hypothesis. Comparing the consumer prices of rice, ...
Age Differences in the Variance of Personality Characteristics
Czech Academy of Sciences Publication Activity Database
Mottus, R.; Allik, J.; Hřebíčková, Martina; Kööts-Ausmees, L.; Realo, A.
2016-01-01
Roč. 30, č. 1 (2016), s. 4-11 ISSN 0890-2070 R&D Projects: GA ČR GA13-25656S Institutional support: RVO:68081740 Keywords : variance * individual differences * personality * five-factor model Subject RIV: AN - Psychology Impact factor: 3.707, year: 2016
Variance in exposed perturbations impairs retention of visuomotor adaptation.
Canaveral, Cesar Augusto; Danion, Frédéric; Berrigan, Félix; Bernier, Pierre-Michel
2017-11-01
Sensorimotor control requires an accurate estimate of the state of the body. The brain optimizes state estimation by combining sensory signals with predictions of the sensory consequences of motor commands using a forward model. Given that both sensory signals and predictions are uncertain (i.e., noisy), the brain optimally weights the relative reliance on each source of information during adaptation. In support, it is known that uncertainty in the sensory predictions influences the rate and generalization of visuomotor adaptation. We investigated whether uncertainty in the sensory predictions affects the retention of a new visuomotor relationship. This was done by exposing three separate groups to a visuomotor rotation whose mean was common at 15° counterclockwise but whose variance around the mean differed (i.e., SD of 0°, 3.2°, or 4.5°). Retention was assessed by measuring the persistence of the adapted behavior in a no-vision phase. Results revealed that mean reach direction late in adaptation was similar across groups, suggesting it depended mainly on the mean of exposed rotations and was robust to differences in variance. However, retention differed across groups, with higher levels of variance being associated with a more rapid reversion toward nonadapted behavior. A control experiment ruled out the possibility that differences in retention were accounted for by differences in success rates. Exposure to variable rotations may have increased the uncertainty in sensory predictions, making the adapted forward model more labile and susceptible to change or decay. NEW & NOTEWORTHY The brain predicts the sensory consequences of motor commands through a forward model. These predictions are subject to uncertainty. We use visuomotor adaptation and modulate uncertainty in the sensory predictions by manipulating the variance in exposed rotations. Results reveal that variance does not influence the final extent of adaptation but selectively impairs the retention of
Variance risk premia in CO_2 markets: A political perspective
International Nuclear Information System (INIS)
Reckling, Dennis
2016-01-01
The European Commission discusses the change of free allocation plans to guarantee a stable market equilibrium. Selling over-allocated contracts effectively depreciates prices and negates the effect intended by the regulator to establish a stable price mechanism for CO_2 assets. Our paper investigates mispricing and allocation issues by quantitatively analyzing variance risk premia of CO_2 markets over the course of changing regimes (Phase I-III) for three different assets (European Union Allowances, Certified Emissions Reductions and European Reduction Units). The research paper gives recommendations to regulatory bodies in order to most effectively cap the overall carbon dioxide emissions. The analysis of an enriched dataset, comprising not only additional CO_2 assets but also data from the European Energy Exchange, shows that variance risk premia are equal to a sample average of 0.69 for European Union Allowances (EUA), 0.17 for Certified Emissions Reductions (CER) and 0.81 for European Reduction Units (ERU). We identify the existence of a common risk factor across different assets that justifies the presence of risk premia. Various policy implications with regard to gaining investors' confidence in the market are being reviewed. Consequently, we recommend the implementation of a price collar approach to support stable prices for emission allowances. - Highlights: •Enriched dataset covering all three political phases of the CO_2 markets. •Clear policy implications for regulators to most effectively cap the overall CO_2 emissions pool. •Applying a cross-asset benchmark index for variance beta estimation. •CER contracts have been analyzed with respect to variance risk premia for the first time. •Increased forecasting accuracy for CO_2 asset returns by using variance risk premia.
X-ray proportional counter for the Viking Lander
International Nuclear Information System (INIS)
Glesius, F.L.; Kroon, J.C.; Castro, A.J.; Clark, B.C.
1978-01-01
A set of four sealed proportional counters with optimized energy response is employed in the X-ray fluorescence spectrometer units aboard the two Viking Landers. The instruments have provided quantitative elemental analyses of soil samples taken from the Martian surface. This paper discusses the design and development of these miniature proportional counters, and describes their performance on Mars
Putative golden proportions as predictors of facial esthetics in adolescents.
Kiekens, R.M.A.; Kuijpers-Jagtman, A.M.; Hof, M.A. van 't; Hof, B.E. van 't; Maltha, J.C.
2008-01-01
INTRODUCTION: In orthodontics, facial esthetics is assumed to be related to golden proportions apparent in the ideal human face. The aim of the study was to analyze the putative relationship between facial esthetics and golden proportions in white adolescents. METHODS: Seventy-six adult laypeople
The principle of proportionality and European contract law
Cauffman, C.; Rutgers, J.; Sirena, P.
2015-01-01
The paper investigates the role of the principle of proportionality within contract law, in balancing the rights and obligations of the contracting parties. It illustrates that the principle of proportionality is one of the general principles which govern contractual relations, and as such it is an
The Improved Estimation of Ratio of Two Population Proportions
Solanki, Ramkrishna S.; Singh, Housila P.
2016-01-01
In this article, first we obtained the correct mean square error expression of Gupta and Shabbir's linear weighted estimator of the ratio of two population proportions. Later we suggested the general class of ratio estimators of two population proportions. The usual ratio estimator, Wynn-type estimator, Singh, Singh, and Kaur difference-type…
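The usual ratio estimator of two population proportions mentioned above can be sketched as follows. The delta-method variance used here is the standard large-sample approximation for independent samples, not the corrected mean square error expression derived in the article; the counts are hypothetical.

```python
def ratio_of_proportions(x1, n1, x2, n2):
    """Usual ratio estimator R = p1_hat / p2_hat with a delta-method variance
    (independent binomial samples assumed)."""
    p1, p2 = x1 / n1, x2 / n2
    r = p1 / p2
    # Var(R) ~ R^2 * [ (1 - p1)/(n1 * p1) + (1 - p2)/(n2 * p2) ]
    var = r * r * ((1 - p1) / (n1 * p1) + (1 - p2) / (n2 * p2))
    return r, var

# Hypothetical counts: 30/100 successes vs 60/100 successes
r, v = ratio_of_proportions(30, 100, 60, 100)
```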
Attention Modulation by Proportion Congruency: The Asymmetrical List Shifting Effect
Abrahamse, Elger L.; Duthoo, Wout; Notebaert, Wim; Risko, Evan F.
2013-01-01
Proportion congruency effects represent hallmark phenomena in current theorizing about cognitive control. This is based on the notion that proportion congruency determines the relative levels of attention to relevant and irrelevant information in conflict tasks. However, little empirical evidence exists that uniquely supports such an attention…
16 CFR 240.9 - Proportionally equal terms.
2010-01-01
... 16 Commercial Practices 1 2010-01-01 2010-01-01 false Proportionally equal terms. 240.9 Section 240.9 Commercial Practices FEDERAL TRADE COMMISSION GUIDES AND TRADE PRACTICE RULES GUIDES FOR ADVERTISING ALLOWANCES AND OTHER MERCHANDISING PAYMENTS AND SERVICES § 240.9 Proportionally equal terms. (a...
Output Power Control of Wind Turbine Generator by Pitch Angle Control using Minimum Variance Control
Senjyu, Tomonobu; Sakamoto, Ryosei; Urasaki, Naomitsu; Higa, Hiroki; Uezato, Katsumi; Funabashi, Toshihisa
In recent years, there have been problems such as the exhaustion of fossil fuels, e.g., coal and oil, and environmental pollution resulting from their consumption. Effective utilization of renewable energies such as wind energy is expected instead of fossil fuels. Wind energy is not constant, and windmill output is proportional to the cube of wind speed, which causes the generated power of wind turbine generators (WTGs) to fluctuate. In order to reduce the fluctuating components, one method is to control the pitch angle of the windmill blades. In this paper, output power leveling of a wind turbine generator by pitch angle control using adaptive control is proposed. A self-tuning regulator is used in the adaptive control, and the control input is determined by minimum variance control. The proposed controller can compensate the control input to alleviate fluctuations in the generated power. Simulation results using an actual detailed model of a wind power system show the effectiveness of the proposed controller.
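The minimum variance idea can be sketched on an assumed scalar first-order plant with known parameters: choose the input so the predicted next output is zero, leaving only the unpredictable noise. A self-tuning regulator, as used in the paper, would instead estimate the plant parameters online (e.g., by recursive least squares) and apply the same law with the estimates; the plant and noise level here are hypothetical.

```python
import random

random.seed(0)
a, b, sigma = 0.9, 0.5, 0.05   # assumed plant: y[t+1] = a*y[t] + b*u[t] + e[t+1]

def simulate(controlled, n=2000):
    y, ys = 0.0, []
    for _ in range(n):
        # Minimum variance law: pick u so the predicted next output is zero
        u = -(a / b) * y if controlled else 0.0
        y = a * y + b * u + random.gauss(0, sigma)
        ys.append(y)
    return ys

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

v_open = variance(simulate(False))  # uncontrolled AR(1) output variance
v_mv = variance(simulate(True))     # under MV control the output is just the noise
```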
Soeryana, E.; Fadhlina, N.; Sukono; Rusyaman, E.; Supian, S.
2017-01-01
In stock investments, investors also face risk, because daily stock prices fluctuate. To minimize the level of risk, investors usually form an investment portfolio. Establishing a portfolio consisting of several stocks is intended to obtain the optimal composition of the investment portfolio. This paper discusses mean-variance optimization of a stock portfolio in which the mean and volatility are not constant, based on a logarithmic utility function. The non-constant mean is analysed using an Autoregressive Moving Average (ARMA) model, while the non-constant volatility is analysed using a Generalized Autoregressive Conditional Heteroscedasticity (GARCH) model. The optimization is performed using the Lagrangian multiplier technique. As a numerical illustration, the method is used to analyse some Islamic stocks in Indonesia. The expected result is to obtain the proportion of investment in each Islamic stock analysed.
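The non-constant volatility model named above can be sketched with the standard GARCH(1,1) conditional variance recursion; the parameter values and return series here are hypothetical, not estimates from the Islamic stocks analysed in the paper.

```python
def garch_variance(returns, omega=1e-5, alpha=0.1, beta=0.85):
    """GARCH(1,1) recursion: sigma2[t] = omega + alpha*r[t-1]**2 + beta*sigma2[t-1]."""
    sigma2 = [omega / (1.0 - alpha - beta)]   # start at the unconditional variance
    for r in returns[:-1]:
        sigma2.append(omega + alpha * r * r + beta * sigma2[-1])
    return sigma2

# Hypothetical daily returns
rets = [0.01, -0.03, 0.002, 0.015, -0.001]
s2 = garch_variance(rets)
```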
Variance components estimation for farrowing traits of three purebred pigs in Korea
Directory of Open Access Journals (Sweden)
Bryan Irvine Lopez
2017-09-01
Full Text Available Objective This study was conducted to estimate breed-specific variance components for total number born (TNB), number born alive (NBA) and mortality rate from birth through weaning, including stillbirths (MORT), of three main swine breeds in Korea. In addition, the importance of including maternal genetic and service sire effects in estimation models was evaluated. Methods Records of farrowing traits from 6,412 Duroc, 18,020 Landrace, and 54,254 Yorkshire sows collected from January 2001 to September 2016 from different farms in Korea were used in the analysis. Animal models and the restricted maximum likelihood method were used to estimate additive genetic, permanent environmental, maternal genetic, service sire and residual variances. Results The heritability estimates ranged from 0.072 to 0.102, 0.090 to 0.099, and 0.109 to 0.121 for TNB; 0.087 to 0.110, 0.088 to 0.100, and 0.099 to 0.107 for NBA; and 0.027 to 0.031, 0.050 to 0.053, and 0.073 to 0.081 for MORT in the Duroc, Landrace and Yorkshire breeds, respectively. The proportion of the total variation due to permanent environmental effects, maternal genetic effects, and service sire effects ranged from 0.042 to 0.088, 0.001 to 0.031, and 0.001 to 0.021, respectively. Spearman rank correlations among models ranged from 0.98 to 0.99, indicating that maternal genetic and service sire effects have little effect on the precision of breeding values. Conclusion Models that include additive genetic and permanent environmental effects are suitable for farrowing traits in Duroc, Landrace, and Yorkshire populations in Korea. These breed-specific variance component estimates for litter traits can be utilized for pig improvement programs in Korea.
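The heritabilities and proportions reported above are ratios of variance components to the total phenotypic variance. A minimal sketch of that arithmetic, with hypothetical components chosen to fall inside the paper's reported ranges:

```python
def variance_ratios(va, vpe, vm, vs, ve):
    """Proportion of total phenotypic variance due to each component
    (va: additive genetic, vpe: permanent environment, vm: maternal genetic,
     vs: service sire, ve: residual). h2 is the heritability."""
    total = va + vpe + vm + vs + ve
    return {"h2": va / total, "pe": vpe / total, "m2": vm / total, "s2": vs / total}

# Hypothetical components for a TNB-like trait (arbitrary units)
ratios = variance_ratios(va=0.10, vpe=0.06, vm=0.02, vs=0.01, ve=0.81)
```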
DEFF Research Database (Denmark)
Bochsler, Daniel
2014-01-01
Mixed-member proportional systems (MMP) are a family of electoral systems which combine district-based elections with a proportional seat allocation. Positive vote transfer systems belong to this family. This article explains why they might be better than their siblings, and examines under which ...
Courtiol, Alexandre; Rickard, Ian J; Lummaa, Virpi; Prentice, Andrew M; Fulford, Anthony J C; Stearns, Stephen C
2013-05-20
Recent human history is marked by demographic transitions characterized by declines in mortality and fertility. By influencing the variance in those fitness components, demographic transitions can affect selection on other traits. Parallel to changes in selection triggered by demography per se, relationships between fitness and anthropometric traits are also expected to change due to modification of the environment. Here we explore for the first time these two main evolutionary consequences of demographic transitions using a unique data set containing survival, fertility, and anthropometric data for thousands of women in rural Gambia from 1956-2010. We show how the demographic transition influenced directional selection on height and body mass index (BMI). We observed a change in selection for both traits mediated by variation in fertility: selection initially favored short females with high BMI values but shifted across the demographic transition to favor tall females with low BMI values. We demonstrate that these differences resulted both from changes in fitness variance that shape the strength of selection and from shifts in selective pressures triggered by environmental changes. These results suggest that demographic and environmental trends encountered by current human populations worldwide are likely to modify, but not stop, natural selection in humans. Copyright © 2013 Elsevier Ltd. All rights reserved.
Female scarcity reduces women's marital ages and increases variance in men's marital ages.
Kruger, Daniel J; Fitzgerald, Carey J; Peterson, Tom
2010-08-05
When women are scarce in a population relative to men, they have greater bargaining power in romantic relationships and thus may be able to secure male commitment at earlier ages. Male motivation for long-term relationship commitment may also be higher, in conjunction with the motivation to secure a prospective partner before another male retains her. However, men may also need to acquire greater social status and resources to be considered marriageable. This could increase the variance in male marital age, as well as the average male marital age. We calculated the Operational Sex Ratio, and means, medians, and standard deviations in marital ages for women and men for the 50 largest Metropolitan Statistical Areas in the United States with 2000 U.S Census data. As predicted, where women are scarce they marry earlier on average. However, there was no significant relationship with mean male marital ages. The variance in male marital age increased with higher female scarcity, contrasting with a non-significant inverse trend for female marital age variation. These findings advance the understanding of the relationship between the OSR and marital patterns. We believe that these results are best accounted for by sex specific attributes of reproductive value and associated mate selection criteria, demonstrating the power of an evolutionary framework for understanding human relationships and demographic patterns.
Female Scarcity Reduces Women's Marital Ages and Increases Variance in Men's Marital Ages
Directory of Open Access Journals (Sweden)
Daniel J. Kruger
2010-07-01
Full Text Available When women are scarce in a population relative to men, they have greater bargaining power in romantic relationships and thus may be able to secure male commitment at earlier ages. Male motivation for long-term relationship commitment may also be higher, in conjunction with the motivation to secure a prospective partner before another male retains her. However, men may also need to acquire greater social status and resources to be considered marriageable. This could increase the variance in male marital age, as well as the average male marital age. We calculated the Operational Sex Ratio, and means, medians, and standard deviations in marital ages for women and men for the 50 largest Metropolitan Statistical Areas in the United States with 2000 U.S Census data. As predicted, where women are scarce they marry earlier on average. However, there was no significant relationship with mean male marital ages. The variance in male marital age increased with higher female scarcity, contrasting with a non-significant inverse trend for female marital age variation. These findings advance the understanding of the relationship between the OSR and marital patterns. We believe that these results are best accounted for by sex specific attributes of reproductive value and associated mate selection criteria, demonstrating the power of an evolutionary framework for understanding human relationships and demographic patterns.
TRENDS OF ROMANIAN BANKING NETWORK DEVELOPMENT
Directory of Open Access Journals (Sweden)
Nicoleta Georgeta PANAIT
2015-07-01
Full Text Available Since 2009, two trends have emerged in the banking world: downsizing of personnel on the one hand, and reduction of the retail units held on the other. The first trend was most notable in countries with unstable or weak economies, and its effects were seen immediately. Reducing operating costs and streamlining the territorial structure and staff was a decision that credit institutions in Romania took relatively late. Worldwide, banks began a restructuring dictated this time not so much by the economic crisis as by new market trends: increasing access to the internet among the population, and the use of internet banking by a growing proportion of it.
Robust estimation of the proportion of treatment effect explained by surrogate marker information.
Parast, Layla; McDermott, Mary M; Tian, Lu
2016-05-10
In randomized treatment studies where the primary outcome requires long follow-up of patients and/or expensive or invasive obtainment procedures, the availability of a surrogate marker that could be used to estimate the treatment effect and could potentially be observed earlier than the primary outcome would allow researchers to make conclusions regarding the treatment effect with less required follow-up time and resources. The Prentice criterion for a valid surrogate marker requires that a test for treatment effect on the surrogate marker also be a valid test for treatment effect on the primary outcome of interest. Based on this criterion, methods have been developed to define and estimate the proportion of treatment effect on the primary outcome that is explained by the treatment effect on the surrogate marker. These methods aim to identify useful statistical surrogates that capture a large proportion of the treatment effect. However, current methods to estimate this proportion usually require restrictive model assumptions that may not hold in practice and thus may lead to biased estimates of this quantity. In this paper, we propose a nonparametric procedure to estimate the proportion of treatment effect on the primary outcome that is explained by the treatment effect on a potential surrogate marker and extend this procedure to a setting with multiple surrogate markers. We compare our approach with previously proposed model-based approaches and propose a variance estimation procedure based on a perturbation-resampling method. Simulation studies demonstrate that the procedure performs well in finite samples and outperforms model-based procedures when the specified models are not correct. We illustrate our proposed procedure using a data set from a randomized study investigating a group-mediated cognitive behavioral intervention for peripheral artery disease participants. Copyright © 2015 John Wiley & Sons, Ltd.
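The model-based quantity the paper builds on can be illustrated with the classical Freedman-style estimator: the proportion of treatment effect explained is one minus the ratio of the surrogate-adjusted treatment effect to the unadjusted effect. The simulated data and coefficients below are hypothetical; the paper's own proposal is a nonparametric alternative to this regression-based version.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
T = rng.integers(0, 2, n).astype(float)        # randomized treatment indicator
S = 1.0 * T + rng.normal(0, 1, n)              # surrogate marker
Y = 2.0 * S + 0.5 * T + rng.normal(0, 1, n)    # primary outcome

def ols_coef(X, y):
    """Least-squares coefficients for design matrix X."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

X_unadj = np.column_stack([np.ones(n), T])
X_adj = np.column_stack([np.ones(n), T, S])
beta = ols_coef(X_unadj, Y)[1]      # total treatment effect (about 2.5 here)
beta_s = ols_coef(X_adj, Y)[1]      # effect remaining after adjusting for S
pte = 1.0 - beta_s / beta           # Freedman-style proportion explained (about 0.8)
```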
Adaptation to Variance of Stimuli in Drosophila Larva Navigation
Wolk, Jason; Gepner, Ruben; Gershow, Marc
In order to respond to stimuli that vary over orders of magnitude while also being capable of sensing very small changes, neural systems must be capable of rapidly adapting to the variance of stimuli. We study this adaptation in Drosophila larvae responding to varying visual signals and optogenetically induced fictitious odors, using an infrared-illuminated arena and custom computer vision software. Larval navigational decisions (when to turn) are modeled as the output of a linear-nonlinear Poisson process. The development of the nonlinear turn rate in response to changes in variance is tracked using an adaptive point process filter, determining the rate of adaptation to different stimulus profiles. Supported by NIH Grant 1DP2EB022359 and NSF Grant PHY-1455015.
PORTFOLIO COMPOSITION WITH MINIMUM VARIANCE: COMPARISON WITH MARKET BENCHMARKS
Directory of Open Access Journals (Sweden)
Daniel Menezes Cavalcante
2016-07-01
Full Text Available Portfolio optimization strategies are advocated as being able to allow the composition of stocks portfolios that provide returns above market benchmarks. This study aims to determine whether, in fact, portfolios based on the minimum variance strategy, optimized by the Modern Portfolio Theory, are able to achieve earnings above market benchmarks in Brazil. Time series of 36 securities traded on the BM&FBOVESPA have been analyzed in a long period of time (1999-2012, with sample windows of 12, 36, 60 and 120 monthly observations. The results indicated that the minimum variance portfolio performance is superior to market benchmarks (CDI and IBOVESPA in terms of return and risk-adjusted return, especially in medium and long-term investment horizons.
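The minimum variance strategy named above has a closed form under the Modern Portfolio Theory setup: the global minimum variance weights are w = Σ⁻¹1 / (1ᵀΣ⁻¹1). The three-asset covariance matrix below is hypothetical, and the study itself re-estimated Σ over rolling sample windows rather than using a single matrix.

```python
import numpy as np

def min_variance_weights(cov):
    """Global minimum variance portfolio: w = Sigma^{-1} 1 / (1' Sigma^{-1} 1)."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

# Hypothetical monthly return covariance for three stocks
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])
w = min_variance_weights(cov)
```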
Compounding approach for univariate time series with nonstationary variances
Schäfer, Rudi; Barkhofen, Sonja; Guhr, Thomas; Stöckmann, Hans-Jürgen; Kuhl, Ulrich
2015-12-01
A defining feature of nonstationary systems is the time dependence of their statistical parameters. Measured time series may exhibit Gaussian statistics on short time horizons, due to the central limit theorem. The sample statistics for long time horizons, however, averages over the time-dependent variances. To model the long-term statistical behavior, we compound the local distribution with the distribution of its parameters. Here, we consider two concrete, but diverse, examples of such nonstationary systems: the turbulent air flow of a fan and a time series of foreign exchange rates. Our main focus is to empirically determine the appropriate parameter distribution for the compounding approach. To this end, we extract the relevant time scales by decomposing the time signals into windows and determine the distribution function of the thus obtained local variances.
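The first step of the compounding approach described above, extracting local variances from short windows of a nonstationary series, can be sketched as follows; the synthetic two-regime series stands in for the turbulence or exchange-rate data used in the paper.

```python
import numpy as np

def local_variances(x, window):
    """Variance within non-overlapping windows; their spread signals nonstationarity
    and their empirical distribution is what the compounding approach uses."""
    n = (len(x) // window) * window
    return x[:n].reshape(-1, window).var(axis=1)

rng = np.random.default_rng(42)
# Synthetic nonstationary series: a quiet regime followed by a volatile one
x = np.concatenate([rng.normal(0, 1, 5000), rng.normal(0, 3, 5000)])
v = local_variances(x, 500)
```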
Variance inflation in high dimensional Support Vector Machines
DEFF Research Database (Denmark)
Abrahamsen, Trine Julie; Hansen, Lars Kai
2013-01-01
Many important machine learning models, supervised and unsupervised, are based on simple Euclidean distance or orthogonal projection in a high dimensional feature space. When estimating such models from small training sets we face the problem that the span of the training data set input vectors ... follow a different probability law with less variance. While the problem and basic means to reconstruct and deflate are well understood in unsupervised learning, the case of supervised learning is less well understood. We here investigate the effect of variance inflation in supervised learning including ... the case of Support Vector Machines (SVMs) and we propose a non-parametric scheme to restore proper generalizability. We illustrate the algorithm and its ability to restore performance on a wide range of benchmark data sets.
Robust LOD scores for variance component-based linkage analysis.
Blangero, J; Williams, J T; Almasy, L
2000-01-01
The variance component method is now widely used for linkage analysis of quantitative traits. Although this approach offers many advantages, the importance of the underlying assumption of multivariate normality of the trait distribution within pedigrees has not been studied extensively. Simulation studies have shown that traits with leptokurtic distributions yield linkage test statistics that exhibit excessive Type I error when analyzed naively. We derive analytical formulae relating the deviation from the expected asymptotic distribution of the lod score to the kurtosis and total heritability of the quantitative trait. A simple correction constant yields a robust lod score for any deviation from normality and for any pedigree structure, and effectively eliminates the problem of inflated Type I error due to misspecification of the underlying probability model in variance component-based linkage analysis.
Response variance in functional maps: neural darwinism revisited.
Directory of Open Access Journals (Sweden)
Hirokazu Takahashi
Full Text Available The mechanisms by which functional maps and map plasticity contribute to cortical computation remain controversial. Recent studies have revisited the theory of neural Darwinism to interpret the learning-induced map plasticity and neuronal heterogeneity observed in the cortex. Here, we hypothesize that the Darwinian principle provides a substrate to explain the relationship between neuron heterogeneity and cortical functional maps. We demonstrate in the rat auditory cortex that the degree of response variance is closely correlated with the size of its representational area. Further, we show that the response variance within a given population is altered through training. These results suggest that larger representational areas may help to accommodate heterogeneous populations of neurons. Thus, functional maps and map plasticity are likely to play essential roles in Darwinian computation, serving as effective, but not absolutely necessary, structures to generate diverse response properties within a neural population.
Response variance in functional maps: neural darwinism revisited.
Takahashi, Hirokazu; Yokota, Ryo; Kanzaki, Ryohei
2013-01-01
The mechanisms by which functional maps and map plasticity contribute to cortical computation remain controversial. Recent studies have revisited the theory of neural Darwinism to interpret the learning-induced map plasticity and neuronal heterogeneity observed in the cortex. Here, we hypothesize that the Darwinian principle provides a substrate to explain the relationship between neuron heterogeneity and cortical functional maps. We demonstrate in the rat auditory cortex that the degree of response variance is closely correlated with the size of its representational area. Further, we show that the response variance within a given population is altered through training. These results suggest that larger representational areas may help to accommodate heterogeneous populations of neurons. Thus, functional maps and map plasticity are likely to play essential roles in Darwinian computation, serving as effective, but not absolutely necessary, structures to generate diverse response properties within a neural population.
Variance reduction methods applied to deep-penetration problems
International Nuclear Information System (INIS)
Cramer, S.N.
1984-01-01
All deep-penetration Monte Carlo calculations require variance reduction methods. Before beginning with a detailed approach to these methods, several general comments concerning deep-penetration calculations by Monte Carlo, the associated variance reduction, and the similarities and differences of these with regard to non-deep-penetration problems will be addressed. The experienced practitioner of Monte Carlo methods will easily find exceptions to any of these generalities, but it is felt that these comments will aid the novice in understanding some of the basic ideas and nomenclature. Also, from a practical point of view, the discussions and developments presented are oriented toward use of the computer codes which are presented in segments of this Monte Carlo course
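One of the standard variance reduction methods covered in such a course is importance sampling: sample from a biased distribution that reaches the rare region, and correct with likelihood-ratio weights. The toy problem below (a Gaussian tail probability standing in for a deep-penetration tally) is an illustration of the idea, not an example from the course text.

```python
import math
import random

random.seed(7)
MU, N = 4.0, 300_000
true_p = 0.5 * math.erfc(MU / math.sqrt(2))   # P(Z > 4), about 3.17e-5

def mean_var(xs):
    m = sum(xs) / len(xs)
    return m, sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# Naive Monte Carlo: indicator of the rare event, sampled from N(0,1)
naive = [1.0 if random.gauss(0, 1) > MU else 0.0 for _ in range(N)]

# Importance sampling: draw from N(MU,1), weight by the density ratio
# f(x)/g(x) = exp(-MU*x + MU^2/2)
iw = []
for _ in range(N):
    x = random.gauss(MU, 1)
    iw.append((1.0 if x > MU else 0.0) * math.exp(-MU * x + MU * MU / 2))

p_naive, var_naive = mean_var(naive)
p_is, var_is = mean_var(iw)   # same target, far smaller per-sample variance
```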
The proportionate value of proportionality in palliative sedation.
Berger, Jeffrey T
2014-01-01
Proportionality, as it pertains to palliative sedation, is the notion that sedation should be induced at the lowest degree effective for symptom control, so that the patient's consciousness may be preserved. The pursuit of proportionality in palliative sedation is a widely accepted imperative advocated in position statements and guidelines on this treatment. The priority assigned to the pursuit of proportionality, and the extent to which it is relevant for patients who qualify for palliative sedation, have been overstated. Copyright 2014 The Journal of Clinical Ethics. All rights reserved.
An integrated photosensor readout for gas proportional scintillation counters
International Nuclear Information System (INIS)
Lopes, J.A.M.; Santos, J.M.F. dos; Conde, C.A.N.
1996-01-01
A xenon gas proportional scintillation counter has been instrumented with a novel photosensor that replaces the photomultiplier tube normally used to detect the VUV secondary scintillation light. In this implementation, the collection grid of a planar gas proportional scintillation counter also functions as a multiwire proportional chamber to amplify and detect the photoelectrons emitted by a reflective CsI photocathode in direct contact with the xenon gas. This integrated concept combines greater simplicity, compactness, and ruggedness (no optical window is used) with low power consumption. An energy resolution of 12% was obtained for 59.6 keV x-rays
Spatial analysis based on variance of moving window averages
Wu, B M; Subbarao, K V; Ferrandino, F J; Hao, J J
2006-01-01
A new method for analysing spatial patterns was designed based on the variance of moving window averages (VMWA), which can be directly calculated in geographical information systems or a spreadsheet program (e.g. MS Excel). Different types of artificial data were generated to test the method. Regardless of data types, the VMWA method correctly determined the mean cluster sizes. This method was also employed to assess spatial patterns in historical plant disease survey data encompassing both a...
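The VMWA statistic described above is simple enough to compute directly, which is why the authors note it fits in a spreadsheet: average the data in each sliding window, then take the variance of those averages. The tiny disease-count transect below is hypothetical.

```python
def vmwa(data, window):
    """Variance of moving (sliding) window averages for a given window size.
    Comparing vmwa across window sizes indicates the mean cluster size."""
    means = [sum(data[i:i + window]) / window for i in range(len(data) - window + 1)]
    grand = sum(means) / len(means)
    return sum((m - grand) ** 2 for m in means) / len(means)

# Hypothetical disease counts along a transect
counts = [0, 0, 1, 1]
v1 = vmwa(counts, 1)   # raw (population) variance of the data
v2 = vmwa(counts, 2)
```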
A mean-variance frontier in discrete and continuous time
Bekker, Paul A.
2004-01-01
The paper presents a mean-variance frontier based on dynamic frictionless investment strategies in continuous time. The result applies to a finite number of risky assets whose price process is given by multivariate geometric Brownian motion with deterministically varying coefficients. The derivation is based on the solution for the frontier in discrete time. Using the same multiperiod framework as Li and Ng (2000), I provide an alternative derivation and an alternative formulation of the solu...
Efficient Scores, Variance Decompositions and Monte Carlo Swindles.
1984-08-28
Then a version of Pythagoras' theorem gives the variance decomposition (6.1): var_{P0}(T) = var_{P0}(S) + var_{P0}(T - S). One way to see this is to note ... complete sufficient statistics for (β, σ), and that the standardized residuals (y - Xβ̂)/σ̂ are ancillary. Basu's sufficiency-ancillarity theorem ...
Variance-based sensitivity analysis for wastewater treatment plant modelling.
Cosenza, Alida; Mannina, Giorgio; Vanrolleghem, Peter A; Neumann, Marc B
2014-02-01
Global sensitivity analysis (GSA) is a valuable tool to support the use of mathematical models that characterise technical or natural systems. In the field of wastewater modelling, most of the recent applications of GSA use either regression-based methods, which require close to linear relationships between the model outputs and model factors, or screening methods, which only yield qualitative results. However, due to the characteristics of membrane bioreactors (MBR) (non-linear kinetics, complexity, etc.) there is an interest to adequately quantify the effects of non-linearity and interactions. This can be achieved with variance-based sensitivity analysis methods. In this paper, the Extended Fourier Amplitude Sensitivity Testing (Extended-FAST) method is applied to an integrated activated sludge model (ASM2d) for an MBR system including microbial product formation and physical separation processes. Twenty-one model outputs located throughout the different sections of the bioreactor and 79 model factors are considered. Significant interactions among the model factors are found. Contrary to previous GSA studies for ASM models, we find the relationship between variables and factors to be non-linear and non-additive. By analysing the pattern of the variance decomposition along the plant, the model factors having the highest variance contributions were identified. This study demonstrates the usefulness of variance-based methods in membrane bioreactor modelling where, due to the presence of membranes and different operating conditions than those typically found in conventional activated sludge systems, several highly non-linear effects are present. Further, the obtained results highlight the relevant role played by the modelling approach for MBR taking into account simultaneously biological and physical processes. © 2013.
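The quantity estimated by variance-based GSA methods such as Extended-FAST is the fraction of output variance attributable to each factor, S_i = V(E[Y|X_i])/V(Y). A minimal Monte Carlo sketch using the pick-freeze (Saltelli-type) estimator rather than FAST, on a linear test model with known analytic indices:

```python
import numpy as np

def first_order_sobol(model, d, n=100_000, seed=0):
    """Pick-freeze estimator of first-order Sobol indices S_i = V(E[Y|X_i]) / V(Y),
    with independent standard normal factors assumed."""
    rng = np.random.default_rng(seed)
    A = rng.normal(size=(n, d))
    B = rng.normal(size=(n, d))
    yA, yB = model(A), model(B)
    vy = yA.var()
    S = []
    for i in range(d):
        AB = B.copy()
        AB[:, i] = A[:, i]                   # freeze factor i at the A sample
        S.append(np.mean(yA * (model(AB) - yB)) / vy)
    return S

# Linear test model Y = 2*X1 + X2 with analytic indices S1 = 0.8, S2 = 0.2
model = lambda X: 2.0 * X[:, 0] + 1.0 * X[:, 1]
S = first_order_sobol(model, d=2)
```

For this additive model the first-order indices sum to one; interactions, as found in the MBR study, would show up as a shortfall.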
The mean and variance of phylogenetic diversity under rarefaction
Nipperess, David A.; Matsen, Frederick A.
2013-01-01
Phylogenetic diversity (PD) depends on sampling intensity, which complicates the comparison of PD between samples of different depth. One approach to dealing with differing sample depth for a given diversity statistic is to rarefy, which means to take a random subset of a given size of the original sample. Exact analytical formulae for the mean and variance of species richness under rarefaction have existed for some time but no such solution exists for PD. We have derived exact formulae for t...
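The exact species-richness analogue mentioned above (Hurlbert's formula, not the new PD result derived in the paper) computes the expected number of species in a random subsample of m individuals:

```python
from math import comb

def expected_richness(abundances, m):
    """E[S_m] = sum_i [1 - C(N - N_i, m) / C(N, m)]: expected species count
    in a random subsample of m individuals (classical rarefaction)."""
    N = sum(abundances)
    return sum(1 - comb(N - Ni, m) / comb(N, m) for Ni in abundances)

e = expected_richness([2, 2], 2)   # two species, two individuals each, draw 2
```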
On mean reward variance in semi-Markov processes
Czech Academy of Sciences Publication Activity Database
Sladký, Karel
2005-01-01
Vol. 62, No. 3 (2005), pp. 387-397. ISSN 1432-2994. R&D Projects: GA ČR(CZ) GA402/05/0115; GA ČR(CZ) GA402/04/1294. Institutional research plan: CEZ:AV0Z10750506. Keywords: Markov and semi-Markov processes with rewards * variance of cumulative reward * asymptotic behaviour. Subject RIV: BB - Applied Statistics, Operational Research. Impact factor: 0.259, year: 2005
Mean-Variance Analysis in a Multiperiod Setting
Frauendorfer, Karl; Siede, Heiko
1997-01-01
Similar to the classical Markowitz approach it is possible to apply a mean-variance criterion to a multiperiod setting to obtain efficient portfolios. To represent the stochastic dynamic characteristics necessary for modelling returns a process of asset returns is discretized with respect to time and space and summarized in a scenario tree. The resulting optimization problem is solved by means of stochastic multistage programming. The optimal solutions show equivalent structural properties as...
Analytic solution to variance optimization with no short positions
Kondor, Imre; Papp, Gábor; Caccioli, Fabio
2017-12-01
We consider the variance portfolio optimization problem with a ban on short selling. We provide an analytical solution by means of the replica method for the case of a portfolio of independent, but not identically distributed, assets. We study the behavior of the solution as a function of the ratio r between the number N of assets and the length T of the time series of returns used to estimate risk. The no-short-selling constraint acts as an asymmetric ...
minimum variance estimation of yield parameters of rubber tree
African Journals Online (AJOL)
2013-03-01
Mar 1, 2013 ... It is our opinion that the Kalman filter is a robust estimator of the ... Keywords: Kalman filter, parameter estimation, rubber clones, Chow failure test, autocorrelation, STAMP, data ...
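The Kalman filter named in this record can be illustrated in its simplest scalar form: estimating a constant parameter (here a hypothetical yield level, not the paper's actual rubber-clone data) from noisy measurements, with the posterior variance shrinking at every update.

```python
import random

random.seed(3)
true_level = 10.0
obs_sd = 2.0
x_hat, P = 0.0, 100.0          # prior mean and prior variance
Q, R = 0.0, obs_sd ** 2        # no process noise for a constant parameter
variances = []
for _ in range(50):
    z = true_level + random.gauss(0, obs_sd)   # noisy measurement
    P = P + Q                                  # predict step
    K = P / (P + R)                            # Kalman gain
    x_hat = x_hat + K * (z - x_hat)            # measurement update
    P = (1 - K) * P                            # posterior variance
    variances.append(P)
```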
Estimating Predictive Variance for Statistical Gas Distribution Modelling
International Nuclear Information System (INIS)
Lilienthal, Achim J.; Asadi, Sahar; Reggente, Matteo
2009-01-01
Recent publications in statistical gas distribution modelling have proposed algorithms that model the mean and variance of a distribution. This paper argues that estimating the predictive concentration variance represents not merely a gradual improvement but rather a significant step in advancing the field. This is, first, because such models much better fit the particular structure of gas distributions, which exhibit strong fluctuations with considerable spatial variations as a result of the intermittent character of gas dispersal. Second, because estimating the predictive variance makes it possible to evaluate the model quality in terms of the data likelihood. This offers a solution to the problem of ground truth evaluation, which has always been a critical issue for gas distribution modelling. It also enables solid comparisons of different modelling approaches, and provides the means to learn meta parameters of the model, to determine when the model should be updated or re-initialised, or to suggest new measurement locations based on the current model. We also point out directions of related ongoing or potential future research work.
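The mean-plus-variance modelling idea can be sketched with a simple kernel-weighted estimator: at each query location, compute a locally weighted mean concentration and a locally weighted variance around it. This is a generic illustration under assumed readings, not the specific algorithms proposed in the publications the record discusses.

```python
import math

def kernel_stats(xs, ys, x0, h=0.2):
    """Kernel-weighted local mean and variance of readings ys at location x0
    (1-D locations, Gaussian kernel with bandwidth h)."""
    w = [math.exp(-0.5 * ((x - x0) / h) ** 2) for x in xs]
    sw = sum(w)
    mean = sum(wi * yi for wi, yi in zip(w, ys)) / sw
    var = sum(wi * (yi - mean) ** 2 for wi, yi in zip(w, ys)) / sw
    return mean, var

# Hypothetical concentration readings: steady near x=0, fluctuating near x=1
xs = [0.0, 0.05, 0.1, 1.0, 1.05, 1.1]
ys = [0.50, 0.52, 0.48, 0.2, 1.8, 1.0]
m0, v0 = kernel_stats(xs, ys, 0.05)
m1, v1 = kernel_stats(xs, ys, 1.05)
```

The higher predictive variance near x = 1 flags the intermittent region, which is exactly the information the paper argues a distribution model should expose.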
Improved estimation of the variance in Monte Carlo criticality calculations
International Nuclear Information System (INIS)
Hoogenboom, J. Eduard
2008-01-01
Results for the effective multiplication factor in Monte Carlo criticality calculations are often obtained from averages over a number of cycles or batches after convergence of the fission source distribution to the fundamental mode. The standard deviation of the effective multiplication factor is then also obtained from the k-eff results over these cycles. As the number of cycles will be rather small, the estimate of the variance or standard deviation in k-eff will not be very reliable, certainly not for the first few cycles after source convergence. In this paper the statistics for k-eff are based on the generation of new fission neutron weights during each history in a cycle. It is shown that this gives much more reliable results for the standard deviation even after a small number of cycles. Attention is also paid to the variance of the variance (VoV) and the standard deviation of the standard deviation. A derivation is given of how to obtain an unbiased estimate of the VoV, even for a small number of samples. (authors)
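The cycle-statistics issue the abstract describes is easy to see numerically. The sketch below uses hypothetical per-cycle k-eff data and a generic plug-in moment formula for the variance of the sample variance, Var(s²) ≈ (m4 − (n−3)/(n−1)·s⁴)/n — a textbook approximation, not the paper's unbiased within-history estimator:

```python
import random
import statistics

def variance_of_variance(samples):
    """Plug-in moment estimate of Var(s^2) for n samples:
    Var(s^2) ~= (m4 - (n-3)/(n-1) * s^4) / n,
    where m4 is the sample fourth central moment and s^2 the
    unbiased sample variance."""
    n = len(samples)
    mean = statistics.fmean(samples)
    s2 = statistics.variance(samples)
    m4 = sum((x - mean) ** 4 for x in samples) / n
    return (m4 - (n - 3) / (n - 1) * s2 ** 2) / n

# Hypothetical per-cycle k-eff estimates scattered around 1.0
random.seed(1)
keff = [random.gauss(1.0, 0.002) for _ in range(200)]
vov = variance_of_variance(keff)
# For normally distributed cycle results this is close to 2*sigma^4/(n-1).
```

With few cycles, m4 and s² are themselves noisy, which is exactly why the paper moves the statistics from the cycle level down to individual histories.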
A general transform for variance reduction in Monte Carlo simulations
International Nuclear Information System (INIS)
Becker, T.L.; Larsen, E.W.
2011-01-01
This paper describes a general transform to reduce the variance of the Monte Carlo estimate of some desired solution, such as flux or biological dose. This transform implicitly includes many standard variance reduction techniques, including source biasing, collision biasing, the exponential transform for path-length stretching, and weight windows. Rather than optimizing each of these techniques separately or choosing semi-empirical biasing parameters based on the experience of a seasoned Monte Carlo practitioner, this General Transform unites all these variance reduction techniques to achieve one objective: a distribution of Monte Carlo particles that attempts to optimize the desired solution. Specifically, this transform allows Monte Carlo particles to be distributed according to the user's specification by using information obtained from a computationally inexpensive deterministic simulation of the problem. For this reason, we consider the General Transform to be a hybrid Monte Carlo/deterministic method. The numerical results confirm that the General Transform distributes particles according to the user-specified distribution and generally provides reasonable results for shielding applications. (author)
A reduced feedback proportional fair multiuser scheduling scheme
Shaqfeh, Mohammad; Alnuweiri, Hussein M.; Alouini, Mohamed-Slim
2011-01-01
A slight reduction in the prospected multiuser diversity gains is an acceptable trade-off for great savings in terms of required channel-state-information feedback messages. In this work, we propose a novel proportional fair multiuser switched
DC motor proportional control system for orthotic devices
Blaise, H. T.; Allen, J. R.
1972-01-01
A multi-channel proportional control system for operating dc motors in externally powered orthotic arm braces is described. The components of the circuitry and the principles of operation are described, and a schematic diagram of the control circuit is provided.
Proportional feedback control of laminar flow over a hemisphere
Energy Technology Data Exchange (ETDEWEB)
Lee, Jung Il [Dept. of Mechanical Engineering, Ajou University, Suwon (Korea, Republic of); Son, Dong Gun [Severe Accident and PHWR Safety Research Division, Korea Atomic Energy Research Institute (KAERI), Daejeon (Korea, Republic of)
2016-08-15
In the present study, we perform proportional feedback control of laminar flow over a hemisphere at Re = 300 to reduce its lift fluctuations by attenuating the strength of the vortex shedding. As a control input, blowing/suction is distributed on the surface of the hemisphere upstream of the separation, and its strength is linearly proportional to the transverse velocity at a sensing location on the centerline of the wake. The sensing location is determined based on a correlation function between the lift force and the time derivative of the sensing velocity. The optimal proportional gains are obtained for the sensing locations considered. The present control successfully attenuates the velocity fluctuations at the sensing location and the three-dimensional vortical structures in the wake, resulting in reduced lift fluctuations on the hemisphere.
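The control law here is plain proportional feedback: actuation linearly proportional to a sensed velocity. As a hedged toy illustration (a Van der Pol oscillator standing in for the self-excited shedding, forward-Euler integration; none of this comes from the paper), feedback u = -gain·x' adds damping and collapses the limit-cycle fluctuations once the gain exceeds the self-excitation parameter:

```python
def simulate(gain, mu=0.2, dt=0.001, steps=200_000):
    """Van der Pol oscillator x'' - mu*(1 - x^2)*x' + x = u, a toy
    stand-in for self-excited vortex shedding, integrated with
    forward Euler. Proportional feedback u = -gain * x' uses the
    velocity x' as the 'sensed' quantity."""
    x, v = 0.5, 0.0            # small initial disturbance
    peak = 0.0
    for i in range(steps):
        u = -gain * v
        a = mu * (1.0 - x * x) * v - x + u
        x += dt * v
        v += dt * a
        if i > steps // 2:     # measure only after transients decay
            peak = max(peak, abs(x))
    return peak

uncontrolled = simulate(gain=0.0)   # settles on a limit cycle, peak ~2
controlled = simulate(gain=0.5)     # gain > mu: net damping everywhere
```

In the toy model the feedback simply shifts the effective damping; in the paper the same linear law acts through blowing/suction on the body surface.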
Operability test report for 211BA flow proportional sampler
International Nuclear Information System (INIS)
Weissenfels, R.D.
1995-01-01
This operability report will verify that the 211-BA flow proportional sampler functions as intended by design. The sampler was installed by Project W-007H and is part of BAT/AKART for the BCE liquid effluent stream
Trends in wilderness recreation use characteristics
Alan E. Watson; David N. Cole; Joseph W. Roggenbuck
1995-01-01
Recent studies at the Leopold Institute have included analysis of use and user trends at the Boundary Waters Canoe Area Wilderness, Desolation Wilderness, Shining Rock Wilderness, the Bob Marshall Wilderness Complex, Great Smoky Mountains National Park and Eagle Cap Wilderness. Some sociodemographics, like age, education, and the proportion of female visitors, have...
A proxy for variance in dense matching over homogeneous terrain
Altena, Bas; Cockx, Liesbet; Goedemé, Toon
2014-05-01
Automation in photogrammetry and avionics has brought highly autonomous UAV mapping solutions to the market. These systems have great potential for geophysical research thanks to their mobility and ease of operation. Flight planning can be done on site and orientation parameters are estimated automatically. However, one major drawback is still present: if contrast is lacking, stereoscopy fails. Consequently, topographic information cannot be obtained precisely through photogrammetry for areas with low contrast. Even though more robustness is added to the estimation through multi-view geometry, a precise product is still lacking. For the greater part, interpolation is applied over these regions, where the estimation is constrained by uniqueness, its epipolar line and smoothness. Consequently, digital surface models are generated with an estimate of the topography, without holes but also without an indication of its variance. Every dense matching algorithm is based on a similarity measure. Our methodology uses this property to support the idea that if only noise is present, no correspondence can be detected. Therefore, the noise level is estimated with respect to the intensity signal of the topography (SNR), and this ratio serves as a quality indicator for the automatically generated product. To demonstrate this variance indicator, two different case studies were carried out. The first study is situated at an open sand mine near the village of Kiezegem, Belgium. Two different UAV systems flew over the site. One system had automatic intensity regulation, which resulted in low contrast over the sandy interior of the mine. That dataset was used to identify the weak estimations of the topography and was compared with the data from the other UAV flight. In the second study a flight campaign with the X100 system was conducted along the coast near Wenduine, Belgium. The obtained images were processed through structure-from-motion software. Although the beach had a very low
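The core idea — a signal-to-noise ratio as a per-patch quality proxy for matching — can be sketched with stdlib Python only. The estimators below are generic textbook choices, not the authors': noise variance from mean-removed adjacent-pixel differences, signal variance from the patch itself.

```python
import random
import statistics

def snr_proxy(patch):
    """Crude matching-quality proxy for a 1-D intensity profile.
    Noise variance comes from adjacent differences, using
    Var(x[i+1]-x[i]) ~= 2*sigma_n^2 for a smooth underlying signal;
    signal variance comes from the profile itself."""
    diffs = [b - a for a, b in zip(patch, patch[1:])]
    mean_d = statistics.fmean(diffs)
    noise_var = statistics.fmean([(d - mean_d) ** 2 for d in diffs]) / 2.0
    signal_var = statistics.pvariance(patch)
    return signal_var / noise_var if noise_var > 0 else float("inf")

random.seed(7)
noise = lambda: random.gauss(0.0, 1.0)
textured = [0.5 * i + noise() for i in range(200)]   # ramp + noise
flat = [noise() for _ in range(200)]                 # noise only

# A contrast-rich patch scores high; a homogeneous patch scores near 1,
# flagging regions where dense matching is effectively interpolation.
```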
A parsimonious model for the proportional control valve
Elmer, KF; Gentle, CR
2001-01-01
A generic non-linear dynamic model of a direct-acting electrohydraulic proportional solenoid valve is presented. The valve consists of two subsystems: a spool assembly and one or two unidirectional proportional solenoids. These two subsystems are modelled separately. The solenoid is modelled as a non-linear resistor-inductor combination, with inductance parameters that change with current. An innovative modelling method has been used to represent these components. The spool assembly is modelled...
Reduction of degraded events in miniaturized proportional counters
Energy Technology Data Exchange (ETDEWEB)
Plaga, R.; Kirsten, T. (Max Planck Inst. fuer Kernphysik, Heidelberg (Germany))
1991-11-15
A method to reduce the number of degraded events in miniaturized proportional counters is described. A shaping of the outer cathode leads to a more uniform gas gain along the counter axis. The method is useful in situations in which the total number of decay events is very low. The effects leading to degraded events are studied theoretically and experimentally. The usefulness of the method is demonstrated by using it for the proportional counter of the GALLEX solar neutrino experiment. (orig.).
Horecká, Ivana
2015-01-01
This thesis deals with online marketing trends. Its main goal is to define the latest online marketing trends, create a website using the free online marketing trends, and analyse their effectiveness. The theoretical part provides a thorough description of the latest online marketing trends. Moreover, it gives an insight into the latest trends in website development. The chosen online marketing trends defined in the theoretical part are subsequently applied to a newly created website. All...
Oliveira, Ana R S; Cohnstaedt, Lee W; Strathe, Erin; Hernández, Luciana Etcheverry; McVey, D Scott; Piaggio, José; Cernicchiaro, Natalia
2017-09-07
Japanese encephalitis (JE) is a zoonosis in Southeast Asia vectored by mosquitoes infected with the Japanese encephalitis virus (JEV). Japanese encephalitis is considered an emerging exotic infectious disease with potential for introduction into currently JEV-free countries. Pigs and ardeid birds are reservoir hosts and play a major role in the transmission dynamics of the disease. The objective of the study was to quantitatively summarize, using meta-analyses, the proportion of JEV infection in vectors and vertebrate hosts from data pertaining to observational studies obtained in a systematic review of the literature on vector and host competence for JEV. Data gathered in this study pertained to three outcomes: proportion of JEV infection in vectors, proportion of JEV infection in vertebrate hosts, and minimum infection rate (MIR) in vectors. Random-effects subgroup meta-analysis models were fitted by species (mosquito or vertebrate host species) to estimate pooled summary measures, as well as to compute the variance between studies. Meta-regression models were fitted to assess the association between different predictors and the outcomes of interest and to identify sources of heterogeneity among studies. Predictors included in all models were mosquito/vertebrate host species, diagnostic methods, mosquito capture methods, season, country/region, age category, and number of mosquitoes per pool. Mosquito species, diagnostic method, country, and capture method represented important sources of heterogeneity associated with the proportion of JEV infection; host species and region were considered sources of heterogeneity associated with the proportion of JEV infection in hosts; and diagnostic and mosquito capture methods were deemed important contributors to heterogeneity for the MIR outcome. Our findings provide reference pooled summary estimates of vector competence for JEV for some mosquito species, as well as of sources of variability for these outcomes. Moreover, this
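The pooled summary measures in this record come from random-effects meta-analysis models. The simplest fixed-effect version of such pooling — a hedged textbook sketch with invented study counts, not the authors' model — weights each study's proportion by its inverse variance:

```python
def pooled_proportion(studies):
    """Fixed-effect inverse-variance pooling of proportions.
    studies: list of (positives, sample_size) tuples."""
    weights, estimates = [], []
    for k, n in studies:
        p = k / n
        var = p * (1.0 - p) / n          # binomial variance of p-hat
        weights.append(1.0 / var)
        estimates.append(p)
    total = sum(weights)
    pooled = sum(w * p for w, p in zip(weights, estimates)) / total
    se = (1.0 / total) ** 0.5            # standard error of the pooled value
    return pooled, se

# Hypothetical JEV-infection counts from three vector studies
pooled, se = pooled_proportion([(12, 200), (30, 400), (5, 150)])
```

A random-effects model, as fitted in the paper, additionally adds a between-study variance component to each weight to absorb the heterogeneity the meta-regressions investigate.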
On the noise variance of a digital mammography system
International Nuclear Information System (INIS)
Burgess, Arthur
2004-01-01
A recent paper by Cooper et al. [Med. Phys. 30, 2614-2621 (2003)] contains some apparently anomalous results concerning the relationship between pixel variance and x-ray exposure for a digital mammography system. They found an unexpected peak in a display domain pixel variance plot as a function of 1/mAs (their Fig. 5) with a decrease in the range corresponding to high display data values, corresponding to low x-ray exposures. As they pointed out, if the detector response is linear in exposure and the transformation from raw to display data scales is logarithmic, then pixel variance should be a monotonically increasing function in the figure. They concluded that the total system transfer curve, between input exposure and display image data values, is not logarithmic over the full exposure range. They separated data analysis into two regions and plotted the logarithm of display image pixel variance as a function of the logarithm of the mAs used to produce the phantom images. They found a slope of minus one for high mAs values and concluded that the transfer function is logarithmic in this region. They found a slope of 0.6 for the low mAs region and concluded that the transfer curve was neither linear nor logarithmic for low exposure values. It is known that the digital mammography system investigated by Cooper et al. has a linear relationship between exposure and raw data values [Vedantham et al., Med. Phys. 27, 558-567 (2000)]. The purpose of this paper is to show that the variance effect found by Cooper et al. (their Fig. 5) arises because the transformation from the raw data scale (14 bits) to the display scale (12 bits), for the digital mammography system they investigated, is not logarithmic for raw data values less than about 300 (display data values greater than about 3300). At low raw data values the transformation is linear and prevents over-ranging of the display data scale. Parametric models for the two transformations will be presented. Results of pixel
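The slope argument in this abstract can be reproduced with a toy model: for a linear detector with Poisson-like noise and a logarithmic display transform, display-domain pixel variance scales as 1/mAs, i.e. slope minus one on a log-log plot. A stdlib-only sketch (Gaussian approximation to Poisson; the gain and scale constants are arbitrary, not the actual system's):

```python
import math
import random
import statistics

def display_variance(mas, pixels=20000, gain=50.0, scale=1000.0):
    """Pixel variance of log-transformed data from a linear detector.
    Raw mean = gain*mAs with Poisson-like noise (std = sqrt(mean));
    display = scale*log(raw), so Var(display) ~ scale^2/(gain*mAs)."""
    mean = gain * mas
    raw = [random.gauss(mean, math.sqrt(mean)) for _ in range(pixels)]
    return statistics.pvariance([scale * math.log(r) for r in raw])

random.seed(3)
v_low, v_high = display_variance(10.0), display_variance(100.0)
slope = (math.log(v_high) - math.log(v_low)) / (math.log(100.0) - math.log(10.0))
# slope sits near -1, the log-region behaviour Cooper et al. observed;
# a linear low-exposure segment in the transfer curve would break this.
```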
Aging trends -- the Philippines.
Biddlecom, A E; Domingo, L J
1996-03-01
This report presents a description of the trends in growth of the elderly population in the Philippines and their health, disability, education, work status, income, and family support. The proportion of elderly in the Philippines is much smaller than in other Southeast Asian countries, such as Singapore and Malaysia. The elderly population aged over 65 years increased from 2.7% of the total population to 3.6% in 1990. The elderly are expected to comprise 7.7% of total population in 2025. The proportion of elderly is small due to the high fertility rate. Life expectancy averages 63.5 years. The aged dependency ratio will double from 5.5 elderly per 100 persons aged 15-64 years in 1990 to 10.5/100 in 2025. A 1984 ASEAN survey found that only 11% of elderly rated their health as bad. The 1990 Census reveals that 3.9% were disabled elderly. Most were deaf, blind, or orthopedically impaired. 16% of elderly in the ASEAN survey reported not seeing a doctor even when they needed to. 54% reported that a doctor was not visited due to the great expense. In 1980, 67% of men and 76% of women aged over 60 years had less than a primary education. The proportion with a secondary education in 2020 is expected to be about 33% for men and 33% for women. 66.5% of men and 28.5% of women aged over 60 years were in the formal labor force in 1990. Women were less likely to receive cash income from current jobs or pensions. 65% of earnings from older rural people was income from agricultural production. 60% of income among urban elderly was from children, and 23% was from pensions. Family support is provided to the elderly in the form of coresidence. In 1988, 68% of elderly aged over 60 years lived with at least one child. Retirement or nursing homes are uncommon. The Philippines Constitution states that families have a duty to care for elderly members.
Against proportional shortfall as a priority-setting principle.
Altmann, Samuel
2018-05-01
As the demand for healthcare rises, so does the need for priority setting in healthcare. In this paper, I consider a prominent priority-setting principle: proportional shortfall. My purpose is to argue that proportional shortfall, as a principle, should not be adopted. My key criticism is that proportional shortfall fails to consider past health. Proportional shortfall is justified as it supposedly balances concern for prospective health while still accounting for lifetime health, even though past health is deemed irrelevant. Accounting for this lifetime perspective means that the principle may indirectly consider past health by accounting for how far an individual is from achieving a complete, healthy life. I argue that proportional shortfall does not account for this lifetime perspective as it fails to incorporate the fair innings argument as originally claimed, undermining its purported justification. I go on to demonstrate that the case for ignoring past health is weak, and argue that past health is at least sometimes relevant for priority-setting decisions. Specifically, when an individual's past health has a direct impact on current or future health, and when one individual has enjoyed significantly more healthy life years than another. Finally, I demonstrate that by ignoring past illnesses, even those entirely unrelated to their current illness, proportional shortfall can lead to instances of double jeopardy, a highly problematic implication. These arguments give us reason to reject proportional shortfall.
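For a minimal numerical reading of the principle under discussion (the standard health-economics definition, not a formula from this paper): proportional shortfall is the fraction of remaining quality-adjusted life expectancy (QALE) lost to disease. Note how past health never enters the computation, which is exactly the feature the author criticizes.

```python
def proportional_shortfall(qale_remaining, qale_with_disease):
    """Proportional shortfall: fraction of remaining quality-adjusted
    life expectancy (QALE) a patient loses to the disease.
    0 = no loss; 1 = the entire remaining QALE is lost."""
    return (qale_remaining - qale_with_disease) / qale_remaining

# Two hypothetical patients losing the same 8 QALYs to illness:
young = proportional_shortfall(40.0, 32.0)   # loses 20% of remaining QALE
old = proportional_shortfall(10.0, 2.0)      # loses 80% -> higher priority
```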
The Golden Ratio of Gait Harmony: Repetitive Proportions of Repetitive Gait Phases
Directory of Open Access Journals (Sweden)
Marco Iosa
2013-01-01
Full Text Available In nature, many physical and biological systems have structures showing harmonic properties. Some of them were found to be related to the irrational number known as the golden ratio, φ, which has important symmetric and harmonic properties. In this study, the spatiotemporal gait parameters of 25 healthy subjects were analyzed using a stereophotogrammetric system with 25 retroreflective markers located on their skin. The proportions of the gait phases were compared with φ, the value of which is about 1.6180. The ratio between the entire gait cycle and the stance phase was 1.620 ± 0.058, that between the stance and the swing phase was 1.629 ± 0.173, and that between the swing and the double support phase was 1.684 ± 0.357. All these ratios did not differ significantly from each other (repeated-measures analysis of variance) or from φ (t-tests). The repetitive gait phases of physiological walking were thus found to be in turn in repetitive proportions with each other, revealing an intrinsic harmonic structure. Harmony could be the key for facilitating the control of repetitive walking. Harmony is a powerful unifying factor between seemingly disparate fields of nature, including human gait.
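The nested ratios described above are easy to check arithmetically. The phase durations below are illustrative values chosen near the means reported in the abstract, not the study's raw data:

```python
import math

PHI = (1 + math.sqrt(5)) / 2     # golden ratio, ~1.6180

# Illustrative gait-phase durations as fractions of one gait cycle:
cycle, stance, swing, double_support = 1.0, 0.618, 0.382, 0.236

ratios = [
    cycle / stance,              # ~1.618 (reported: 1.620 +/- 0.058)
    stance / swing,              # ~1.618 (reported: 1.629 +/- 0.173)
    swing / double_support,      # ~1.619 (reported: 1.684 +/- 0.357)
]
# Each successive phase is roughly phi times shorter than the last.
```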
Bruner, Emiliano; Pereira-Pedro, Ana Sofia; Chen, Xu; Rilling, James K
2017-05-01
Recent analyses have suggested that the size and proportions of the precuneus are remarkably variable among adult humans, representing a major source of geometrical difference in midsagittal brain morphology. The same area also represents the main midsagittal brain difference between humans and chimpanzees, being more expanded in our species. Enlargement of the upper parietal surface is a specific feature of Homo sapiens, when compared with other fossil hominids, suggesting the involvement of these cortical areas in recent modern human evolution. Here, we provide a survey on midsagittal brain morphology by investigating whether precuneus size represents the largest component of variance within a larger and racially diverse sample of 265 adult humans. Additionally, we investigate the relationship between precuneus shape variation and folding patterns. Precuneus proportions are confirmed to be a major source of human brain variation even when racial variability is considered. Larger precuneus size is associated with additional precuneal gyri, generally in its anterior district. Spatial variation is most pronounced in the dorsal areas, with no apparent differences between hemispheres, between sexes, or among different racial groups. These dorsal areas integrate somatic and visual information together with the lateral elements of the parietal cortex, representing a crucial node for self-centered mental imagery. The histological basis and functional significance of this intra-specific variation in the upper precuneus remains to be evaluated. Copyright © 2017 Elsevier GmbH. All rights reserved.
Directory of Open Access Journals (Sweden)
Helio Yochihiro Fuchigami
2014-08-01
Full Text Available This article addresses the problem of minimizing makespan on two parallel flow shops with proportional processing and setup times. The setup times are separated and sequence-independent. The parallel flow shop scheduling problem is a specific case of the well-known hybrid flow shop, characterized by a multistage production system with more than one machine working in parallel at each stage. This situation is very common in various kinds of companies, such as the chemical, electronics, automotive, pharmaceutical and food industries. This work proposes six Simulated Annealing algorithms, their perturbation schemes and an algorithm for initial sequence generation. The study can be classified as “applied research” regarding its nature, “exploratory” in its objectives and “experimental” in its procedures, with a “quantitative” approach. The proposed algorithms were effective regarding solution quality and computationally efficient. Results of Analysis of Variance (ANOVA) revealed no significant difference between the schemes in terms of makespan. The use of the PS4 scheme, which moves a subsequence of jobs, is suggested because it provided the best percentage of success. It was also found that there is a significant difference between the results of the algorithms for each value of the proportionality factor of the processing and setup times of the flow shops.
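A stripped-down version of the approach can be sketched as generic simulated annealing over job-to-line assignments, with a single-job move as the perturbation (loosely in the spirit of the perturbation schemes tested; the cost below is a crude two-line makespan surrogate, not the paper's full flow shop model with setup times):

```python
import math
import random

def makespan(assign, times):
    """Load of each of the two parallel lines is the sum of its jobs'
    processing times; the makespan surrogate is the larger load."""
    loads = [0.0, 0.0]
    for job, line in enumerate(assign):
        loads[line] += times[job]
    return max(loads)

def anneal(times, steps=5000, t0=10.0, alpha=0.999, seed=42):
    rng = random.Random(seed)
    assign = [rng.randrange(2) for _ in times]     # random initial solution
    cost = best = makespan(assign, times)
    temp = t0
    for _ in range(steps):
        job = rng.randrange(len(times))            # perturb: move one job
        assign[job] ^= 1
        new = makespan(assign, times)
        if new <= cost or rng.random() < math.exp((cost - new) / temp):
            cost = new                             # accept (maybe uphill)
            best = min(best, cost)
        else:
            assign[job] ^= 1                       # reject: undo the move
        temp *= alpha                              # geometric cooling
    return best

times = [7, 3, 5, 11, 2, 8, 6, 4]                  # hypothetical job times
best = anneal(times)
# Half the total work is a lower bound; annealing should end near it.
```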
An Empirical Temperature Variance Source Model in Heated Jets
Khavaran, Abbas; Bridges, James
2012-01-01
An acoustic analogy approach is implemented that models the sources of jet noise in heated jets. The equivalent sources of turbulent mixing noise are recognized as the differences between the fluctuating and Favre-averaged Reynolds stresses and enthalpy fluxes. While in a conventional acoustic analogy only Reynolds stress components are scrutinized for their noise generation properties, it is now accepted that a comprehensive source model should include the additional entropy source term. Following Goldstein's generalized acoustic analogy, the set of Euler equations is divided into two sets of equations that govern a non-radiating base flow plus its residual components. When the base flow is considered as a locally parallel mean flow, the residual equations may be rearranged to form an inhomogeneous third-order wave equation. A general solution is written subsequently using a Green's function method, while all non-linear terms are treated as the equivalent sources of aerodynamic sound and are modeled accordingly. In a previous study, a specialized Reynolds-averaged Navier-Stokes (RANS) solver was implemented to compute the variance of thermal fluctuations that determines the enthalpy flux source strength. The main objective here is to present an empirical model capable of providing a reasonable estimate of the stagnation temperature variance in a jet. Such a model is parameterized as a function of the mean stagnation temperature gradient in the jet, and is evaluated using commonly available RANS solvers. The ensuing thermal source distribution is compared with measurements as well as computational results from a dedicated RANS solver that employs an enthalpy variance and dissipation rate model. Turbulent mixing noise predictions are presented for a wide range of jet temperature ratios from 1.0 to 3.20.
The principle of proportionality revisited: interpretations and applications.
Hermerén, Göran
2012-11-01
The principle of proportionality is used in many different contexts. Some of these uses and contexts are first briefly indicated. This paper focusses on the use of this principle as a moral principle. I argue that under certain conditions the principle of proportionality is helpful as a guide in decision-making. But it needs to be clarified and to be used with some flexibility as a context-dependent principle. Several interpretations of the principle are distinguished, using three conditions as a starting point: importance of objective, relevance of means, and most favourable option. The principle is then tested against an example, which suggests that a fourth condition, focusing on non-excessiveness, needs to be added. I will distinguish between three main interpretations of the principle, some primarily with uses in research ethics, others with uses in other areas of bioethics, for instance in comparisons of therapeutic means and ends. The relations between the principle of proportionality and the precautionary principle are explored in the following section. It is concluded that the principles are different and may even clash. In the next section the principle of proportionality is applied to some medical examples drawn from research ethics and bioethics. In concluding, the status of the principle of proportionality as a moral principle is discussed. What has been achieved so far and what remains to be done is finally summarized.
Double Minimum Variance Beamforming Method to Enhance Photoacoustic Imaging
Paridar, Roya; Mozaffarzadeh, Moein; Nasiriavanaki, Mohammadreza; Orooji, Mahdi
2018-01-01
One of the common algorithms used to reconstruct photoacoustic (PA) images is the non-adaptive Delay-and-Sum (DAS) beamformer. However, the quality of the reconstructed PA images obtained by DAS is not satisfactory due to its high sidelobe levels and wide mainlobe. In contrast, adaptive beamformers, such as minimum variance (MV), result in an improved image compared to DAS. In this paper, a novel beamforming method, called Double MV (D-MV), is proposed to enhance the image quality compared to...
A Note on the Kinks at the Mean Variance Frontier
Vörös, J.; Kriens, J.; Strijbosch, L.W.G.
1997-01-01
In this paper the standard portfolio case with short sales restrictions is analyzed. Dybvig pointed out that if there is a kink at a risky portfolio on the efficient frontier, then the securities in this portfolio have equal expected return, and the converse of this statement is false. A sufficient condition for the existence of kinks at the efficient frontier is given here, and a new procedure is used to derive the efficient frontier, i.e. the characteristics of the mean variance frontier.
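For intuition about the object being studied — the frontier under a short-sale restriction — the two-asset case admits a closed form. This is a generic textbook computation, not the paper's derivation; the volatilities and correlation below are invented:

```python
def min_variance_weight(s1, s2, rho, allow_short=False):
    """Weight on asset 1 minimizing portfolio variance
    w^2*s1^2 + (1-w)^2*s2^2 + 2*w*(1-w)*rho*s1*s2,
    clamped to [0, 1] when short sales are banned."""
    cov = rho * s1 * s2
    w = (s2 * s2 - cov) / (s1 * s1 + s2 * s2 - 2.0 * cov)
    if not allow_short:
        w = min(1.0, max(0.0, w))    # short-sale restriction binds here
    return w

def portfolio_std(w, s1, s2, rho):
    var = (w * s1) ** 2 + ((1 - w) * s2) ** 2 + 2 * w * (1 - w) * rho * s1 * s2
    return var ** 0.5

w = min_variance_weight(0.2, 0.3, 0.1)
# Diversification: the mix is less risky than either asset alone.
```

Kinks of the sort Dybvig discussed arise precisely where such clamping switches a security in or out of the optimal portfolio as one moves along the frontier.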
Variance reduction techniques in the simulation of Markov processes
International Nuclear Information System (INIS)
Lessi, O.
1987-01-01
We study a functional r of the stationary distribution of a homogeneous Markov chain. It is often difficult or impossible to calculate r analytically, so it is reasonable to estimate r by a simulation process. A consistent estimator r(n) of r is obtained with respect to a chain with a countable state space. By suitably modifying the estimator r(n) of r, one obtains a new consistent estimator which has a smaller variance than r(n). The same is obtained in the case of a finite state space.
A guide to SPSS for analysis of variance
Levine, Gustav
2013-01-01
This book offers examples of programs designed for analysis of variance and related statistical tests of significance that can be run with SPSS. The reader may copy these programs directly, changing only the names or numbers of levels of factors according to individual needs. Ways of altering command specifications to fit situations with larger numbers of factors are discussed and illustrated, as are ways of combining program statements to request a variety of analyses in the same program. The first two chapters provide an introduction to the use of SPSS, Versions 3 and 4. General rules conce
Diffusion-Based Trajectory Observers with Variance Constraints
DEFF Research Database (Denmark)
Alcocer, Alex; Jouffroy, Jerome; Oliveira, Paulo
Diffusion-based trajectory observers have been recently proposed as a simple and efficient framework to solve diverse smoothing problems in underwater navigation. For instance, to obtain estimates of the trajectories of an underwater vehicle given position fixes from an acoustic positioning system ... of smoothing and is determined by resorting to trial and error. This paper presents a methodology to choose the observer gain by taking into account a priori information on the variance of the position measurement errors. Experimental results with data from an acoustic positioning system are presented ...
A Fay-Herriot Model with Different Random Effect Variances
Czech Academy of Sciences Publication Activity Database
Hobza, Tomáš; Morales, D.; Herrador, M.; Esteban, M.D.
2011-01-01
Roč. 40, č. 5 (2011), s. 785-797 ISSN 0361-0926 R&D Projects: GA MŠk 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords : small area estimation * Fay-Herriot model * Linear mixed model * Labor Force Survey Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.274, year: 2011 http://library.utia.cas.cz/separaty/2011/SI/hobza-a%20fay-herriot%20model%20with%20different%20random%20effect%20variances.pdf
Variational Variance Reduction for Monte Carlo Criticality Calculations
International Nuclear Information System (INIS)
Densmore, Jeffery D.; Larsen, Edward W.
2001-01-01
A new variational variance reduction (VVR) method for Monte Carlo criticality calculations was developed. This method employs (a) a variational functional that is more accurate than the standard direct functional, (b) a representation of the deterministically obtained adjoint flux that is especially accurate for optically thick problems with high scattering ratios, and (c) estimates of the forward flux obtained by Monte Carlo. The VVR method requires no nonanalog Monte Carlo biasing, but it may be used in conjunction with Monte Carlo biasing schemes. Some results are presented from a class of criticality calculations involving alternating arrays of fuel and moderator regions
Trends of tuberculosis prevalence and treatment outcome in an ...
African Journals Online (AJOL)
The annual number of all TB cases showed a rising trend from 914 cases in the year 2000 to 1684 in 2009, but the proportion of new sputum smear-positive (ss+) pulmonary tuberculosis (PTB) cases declined (trend χ² = 7.37, P = 0.007). The average number of extra-pulmonary TB cases increased fourfold from 2000-2004 to ...
TechTrends 2010-2015: A Content Analysis
Stauffer, Eric
2017-01-01
This study is a content analysis of articles published within the journal "TechTrends" from 2000 to 2015. The study reveals that the publication "TechTrends" has increased the overall number of peer reviewed original papers over the last 6 years. The author describes the proportion of these original papers per volume and…
Trends in childhood injury mortality in South African population ...
African Journals Online (AJOL)
Trends in major causes of injury mortality and the proportion of total deaths attributable to injuries from 1968 to 1985 for white, coloured and Asian children < 15 years in the RSA were examined. There were 937 injury deaths in 1968 and 853 in 1985 but no clear trends in overall mortality rates were observed. There were ...
A reduced feedback proportional fair multiuser scheduling scheme
Shaqfeh, Mohammad
2011-12-01
Multiuser switched-diversity scheduling schemes were recently proposed in order to overcome the heavy feedback requirements of conventional opportunistic scheduling schemes by applying a threshold-based, distributed and ordered scheduling mechanism. A slight reduction in the prospected multiuser diversity gains is an acceptable trade-off for great savings in terms of required channel-state-information feedback messages. In this work, we propose a novel proportional fair multiuser switched-diversity scheduling scheme and we demonstrate that it can be optimized using a practical and distributed method to obtain the per-user feedback thresholds. We demonstrate by numerical examples that our reduced feedback proportional fair scheduler operates within 0.3 bits/sec/Hz from the achievable rates by the conventional full feedback proportional fair scheduler in Rayleigh fading conditions. © 2011 IEEE.
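The conventional full-feedback proportional fair scheduler used above as the benchmark can be sketched compactly: in each slot the scheduler picks the user with the largest ratio of instantaneous rate to exponentially smoothed average throughput. The sketch below is a generic illustration, not the paper's reduced-feedback scheme; the number of users, the smoothing factor `alpha`, and the exponential channel-gain model are all assumptions made for the example.

```python
import random

def proportional_fair_pick(rates, avg_throughput):
    """Select the user maximizing instantaneous rate / smoothed average throughput."""
    return max(range(len(rates)), key=lambda u: rates[u] / avg_throughput[u])

def simulate(num_users=4, slots=10000, alpha=0.01, seed=1):
    rng = random.Random(seed)
    avg = [1e-6] * num_users          # smoothed throughputs (small start avoids /0)
    served = [0] * num_users
    for _ in range(slots):
        # i.i.d. exponential channel gains as a stand-in for Rayleigh fading
        rates = [rng.expovariate(1.0) for _ in range(num_users)]
        u = proportional_fair_pick(rates, avg)
        served[u] += 1
        for k in range(num_users):
            r = rates[k] if k == u else 0.0
            avg[k] = (1 - alpha) * avg[k] + alpha * r
    return served
```

With statistically identical users, proportional fairness serves each user roughly an equal share of the slots while still exploiting the channel peaks, which is the fairness/throughput trade-off the threshold-based schemes approximate with far less feedback.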
Prospective Teachers Proportional Reasoning and Presumption of Student Work
Directory of Open Access Journals (Sweden)
Mujiyem Sapti
2015-08-01
Full Text Available This study aimed to describe the proportional reasoning of prospective teachers and their predictions of students' answers. Subjects were 4 seventh-semester prospective teachers from the Department of Mathematics Education, Muhammadiyah University of Purworejo. A proportional reasoning task was used to obtain the research data. Subjects were asked to explain their reasoning and to write predictions of student solutions. Data were collected on October 15th, 2014. Interviews were conducted after the subjects completed the task and were recorded on audio media. The research data were the subjects' written work and interview transcripts. Data were analyzed using qualitative analysis techniques. In solving the proportional reasoning task, the subjects used the cross product. However, they did not understand the meaning of the cross product. The subjects could also predict students' reasoning on the matter.
Proton-recoil proportional counter tests at TREAT
International Nuclear Information System (INIS)
Fink, C.L.; Eichholz, J.J.; Burrows, D.R.; DeVolpi, A.
1979-01-01
A methane-filled proton-recoil proportional counter will be used as a fission neutron detector in the fast-neutron hodoscope. To provide meaningful fuel-motion information the proportional counter should have: a linear response over a wide range of reactor powers; a good signal-to-background ratio (the number of high-energy neutrons detected must be maximized relative to low-energy neutrons, and gamma-ray sensitivity must be kept small); and a detector efficiency for fission neutrons above 1 MeV of approximately 1%. In addition, it is desirable that the detector and the associated amplifier/discriminator be capable of operating at counting rates in excess of 500 kHz. This paper reports on tests that were conducted on several proportional counters at the TREAT reactor.
Multiaxial low cycle fatigue life under non-proportional loading
International Nuclear Information System (INIS)
Itoh, Takamoto; Sakane, Masao; Ohsuga, Kazuki
2013-01-01
A simple and clear method of evaluating stress and strain ranges under non-proportional multiaxial loading, where the principal directions of stress and strain change during a cycle, is needed for assessing multiaxial fatigue. This paper proposes a simple method of determining the principal stress and strain ranges and the severity of non-proportional loading by defining the rotation angles of the maximum principal stress and strain in a three-dimensional stress and strain space. This study also discusses the properties of multiaxial low cycle fatigue lives for various materials fatigued under non-proportional loadings and shows the applicability of a parameter proposed by the authors for multiaxial low cycle fatigue life evaluation.
Flow proportional sampling of low level liquid effluent
International Nuclear Information System (INIS)
Colley, D.; Jenkins, R.
1989-01-01
A flow proportional sampler for use on low level radioactive liquid effluent has been developed for installation at all CEGB nuclear power stations. The sampler operates by drawing effluent continuously from the main effluent pipeline through a sampler loop and returning it to the pipeline. The effluent in this loop is sampled by taking small, frequent aliquots using a linear-acting shuttle valve. The frequency of operation of this valve is controlled by a flowmeter installed in the effluent line, the sampling rate being directly proportional to the effluent flowrate. (author)
The proportional odds cumulative incidence model for competing risks
DEFF Research Database (Denmark)
Eriksson, Frank; Li, Jianing; Scheike, Thomas
2015-01-01
We suggest an estimator for the proportional odds cumulative incidence model for competing risks data. The key advantage of this model is that the regression parameters have the simple and useful odds ratio interpretation. The model has been considered by many authors, but it is rarely used in practice due to the lack of reliable estimation procedures. We suggest such procedures and show that their performance improves considerably on existing methods. We also suggest a goodness-of-fit test for the proportional odds assumption. We derive the large sample properties and provide estimators...
Neutron dosimetry using proportional counters with tissue equivalent walls
International Nuclear Information System (INIS)
Kerviller, H. de
1965-01-01
The author reviews the method for calculating the neutron absorbed dose in a material and deduces from it the conditions that the material must fulfil to be equivalent to biological tissues. Various proportional counters are made with walls of a new tissue-equivalent material and filled with various gases. The multiplication factor and neutron energy response of these counters are investigated and compared with those obtained with ethylene-lined polyethylene counters. The operating conditions of such proportional counters for neutron dosimetry in the energy range 10⁻² to 15 MeV are specified. (author) [fr
Gas scintillation proportional counters for x-ray synchrotron applications
International Nuclear Information System (INIS)
Smith, A.; Bavdaz, M.
1992-01-01
Gas scintillation proportional counters (GSPCs) as x-ray detectors provide some advantages and disadvantages compared with proportional counters. In this paper the various configurations of xenon filled GSPC are described including both imaging and nonimaging devices. It is intended that this work be used to configure a GSPC for a particular application and predict its general performance characteristics. The general principles of operation are described and the performance characteristics are then separately considered. A high performance, imaging, driftless GSPC is described in which a single intermediate window is used between the PMT and gas cell
Avalanche localization and its effects in proportional counters
International Nuclear Information System (INIS)
Fischer, J.; Okuno, H.; Walenta, A.H.
1977-11-01
Avalanche development around the anode wire in a gas proportional counter is investigated. In the region of proportional gas amplification, the avalanche is found to be well localized on one side of the anode wire, where the electrons arrive along the field lines from the point of primary ionization. Induced signals on electrodes surrounding the anode wire are used to measure the azimuthal position of the avalanche on the anode wire. Practical applications of the phenomena such as left-right assignment in drift chambers and measurement of the angular direction of the primary ionization electrons drifting towards the anode wire are discussed
Parameter uncertainty effects on variance-based sensitivity analysis
International Nuclear Information System (INIS)
Yu, W.; Harris, T.J.
2009-01-01
In the past several years there has been considerable commercial and academic interest in methods for variance-based sensitivity analysis. The industrial focus is motivated by the importance of attributing variance contributions to input factors. A more complete understanding of these relationships enables companies to achieve goals related to quality, safety and asset utilization. In a number of applications, it is possible to distinguish between two types of input variables: regressive variables and model parameters. Regressive variables are those that can be influenced by process design or by a control strategy. With model parameters, there are typically no opportunities to directly influence their variability. In this paper, we propose a new method to perform sensitivity analysis through a partitioning of the input variables into these two groupings: regressive variables and model parameters. A sequential analysis is proposed, where first a sensitivity analysis is performed with respect to the regressive variables. In the second step, the uncertainty effects arising from the model parameters are included. This strategy can be quite useful in understanding process variability and in developing strategies to reduce overall variability. When this method is used for nonlinear models which are linear in the parameters, analytical solutions can be utilized. In the more general case of models that are nonlinear in both the regressive variables and the parameters, either first-order approximations can be used, or numerically intensive methods must be employed.
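The sequential partition described above rests on the law of total variance, Var(Y) = E[Var(Y | θ)] + Var(E[Y | θ]), where θ is a model parameter and the inner variance is taken over the regressive variables. The toy model below (Y = θ + X, with made-up distributions) is only a numerical illustration of that identity, not the authors' method; for equal-sized inner samples the decomposition holds exactly in the sample.

```python
import random
import statistics

def total_variance_split(n_outer=2000, n_inner=200, seed=0):
    """Monte Carlo illustration of Var(Y) = E[Var(Y|theta)] + Var(E[Y|theta])
    for the toy model Y = theta + X, X ~ N(0, 1), theta ~ N(1, 0.5)."""
    rng = random.Random(seed)
    cond_means, cond_vars, all_y = [], [], []
    for _ in range(n_outer):
        theta = rng.gauss(1.0, 0.5)                                  # parameter draw
        ys = [theta + rng.gauss(0.0, 1.0) for _ in range(n_inner)]   # regressive variable
        cond_means.append(statistics.fmean(ys))
        cond_vars.append(statistics.pvariance(ys))
        all_y.extend(ys)
    total = statistics.pvariance(all_y)
    decomposed = statistics.fmean(cond_vars) + statistics.pvariance(cond_means)
    return total, decomposed
```

Analytically Var(Y) = 1 + 0.25 = 1.25 here; the first term is the within-parameter (regressive) contribution and the second is the parameter-uncertainty contribution, mirroring the two steps of the sequential analysis.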
Variance of indoor radon concentration: Major influencing factors
Energy Technology Data Exchange (ETDEWEB)
Yarmoshenko, I., E-mail: ivy@ecko.uran.ru [Institute of Industrial Ecology UB RAS, Sophy Kovalevskoy, 20, Ekaterinburg (Russian Federation); Vasilyev, A.; Malinovsky, G. [Institute of Industrial Ecology UB RAS, Sophy Kovalevskoy, 20, Ekaterinburg (Russian Federation); Bossew, P. [German Federal Office for Radiation Protection (BfS), Berlin (Germany); Žunić, Z.S. [Institute of Nuclear Sciences “Vinca”, University of Belgrade (Serbia); Onischenko, A.; Zhukovsky, M. [Institute of Industrial Ecology UB RAS, Sophy Kovalevskoy, 20, Ekaterinburg (Russian Federation)
2016-01-15
Variance of radon concentration in dwelling atmosphere is analysed with regard to geogenic and anthropogenic influencing factors. Analysis includes review of 81 national and regional indoor radon surveys with varying sampling pattern, sample size and duration of measurements and detailed consideration of two regional surveys (Sverdlovsk oblast, Russia and Niška Banja, Serbia). The analysis of the geometric standard deviation revealed that main factors influencing the dispersion of indoor radon concentration over the territory are as follows: area of territory, sample size, characteristics of measurements technique, the radon geogenic potential, building construction characteristics and living habits. As shown for Sverdlovsk oblast and Niška Banja town the dispersion as quantified by GSD is reduced by restricting to certain levels of control factors. Application of the developed approach to characterization of the world population radon exposure is discussed. - Highlights: • Influence of lithosphere and anthroposphere on variance of indoor radon is found. • Level-by-level analysis reduces GSD by a factor of 1.9. • Worldwide GSD is underestimated.
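The geometric standard deviation used above as the dispersion metric is the exponential of the standard deviation of the log-transformed concentrations. A minimal sketch, with made-up lognormal radon values (geometric mean 40 Bq/m³, GSD 2.5 are illustrative assumptions, not survey data):

```python
import math
import random
import statistics

def geometric_standard_deviation(values):
    """GSD = exp(sample standard deviation of the natural-log values)."""
    logs = [math.log(v) for v in values]
    return math.exp(statistics.stdev(logs))

# Simulated lognormal indoor-radon sample; parameters are illustrative only.
rng = random.Random(42)
sample = [math.exp(rng.gauss(math.log(40.0), math.log(2.5))) for _ in range(5000)]
gsd = geometric_standard_deviation(sample)
```

Restricting a survey to a smaller territory, a single building type, or one measurement technique narrows the spread of the logs and hence lowers the GSD, which is the level-by-level reduction the authors quantify.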
Variance Component Selection With Applications to Microbiome Taxonomic Data
Directory of Open Access Journals (Sweden)
Jing Zhai
2018-03-01
Full Text Available High-throughput sequencing technology has enabled population-based studies of the role of the human microbiome in disease etiology and exposure response. Microbiome data are summarized as counts or composition of the bacterial taxa at different taxonomic levels. An important problem is to identify the bacterial taxa that are associated with a response. One method is to test the association of a specific taxon with phenotypes in a linear mixed effect model, which incorporates phylogenetic information among bacterial communities. Another type of approach considers all taxa in a joint model and achieves selection via a penalization method, which ignores phylogenetic information. In this paper, we consider regression analysis by treating bacterial taxa at different levels as multiple random effects. For each taxon, a kernel matrix is calculated based on distance measures in the phylogenetic tree and acts as one variance component in the joint model. Taxonomic selection is then achieved by the lasso (least absolute shrinkage and selection operator) penalty on the variance components. Our method integrates biological information into the variable selection problem and greatly improves selection accuracies. Simulation studies demonstrate the superiority of our method versus existing methods, for example, the group lasso. Finally, we apply our method to a longitudinal microbiome study of Human Immunodeficiency Virus (HIV)-infected patients. We implement our method in the high-performance computing language Julia. Software and detailed documentation are freely available at https://github.com/JingZhai63/VCselection.
Worldwide variance in the potential utilization of Gamma Knife radiosurgery.
Hamilton, Travis; Dade Lunsford, L
2016-12-01
OBJECTIVE The role of Gamma Knife radiosurgery (GKRS) has expanded worldwide during the past 3 decades. The authors sought to evaluate whether experienced users vary in their estimate of its potential use. METHODS Sixty-six current Gamma Knife users from 24 countries responded to an electronic survey. They estimated the potential role of GKRS for benign and malignant tumors, vascular malformations, and functional disorders. These estimates were compared with published disease epidemiological statistics and the 2014 use reports provided by the Leksell Gamma Knife Society (16,750 cases). RESULTS Respondents reported no significant variation in the estimated use in many conditions for which GKRS is performed: meningiomas, vestibular schwannomas, and arteriovenous malformations. Significant variance in the estimated use of GKRS was noted for pituitary tumors, craniopharyngiomas, and cavernous malformations. For many current indications, the authors found significant variance in GKRS users based in the Americas, Europe, and Asia. Experts estimated that GKRS was used in only 8.5% of the 196,000 eligible cases in 2014. CONCLUSIONS Although there was a general worldwide consensus regarding many major indications for GKRS, significant variability was noted for several more controversial roles. This expert opinion survey also suggested that GKRS is significantly underutilized for many current diagnoses, especially in the Americas. Future studies should be conducted to investigate health care barriers to GKRS for many patients.
Hidden temporal order unveiled in stock market volatility variance
Directory of Open Access Journals (Sweden)
Y. Shapira
2011-06-01
Full Text Available When analyzed by standard statistical methods, the time series of the daily returns of financial indices appear to behave as Markov random series with no apparent temporal order or memory. This empirical result seems counterintuitive, since investors are influenced by both short- and long-term past market behaviors. Consequently, much effort has been devoted to unveiling hidden temporal order in market dynamics. Here we show that temporal order is hidden in the series of the variance of the stocks' volatility. First we show that the correlation between the variances of the daily returns and the means of segments of these time series is very large, and thus cannot be the output of a random series unless it has some temporal order in it. Next we show that the temporal order does not show up in the series of daily returns, but rather in the variation of the corresponding volatility series. More specifically, we found that the behavior of the shuffled time series is equivalent to that of a random time series, while the original time series has large deviations from the expected random behavior, which is the result of temporal structure. We found the same generic behavior in 10 different stock markets from 7 different countries. We also present an analysis of specially constructed sequences in order to better understand the origin of the observed temporal order in the market sequences. Each sequence was constructed from segments with an equal number of elements, taken from algebraic distributions with three different slopes.
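The shuffling comparison can be reproduced on synthetic data: compute the variance of segment means for a persistent series and for its shuffled copy. The AR(1) process below is a made-up stand-in for a volatility sequence; only the qualitative contrast between the original and shuffled series matters.

```python
import random
import statistics

def segment_mean_variance(series, seg_len=20):
    """Variance of the means of consecutive non-overlapping segments."""
    segments = [series[i:i + seg_len]
                for i in range(0, len(series) - seg_len + 1, seg_len)]
    return statistics.pvariance([statistics.fmean(s) for s in segments])

rng = random.Random(7)
x, series = 0.0, []
for _ in range(4000):                 # AR(1): strong temporal order
    x = 0.9 * x + rng.gauss(0.0, 1.0)
    series.append(x)

v_original = segment_mean_variance(series)
shuffled = series[:]
rng.shuffle(shuffled)                 # destroys the temporal order
v_shuffled = segment_mean_variance(shuffled)
```

Shuffling leaves the marginal distribution intact but collapses the variance of the segment means toward the i.i.d. value, so a large ratio of the original to the shuffled value signals hidden temporal order.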
Waste Isolation Pilot Plant no-migration variance petition
International Nuclear Information System (INIS)
1990-01-01
Section 3004 of RCRA allows EPA to grant a variance from the land disposal restrictions when a demonstration can be made that, to a reasonable degree of certainty, there will be no migration of hazardous constituents from the disposal unit for as long as the waste remains hazardous. Specific requirements for making this demonstration are found in 40 CFR 268.6, and EPA has published a draft guidance document to assist petitioners in preparing a variance request. Throughout the course of preparing this petition, technical staff from DOE, EPA, and their contractors have met frequently to discuss and attempt to resolve issues specific to radioactive mixed waste and the WIPP facility. The DOE believes it meets or exceeds all requirements set forth for making a successful ''no-migration'' demonstration. The petition presents information under five general headings: (1) waste information; (2) site characterization; (3) facility information; (4) assessment of environmental impacts, including the results of waste mobility modeling; and (5) analysis of uncertainties. Additional background and supporting documentation is contained in the 15 appendices to the petition, as well as in an extensive addendum published in October 1989
DETERMINING THE OPTIMAL PORTFOLIO USING THE CONDITIONAL MEAN VARIANCE MODEL
Directory of Open Access Journals (Sweden)
I GEDE ERY NISCAHYANA
2016-08-01
Full Text Available When the returns of stock prices show the existence of autocorrelation and heteroscedasticity, conditional mean variance models are a suitable method to model the behavior of the stocks. In this thesis, the implementation of the conditional mean variance model for autocorrelated and heteroscedastic returns is discussed. The aim of this thesis was to assess the effect of autocorrelated and heteroscedastic returns on the optimal solution of a portfolio. The returns of four stocks, Fortune Mate Indonesia Tbk (FMII.JK), Bank Permata Tbk (BNLI.JK), Suryamas Dutamakmur Tbk (SMDM.JK) and Semen Gresik Indonesia Tbk (SMGR.JK), were estimated by a GARCH(1,1) model with standard innovations following the standard normal distribution and the t-distribution. The estimates were used to construct a portfolio. The optimal portfolio was found when the standard innovation used was the t-distribution with a standard deviation of 1.4532 and a mean of 0.8023, consisting of 0.9429 (94%) FMII stock, 0.0473 (5%) BNLI stock, 0% SMDM stock, and 1% SMGR stock.
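The mean-variance trade-off the thesis optimizes can be illustrated with the textbook two-asset minimum-variance portfolio. This closed form is standard Markowitz algebra, not the conditional (GARCH-based) model of the thesis, and the variances and covariance below are made-up numbers.

```python
def min_variance_weights_2assets(var1, var2, cov):
    """Closed-form minimum-variance weights for a two-asset portfolio:
    w1 = (var2 - cov) / (var1 + var2 - 2*cov), w2 = 1 - w1."""
    w1 = (var2 - cov) / (var1 + var2 - 2.0 * cov)
    return w1, 1.0 - w1

# Illustrative return variances and covariance (assumed values).
w1, w2 = min_variance_weights_2assets(0.04, 0.09, 0.006)
portfolio_var = w1**2 * 0.04 + w2**2 * 0.09 + 2 * w1 * w2 * 0.006
```

The resulting portfolio variance is below either asset's own variance, the basic diversification effect that the conditional model refines by letting the variances and covariances vary over time.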
Variance decomposition-based sensitivity analysis via neural networks
International Nuclear Information System (INIS)
Marseguerra, Marzio; Masini, Riccardo; Zio, Enrico; Cojazzi, Giacomo
2003-01-01
This paper illustrates a method for efficiently performing multiparametric sensitivity analyses of the reliability model of a given system. These analyses are of great importance for the identification of critical components in highly hazardous plants, such as nuclear or chemical ones, thus providing significant insights for their risk-based design and management. The technique used to quantify the importance of a component parameter with respect to the system model is based on a classical decomposition of the variance. When the model of the system is realistically complicated (e.g. by aging, stand-by, maintenance, etc.), its analytical evaluation soon becomes impractical and one is better off resorting to Monte Carlo simulation techniques, which, however, can be computationally burdensome. Therefore, since the variance decomposition method requires a large number of system evaluations, each one to be performed by Monte Carlo, the need arises for possibly substituting the Monte Carlo simulation model with a fast, approximated algorithm. Here we investigate an approach which makes use of neural networks, appropriately trained on the results of a Monte Carlo system reliability/availability evaluation, to quickly provide, with reasonable approximation, the values of the quantities of interest for the sensitivity analyses. The work was a joint effort between the Department of Nuclear Engineering of the Polytechnic of Milan, Italy, and the Institute for Systems, Informatics and Safety, Nuclear Safety Unit of the Joint Research Centre in Ispra, Italy, which sponsored the project.
Concentration variance decay during magma mixing: a volcanic chronometer.
Perugini, Diego; De Campos, Cristina P; Petrelli, Maurizio; Dingwell, Donald B
2015-09-21
The mixing of magmas is a common phenomenon in explosive eruptions. Concentration variance is a useful metric of this process and its decay (CVD) with time is an inevitable consequence during the progress of magma mixing. In order to calibrate this petrological/volcanological clock we have performed a time-series of high temperature experiments of magma mixing. The results of these experiments demonstrate that compositional variance decays exponentially with time. With this calibration the CVD rate (CVD-R) becomes a new geochronometer for the time lapse from initiation of mixing to eruption. The resultant novel technique is fully independent of the typically unknown advective history of mixing - a notorious uncertainty which plagues the application of many diffusional analyses of magmatic history. Using the calibrated CVD-R technique we have obtained mingling-to-eruption times for three explosive volcanic eruptions from Campi Flegrei (Italy) in the range of tens of minutes. These in turn imply ascent velocities of 5-8 meters per second. We anticipate the routine application of the CVD-R geochronometer to the eruptive products of active volcanoes in future in order to constrain typical "mixing to eruption" time lapses such that monitoring activities can be targeted at relevant timescales and signals during volcanic unrest.
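The exponential concentration variance decay, σ²(t) = σ²(0)·exp(−k·t), inverts directly for the elapsed mixing time once the decay rate k is calibrated experimentally. The rate constant and variance ratio below are illustrative assumptions, not the paper's calibration.

```python
import math

def mixing_time(var_initial, var_observed, decay_rate):
    """Invert sigma^2(t) = sigma^2(0) * exp(-k * t) for the elapsed time t."""
    return math.log(var_initial / var_observed) / decay_rate

# Example: concentration variance reduced to 10% of its initial value,
# with an assumed decay rate k = 0.0015 1/s.
t_seconds = mixing_time(1.0, 0.1, 0.0015)   # about 1535 s, i.e. tens of minutes
```

The ratio of variances is all that is needed, which is why the chronometer is independent of the advective details of the mixing history.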
Mean-Variance-Validation Technique for Sequential Kriging Metamodels
International Nuclear Information System (INIS)
Lee, Tae Hee; Kim, Ho Sung
2010-01-01
The rigorous validation of the accuracy of metamodels is an important topic in research on metamodel techniques. Although a leave-k-out cross-validation technique involves a considerably high computational cost, it cannot be used to measure the fidelity of metamodels. Recently, the mean-0 validation technique has been proposed to quantitatively determine the accuracy of metamodels. However, the use of the mean-0 validation criterion may lead to premature termination of the sampling process even if the kriging model is inaccurate. In this study, we propose a new validation technique based on the mean and variance of the response evaluated when a sequential sampling method, such as maximum entropy sampling, is used. The proposed validation technique is more efficient and accurate than the leave-k-out cross-validation technique because, instead of performing numerical integration, the kriging model is explicitly integrated to accurately evaluate the mean and variance of the response. The error in the proposed validation technique resembles a root mean squared error, thus it can be used to determine a stopping criterion for the sequential sampling of metamodels.
PET image reconstruction: mean, variance, and optimal minimax criterion
International Nuclear Information System (INIS)
Liu, Huafeng; Guo, Min; Gao, Fei; Shi, Pengcheng; Xue, Liying; Nie, Jing
2015-01-01
Given the noisy nature of positron emission tomography (PET) measurements, it is critical to know the image quality and reliability as well as the expected radioactivity map (mean image) for both qualitative interpretation and quantitative analysis. While existing efforts have often been devoted to providing only the reconstructed mean image, we present a unified framework for joint estimation of the mean and corresponding variance of the radioactivity map based on an efficient optimal min–max criterion. The proposed framework formulates the PET image reconstruction problem as a transformation from system uncertainties to estimation errors, where the minimax criterion is adopted to minimize the estimation errors under possibly maximized system uncertainties. The estimation errors, in the form of a covariance matrix, express the measurement uncertainties in a complete way. The framework is then optimized by ∞-norm optimization and solved with the corresponding H∞ filter. Unlike conventional statistical reconstruction algorithms that rely on statistical modeling of the measurement data or noise, the proposed joint estimation stands from the point of view of signal energies and can handle anything from imperfect statistical assumptions to no a priori statistical assumptions at all. The performance and accuracy of the reconstructed mean and variance images are validated using Monte Carlo simulations. Experiments on phantom scans with a small animal PET scanner and real patient scans are also conducted for assessment of clinical potential. (paper)
Impact of intrauterine growth retardation and body proportionality on fetal and neonatal outcome.
Kramer, M S; Olivier, M; McLean, F H; Willis, D M; Usher, R H
1990-11-01
Previous prognostic studies of infants with intrauterine growth retardation (IUGR) have not adequately considered the heterogeneity of IUGR in terms of cause, severity, and body proportionality and have been prone to misclassification of IUGR because of errors in estimation of gestational age. Based on a cohort of 8719 infants with early-ultrasound-validated gestational ages and indexes of body proportionality standardized for birth weight, the consequences of severity and cause-specific IUGR and proportionality for fetal and neonatal morbidity and mortality were assessed. With progressive severity of IUGR, there were significant (all P less than .001) linear trends for increasing risks of stillbirth, fetal distress (abnormal electronic fetal heart tracings) during parturition, neonatal hypoglycemia (minimum plasma glucose less than 40 mg/dL), hypocalcemia (minimum Ca less than 7 mg/dL), polycythemia (maximum capillary hemoglobin greater than or equal to 21 g/dL), severe depression at birth (manual ventilation greater than 3 minutes), 1-minute and 5-minute Apgar scores less than or equal to 6, 1-minute Apgar score less than or equal to 3, and in-hospital death. These trends persisted for the more common outcomes even after restriction to term (37 to 42 weeks) births. There was no convincing evidence that outcome among infants with a given degree of growth retardation varied as a function of cause of that growth retardation. Among infants with IUGR, increased length-for-weight had significant crude associations with hypoglycemia and polycythemia, but these associations disappeared after adjustment for severity of growth retardation and gestational age. (ABSTRACT TRUNCATED AT 250 WORDS)
Insensitivity of proportional fairness in critically loaded bandwidth sharing networks
Vlasiou, M.; Zhang, J.; Zwart, B.
2014-01-01
Proportional fairness is a popular service allocation mechanism to describe and analyze the performance of data networks at flow level. Recently, several authors have shown that the invariant distribution of such networks admits a product form distribution under critical loading. Assuming
Principles of operation of multiwire proportional and drift chambers
International Nuclear Information System (INIS)
Sauli, F.
1987-01-01
The first multiwire proportional chamber, in its modern conception, was constructed and operated in the years 1967-68. It was soon recognized that the main properties of a multiwire proportional chamber, i.e. very good time resolution, good position accuracy and self-triggered operation, are very attractive for the use of the new device in high-energy physics experiments. Today, most fast detectors contain a large number of proportional chambers, and their use has spread to many different fields of applied research, such as X-ray and heavy ion astronomy, nuclear medicine, and protein crystallography. In many respects, however, multiwire proportional chambers are still experimental devices, requiring continuous attention for good operation and sometimes reacting in unexpected ways to a change in the environmental conditions. Furthermore, in the fabrication and operation of a chamber people seem to use a mixture of competence, technical skill and magic rites, of the kind ''I do not know why I'm doing this but somebody told me to do so''. In these notes the authors illustrate the basic phenomena underlying the behaviour of a gas detector, with the hope that the reader will not only better understand the reasons for some irrational-seeming preferences (such as, for example, in the choice of the gas mixture), but will also be able better to design detectors for his specific experimental needs
Proportion of patients in the Uganda rheumatic heart disease ...
African Journals Online (AJOL)
Proportion of patients in the Uganda rheumatic heart disease registry with advanced ... of Cardiology guidelines on the management of valvular heart disease. ... disease that require surgical treatment yet they cannot access this therapy due to ...
Digital signal processing for 3He proportional counters
International Nuclear Information System (INIS)
Takahashi, Hiroyuki; Kawarabayashi, Jun; Kurahashi, Tomohiko; Iguchi, Tetsuo; Nakazawa, Masaharu
1994-01-01
Numerical analysis of individual pulses from 3He proportional counters has been performed. A parametric approach has been used for the identification of a charged-particle track direction. Using area parameters, a clear separation of events was observed for the wall effect on a triton and a proton, respectively. (orig.)
Are Explicit Apologies Proportional to the Offenses They Address?
Heritage, John; Raymond, Chase Wesley
2016-01-01
We consider here Goffman's proposal of proportionality between virtual offenses and remedial actions, based on the examination of 102 cases of explicit apologies. To this end, we offer a typology of the primary apology formats within the dataset, together with a broad categorization of the types of virtual offenses to which these apologies are…
Effect of radiation loading on the performance of a proportional chamber
International Nuclear Information System (INIS)
Alekseev, T.D.; Kalinina, N.A.; Karpukhin, V.V.; Kruglov, V.V.; Khazins, D.M.
1980-01-01
The effect of the space charge which appears under radiation loading on the counting characteristics of a proportional chamber is experimentally investigated. Calculations are made which take into account the effect of the space charge of positive ions formed in the chamber. The investigations have been carried out on a test board which consists of a one-coordinate proportional chamber, a telescope of two scintillation counters and a collimated ⁹⁰Sr β-source. The proportional chamber has dimensions of 160x160 mm. The signal wires, 50 μm in diameter, are located with a step of s=10 mm. The high-voltage planes are wound with a wire of 100 μm diameter at a 2 mm step. The distance between the high-voltage planes is 18 mm. The chamber is flushed with a gaseous mixture of 57% Ar + 38% CH₄ + 5% (OCH₃)₂CH₂. During the measurements, the density of the radiation loading and the amplifier threshold are varied over wide ranges. The experimental results show a considerable effect of the radiation loading and of the amplifier threshold on the counting characteristic. This should be taken into account when estimating the performance of a proportional chamber from board testing with radioactive sources, as such conditions usually differ from those of a physical experiment at an accelerator.
Contribution of working memory in the parity and proportional judgments
Szymanik, J.K.; Zajenkowski, M.
2011-01-01
This paper presents experimental evidence on the differences in a sentence-picture verification task under additional memory load between parity and proportional quantifiers. We asked subjects to memorize strings of four or six digits, then to decide whether a quantified sentence was true for a
Amplification and discrimination of signals from proportional multiwire chambers
International Nuclear Information System (INIS)
Papadopoulos, L.; Bosio, C.; Cordelli, M.
1976-03-01
A circuit is described which detects signals from multiwire proportional chambers. The threshold is about 3 μA and the time jitter of the output pulse is better than 2.5 ns. The circuit has one negative input and two complementary outputs. The module comprises 8 channels with a common trigger-level control and was built as a NIM standard unit.
Attentional Control and the Relatedness Proportion Effect in Semantic Priming
Hutchison, Keith A.
2007-01-01
In 2 experiments, participants completed both an attentional control battery (OSPAN, antisaccade, and Stroop tasks) and a modified semantic priming task. The priming task measured relatedness proportion (RP) effects within subjects, with the color of the prime indicating the probability that the to-be-named target would be related. In Experiment…
Transposition of Knowledge: Encountering Proportionality in an Algebra Task
Lundberg, Anna. L. V.; Kilhamn, Cecilia
2018-01-01
This article reports on an analysis of the process in which "knowledge to be taught" was transposed into "knowledge actually taught," concerning a task including proportional relationships in an algebra setting in a grade 6 classroom. We identified affordances and constraints of the task by describing the mathematical…
Methods of calculus for neutron spectrometry in proportional counters
International Nuclear Information System (INIS)
Butragueno, J.L.; Blazquez, J.B.; Barrado, J.M.
1976-01-01
Response functions for cylindrical proportional counters with hydrogenated gases have been determined, taking into account only the wall effect, by means of two independent calculation methods: one a Monte Carlo application, the other entirely analytical. The results of both methods have been compared. (author)
Pulse-shape discrimination in IAEA tritium proportional counters
International Nuclear Information System (INIS)
Florkowski, T.
1981-01-01
Two systems of pulse-shape discrimination (PSD) for reducing the background of low-level proportional counters were tested. A tentative conclusion is drawn that although both PSD systems slightly decrease the meson background, they do not improve the analytical accuracy.
Development of proportional counters using photosensitive gases and liquids
International Nuclear Information System (INIS)
Anderson, D.F.
1984-10-01
An introduction to the history and to the principle of operation of wire chambers using photosensitive gases and liquids is presented. Their use as light sensors coupled to Gas Scintillation Proportional Counters and BaF₂, as well as their use in Cherenkov ring imaging, is discussed in some detail. 42 references, 21 figures
Position sensitive proportional counters as focal plane detectors
International Nuclear Information System (INIS)
Ford, J.L.C. Jr.
1979-01-01
The rise time and charge division techniques for position decoding with RC-line proportional counters are reviewed. The advantages that these detectors offer as focal plane counters for nuclear spectroscopy performed with magnetic spectrographs are discussed. The theory of operation of proportional counters as position sensing devices is summarized, as well as practical aspects affecting their application. Factors limiting the position and energy resolutions obtainable with a focal plane proportional counter are evaluated and measured position and energy loss values are presented for comparison. Detector systems capable of the multiparameter measurements required for particle identification, background suppression and ray-tracing are described in order to illustrate the wide applicability of proportional counters within complex focal plane systems. Examples of the use of these counters other than with magnetic spectrographs are given in order to demonstrate their usefulness in not only nuclear physics but also in fields such as solid state physics, biology, and medicine. The influence of the new focal plane detector systems on future magnetic spectrograph designs is discussed. (Auth.)
Nature and proportion of total injuries at the Stellenbosch Rugby ...
African Journals Online (AJOL)
Objective. The purpose of this study was to compare the nature and proportion of total injuries occurring at Stellenbosch Rugby Football Club in Stellenbosch, South Africa, between the years 1973 - 1975 and 2003 - 2005. Design. Retrospective, descriptive study. Main outcome measures. Injured rugby players from the ...
Flexible geometry hodoscope using proportional chamber cathode read-out
International Nuclear Information System (INIS)
Aubret, C.; Bellefon, A. de; Benoit, P.; Brunet, J.M.; Tristram, G.
1978-01-01
The construction of a cathode read-out proportional chamber, used as a low mass hodoscope is described. Results on efficiency, time resolution and space resolution are shown. The associative logic, which permits the use of the chamber as a coplanarity chamber is briefly presented
Proportionality in the New German Insurance Contract Act 2008
H. Heiss (Helmut)
2013-01-01
In 2008, the German legislature enacted a completely revised Insurance Contract Act, in which a new rule of proportionality replaced the former all-or-nothing principle for questions of liability. This article outlines the reasons for this shift and the impact of the
A proportional counter for efficient backscatter Mössbauer effect spectroscopy
International Nuclear Information System (INIS)
Pawlowski, Z.; Marzec, J.; Cudny, W.; Holnicka, J.; Walentek, J.
1979-01-01
The authors present a novel gas-tight proportional counter with flat beryllium windows for backscatter Mössbauer spectroscopy. The krypton-filled counter has a geometry that approaches 2π and a resolution of 12% FWHM for the 14.4 keV line of ⁵⁷Fe, and is easy to manufacture. (Auth.)
An Axiomatization of the Proportional Rule in Financial Networks
Csoka, Péter; Herings, P. Jean-Jacques
2017-01-01
The most important rule to determine payments in real-life bankruptcy problems is the proportional rule. Many bankruptcy problems are characterized by network aspects and default may occur as a result of contagion. Indeed, in financial networks with defaulting agents, the values of the agents'
Spatially tuned normalization explains attention modulation variance within neurons.
Ni, Amy M; Maunsell, John H R
2017-09-01
Spatial attention improves perception of attended parts of a scene, a behavioral enhancement accompanied by modulations of neuronal firing rates. These modulations vary in size across neurons in the same brain area. Models of normalization explain much of this variance in attention modulation with differences in tuned normalization across neurons (Lee J, Maunsell JHR. PLoS One 4: e4651, 2009; Ni AM, Ray S, Maunsell JHR. Neuron 73: 803-813, 2012). However, recent studies suggest that normalization tuning varies with spatial location both across and within neurons (Ruff DA, Alberts JJ, Cohen MR. J Neurophysiol 116: 1375-1386, 2016; Verhoef BE, Maunsell JHR. eLife 5: e17256, 2016). Here we show directly that attention modulation and normalization tuning do in fact covary within individual neurons, in addition to across neurons as previously demonstrated. We recorded the activity of isolated neurons in the middle temporal area of two rhesus monkeys as they performed a change-detection task that controlled the focus of spatial attention. Using the same two drifting Gabor stimuli and the same two receptive field locations for each neuron, we found that switching which stimulus was presented at which location affected both attention modulation and normalization in a correlated way within neurons. We present an equal-maximum-suppression spatially tuned normalization model that explains this covariance both across and within neurons: each stimulus generates equally strong suppression of its own excitatory drive, but its suppression of distant stimuli is typically less. This new model specifies how the tuned normalization associated with each stimulus location varies across space both within and across neurons, changing our understanding of the normalization mechanism and how attention modulations depend on this mechanism. NEW & NOTEWORTHY Tuned normalization studies have demonstrated that the variance in attention modulation size seen across neurons from the same cortical
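The divisive-normalization account above lends itself to a toy sketch. The functional form below is a generic tuned-normalization response in the spirit of the models the abstract cites; the parameter names, values, and the multiplicative attention-gain mechanism are illustrative assumptions, not the paper's fitted model.

```python
# Generic tuned-normalization response for two stimuli (hedged sketch;
# parameter names and values are illustrative, not the paper's fit).
def response(L1, L2, c1, c2, alpha, sigma, a1=1.0, a2=1.0):
    """Firing rate for stimuli with excitatory drives L1, L2 and contrasts
    c1, c2; alpha is the tuned strength of stimulus 2's suppressive drive,
    sigma the semi-saturation constant, a1/a2 attentional gains."""
    num = a1 * c1 * L1 + a2 * c2 * L2
    den = a1 * c1 + a2 * c2 * alpha + sigma
    return num / den

# Attend stimulus 1 versus stimulus 2 (gain of 4 on the attended stimulus):
r_att1 = response(60.0, 20.0, 0.5, 0.5, alpha=1.0, sigma=0.1, a1=4.0, a2=1.0)
r_att2 = response(60.0, 20.0, 0.5, 0.5, alpha=1.0, sigma=0.1, a1=1.0, a2=4.0)
mod_index = (r_att1 - r_att2) / (r_att1 + r_att2)  # attention modulation
```

With strong normalization (alpha near 1) the attention modulation index is large; letting alpha depend on the spatial location of each stimulus, as the paper proposes, makes the modulation vary within a single neuron.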
Kriging with Unknown Variance Components for Regional Ionospheric Reconstruction
Directory of Open Access Journals (Sweden)
Ling Huang
2017-02-01
Full Text Available Ionospheric delay effect is a critical issue that limits the accuracy of precise Global Navigation Satellite System (GNSS) positioning and navigation for single-frequency users, especially in mid- and low-latitude regions where variations in the ionosphere are larger. Kriging spatial interpolation techniques have recently been introduced to model the spatial correlation and variability of the ionosphere; they intrinsically assume that the ionosphere field is stochastically stationary but do not take the random observational errors into account. In this paper, by treating the spatial statistical information on the ionosphere as prior knowledge and based on Total Electron Content (TEC) semivariogram analysis, we use Kriging techniques to spatially interpolate TEC values. By assuming that the stochastic models of both the ionospheric signals and the measurement errors are known only up to some unknown factors, we propose a new Kriging spatial interpolation method with unknown variance components for both the signals of the ionosphere and the TEC measurements. Variance component estimation has been integrated with Kriging to reconstruct regional ionospheric delays. The method has been applied to data from the Crustal Movement Observation Network of China (CMONOC) and compared with ordinary Kriging and with polynomial interpolations using spherical cap harmonic functions, polynomial functions and low-degree spherical harmonic functions. The statistics of the results indicate that the daily ionospheric variations during the experimental period characterized by the proposed approach are in good agreement with the other methods, ranging from 10 to 80 TEC Units (TECU, 1 TECU = 1 × 10¹⁶ electrons/m²) with an overall mean of 28.2 TECU. The proposed method can produce more appropriate estimations whose general TEC level is as smooth as that of ordinary Kriging but with a smaller standard deviation, around 3 TECU, than the others. The residual results show that the interpolation precision of the
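The ordinary-Kriging predictor that the proposed variance-component method builds on can be sketched in a few lines; the exponential semivariogram, its parameters, and the synthetic TECU values below are assumptions for illustration, not the variogram fitted to CMONOC data.

```python
import numpy as np

# Minimal ordinary-Kriging sketch for spatially interpolating TEC values.
# The exponential semivariogram and all numbers are illustrative assumptions.

def semivariogram(h, nugget=0.5, sill=25.0, rng=10.0):
    """Exponential semivariogram gamma(h); the nugget here plays the role
    of an observational-error variance, so the predictor is not exact."""
    return nugget + (sill - nugget) * (1.0 - np.exp(-h / rng))

def ordinary_kriging(xy, z, x0):
    """Predict z at x0 from observations (xy, z), with the unbiasedness
    (weights-sum-to-one) constraint handled by a Lagrange multiplier."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = semivariogram(d)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = semivariogram(np.linalg.norm(xy - x0, axis=1))
    w = np.linalg.solve(A, b)   # n kriging weights + Lagrange multiplier
    return float(w[:n] @ z)

xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
z = np.array([28.0, 30.0, 26.0, 29.0])   # synthetic TEC values in TECU
z0 = ordinary_kriging(xy, z, np.array([0.5, 0.5]))
```

At the center of this symmetric layout the weights are all 1/4, so the prediction equals the mean of the four observations; the paper's contribution is to estimate the unknown variance factors of signal and noise rather than fixing them a priori as done here.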
Shashank, R.; Harisha, S. K.; Abhishek, M. C.
2018-02-01
Energy harvesting from ambient sources is one of the fastest-growing trends in the world, and research and development in the area is progressing steadily toward extracting maximum power from the existing resources. The ambient energy sources available in nature include solar, wind, thermal and vibrational energy. Among these, vibration-based energy harvesting has gained particular importance because it is not influenced by environmental parameters and is freely available at any time and anywhere. The project deals mainly with validating the voltage and electrical power output of an experimentally tested energy harvester, varying the parameters of the harvester, analysing the effect of those parameters on its performance and comparing the results. The cantilever beam was designed, analysed and simulated using the COMSOL Multiphysics software. The energy harvester gives an electrical output voltage of 2.75 V at a natural frequency of 37.2 Hz and an electrical power of 29 μW. Decreasing the percentage of piezoelectric material while simultaneously increasing the percentage of polymer material (so that the total proportion remains the same) increases the electrical voltage and decreases the natural frequency of the beam linearly, up to 3.9 V and 28.847 Hz at a proportion of 24% piezoelectric and 76% polymer; when the proportion is changed to 26% and 74%, the natural frequency continues to decrease but the voltage drops suddenly to 2.8 V. The voltage generated by the energy harvester increases proportionally, reaching 3.7 V as the weight of the proof mass is increased to 4 grams; further increases in the proof mass decrease the generated voltage. Thus the investigation conveys that the weight of the proof mass and the length of the cantilever beam should be optimised to obtain maximum
Estimation of measurement variance in the context of environment statistics
Maiti, Pulakesh
2015-02-01
The object of environment statistics is to provide information on the environment, on its most important changes over time and across locations, and to identify the main factors that influence them. Ultimately, environment statistics is required to produce higher-quality statistical information, for which timely, reliable and comparable data are needed. Lack of proper and uniform definitions and of unambiguous classifications poses serious problems for procuring good-quality data, and these cause measurement errors. We consider the problem of estimating the measurement variance so that measures may be adopted to improve the quality of data on environmental goods and services and on value statements in economic terms. The measurement technique considered here is that of employing personal interviewers, and the sampling design considered is two-stage sampling.
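A minimal variance-components view of interviewer-induced measurement error can be sketched as follows; the grouped responses are synthetic and the split into between- and within-interviewer parts is a simplified illustration, not the paper's two-stage estimator.

```python
import statistics

# Sketch: responses grouped by interviewer (synthetic, assumed data).
# A large between-interviewer component relative to the within-interviewer
# component signals measurement error attributable to the interviewers.
data = {
    "A": [5.1, 4.9, 5.3, 5.0],
    "B": [6.0, 6.2, 5.8, 6.1],
    "C": [5.5, 5.4, 5.6, 5.5],
}
means = {k: statistics.fmean(v) for k, v in data.items()}
between = statistics.pvariance(list(means.values()))      # between interviewers
within = statistics.fmean(
    [statistics.pvariance(v) for v in data.values()])     # within interviewers
```

Here `between` dominates `within`, which in the paper's setting would motivate adjusting the survey protocol or the estimator for interviewer effects.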
Risk Management - Variance Minimization or Lower Tail Outcome Elimination
DEFF Research Database (Denmark)
Aabo, Tom
2002-01-01
This paper illustrates the profound difference between a risk management strategy of variance minimization and a risk management strategy of lower tail outcome elimination. Risk managers concerned about the variability of cash flows will tend to center their hedge decisions on their best guess on future cash flows (the budget), while risk managers concerned about costly lower tail outcomes will hedge (considerably) less depending on the level of uncertainty. A risk management strategy of lower tail outcome elimination is in line with theoretical recommendations in a corporate value-adding perspective. A cross-case study of blue-chip industrial companies partly supports the empirical use of a risk management strategy of lower tail outcome elimination but does not exclude other factors from (co-)driving the observations.
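The contrast between the two strategies can be made concrete with a toy cash-flow simulation; the Gaussian exposure, the forward price, and the hedge ratios below are illustrative assumptions, not data from the cross-case study.

```python
import random
import statistics

# Toy contrast (assumed numbers): full hedging minimizes cash-flow variance,
# while a partial hedge already removes much of the costly lower tail.
random.seed(1)
exposure = [random.gauss(100.0, 30.0) for _ in range(100_000)]  # unhedged cash flow

def hedged(cf, hedge_ratio, forward=100.0):
    """Lock in a fraction of the exposure at an assumed forward price."""
    return [hedge_ratio * forward + (1 - hedge_ratio) * x for x in cf]

def lower_tail_mean(cf, q=0.05):
    """Mean of the worst q fraction of outcomes (a CVaR-style measure)."""
    worst = sorted(cf)[: max(1, int(q * len(cf)))]
    return statistics.fmean(worst)

full = hedged(exposure, 1.0)     # variance minimization: hedge everything
partial = hedged(exposure, 0.5)  # lower-tail manager: hedge considerably less
```

The full hedge drives the variance to zero, but the half hedge already lifts the mean of the worst 5% of outcomes substantially, which is the outcome a lower-tail-elimination manager cares about.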
Draft no-migration variance petition. Volume 1
International Nuclear Information System (INIS)
1995-01-01
The Department of Energy is responsible for the disposition of transuranic (TRU) waste generated by national defense-related activities. Approximately 2.6 million cubic feet of this waste has been generated and is stored at various facilities across the country. The Waste Isolation Pilot Plant (WIPP) was sited and constructed to meet stringent disposal requirements. In order to permanently dispose of TRU waste, the DOE has elected to petition the US EPA for a variance from the Land Disposal Restrictions of RCRA. This document fulfills the reporting requirements for the petition. This report is Volume 1, which discusses the regulatory framework, site characterization, facility description, waste description, environmental impact analysis, monitoring, quality assurance, long-term compliance analysis, and regulatory compliance assessment.
Static models, recursive estimators and the zero-variance approach
Rubino, Gerardo
2016-01-07
When evaluating dependability aspects of complex systems, most models belong to the static world, where time is not an explicit variable. These models suffer from the same problems as dynamic ones (stochastic processes), such as the frequent combinatorial explosion of the state spaces. In the Monte Carlo domain, one of the most significant difficulties is the rare-event situation. In this talk, we describe this context and a recent technique that appears to be at the top performance level in the area, in which we combined ideas that lead to very fast estimation procedures with another approach called zero-variance approximation. Both ideas produced a very efficient method that has the right theoretical property concerning robustness, the bounded relative error property. Some examples illustrate the results.
Batch variation between branchial cell cultures: An analysis of variance
DEFF Research Database (Denmark)
Hansen, Heinz Johs. Max; Grosell, M.; Kristensen, L.
2003-01-01
We present in detail how a statistical analysis of variance (ANOVA) is used to sort out the effect of an unexpected batch-to-batch variation between cell cultures. Two separate cultures of rainbow trout branchial cells were grown on permeable filter supports ("inserts"). They were supposed to be duplicates, but by introducing the observed difference between batches as one of the factors in an expanded three-dimensional ANOVA, we were able to overcome an otherwise crucial lack of sufficiently reproducible duplicate values. We could thereby show that the effect of changing the apical medium was much more marked when the radioactive lipid precursors were added on the apical, rather than on the basolateral, side. The insert cell cultures were obviously polarized. We argue that it is not reasonable to reject troublesome experimental results when we do not know a priori that something went wrong. The ANOVA is a very useful
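The idea of entering the batch as an explicit ANOVA factor can be sketched with synthetic data; the layout below is a simplified two-factor (batch × treatment) design with made-up effect sizes, not the paper's three-dimensional ANOVA of the insert cultures.

```python
import numpy as np

# Sketch: a nuisance batch offset would swamp a simple comparison, but as an
# explicit factor it is separated from the treatment effect (synthetic data).
rng = np.random.default_rng(0)
batches, treatments, reps = 2, 2, 5
data = np.empty((batches, treatments, reps))
for b in range(batches):
    for t in range(treatments):
        # treatment adds +2.0; batch 2 adds an assumed nuisance offset of +5.0
        data[b, t] = 10.0 + 2.0 * t + 5.0 * b + rng.normal(0.0, 0.5, reps)

grand = data.mean()
ss_batch = treatments * reps * ((data.mean(axis=(1, 2)) - grand) ** 2).sum()
ss_treat = batches * reps * ((data.mean(axis=(0, 2)) - grand) ** 2).sum()
cell_means = data.mean(axis=2)
ss_resid = ((data - cell_means[:, :, None]) ** 2).sum()

ms_treat = ss_treat / (treatments - 1)
ms_resid = ss_resid / (batches * treatments * (reps - 1))
F_treatment = ms_treat / ms_resid   # treatment F against the residual
```

Because the batch offset is absorbed by its own sum of squares instead of inflating the residual, the treatment F statistic stays large, which is the mechanism the abstract describes for rescuing the poorly duplicating cultures.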
Interdependence of NAFTA capital markets: A minimum variance portfolio approach
Directory of Open Access Journals (Sweden)
López-Herrera Francisco
2014-01-01
Full Text Available We estimate the long-run relationships among NAFTA capital market returns and then calculate the weights of a “time-varying minimum variance portfolio” that includes the Canadian, Mexican, and USA capital markets between March 2007 and March 2009, a period of intense turbulence in international markets. Our results suggest that the behavior of NAFTA market investors is not consistent with that of a theoretical “risk-averse” agent during periods of high uncertainty and may be either considered as irrational or attributed to a possible “home country bias”. This finding represents valuable information for portfolio managers and contributes to a better understanding of the nature of the markets in which they invest. It also has practical implications in the design of international portfolio investment policies.
Ant Colony Optimization for Markowitz Mean-Variance Portfolio Model
Deng, Guang-Feng; Lin, Woo-Tsong
This work presents Ant Colony Optimization (ACO), initially developed as a meta-heuristic for combinatorial optimization, for solving the cardinality-constrained Markowitz mean-variance portfolio model (a nonlinear mixed quadratic programming problem). To our knowledge, an efficient algorithmic solution for this problem has not been proposed until now, which makes the use of heuristic algorithms imperative. Numerical solutions are obtained for five analyses of weekly price data for the following indices for the period March 1992 to September 1997: Hang Seng 31 in Hong Kong, DAX 100 in Germany, FTSE 100 in the UK, S&P 100 in the USA and Nikkei 225 in Japan. The test results indicate that ACO is much more robust and effective than particle swarm optimization (PSO), especially for low-risk investment portfolios.
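For the unconstrained version of the model, the global minimum-variance weights have a closed form, which is why it is the cardinality-constrained variant that calls for a heuristic such as ACO; the covariance matrix below is illustrative, not estimated from the five index series.

```python
import numpy as np

# Closed-form global minimum-variance portfolio w = inv(S) 1 / (1' inv(S) 1)
# for an assumed 3-asset covariance matrix (no cardinality constraint here;
# adding "at most k assets" breaks this closed form and motivates heuristics).
cov = np.array([[0.10, 0.02, 0.04],
                [0.02, 0.08, 0.01],
                [0.04, 0.01, 0.12]])
ones = np.ones(3)
w = np.linalg.solve(cov, ones)
w /= w.sum()                       # weights sum to one
port_var = float(w @ cov @ w)      # minimized portfolio variance
```

The minimized variance is necessarily no larger than the variance of any single asset, since holding one asset is itself a feasible portfolio.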
Minimum variance linear unbiased estimators of loss and inventory
International Nuclear Information System (INIS)
Stewart, K.B.
1977-01-01
The article illustrates a number of approaches for estimating the material balance inventory and a constant loss amount from the accountability data from a sequence of accountability periods. The approaches all lead to linear estimates that have minimum variance. Techniques are shown whereby ordinary least squares, weighted least squares and generalized least squares computer programs can be used. Two approaches are recursive in nature and lend themselves to small specialized computer programs. Another approach is developed that is easy to program; could be used with a desk calculator and can be used in a recursive way from accountability period to accountability period. Some previous results are also reviewed that are very similar in approach to the present ones and vary only in the way net throughput measurements are statistically modeled. 5 refs
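The weighted/generalized least-squares route mentioned above can be sketched for the simplest case of a constant per-period loss; the measurement model and the per-period variances below are assumptions for illustration, not the article's accountability model.

```python
import numpy as np

# Sketch: minimum-variance linear unbiased (BLUE) estimate of a constant
# per-period loss from a sequence of material-balance values, via weighted
# least squares with known (assumed) per-period measurement variances.
rng = np.random.default_rng(42)
true_loss = 2.0
periods = 8
sigma = np.array([1.0, 1.0, 2.0, 2.0, 1.0, 3.0, 1.0, 2.0])  # per-period std dev
mb = true_loss + rng.normal(0.0, sigma)   # each material balance = loss + error

X = np.ones((periods, 1))                 # design: constant loss only
W = np.diag(1.0 / sigma**2)               # inverse covariance (diagonal case)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ mb)
loss_hat = float(beta[0])
var_loss_hat = float(np.linalg.inv(X.T @ W @ X)[0, 0])  # BLUE variance
```

The weighted estimate has a strictly smaller variance than the unweighted mean of the material balances whenever the per-period variances differ, which is the "minimum variance" property the article develops in more general recursive forms.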
Cosmic variance in inflation with two light scalars
Energy Technology Data Exchange (ETDEWEB)
Bonga, Béatrice; Brahma, Suddhasattwa; Deutsch, Anne-Sylvie; Shandera, Sarah, E-mail: bpb165@psu.edu, E-mail: suddhasattwa.brahma@gmail.com, E-mail: asdeutsch@psu.edu, E-mail: shandera@gravity.psu.edu [Institute for Gravitation and the Cosmos and Physics Department, The Pennsylvania State University, University Park, PA, 16802 (United States)
2016-05-01
We examine the squeezed limit of the bispectrum when a light scalar with arbitrary non-derivative self-interactions is coupled to the inflaton. We find that when the hidden-sector scalar is sufficiently light (m ≲ 0.1 H), the coupling between long- and short-wavelength modes from the series of higher-order correlation functions (from arbitrary-order contact diagrams) causes the statistics of the fluctuations to vary in sub-volumes. This means that observations of primordial non-Gaussianity cannot be used to uniquely reconstruct the potential of the hidden field. However, the local bispectrum induced by mode-coupling from these diagrams always has the same squeezed limit, so the field's locally determined mass is not affected by this cosmic variance.
Frick, Andrea; Möhring, Wenke
2016-01-01
Recent research has shown close links between spatial and mathematical thinking and between spatial abilities and motor skills. However, longitudinal research examining the relations between motor, spatial, and mathematical skills is rare, and the nature of these relations remains unclear. The present study thus investigated the relation between children’s motor control and their spatial and proportional reasoning. We measured 6-year-olds’ spatial scaling (i.e., the ability to reason about different-sized spaces), their mental transformation skills, and their ability to balance on one leg as an index for motor control. One year later (N = 126), we tested the same children’s understanding of proportions. We also assessed several control variables (verbal IQ and socio-economic status) as well as inhibitory control, visuo-spatial and verbal working memory. Stepwise hierarchical regressions showed that, after accounting for effects of control variables, children’s balance skills significantly increased the explained variance in their spatial performance and proportional reasoning. Our results suggest specific relations between balance skills and spatial as well as proportional reasoning skills that cannot be explained by general differences in executive functioning or intelligence. PMID:26793157
Directory of Open Access Journals (Sweden)
G. R. Pasha
2006-07-01
Full Text Available In this paper, we show how much the variances of the classical estimators, namely the maximum likelihood estimator and the moment estimator, deviate from the minimum variance bound when estimating the parameter of the Maxwell distribution. We also sketch this difference for the negative integer moment estimator. We note the poor performance of the negative integer moment estimator in this respect, while the maximum likelihood estimator attains the minimum variance bound and becomes an attractive choice.
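The comparison can be reproduced with a small Monte Carlo experiment; the scale parameter, sample size, and trial count below are arbitrary choices, while the estimators and the bound a²/(6n) follow from the standard Maxwell likelihood.

```python
import math
import random

# Monte Carlo sketch (not the paper's derivation): variances of the maximum
# likelihood (ML) and first-moment estimators of the Maxwell scale a,
# compared with the minimum variance (Cramer-Rao) bound a^2 / (6 n).
random.seed(7)
a_true, n, trials = 2.0, 200, 2000

def maxwell_sample(a, size):
    # |N(0, a^2 I_3)| follows a Maxwell distribution with scale a
    return [a * math.sqrt(sum(random.gauss(0.0, 1.0) ** 2 for _ in range(3)))
            for _ in range(size)]

mle_estimates, mom_estimates = [], []
for _ in range(trials):
    x = maxwell_sample(a_true, n)
    mle_estimates.append(math.sqrt(sum(v * v for v in x) / (3 * n)))  # ML: a^2 = sum(x^2)/(3n)
    mom_estimates.append(math.sqrt(math.pi / 8) * sum(x) / n)         # from E[X] = 2a*sqrt(2/pi)

def variance(vals):
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

mvb = a_true ** 2 / (6 * n)        # minimum variance bound
var_mle = variance(mle_estimates)
var_mom = variance(mom_estimates)
```

Asymptotically the ML estimator sits at the bound, while the moment estimator's variance, a²(3π/8 − 1)/n, is roughly 7% larger, the kind of gap the paper quantifies.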
International Nuclear Information System (INIS)
Garg, S.P.; Sharma, R.C.
1984-01-01
The X-ray response of single-wire-anode gas scintillation proportional counters of two different geometries, operated with argon + nitrogen gases in continuous flow, has been investigated with wire anodes of diameters 25 μm to 1.7 mm. An energy resolution of 19% is obtained for 5.9 keV X-rays entering the counter perpendicular to the anode in the pill-box geometry with the 25 μm diameter anode. With the cylindrical-geometry counters, the energy resolutions obtained at 5.9 keV are 18%, 24% and 33% for the 50 μm, 0.5 mm and 1.7 mm diameter anodes respectively. An analysis of the observed resolution shows that the contribution from photon-counting statistics to the relative variance of the scintillation pulses, even for X-rays in Ar-N₂ single-wire-anode gas scintillation proportional counters, is small and is not a limiting factor. The energy resolution with thicker anodes, where the contribution from the variance of the charge multiplication factor has also been minimised, is found to be degraded mainly by interactions in the scintillation production region. Comments are made on the possibility of improving the energy resolution by suppressing pulses due to such interactions with the help of the pulse rise-time discrimination technique. (orig.)
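The way independent relative-variance contributions combine into an energy resolution can be sketched as follows; the numerical variance budget is an assumption chosen to land near the quoted 19%, not the paper's measured decomposition.

```python
import math

# Sketch: FWHM energy resolution from summed independent relative-variance
# contributions to the light yield (all numbers are illustrative assumptions).
def fwhm_percent(*rel_variances):
    """FWHM resolution in percent: 2.355 * sqrt(total relative variance) * 100."""
    return 235.5 * math.sqrt(sum(rel_variances))

v_primary = 0.0020   # primary ionization statistics (assumed)
v_photon  = 0.0005   # photon-counting statistics (assumed small, as the paper finds)
v_gain    = 0.0040   # variance of the charge/light multiplication (assumed)
res = fwhm_percent(v_primary, v_photon, v_gain)
```

Because the contributions add in quadrature, a small photon-counting term barely moves the total, consistent with the abstract's conclusion that photon statistics are not the limiting factor.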
A comparative review of estimates of the proportion unchanged genes and the false discovery rate
Directory of Open Access Journals (Sweden)
Broberg Per
2005-08-01
Full Text Available Background: In the analysis of microarray data one generally produces a vector of p-values that for each gene gives the likelihood of obtaining equally strong evidence of change by pure chance. The distribution of these p-values is a mixture of two components corresponding to the changed genes and the unchanged ones. The focus of this article is how to estimate the proportion unchanged and the false discovery rate (FDR), and how to make inferences based on these concepts. Six published methods for estimating the proportion of unchanged genes are reviewed, two alternatives are presented, and all are tested on both simulated and real data. All estimates but one make do without any parametric assumptions concerning the distributions of the p-values. Furthermore, the estimation and use of the FDR and the closely related q-value are illustrated with examples. Five published estimates of the FDR and one new one are presented and tested. Implementations in R code are available. Results: A simulation model based on the distribution of real microarray data, plus two real data sets, was used to assess the methods. The proposed alternative methods for estimating the proportion unchanged fared very well, and gave evidence of low bias and very low variance. Different methods perform well depending upon whether there are few or many regulated genes. Furthermore, the methods for estimating the FDR showed varying performance, and were sometimes misleading. The new method had a very low error. Conclusion: The concept of the q-value or false discovery rate is useful in practical research, despite some theoretical and practical shortcomings. However, it seems possible to challenge the performance of the published methods, and there is likely scope for further developing the estimates of the FDR. The new methods provide the scientist with more options to choose a suitable method for any particular experiment. The article advocates the use of the conjoint information
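Two of the reviewed quantities, a Storey-type estimate of the proportion unchanged (pi0) and the Benjamini-Hochberg FDR threshold, can be sketched on simulated p-values; the mixture used to generate them below is illustrative, not one of the article's simulation models.

```python
import random

# Sketch: simulate a p-value mixture (80% unchanged genes -> uniform p-values,
# 20% changed genes -> p-values concentrated near 0; assumed shapes), then
# estimate pi0 and the BH rejection cutoff.
random.seed(3)
m, pi0_true = 10_000, 0.8
pvals = [random.random() for _ in range(int(m * pi0_true))] + \
        [random.random() ** 6 for _ in range(m - int(m * pi0_true))]

def storey_pi0(p, lam=0.5):
    """pi0 estimate: fraction of p-values above lambda, rescaled by 1-lambda."""
    return sum(pv > lam for pv in p) / (len(p) * (1.0 - lam))

def bh_threshold(p, q=0.05):
    """Largest p-value cutoff of the Benjamini-Hochberg step-up at level q."""
    n = len(p)
    thresh = 0.0
    for i, pv in enumerate(sorted(p), start=1):
        if pv <= q * i / n:
            thresh = pv
    return thresh

pi0_hat = storey_pi0(pvals)
cutoff = bh_threshold(pvals)
```

The pi0 estimate overshoots slightly because some changed genes also yield p-values above lambda, the kind of bias the article's comparisons quantify; genes with p-values at or below `cutoff` are declared changed.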
Visualizing Proportions and Dissimilarities by Space-filling Maps
DEFF Research Database (Denmark)
Carrizosa, Emilio; Guerrero, Vanesa; Morales, Dolores Romero
2017-01-01
In this paper we address the problem of visualizing a set of individuals, each of which has attached a statistical value given as a proportion, together with a dissimilarity measure. Each individual is represented as a region within the unit square, in such a way that the areas of the regions represent the proportions and the distances between them represent the dissimilarities. To enhance the interpretability of the representation, the regions are required to satisfy two properties. First, they must form a partition of the unit square, namely, the portions into which it is divided must cover its area without overlapping. … The resulting problem is solved heuristically by using the Large Neighborhood Search technique. The methodology proposed in this paper is applied to three real-world datasets: the first one concerning financial markets in Europe and Asia, the second one about the letters in the English alphabet, and finally the provinces
Combining proportional and majoritarian democracy: An institutional design proposal
Directory of Open Access Journals (Sweden)
Steffen Ganghof
2016-08-01
Full Text Available The article proposes a new way to combine the “proportional” and “majoritarian” visions of democracy. The proposal blends elements of mixed electoral systems, parliamentarism, presidentialism and bicameralism. Voters are given a single vote to make two simultaneous choices: one about the proportional composition of the legislature and one about the two top parties forming a majoritarian “confidence chamber” embedded within the legislature. Only the majority in this chamber has the power to dismiss the cabinet in a vote of no-confidence. The proposed system virtually guarantees the feasibility of identifiable and stable one-party cabinets governing with shifting, issue-specific majorities in a highly proportional legislature. It is illustrated with respect to the 2013 federal election in Germany.
Measurements of electron attachment by oxygen molecule in proportional counter
Energy Technology Data Exchange (ETDEWEB)
Tosaki, M., E-mail: tosaki.mitsuo.3v@kyoto-u.ac.jp [Radioisotope Research Center, Kyoto University, Kyoto 606-8501 (Japan); Kawano, T. [National Institute for Fusion Science, 322-6 Oroshi, Toki 509-5292 (Japan); Isozumi, Y. [Radioisotope Research Center, Kyoto University, Kyoto 606-8501 (Japan)
2013-11-15
We present pulse height measurements for 5-keV Auger electrons from a radioactive ⁵⁵Fe source mounted at the inner cathode surface of a cylindrical proportional counter operated with CH₄ admixed with dry air or N₂. A clear shift of the pulse height has been observed on varying the amount of the admixtures; the number of electrons created in the primary ionization by the Auger electrons is decreased by electron attachment to the admixtures during their drift from the region near the source to the anode wire. The large gas amplification (typically 10⁴) in the secondary ionization of the proportional counter makes it possible to investigate a small change in the number of primary electrons. The electron attenuation cross-section of O₂ has been evaluated by analyzing the shifts of the pulse height caused by electron attachment to dry air and N₂.
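The attenuation relation implicit in this analysis can be sketched directly; the number density, cross-section, and drift length below are placeholder values, not the paper's measured quantities.

```python
import math

# Sketch: primary electrons surviving a drift length L through an attaching
# admixture fall off exponentially, so a pulse-height ratio (with/without the
# admixture) can be inverted for an attenuation cross-section.
# All numbers are illustrative assumptions.

def surviving_fraction(n_attach, sigma_att, L):
    """n_attach: attaching-molecule number density (cm^-3),
    sigma_att: attenuation cross-section (cm^2), L: drift length (cm)."""
    return math.exp(-n_attach * sigma_att * L)

def cross_section_from_shift(ratio, n_attach, L):
    """Invert the observed pulse-height ratio for the cross-section."""
    return -math.log(ratio) / (n_attach * L)

n_o2 = 2.5e15        # cm^-3, assumed trace O2 density
sigma = 1.0e-17      # cm^2, assumed cross-section
L = 1.0              # cm, assumed drift length
ratio = surviving_fraction(n_o2, sigma, L)
sigma_back = cross_section_from_shift(ratio, n_o2, L)
```

The round trip recovers the assumed cross-section, which is exactly the inversion that the large gas amplification makes practical for small pulse-height shifts.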
Nonlinear kinematic hardening under non-proportional loading
International Nuclear Information System (INIS)
Ottosen, N.S.
1979-07-01
Within the framework of conventional plasticity theory, it is first determined under which conditions Melan-Prager's and Ziegler's kinematic hardening rules result in identical material behaviour. Next, assuming initial isotropy and adopting the von Mises yield criterion, a nonlinear kinematic hardening function is proposed for the prediction of metal behaviour. The model assumes that hardening at a specific stress point depends on the direction of the new incremental loading. Hereby a realistic response is obtained for general reversed loading, and a smooth behaviour is assured even when loading deviates more and more from proportional loading and ultimately results in reversed loading. The predictions of the proposed model for non-proportional loading under plane stress conditions are compared with those of the classical linear kinematic model, the isotropic model and published experimental data. Finally, the limitations of the proposed model are discussed. (author)
DEVELOPMENT OF HETEROGENEOUS PROPORTIONAL COUNTERS FOR NEUTRON DOSIMETRY.
Forouzan, Faezeh; Waker, Anthony J
2018-01-10
The use of a custom-made cylindrical graphite proportional counter (Cy-GPC) along with a cylindrical tissue equivalent proportional counter (TEPC) for neutron-gamma mixed-field dosimetry has been studied in the following steps: first, the consistency of the gamma dose measurement between the Cy-TEPC and the Cy-GPC was investigated over a range of 20 keV (X-ray) to 0.661 MeV (Cs-137 gamma ray). Then, with both the counters used simultaneously, the neutron and gamma ray doses produced by a P385 Neutron Generator (Thermo Fisher Scientific) together with a Cs-137 gamma source were determined.
New proportional counter assembly in Gliwice 14C laboratory
International Nuclear Information System (INIS)
Moscicki, W.; Zastawny, A.
1977-01-01
The design and parameters of a proportional counter for low level counting are described. The cathode tube, 80 mm in diameter and 30 cm in length, is made of pure copper. The anode is a tungsten wire 0.05 mm in diameter. The cathode tube is surrounded by a cylindrical ring container with mercury. The total volume of the counter is 1.5 l and it is filled with carbon dioxide. At a CO2 pressure of 1 at the counter background is 4.20+-0.05 cpm and the contemporary 14C net effect is 10.22+-0.10 cpm; at a CO2 pressure of 2 at the background is 4.40+-0.05 cpm and the contemporary 14C net effect is 20.53+-0.10 cpm. The efficiency of the proportional counter is 88% in both cases. (J.B.)
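The quoted rates make the benefit of the higher filling pressure easy to quantify with the conventional low-level-counting figure of merit E^2/B (the formula choice is ours, not the authors'; the rate values are copied from the abstract):

```python
def figure_of_merit(net_cpm, bkg_cpm):
    """Conventional low-level-counting figure of merit E^2/B (higher is better)."""
    return net_cpm ** 2 / bkg_cpm

fom_1at = figure_of_merit(10.22, 4.20)  # 1 at CO2 filling
fom_2at = figure_of_merit(20.53, 4.40)  # 2 at CO2 filling
# Doubling the filling pressure roughly doubles the net 14C rate while the
# background barely changes, so E^2/B improves by nearly a factor of four.
```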
Genetics of human body size and shape: body proportions and indices.
Livshits, Gregory; Roset, A; Yakovenko, K; Trofimov, S; Kobyliansky, E
2002-01-01
The study of the genetic component in morphological variables such as body height and weight, head and chest circumference, etc. has a rather long history. However, only a few studies investigated body proportions and configuration. The major aim of the present study was to evaluate the extent of the possible genetic effects on the inter-individual variation of a number of body configuration indices amenable to clear functional interpretation. Two ethnically different pedigree samples were used in the study: (1) Turkmenians (805 individuals) from Central Asia, and (2) Chuvasha (732 individuals) from the Volga riverside, Russian Federation. To achieve the aim of the present study we proposed three new indices, which were subjected to a statistical-genetic analysis using a modified version of "FISHER" software. The proposed indices were: (1) an integral index of torso volume (IND#1), an index reflecting a predisposition of body proportions to maintain a balance in a vertical position (IND#2), and an index of skeletal extremities volume (IND#3). Additionally, the first two principal factors (PF1 and PF2) obtained on 19 measurements of body length and breadth were subjected to genetic analysis. Variance decomposition analysis, which simultaneously assesses the contribution of gender, age, additive genetic effects and effects of environment shared by nuclear family members, was applied to fit variation of the above three indices, and PF1 and PF2. The raw familial correlations of all study traits in both samples showed: (1) all marital correlations did not differ significantly from zero; (2) parent-offspring and sibling correlations were all positive and statistically significant. The parameter estimates obtained in variance analyses showed that from 40% to 75% of inter-individual variation of the studied traits (adjusted for age and sex) were attributable to genetic effects. For PF1 and PF2 in both samples, and for IND#2 (in Chuvasha pedigrees), significant common sib
Sexually dimorphic proportions of the harbour porpoise (Phocoena phocoena) skeleton
DEFF Research Database (Denmark)
Galatius, Anders
2005-01-01
Sexual differences in growth, allometric growth patterns and skeletal proportions were investigated by linear measurements of skeletal parts on 225 harbour porpoises (Phocoena phocoena) from the inner Danish and adjacent waters. Females show larger asymptotic sizes and an extended period of growth...... to females is also found in the vertebral epiphyses that mature later in males than females, although the males finish growth at a younger age....
Multiwire proportional counter (readout by an electromagnetic delay line)
International Nuclear Information System (INIS)
Bruere-Dawson, R.
1989-01-01
For track localisation of ionizing particles with a multiwire proportional chamber, an electronic chain including amplifying, shaping and memorizing circuits is required for each wire. In order to lower the cost of this type of detector, an electromagnetic delay line is proposed among various possibilities. In this paper, different coupling modes between chamber and delay line are studied with their respective advantages. The realization of a one meter long delay line with a unit delay time of 15 ns per cm is also presented. [fr]
Quench gases for xenon- (and krypton-)filled proportional counters
International Nuclear Information System (INIS)
Ramsey, B.D.; Agrawal, P.C.
1988-01-01
Xenon-filled proportional counters are used extensively in astronomy, particularly in the hard X-ray region. The choice of quench gas can have a significant effect on the operating characteristics of the instrument although the data necessary to make the choice are not easily obtainable. We present results which detail the performance obtained from both cylindrical and parallel field geometries for a wide variety of readily available, ultrahigh or research grade purity, quench gases. (orig.)
Azimuthal spread of the avalanche in proportional chambers
International Nuclear Information System (INIS)
Okuno, H.; Fischer, J.; Radeka, V.; Walenta, A.H.
1978-10-01
The angular distribution of the avalanche around the anode wire in the gas proportional counter is determined by measuring the distribution of positive ions arriving on cathode strips surrounding the anode wire for each single event. The shape and width of the distribution depend on such factors as the gas gain, the anode diameter, the counting gas and the primary ionization density. Effects of these factors are studied systematically, and their importance for practical counter applications is discussed
Metrology and Proportion in the Ecclesiastical Architecture of Medieval Ireland
Behan, Avril; Moss, Rachel
2008-01-01
The aim of this paper is to examine the extent to which detailed empirical analysis of the metrology and proportional systems used in the design of Irish ecclesiastical architecture can be analysed to provide historical information not otherwise available. Focussing on a relatively limited sample of window tracery designs as a case study, it will first set out to establish what, if any, systems were in use, and then what light these might shed on the background, training and work practices of...
Data-driven smooth tests of the proportional hazards assumption
Czech Academy of Sciences Publication Activity Database
Kraus, David
2007-01-01
Roč. 13, č. 1 (2007), s. 1-16 ISSN 1380-7870 R&D Projects: GA AV ČR(CZ) IAA101120604; GA ČR(CZ) GD201/05/H007 Institutional research plan: CEZ:AV0Z10750506 Keywords : Cox model * Neyman's smooth test * proportional hazards assumption * Schwarz's selection rule Subject RIV: BA - General Mathematics Impact factor: 0.491, year: 2007
Populational Growth Models Proportional to Beta Densities with Allee Effect
Aleixo, Sandra M.; Rocha, J. Leonel; Pestana, Dinis D.
2009-05-01
We consider population growth models with Allee effect, proportional to beta densities with shape parameters p and 2, where the dynamical complexity is related to the Malthusian parameter r. For p>2, these models exhibit population dynamics with a natural Allee effect. However, in the case of 1
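A minimal discrete-time sketch of such a model (our own toy parametrization: growth proportional to a Beta(p, 2) density kernel x^(p-1)(1-x); all parameter values are illustrative):

```python
def beta_map(x, r, p):
    """One step of a toy growth model proportional to a Beta(p, 2) density
    kernel: x_{n+1} = r * x**(p-1) * (1 - x).  For p > 2 the growth rate
    vanishes at low density, producing an Allee-type extinction threshold."""
    return r * x ** (p - 1) * (1.0 - x)

def orbit(x0, r, p, n):
    """Iterate the map n times from x0 and return the whole trajectory."""
    xs = [x0]
    for _ in range(n):
        xs.append(beta_map(xs[-1], r, p))
    return xs

# p = 3, r = 4.5: unstable threshold at x = 1/3, stable equilibrium at x = 2/3
low = orbit(0.05, 4.5, 3.0, 50)[-1]   # start below threshold -> extinction
high = orbit(0.50, 4.5, 3.0, 50)[-1]  # start above threshold -> persistence
```

The contrast between the two trajectories is the Allee effect: identical dynamics, opposite fates, depending only on the initial density.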
Fast multiwire proportional chamber data encoding system for proton tomography
International Nuclear Information System (INIS)
Brown, D.
1979-01-01
A data encoding system that rapidly generates the binary address of an active wire in a 512-wire multiwire proportional chamber has been developed. It can accept a second event on a different wire after a deadtime of 130 ns. The system incorporates preprocessing of the wire data to reject events that would require more than one wire address. It also includes a first-in, first-out memory to buffer the data flow
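The encoding and single-wire rejection logic can be sketched in a few lines (a software analogue of the hardware preprocessing, with hypothetical function names; the real system does this combinatorially within the 130 ns deadtime):

```python
def encode_wire(hits):
    """Return the 9-bit binary address of the single active wire in a
    512-wire chamber, or None when zero or several wires fired (the
    preprocessing rejection of multi-wire events described above)."""
    fired = set(hits)
    if len(fired) != 1:
        return None                  # reject empty and multi-wire events
    wire = fired.pop()
    if not 0 <= wire < 512:
        raise ValueError("wire number out of range")
    return format(wire, "09b")       # 9-bit address, MSB first

# encode_wire([300]) -> '100101100'; encode_wire([3, 4]) -> None
```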
School Planning & Management, 2002
2002-01-01
Several architects, planners, administrators, and contractors answer questions about trends related to school construction, interior design, business, security, and technology. Trends concern funding issues, specialized designs, planning for safety, technological integration, and equity in services. (EV)
Circle, Alison
2009-01-01
This article identifies 13 cultural trends that libraries can turn into opportunities to reach patrons. These trends include: Twitter, online reputation management, value added content, mobile marketing, and emotional connection.
Relating the Electrical Resistance of Fresh Concrete to Mixture Proportions.
Obla, K; Hong, R; Sherman, S; Bentz, D P; Jones, S Z
2018-01-01
Characterization of fresh concrete is critical for assuring the quality of our nation's constructed infrastructure. While fresh concrete arriving at a job site in a ready-mixed concrete truck is typically characterized by measuring temperature, slump, unit weight, and air content, here the measurement of the electrical resistance of a freshly cast cylinder of concrete is investigated as a means of assessing mixture proportions, specifically cement and water contents. Both cement and water contents influence the measured electrical resistance of a sample of fresh concrete: the cement by producing ions (chiefly K+, Na+, and OH-) that are the main source of electrical conduction; and the water by providing the main conductive pathways through which the current travels. Relating the measured electrical resistance to attributes of the mixture proportions, such as water-cement ratio by mass (w/c), is explored for a set of eleven different concrete mixtures prepared in the laboratory. In these mixtures, w/c, paste content, air content, fly ash content, high range water reducer dosage, and cement alkali content are all varied. Additionally, concrete electrical resistance data is supplemented by measuring the resistivity of its component pore solution obtained from 5 laboratory-prepared cement pastes with the same proportions as their corresponding concrete mixtures. Only measuring the concrete electrical resistance can provide a prediction of the mixture's paste content or the product w*c; conversely, when pore solution resistivity is also available, w/c and water content of the concrete mixture can be reasonably assessed.
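The pairing of bulk-concrete and pore-solution measurements can be sketched via a formation factor, a standard way to separate the geometric water-pathway contribution from the pore-solution chemistry (our sketch, not necessarily the authors' exact analysis; numerical inputs are illustrative):

```python
import math

def cylinder_resistivity(resistance_ohm, diameter_m, length_m):
    """Bulk resistivity rho = R*A/L of a cylinder measured end to end
    (uniaxial two-electrode geometry assumed)."""
    area = math.pi * (diameter_m / 2.0) ** 2
    return resistance_ohm * area / length_m

def formation_factor(bulk_resistivity, pore_resistivity):
    """F = rho_bulk / rho_pore; F reflects the volume and connectivity of the
    water-filled pathways, so pairing the two measurements helps isolate w/c
    from the ionic-strength contribution of the cement."""
    return bulk_resistivity / pore_resistivity

# Illustrative numbers (not from the paper): a 100 mm x 200 mm cylinder
rho = cylinder_resistivity(50.0, 0.10, 0.20)  # ohm*m
F = formation_factor(rho, 0.5)
```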
Proportionality in enterprise development of South African towns
Directory of Open Access Journals (Sweden)
Maitland T. Seaman
2012-05-01
Full Text Available We investigated proportionalities in the enterprise structures of 125 South African towns through examining four hypotheses: (1) the magnitude of enterprise development in a town is a function of the population size of the town; (2) the size of the enterprise assemblage of a town is a function of the town’s age; (3) there are statistically significant relationships, and hence proportionalities, between the total number of enterprises in towns and some, if not all, of the enterprise numbers of different business sectors in towns; and (4) proportionalities have far-reaching implications for rural development and job creation. All hypotheses were accepted on the basis of statistically significant (p < 0.05) correlations, except for the second hypothesis – the age of a town does not determine the size of its enterprise assemblage. Analysis for the fourth hypothesis suggested that there are two broad entrepreneurial types in South African towns: ‘run-of-the-mill’ entrepreneurs and ‘special’ entrepreneurs, which give rise to different enterprise development dynamics. ‘Run-of-the-mill’ enterprises are dependent on, and limited by, local demand and if there is only a small demand, the entrepreneurial space is small. By comparison, ‘special’ enterprises have much larger markets because their products and/or services are exportable. We propose that the fostering of ‘special’ entrepreneurs is an imperative for local economic development in South African towns.
Contributions to the methodology of multiwire proportional chambers
International Nuclear Information System (INIS)
Petrascu, H.
1993-01-01
The Ph.D. thesis first presents the realization, testing, optimization, and use of detection equipment based on position-sensitive multiwire proportional chambers (MWPC), high resolution proportional counters, and ΔE,E ionization chambers. The second chapter presents the realization of MWPCs in which the coordinate information is obtained by means of LC delay lines, containing many original constructive elements. Using a system of three MWPCs in coincidence, an experimental test of the theoretically predicted pion generation in the 235U fission process was performed; an upper limit of 10^-11 for this process was found. The third chapter presents the development of high resolution proportional counters for X-ray spectrometry. Various Penning mixtures of high purity gases were studied; the purity of the gases was assured by a technology described in Romanian patents. These counters are currently in use in applications such as rapid analysis of steel grades and in the mining industry. The fourth chapter is dedicated to the construction and use of closed, high resolution ΔE,E ionization chambers. With these chambers the multinucleon transfer in the 27Al(14N, X) reaction at 116 MeV bombarding energy was investigated. This type of chamber was also used to develop an absolute method for the analysis and depth profiling of impurities in silicon wafers, which is described in the last part of the chapter. (Author) 117 Figs., 8 Tabs., 55 Refs
An extended parametrization of gas amplification in proportional wire chambers
International Nuclear Information System (INIS)
Beingessner, S.P.; Carnegie, R.K.; Hargrove, C.K.
1987-01-01
It is normally assumed that the gas amplification in proportional chambers is a function of Townsend's first ionization coefficient, α, and that α is a function of the anode surface electric field only. Experimental measurements are presented demonstrating the breakdown of the latter assumption for electric fields, X, greater than about 150 V/cm/Torr on the anode wire surface for a gas mixture of 80/20 argon/methane. For larger values of X, the parametrization of the proportional gas gain data requires an additional term related to the gradient of the electric field near the wire. This extended gain parametrization remains valid until the onset of nonproportional contributions such as positive ion space charge saturation effects. Furthermore, deviations of the data from this parametrization are used to measure the onset of these space charge effects. A simple scaling dependence of the gain data on the product of pressure and wire radius over the whole proportional range is also demonstrated. (orig.)
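The conventional parametrization that this work extends can be sketched as follows (standard proportional-counter notation assumed, not taken from the paper: anode radius a, cathode radius b, applied voltage V, critical radius r_c where multiplication begins):

```latex
\ln G = \int_{a}^{r_{c}} \alpha\bigl(S(r)\bigr)\,\mathrm{d}r,
\qquad
S(r) = \frac{V}{r\,\ln(b/a)}
```

The extension described above amounts to replacing α(S) by α(S, dS/dr) once the reduced field at the wire surface exceeds roughly 150 V/cm/Torr, with deviations from even this form signalling the onset of space charge saturation.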
Duality between resource reservation and proportional share resource allocation
Stoica, Ion; Abdel-Wahab, Hussein; Jeffay, Kevin
1997-01-01
We describe a new framework for resource allocation that unifies the well-known proportional share and resource reservation policies. Each client is characterized by two parameters: a weight that represents the rate at which the client 'pays' for the resource, and a share that represents the fraction of the resource that the client should receive. A fixed rate corresponds to a proportional share allocation, while a fixed share corresponds to a reservation. Furthermore, rates and shares are duals of each other: once one parameter is fixed, the other is determined as well. If a client asks for a fixed share, the level of competition for the resource determines the rate at which it has to pay, while if the rate is fixed, the level of competition determines the service the client receives. To implement this framework we use a new proportional share algorithm, called earliest eligible virtual deadline first, that achieves optimal accuracy in the rates at which processes execute. This makes it possible to provide support for highly predictable, real-time services. As a proof of concept we have implemented a prototype of a CPU scheduler under the FreeBSD operating system. The experimental results show that our scheduler achieves the goal of providing integrated support for batch and real-time applications.
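The weighted virtual-deadline idea behind the scheduler can be sketched compactly. This is our simplified illustration of the policy named in the abstract, not the authors' implementation; in particular, the eligibility bookkeeping against global virtual time is omitted for brevity:

```python
class Client:
    def __init__(self, name, weight, quantum):
        self.name = name
        self.weight = weight    # rate at which the client "pays"
        self.quantum = quantum  # service requested per allocation
        self.vtime = 0.0        # client's virtual time

def eevdf_schedule(clients, slices):
    """Simplified earliest-virtual-deadline-first loop: each pending request
    carries a virtual deadline vtime + quantum/weight, and the request with
    the earliest deadline runs next."""
    schedule = []
    for _ in range(slices):
        best = min(clients, key=lambda c: c.vtime + c.quantum / c.weight)
        best.vtime += best.quantum / best.weight
        schedule.append(best.name)
    return schedule

# Clients with weights 2:1 receive service slices in a 2:1 ratio.
sched = eevdf_schedule([Client("A", 2, 1), Client("B", 1, 1)], 9)
```

Because virtual time advances inversely to weight, the heavier client accumulates deadlines more slowly and is chosen proportionally more often, which is exactly the proportional-share behaviour the duality argument relies on.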
Body physique and proportionality of Brazilian female artistic gymnasts.
Bacciotti, Sarita; Baxter-Jones, Adam; Gaya, Adroaldo; Maia, José
2018-04-01
This study aimed to identify physique characteristics (anthropometry, somatotype, body proportionality) of Brazilian female artistic gymnasts, and to compare them across competitive levels (sub-elite versus non-elite) within competitive age-categories. Two hundred forty-nine female gymnasts (68 sub-elite; 181 non-elite) from 26 Brazilian gymnastics clubs, aged 9-20 years and split into four age-categories, were sampled. Gymnasts were assessed for 16 anthropometric traits (height, weight, lengths, widths, girths, and skinfolds); somatotype was determined according to Heath-Carter method, body fat was estimated by bioimpedance, and proportionality was computed based on the z-phantom strategy. Non-elite and sub-elite gymnasts had similar values in anthropometric characteristics, however non-elite had higher fat folds in all age-categories (P < 0.01). In general, mesomorphy was the salient somatotype component in all age-categories, and an increase in endomorphy, followed by a decrease in ectomorphy across age was observed. Regarding proportionality, profile similarity was found between sub-elite and non-elite within age-categories. In conclusion results suggest the presence of a typical gymnast's physical prototype across age and competitive level, which can be useful to coaches during their selection processes in clubs and regional/national teams.
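The z-phantom proportionality strategy mentioned above rescales each measurement to a reference "phantom" stature before standardizing. A minimal sketch of the standard formula (the phantom mean and SD below are placeholders; real phantom constants are tabulated per trait):

```python
def phantom_z(value, stature_cm, phantom_mean, phantom_sd, d=1):
    """Phantom z-score used in proportionality analysis: the measurement is
    geometrically rescaled to the phantom stature of 170.18 cm (exponent
    d = 1 for lengths/girths/breadths, d = 3 for mass) and expressed in
    phantom standard deviations."""
    return (value * (170.18 / stature_cm) ** d - phantom_mean) / phantom_sd

# Illustrative constants only; not real phantom values for any trait.
z = phantom_z(25.0, 150.0, 28.0, 1.5)
```

Comparing such z-profiles across groups is what allows the "profile similarity" conclusion between sub-elite and non-elite gymnasts.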
Operation and application of tissue-equivalent proportional counters
International Nuclear Information System (INIS)
Gerdung, S.; Roos, H.
1995-01-01
The application of TEPCs during the past decades in dosimetry, radiation protection and radiation therapy has revealed their large potential but also the necessity of careful operation. This paper reviews the experience gathered in the past and summarises the experimental procedures to ensure proper TEPC operation. The measurement system is described including detector, electronics and quality assurance. The pulse height analysis and its interpretation in terms of microdosimetric spectra and mean values are presented as well as the variance method. On the basis of these evaluation procedures, the second part of the paper presents some typical examples of TEPC applications: dose spectrometry, time-of-flight techniques and the measurement of dose equivalent quantities. Special attention is paid to possible extensions but also to limitations of the use of TEPCs in the various fields of application. (Author)
LDPC Codes with Minimum Distance Proportional to Block Size
Divsalar, Dariush; Jones, Christopher; Dolinar, Samuel; Thorpe, Jeremy
2009-01-01
Low-density parity-check (LDPC) codes characterized by minimum Hamming distances proportional to block sizes have been demonstrated. Like the codes mentioned in the immediately preceding article, the present codes are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. The previously mentioned codes have low decoding thresholds and reasonably low error floors. However, the minimum Hamming distances of those codes do not grow linearly with code-block sizes. Codes that have this minimum-distance property exhibit very low error floors. Examples of such codes include regular LDPC codes with variable degrees of at least 3. Unfortunately, the decoding thresholds of regular LDPC codes are high. Hence, there is a need for LDPC codes characterized by both low decoding thresholds and, in order to obtain acceptably low error floors, minimum Hamming distances that are proportional to code-block sizes. The present codes were developed to satisfy this need. The minimum Hamming distances of the present codes have been shown, through consideration of ensemble-average weight enumerators, to be proportional to code block sizes. As in the cases of irregular ensembles, the properties of these codes are sensitive to the proportion of degree-2 variable nodes. A code having too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code having too many such nodes tends not to exhibit a minimum distance that is proportional to block size. Results of computational simulations have shown that the decoding thresholds of codes of the present type are lower than those of regular LDPC codes. Included in the simulations were a few examples from a family of codes characterized by rates ranging from low to high and by thresholds that adhere closely to their respective channel capacity thresholds; the simulation results from these examples showed that the codes in question have low
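The minimum-distance property discussed above can be made concrete on a toy code. A brute-force checker (ours, purely illustrative; the paper's ensemble-average weight-enumerator analysis is what scales to real block sizes):

```python
import itertools

def min_distance(H):
    """Brute-force minimum Hamming distance of the binary linear code with
    parity-check matrix H (list of rows): the smallest weight of a nonzero
    word satisfying every check.  Feasible only for toy block lengths."""
    n = len(H[0])
    best = n + 1
    for bits in itertools.product([0, 1], repeat=n):
        if any(bits) and all(
            sum(h * b for h, b in zip(row, bits)) % 2 == 0 for row in H
        ):
            best = min(best, sum(bits))
    return best

# (7,4) Hamming code: three checks, block length 7, minimum distance 3
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
```

For the codes in the abstract the claim is stronger: as the block size n grows, the minimum distance grows linearly in n, which is what suppresses the error floor.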
Continuous-Time Mean-Variance Portfolio Selection under the CEV Process
Ma, Hui-qiang
2014-01-01
We consider a continuous-time mean-variance portfolio selection model when stock price follows the constant elasticity of variance (CEV) process. The aim of this paper is to derive an optimal portfolio strategy and the efficient frontier. The mean-variance portfolio selection problem is formulated as a linearly constrained convex program problem. By employing the Lagrange multiplier method and stochastic optimal control theory, we obtain the optimal portfolio strategy and mean-variance effici...
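The static, single-period analogue of this problem has a closed-form solution that conveys the shape of the efficient frontier. A two-asset sketch (our illustration, not the paper's CEV model; all inputs are hypothetical):

```python
def min_variance_weight(s1, s2, rho):
    """Weight on asset 1 minimizing two-asset portfolio variance."""
    cov = rho * s1 * s2
    return (s2 ** 2 - cov) / (s1 ** 2 + s2 ** 2 - 2.0 * cov)

def portfolio_moments(w, m1, m2, s1, s2, rho):
    """Mean and variance of the portfolio with weights (w, 1-w)."""
    mean = w * m1 + (1.0 - w) * m2
    var = (w ** 2 * s1 ** 2 + (1.0 - w) ** 2 * s2 ** 2
           + 2.0 * w * (1.0 - w) * rho * s1 * s2)
    return mean, var

# Illustrative inputs: vols 20% and 30%, correlation 0.1, means 8% and 12%
w = min_variance_weight(0.2, 0.3, 0.1)
mean, var = portfolio_moments(w, 0.08, 0.12, 0.2, 0.3, 0.1)
```

Sweeping the weight above and below w traces out the frontier; the continuous-time CEV setting replaces the fixed volatilities with state-dependent ones, which is where the stochastic control machinery enters.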
The pricing of long and short run variance and correlation risk in stock returns
Cosemans, M.
2011-01-01
This paper studies the pricing of long and short run variance and correlation risk. The predictive power of the market variance risk premium for returns is driven by the correlation risk premium and the systematic part of individual variance premia. Furthermore, I find that aggregate volatility risk