WorldWideScience

Sample records for variance coefficient map

  1. Visual SLAM Using Variance Grid Maps

    Science.gov (United States)

    Howard, Andrew B.; Marks, Tim K.

    2011-01-01

    An algorithm denoted Gamma-SLAM performs further processing, in real time, of preprocessed digitized images acquired by a stereoscopic pair of electronic cameras aboard an off-road robotic ground vehicle to build accurate maps of the terrain and determine the location of the vehicle with respect to the maps. Part of the name of the algorithm reflects the fact that the process of building the maps and determining the location with respect to them is denoted simultaneous localization and mapping (SLAM). Most prior real-time SLAM algorithms have been limited in applicability to (1) systems equipped with scanning laser range finders as the primary sensors in (2) indoor environments (or relatively simply structured outdoor environments). The few prior vision-based SLAM algorithms have been feature-based and not suitable for real-time applications and, hence, not suitable for autonomous navigation on irregularly structured terrain. The Gamma-SLAM algorithm incorporates two key innovations: Visual odometry (in contradistinction to wheel odometry) is used to estimate the motion of the vehicle. An elevation variance map (in contradistinction to an occupancy or an elevation map) is used to represent the terrain. The Gamma-SLAM algorithm makes use of a Rao-Blackwellized particle filter (RBPF) from Bayesian estimation theory for maintaining a distribution over poses and maps. The core idea of the RBPF approach is that the SLAM problem can be factored into two parts: (1) finding the distribution over robot trajectories, and (2) finding the map conditioned on any given trajectory. The factorization involves the use of a particle filter in which each particle encodes both a possible trajectory and a map conditioned on that trajectory. The base estimate of the trajectory is derived from visual odometry, and the map conditioned on that trajectory is a Cartesian grid of elevation variances. In comparison with traditional occupancy or elevation grid maps, the grid elevation variance
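
    As a rough illustration of the Rao-Blackwellized factorization sketched in this abstract, the following Python fragment keeps a small set of particles, each carrying a pose hypothesis propagated with a noisy visual-odometry increment and its own grid of per-cell elevation mean/variance statistics. The grid size, noise levels and Gaussian weighting rule are illustrative assumptions, not details of the Gamma-SLAM implementation.

      # Illustrative Rao-Blackwellized particle filter over pose + elevation-variance grid.
      # All names, shapes and noise levels are assumptions for demonstration only.
      import copy
      import numpy as np

      GRID = 20  # assumed 20 x 20 cell elevation-variance grid

      class Particle:
          def __init__(self):
              self.pose = np.zeros(3)                # x, y, heading of the vehicle
              self.count = np.zeros((GRID, GRID))    # per-cell number of elevation samples
              self.mean = np.zeros((GRID, GRID))     # per-cell running elevation mean
              self.m2 = np.zeros((GRID, GRID))       # per-cell sum of squared deviations
              self.weight = 1.0

          def cell_variance(self):
              return np.where(self.count > 1, self.m2 / np.maximum(self.count - 1, 1), 1.0)

      def propagate(p, odom, rng, sigma=(0.05, 0.05, 0.01)):
          # Sample a new pose from the visual-odometry increment plus Gaussian noise.
          p.pose = p.pose + np.asarray(odom) + rng.normal(0.0, sigma)

      def update_map(p, cells, elevations):
          # Welford update of the per-cell elevation mean and variance (the particle's map).
          for (i, j), z in zip(cells, elevations):
              p.count[i, j] += 1
              delta = z - p.mean[i, j]
              p.mean[i, j] += delta / p.count[i, j]
              p.m2[i, j] += delta * (z - p.mean[i, j])

      def reweight(p, cells, elevations):
          # Score new observations against the particle's own map (Gaussian model).
          var = p.cell_variance()
          logw = 0.0
          for (i, j), z in zip(cells, elevations):
              if p.count[i, j] > 1:
                  logw -= 0.5 * ((z - p.mean[i, j]) ** 2 / var[i, j] + np.log(2 * np.pi * var[i, j]))
          p.weight = np.exp(logw)

      def resample(particles, rng):
          w = np.array([p.weight for p in particles])
          w = w / w.sum()
          idx = rng.choice(len(particles), size=len(particles), p=w)
          return [copy.deepcopy(particles[k]) for k in idx]

      # Minimal usage: one propagate/weight/update/resample cycle on toy observations.
      rng = np.random.default_rng(0)
      particles = [Particle() for _ in range(50)]
      obs_cells, obs_elev = [(5, 5), (5, 6)], [0.12, 0.15]
      for p in particles:
          propagate(p, odom=(0.1, 0.0, 0.0), rng=rng)
          reweight(p, obs_cells, obs_elev)
          update_map(p, obs_cells, obs_elev)
      particles = resample(particles, rng)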

  2. Autonomous estimation of Allan variance coefficients of onboard fiber optic gyro

    International Nuclear Information System (INIS)

    Song Ningfang; Yuan Rui; Jin Jing

    2011-01-01

    Satellite motion included in the gyro output disturbs the estimation of the Allan variance coefficients of an onboard fiber optic gyro. Moreover, as a standard method for noise analysis of fiber optic gyros, the Allan variance requires too much offline computation and data storage to be applied to online estimation. In addition, with the development of deep space exploration, satellites are required to be more autonomous, including autonomous fault diagnosis and reconfiguration. To overcome these barriers and meet the demands of satellite autonomy, we present a new autonomous method for estimating the Allan variance coefficients, including the rate ramp, rate random walk, bias instability, angular random walk and quantization noise coefficients. In the method, we calculate differences between the angle increments of the star sensor and the gyro to remove satellite motion from the gyro output, and propose a state-space model using a nonlinear adaptive filter for quantities previously obtained from offline data techniques such as the Allan variance method. Simulations show the method correctly estimates the Allan variance coefficients, R = 2.7965×10⁻⁴ °/h², K = 1.1714×10⁻³ °/h^1.5, B = 1.3185×10⁻³ °/h, N = 5.982×10⁻⁴ °/h^0.5 and Q = 5.197×10⁻⁷ °, in real time, and tracks the degradation of gyro performance due to gamma radiation in space from initial values, R = 0.651 °/h², K = 0.801 °/h^1.5, B = 0.385 °/h, N = 0.0874 °/h^0.5 and Q = 8.085×10⁻⁵ °, to final estimates, R = 9.548 °/h², K = 9.524 °/h^1.5, B = 2.234 °/h, N = 0.5594 °/h^0.5 and Q = 5.113×10⁻⁴ °. The technique proposed here effectively isolates satellite motion, requires no data storage, and needs no support from the ground.
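
    For context, the conventional offline computation that the authors contrast with their online filter is the (overlapping) Allan variance of the recorded rate series; the sketch below computes it with NumPy on synthetic white rate noise in arbitrary units and reads the angle random walk coefficient N off the Allan deviation at an averaging time of 1 s. This is the standard batch procedure, not the paper's adaptive estimator.

      import numpy as np

      def allan_variance(rate, fs, taus):
          # Overlapping Allan variance of a rate series sampled at fs Hz.
          theta = np.cumsum(rate) / fs              # integrate rate to angle
          n = len(theta)
          result = []
          for tau in taus:
              m = int(round(tau * fs))              # cluster size in samples
              if m < 1 or 2 * m >= n:
                  result.append(np.nan)
                  continue
              tau_m = m / fs                        # averaging time actually realized
              d = theta[2 * m:] - 2.0 * theta[m:n - m] + theta[:n - 2 * m]
              result.append(np.sum(d ** 2) / (2.0 * tau_m ** 2 * len(d)))
          return np.asarray(result)

      # Synthetic white rate noise only (an assumption for the demo).
      rng = np.random.default_rng(0)
      fs = 100.0
      rate = 0.1 * rng.standard_normal(200_000)
      taus = np.logspace(-1, 2, 25)
      adev = np.sqrt(allan_variance(rate, fs, taus))
      # For pure angle random walk the Allan deviation follows N / sqrt(tau),
      # so N can be read off (or fitted) at tau = 1 s.
      N_est = adev[np.argmin(np.abs(taus - 1.0))]
      print(f"estimated angle random walk coefficient N ≈ {N_est:.4f}")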

  3. Response variance in functional maps: neural darwinism revisited.

    Directory of Open Access Journals (Sweden)

    Hirokazu Takahashi

    The mechanisms by which functional maps and map plasticity contribute to cortical computation remain controversial. Recent studies have revisited the theory of neural Darwinism to interpret the learning-induced map plasticity and neuronal heterogeneity observed in the cortex. Here, we hypothesize that the Darwinian principle provides a substrate to explain the relationship between neuron heterogeneity and cortical functional maps. We demonstrate in the rat auditory cortex that the degree of response variance is closely correlated with the size of its representational area. Further, we show that the response variance within a given population is altered through training. These results suggest that larger representational areas may help to accommodate heterogeneous populations of neurons. Thus, functional maps and map plasticity are likely to play essential roles in Darwinian computation, serving as effective, but not absolutely necessary, structures to generate diverse response properties within a neural population.

  4. Response variance in functional maps: neural darwinism revisited.

    Science.gov (United States)

    Takahashi, Hirokazu; Yokota, Ryo; Kanzaki, Ryohei

    2013-01-01

    The mechanisms by which functional maps and map plasticity contribute to cortical computation remain controversial. Recent studies have revisited the theory of neural Darwinism to interpret the learning-induced map plasticity and neuronal heterogeneity observed in the cortex. Here, we hypothesize that the Darwinian principle provides a substrate to explain the relationship between neuron heterogeneity and cortical functional maps. We demonstrate in the rat auditory cortex that the degree of response variance is closely correlated with the size of its representational area. Further, we show that the response variance within a given population is altered through training. These results suggest that larger representational areas may help to accommodate heterogeneous populations of neurons. Thus, functional maps and map plasticity are likely to play essential roles in Darwinian computation, serving as effective, but not absolutely necessary, structures to generate diverse response properties within a neural population.

  5. Autonomous estimation of Allan variance coefficients of onboard fiber optic gyro

    Energy Technology Data Exchange (ETDEWEB)

    Song Ningfang; Yuan Rui; Jin Jing, E-mail: rayleing@139.com [School of Instrumentation Science and Opto-electronics Engineering, Beihang University, Beijing 100191 (China)

    2011-09-15

    Satellite motion included in the gyro output disturbs the estimation of the Allan variance coefficients of an onboard fiber optic gyro. Moreover, as a standard method for noise analysis of fiber optic gyros, the Allan variance requires too much offline computation and data storage to be applied to online estimation. In addition, with the development of deep space exploration, satellites are required to be more autonomous, including autonomous fault diagnosis and reconfiguration. To overcome these barriers and meet the demands of satellite autonomy, we present a new autonomous method for estimating the Allan variance coefficients, including the rate ramp, rate random walk, bias instability, angular random walk and quantization noise coefficients. In the method, we calculate differences between the angle increments of the star sensor and the gyro to remove satellite motion from the gyro output, and propose a state-space model using a nonlinear adaptive filter for quantities previously obtained from offline data techniques such as the Allan variance method. Simulations show the method correctly estimates the Allan variance coefficients, R = 2.7965×10⁻⁴ °/h², K = 1.1714×10⁻³ °/h^1.5, B = 1.3185×10⁻³ °/h, N = 5.982×10⁻⁴ °/h^0.5 and Q = 5.197×10⁻⁷ °, in real time, and tracks the degradation of gyro performance due to gamma radiation in space from initial values, R = 0.651 °/h², K = 0.801 °/h^1.5, B = 0.385 °/h, N = 0.0874 °/h^0.5 and Q = 8.085×10⁻⁵ °, to final estimates, R = 9.548 °/h², K = 9.524 °/h^1.5, B = 2.234 °/h, N = 0.5594 °/h^0.5 and Q = 5.113×10⁻⁴ °. The technique proposed here effectively isolates satellite motion, requires no data storage, and needs no support from the ground.

  6. A new population growth map with variable coefficients

    International Nuclear Information System (INIS)

    Jannussis, A.

    1986-01-01

    In the present paper a simple population growth map with variable coefficients is investigated. Moreover, we study the new population map of the form x_{j+1} = a·x_j·(1/(1 + b·x_j) − 1/(1 + c·x_j)), c ≠ b, j = 0, 1, ..., which is transformed into an equivalent logistic map
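
    A minimal iteration of the quoted map, with arbitrary illustrative parameter values (a, b and c are not taken from the paper):

      def growth_map(x, a, b, c):
          # One step of x_{j+1} = a*x_j*(1/(1 + b*x_j) - 1/(1 + c*x_j)), with b != c.
          return a * x * (1.0 / (1.0 + b * x) - 1.0 / (1.0 + c * x))

      # Arbitrary illustrative parameters and initial condition.
      a, b, c, x = 6.0, 1.0, 3.0, 0.1
      orbit = []
      for _ in range(20):
          orbit.append(x)
          x = growth_map(x, a, b, c)
      print(orbit[:5], "...")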

  7. Swarm based mean-variance mapping optimization (MVMOS) for solving economic dispatch

    Science.gov (United States)

    Khoa, T. H.; Vasant, P. M.; Singh, M. S. Balbir; Dieu, V. N.

    2014-10-01

    The economic dispatch (ED) is an essential optimization task in the power generation system. It is defined as the process of allocating the real power output of generation units to meet required load demand so as their total operating cost is minimized while satisfying all physical and operational constraints. This paper introduces a novel optimization which named as Swarm based Mean-variance mapping optimization (MVMOS). The technique is the extension of the original single particle mean-variance mapping optimization (MVMO). Its features make it potentially attractive algorithm for solving optimization problems. The proposed method is implemented for three test power systems, including 3, 13 and 20 thermal generation units with quadratic cost function and the obtained results are compared with many other methods available in the literature. Test results have indicated that the proposed method can efficiently implement for solving economic dispatch.
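
    The abstract describes ED with quadratic unit cost curves and a load-balance constraint; the fragment below sketches the kind of penalty-based fitness evaluation that a swarm optimizer such as MVMOS would call repeatedly for each candidate dispatch. The cost coefficients, generator limits and penalty weight are invented for illustration and are not the test systems used in the paper.

      import numpy as np

      # Quadratic fuel-cost coefficients (a_i + b_i*P + c_i*P^2) for a toy 3-unit system.
      # Values and the penalty weight are illustrative assumptions, not from the paper.
      A = np.array([500.0, 400.0, 200.0])
      B = np.array([5.3, 5.5, 5.8])
      C = np.array([0.004, 0.006, 0.009])
      P_MIN = np.array([200.0, 150.0, 100.0])
      P_MAX = np.array([450.0, 350.0, 225.0])
      DEMAND = 850.0           # MW load to be met
      PENALTY = 1e4            # weight on the power-balance violation

      def dispatch_cost(p):
          # Total operating cost plus penalties for limit and balance violations.
          p = np.clip(p, P_MIN, P_MAX)                       # enforce generator limits
          fuel = np.sum(A + B * p + C * p ** 2)              # quadratic cost ($/h)
          balance = abs(np.sum(p) - DEMAND)                  # unmet or excess demand
          return fuel + PENALTY * balance

      print(dispatch_cost(np.array([400.0, 300.0, 150.0])))  # evaluate one candidate dispatch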

  8. Highlighting material structure with transmission electron diffraction correlation coefficient maps

    International Nuclear Information System (INIS)

    Kiss, Ákos K.; Rauch, Edgar F.; Lábár, János L.

    2016-01-01

    Correlation coefficient maps are constructed by computing the differences between neighboring diffraction patterns collected in a transmission electron microscope in scanning mode. The maps are shown to highlight material structural features such as grain boundaries, second-phase particles or dislocations. The inclination of inner crystal interfaces is directly deduced from the resulting contrast. - Highlights: • We propose a novel technique to image the structure of polycrystalline TEM samples. • Correlation coefficient maps highlight the evolution of the diffracting signal. • 3D views of grain boundaries are provided for nano-particles or polycrystals.
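
    A minimal sketch of how such a map can be computed from a four-dimensional stack of scanned diffraction patterns, correlating each pattern with its right-hand neighbour. The array shapes, the Pearson-type correlation and the choice of neighbour are assumptions for illustration, not the authors' exact procedure.

      import numpy as np

      def correlation_coefficient_map(stack):
          # stack: (ny, nx, ky, kx) diffraction patterns recorded on an (ny, nx) scan grid.
          # Returns an (ny, nx-1) map of Pearson correlation between horizontal neighbours;
          # low values flag grain boundaries or other changes in the diffracting signal.
          ny, nx = stack.shape[:2]
          ccmap = np.zeros((ny, nx - 1))
          for i in range(ny):
              for j in range(nx - 1):
                  a = stack[i, j].ravel().astype(float)
                  b = stack[i, j + 1].ravel().astype(float)
                  a -= a.mean()
                  b -= b.mean()
                  denom = np.sqrt((a @ a) * (b @ b))
                  ccmap[i, j] = (a @ b) / denom if denom > 0 else 0.0
          return ccmap

      # Toy data: an 8 x 8 scan of 64 x 64 random patterns (assumed shapes).
      rng = np.random.default_rng(1)
      ccm = correlation_coefficient_map(rng.random((8, 8, 64, 64)))
      print(ccm.shape)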

  9. Effect of noise on the standard mapping

    International Nuclear Information System (INIS)

    Karney, C.F.F.; Rechester, A.B.; White, R.B.

    1981-03-01

    The effect of a small amount of noise on the standard mapping is considered. Whenever the standard mapping possesses accelerator modes (where the action increases approximately linearly with time), the diffusion coefficient contains a term proportional to the reciprocal of the variance of the noise term. At large values of the stochasticity parameter, the accelerator modes exhibit a universal behavior. As a result, the dependence of the diffusion coefficient on the stochasticity parameter also shows some universal behavior
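
    A small numerical experiment in the spirit of the abstract: iterate the standard map with additive Gaussian noise on the action for an ensemble of trajectories and estimate the diffusion coefficient from the growth of the action variance. The stochasticity parameter and noise level below are arbitrary choices.

      import numpy as np

      def noisy_standard_map(K, sigma, steps, n_traj, seed=0):
          # Standard map  I' = I + K sin(theta) + noise,  theta' = theta + I',
          # iterated for an ensemble; returns the estimate D = var(I) / (2 t).
          rng = np.random.default_rng(seed)
          theta = rng.uniform(0.0, 2.0 * np.pi, n_traj)
          action = np.zeros(n_traj)
          for _ in range(steps):
              action = action + K * np.sin(theta) + rng.normal(0.0, sigma, n_traj)
              theta = (theta + action) % (2.0 * np.pi)
          return np.var(action) / (2.0 * steps)

      # Illustrative parameters: K well above the stochasticity threshold, weak noise.
      print(noisy_standard_map(K=6.0, sigma=0.01, steps=2000, n_traj=5000))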

  10. Highlighting material structure with transmission electron diffraction correlation coefficient maps.

    Science.gov (United States)

    Kiss, Ákos K; Rauch, Edgar F; Lábár, János L

    2016-04-01

    Correlation coefficient maps are constructed by computing the differences between neighboring diffraction patterns collected in a transmission electron microscope in scanning mode. The maps are shown to highlight material structural features such as grain boundaries, second-phase particles or dislocations. The inclination of inner crystal interfaces is directly deduced from the resulting contrast. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. Translation of Bernstein Coefficients Under an Affine Mapping of the Unit Interval

    Science.gov (United States)

    Alford, John A., II

    2012-01-01

    We derive an expression connecting the coefficients of a polynomial expanded in the Bernstein basis to the coefficients of an equivalent expansion of the polynomial under an affine mapping of the domain. The expression may be useful in the calculation of bounds for multi-variate polynomials.

  12. Determination of dispersion coefficients in the River Plate

    International Nuclear Information System (INIS)

    Maggio, G.E.; Graino, J.G.; Kopp, U.I.; Tripoli, C.R.

    1987-01-01

    The determination of dispersion coefficients of contaminants by means of a radioactive tracer was performed as a contribution to the development of a mathematical model for a zone of the River Plate close to the effluent discharge. During March 1987, six operations of tracer (I-131) injection and follow-up were carried out. The injection was performed by breaking a bulb under water, and the follow-up of the 'radioactive spot' was done from a boat. Once the 'radioactive spot' was located (approximately 2 hours after the injection), a series of transversal passes over it was made, measuring the activity concentration by means of a submerged detector. At the same time the coordinates of each point were determined in order to draw a map of the activity distribution. This procedure was repeated for different spot positions. This set of data can be plotted on a map of the zone under study so as to obtain a set of iso-activity curves. However, these curves are representative only if corrections are made for the boat speed, the water speed and the half-life of the radionuclide. From each set of iso-activity curves, the variance and the increase of variance, as well as the dispersion coefficients, can be determined. This procedure was applied to each of the six operations mentioned above. At present, different values of the dispersion coefficient are available for different river conditions. These values, together with other parameters such as wind velocity, temperature, salinity, bacterial behaviour, etc., will allow the calibration of the mathematical model. (Author)
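
    The dispersion coefficient is conventionally obtained from the growth rate of the spatial variance of the tracer cloud between successive surveys, D ≈ ½·dσ²/dt; the snippet below applies that standard relation to invented survey values, purely to show the calculation.

      # Dispersion coefficient from the growth of tracer-cloud variance,
      # using D = 0.5 * d(sigma^2)/dt. Survey values below are invented.
      import numpy as np

      t = np.array([2.0, 3.0, 4.0, 5.0]) * 3600.0          # survey times (s)
      sigma2 = np.array([1.8e4, 2.9e4, 4.1e4, 5.2e4])      # spatial variance of the cloud (m^2)

      slope = np.polyfit(t, sigma2, 1)[0]                   # d(sigma^2)/dt by least squares
      D = 0.5 * slope
      print(f"dispersion coefficient ≈ {D:.2f} m^2/s")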

  13. Validity of the CT to attenuation coefficient map conversion methods

    International Nuclear Information System (INIS)

    Faghihi, R.; Ahangari Shahdehi, R.; Fazilat Moadeli, M.

    2004-01-01

    The most important commercialized methods of attenuation correction in SPECT are based on an attenuation coefficient map obtained from a transmission imaging method. The transmission imaging system can be a linear radionuclide source or an X-ray CT system. The image from the transmission imaging system is not useful unless the attenuation coefficient, or the CT number, is converted to the attenuation coefficient at the SPECT energy. In this paper we evaluate the validity and estimate the error of the most widely used methods for this conversion. The final results show that methods which use a linear or multi-linear curve accept an error in their estimation. The value of the tube current (mA) is not important, but the patient thickness is very important and can introduce an error of more than 10 percent in the final result
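
    The CT-to-attenuation conversions evaluated here are typically piecewise linear in CT number, with one segment below 0 HU and a different slope above it, scaled to the attenuation coefficient of water at the SPECT photon energy. The breakpoints and slopes in the sketch below are illustrative assumptions, not the calibrations examined in the paper.

      def hu_to_mu(hu, mu_water=0.154, bone_scale=0.58):
          # Piecewise-linear CT number (HU) to linear attenuation coefficient (1/cm)
          # at a nominal 140 keV SPECT energy. Constants are illustrative assumptions.
          if hu <= 0:                                           # air .. water segment
              return mu_water * (1.0 + hu / 1000.0)
          return mu_water * (1.0 + bone_scale * hu / 1000.0)    # water .. bone segment

      for hu in (-1000, 0, 500, 1000):
          print(hu, round(hu_to_mu(hu), 4))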

  14. High speed friction microscopy and nanoscale friction coefficient mapping

    International Nuclear Information System (INIS)

    Bosse, James L; Lee, Sungjun; Huey, Bryan D; Andersen, Andreas Sø; Sutherland, Duncan S

    2014-01-01

    As mechanical devices in the nano/micro length scale are increasingly employed, it is crucial to understand nanoscale friction and wear, especially at technically relevant sliding velocities. Accordingly, a novel technique has been developed for friction coefficient mapping (FCM), leveraging recent advances in high speed AFM. The technique efficiently acquires friction versus force curves based on a sequence of images at a single location, each with incrementally lower loads. As a result, true maps of the coefficient of friction can be uniquely calculated for heterogeneous surfaces. These parameters are determined at a scan velocity as fast as 2 mm s⁻¹ for microfabricated SiO₂ mesas and Au coated pits, yielding results that are identical to traditional speed measurements despite being ∼1000 times faster. To demonstrate the upper limit of sliding velocity for the custom setup, the friction properties of mica are reported from 200 µm s⁻¹ up to 2 cm s⁻¹. While FCM is applicable to any AFM and scanning speed, quantitative nanotribology investigations of heterogeneous sliding or rolling components are therefore uniquely possible, even at realistic velocities for devices such as MEMS, biological implants, or data storage systems. (paper)
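
    A per-pixel coefficient-of-friction map of this kind amounts to fitting lateral (friction) force against normal load across the stack of images acquired at decreasing setpoints; a minimal least-squares version is sketched below with invented array names and synthetic data.

      import numpy as np

      def friction_coefficient_map(friction_stack, loads):
          # friction_stack: (n_loads, ny, nx) lateral-force images, one per normal load.
          # Returns the per-pixel slope (coefficient of friction) and intercept
          # (adhesion-like offset) from a linear fit of friction force versus load.
          n, ny, nx = friction_stack.shape
          x = np.asarray(loads, dtype=float)
          design = np.vstack([x, np.ones_like(x)]).T            # (n, 2) design matrix
          y = friction_stack.reshape(n, -1)                      # flatten the pixel grid
          coeffs, *_ = np.linalg.lstsq(design, y, rcond=None)    # (2, ny*nx)
          mu = coeffs[0].reshape(ny, nx)
          offset = coeffs[1].reshape(ny, nx)
          return mu, offset

      # Synthetic example: true coefficient 0.3 everywhere plus noise (shapes assumed).
      rng = np.random.default_rng(2)
      loads = np.linspace(10.0, 50.0, 8)                         # normal loads (arbitrary units)
      stack = 0.3 * loads[:, None, None] + rng.normal(0, 0.5, (8, 32, 32))
      mu_map, _ = friction_coefficient_map(stack, loads)
      print(mu_map.mean())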

  15. Reexamining financial and economic predictability with new estimators of realized variance and variance risk premium

    DEFF Research Database (Denmark)

    Casas, Isabel; Mao, Xiuping; Veiga, Helena

    This study explores the predictive power of new estimators of the equity variance risk premium and conditional variance for future excess stock market returns, economic activity, and financial instability, both during and after the last global financial crisis. These estimators are obtained from...... time-varying coefficient models are the ones showing considerably higher predictive power for stock market returns and financial instability during the financial crisis, suggesting that an extreme volatility period requires models that can adapt quickly to turmoil........ Moreover, a comparison of the overall results reveals that the conditional variance gains predictive power during the global financial crisis period. Furthermore, both the variance risk premium and conditional variance are determined to be predictors of future financial instability, whereas conditional...

  16. The problem of low variance voxels in statistical parametric mapping; a new hat avoids a 'haircut'.

    Science.gov (United States)

    Ridgway, Gerard R; Litvak, Vladimir; Flandin, Guillaume; Friston, Karl J; Penny, Will D

    2012-02-01

    Statistical parametric mapping (SPM) locates significant clusters based on a ratio of signal to noise (a 'contrast' of the parameters divided by its standard error), meaning that very low noise regions, for example outside the brain, can attain artefactually high statistical values. Similarly, the commonly applied preprocessing step of Gaussian spatial smoothing can shift the peak statistical significance away from the peak of the contrast and towards regions of lower variance. These problems have previously been identified in positron emission tomography (PET) (Reimold et al., 2006) and voxel-based morphometry (VBM) (Acosta-Cabronero et al., 2008), but can also appear in functional magnetic resonance imaging (fMRI) studies. Additionally, for source-reconstructed magneto- and electro-encephalography (M/EEG), the problems are particularly severe because sparsity-favouring priors constrain meaningfully large signal and variance to a small set of compactly supported regions within the brain. Acosta-Cabronero et al. (2008) suggested adding noise to background voxels (the 'haircut'), effectively increasing their noise variance, but at the cost of contaminating neighbouring regions with the added noise once smoothed. Following theory and simulations, we propose to modify, directly and solely, the noise variance estimate, and investigate this solution on real imaging data from a range of modalities. Copyright © 2011 Elsevier Inc. All rights reserved.

  17. On the Endogeneity of the Mean-Variance Efficient Frontier.

    Science.gov (United States)

    Somerville, R. A.; O'Connell, Paul G. J.

    2002-01-01

    Explains that the endogeneity of the efficient frontier in the mean-variance model of portfolio selection is commonly obscured in portfolio selection literature and in widely used textbooks. Demonstrates endogeneity and discusses the impact of parameter changes on the mean-variance efficient frontier and on the beta coefficients of individual…

  18. The phenotypic variance gradient - a novel concept.

    Science.gov (United States)

    Pertoldi, Cino; Bundgaard, Jørgen; Loeschcke, Volker; Barker, James Stuart Flinton

    2014-11-01

    Evolutionary ecologists commonly use reaction norms, which show the range of phenotypes produced by a set of genotypes exposed to different environments, to quantify the degree of phenotypic variance and the magnitude of plasticity of morphometric and life-history traits. Significant differences among the values of the slopes of the reaction norms are interpreted as significant differences in phenotypic plasticity, whereas significant differences among phenotypic variances (variance or coefficient of variation) are interpreted as differences in the degree of developmental instability or canalization. We highlight some potential problems with this approach to quantifying phenotypic variance and suggest a novel and more informative way to plot reaction norms: namely "a plot of log (variance) on the y-axis versus log (mean) on the x-axis, with a reference line added". This approach gives an immediate impression of how the degree of phenotypic variance varies across an environmental gradient, taking into account the consequences of the scaling effect of the variance with the mean. The evolutionary implications of the variation in the degree of phenotypic variance, which we call a "phenotypic variance gradient", are discussed together with its potential interactions with variation in the degree of phenotypic plasticity and canalization.

  19. Apparent diffusion coefficient mapping in medulloblastoma predicts non-infiltrative surgical planes.

    Science.gov (United States)

    Marupudi, Neena I; Altinok, Deniz; Goncalves, Luis; Ham, Steven D; Sood, Sandeep

    2016-11-01

    An appropriate surgical approach for posterior fossa lesions is to start tumor removal from areas with a defined plane to where tumor is infiltrating the brainstem or peduncles. This surgical approach minimizes risk of damage to eloquent areas. Although magnetic resonance imaging (MRI) is the current standard preoperative imaging obtained for diagnosis and surgical planning of pediatric posterior fossa tumors, it offers limited information on the infiltrative planes between tumor and normal structures in patients with medulloblastomas. Because medulloblastomas demonstrate diffusion restriction on apparent diffusion coefficient map (ADC map) sequences, we investigated the role of ADC map in predicting infiltrative and non-infiltrative planes along the brain stem and/or cerebellar peduncles by medulloblastomas prior to surgery. Thirty-four pediatric patients with pathologically confirmed medulloblastomas underwent surgical resection at our facility from 2004 to 2012. An experienced pediatric neuroradiologist reviewed the brain MRIs/ADC map, assessing the planes between the tumor and cerebellar peduncles/brain stem. An independent evaluator documented surgical findings from operative reports for comparison to the radiographic findings. The radiographic findings were statistically compared to the documented intraoperative findings to determine predictive value of the test in identifying tumor infiltration of the brain stem cerebellar peduncles. Twenty-six patients had preoperative ADC mapping completed and thereby, met inclusion criteria. Mean age at time of surgery was 8.3 ± 4.6 years. Positive predictive value of ADC maps to predict tumor invasion of the brain stem and cerebellar peduncles ranged from 69 to 88 %; negative predictive values ranged from 70 to 89 %. Sensitivity approached 93 % while specificity approached 78 %. ADC maps are valuable in predicting the infiltrative and non-infiltrative planes along the tumor and brain stem interface in

  20. Gini estimation under infinite variance

    NARCIS (Netherlands)

    A. Fontanari (Andrea); N.N. Taleb (Nassim Nicholas); P. Cirillo (Pasquale)

    2018-01-01

    textabstractWe study the problems related to the estimation of the Gini index in presence of a fat-tailed data generating process, i.e. one in the stable distribution class with finite mean but infinite variance (i.e. with tail index α∈(1,2)). We show that, in such a case, the Gini coefficient

  1. Estimation of the biserial correlation and its sampling variance for use in meta-analysis.

    Science.gov (United States)

    Jacobs, Perke; Viechtbauer, Wolfgang

    2017-06-01

    Meta-analyses are often used to synthesize the findings of studies examining the correlational relationship between two continuous variables. When only dichotomous measurements are available for one of the two variables, the biserial correlation coefficient can be used to estimate the product-moment correlation between the two underlying continuous variables. Unlike the point-biserial correlation coefficient, biserial correlation coefficients can therefore be integrated with product-moment correlation coefficients in the same meta-analysis. The present article describes the estimation of the biserial correlation coefficient for meta-analytic purposes and reports simulation results comparing different methods for estimating the coefficient's sampling variance. The findings indicate that commonly employed methods yield inconsistent estimates of the sampling variance across a broad range of research situations. In contrast, consistent estimates can be obtained using two methods that appear to be unknown in the meta-analytic literature. A variance-stabilizing transformation for the biserial correlation coefficient is described that allows for the construction of confidence intervals for individual coefficients with close to nominal coverage probabilities in most of the examined conditions. Copyright © 2016 John Wiley & Sons, Ltd.
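
    For orientation, the textbook conversion from the point-biserial to the biserial correlation rescales by the group proportions and the standard normal ordinate at the dichotomization threshold; the snippet below implements that conversion. It is not one of the sampling-variance estimators compared in the article.

      import numpy as np
      from scipy.stats import norm

      def biserial_from_pointbiserial(r_pb, p):
          # Convert a point-biserial correlation to a biserial correlation, assuming the
          # dichotomous variable arises by thresholding a latent normal variable.
          # p is the proportion of cases in the 'upper' group.
          q = 1.0 - p
          z = norm.ppf(q)                       # threshold on the latent normal scale
          return r_pb * np.sqrt(p * q) / norm.pdf(z)

      print(biserial_from_pointbiserial(r_pb=0.30, p=0.4))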

  2. Probabilistic flood inundation mapping at ungauged streams due to roughness coefficient uncertainty in hydraulic modelling

    Science.gov (United States)

    Papaioannou, George; Vasiliades, Lampros; Loukas, Athanasios; Aronica, Giuseppe T.

    2017-04-01

    Probabilistic flood inundation mapping is performed and analysed at the ungauged Xerias stream reach, Volos, Greece. The study evaluates the uncertainty introduced by the roughness coefficient values on hydraulic models in flood inundation modelling and mapping. The well-established one-dimensional (1-D) hydraulic model, HEC-RAS is selected and linked to Monte-Carlo simulations of hydraulic roughness. Terrestrial Laser Scanner data have been used to produce a high quality DEM for input data uncertainty minimisation and to improve determination accuracy on stream channel topography required by the hydraulic model. Initial Manning's n roughness coefficient values are based on pebble count field surveys and empirical formulas. Various theoretical probability distributions are fitted and evaluated on their accuracy to represent the estimated roughness values. Finally, Latin Hypercube Sampling has been used for generation of different sets of Manning roughness values and flood inundation probability maps have been created with the use of Monte Carlo simulations. Historical flood extent data, from an extreme historical flash flood event, are used for validation of the method. The calibration process is based on a binary wet-dry reasoning with the use of Median Absolute Percentage Error evaluation metric. The results show that the proposed procedure supports probabilistic flood hazard mapping at ungauged rivers and provides water resources managers with valuable information for planning and implementing flood risk mitigation strategies.

  3. Mapping Thermal Expansion Coefficients in Freestanding 2D Materials at the Nanometer Scale

    Science.gov (United States)

    Hu, Xuan; Yasaei, Poya; Jokisaari, Jacob; Öǧüt, Serdar; Salehi-Khojin, Amin; Klie, Robert F.

    2018-02-01

    Two-dimensional materials, including graphene, transition metal dichalcogenides and their heterostructures, exhibit great potential for a variety of applications, such as transistors, spintronics, and photovoltaics. While the miniaturization offers remarkable improvements in electrical performance, heat dissipation and thermal mismatch can be a problem in designing electronic devices based on two-dimensional materials. Quantifying the thermal expansion coefficient of 2D materials requires temperature measurements at nanometer scale. Here, we introduce a novel nanometer-scale thermometry approach to measure temperature and quantify the thermal expansion coefficients in 2D materials based on scanning transmission electron microscopy combined with electron energy-loss spectroscopy to determine the energy shift of the plasmon resonance peak of 2D materials as a function of sample temperature. By combining these measurements with first-principles modeling, the thermal expansion coefficients (TECs) of single-layer and freestanding graphene and bulk, as well as monolayer MoS2 , MoSe2 , WS2 , or WSe2 , are directly determined and mapped.

  4. Accounting for non-stationary variance in geostatistical mapping of soil properties

    NARCIS (Netherlands)

    Wadoux, Alexandre M.J.C.; Brus, Dick J.; Heuvelink, Gerard B.M.

    2018-01-01

    Simple and ordinary kriging assume a constant mean and variance of the soil variable of interest. This assumption is often implausible because the mean and/or variance are linked to terrain attributes, parent material or other soil forming factors. In kriging with external drift (KED)

  5. Variance in parametric images: direct estimation from parametric projections

    International Nuclear Information System (INIS)

    Maguire, R.P.; Leenders, K.L.; Spyrou, N.M.

    2000-01-01

    Recent work has shown that it is possible to apply linear kinetic models to dynamic projection data in PET in order to calculate parameter projections. These can subsequently be back-projected to form parametric images - maps of parameters of physiological interest. Critical to the application of these maps, to test for significant changes between normal and pathophysiology, is an assessment of the statistical uncertainty. In this context, parametric images also include simple integral images from, e.g., [O-15]-water used to calculate statistical parametric maps (SPMs). This paper revisits the concept of parameter projections and presents a more general formulation of the parameter projection derivation as well as a method to estimate parameter variance in projection space, showing which analysis methods (models) can be used. Using simulated pharmacokinetic image data we show that a method based on an analysis in projection space inherently calculates the mathematically rigorous pixel variance. This results in an estimation which is as accurate as either estimating variance in image space during model fitting, or estimation by comparison across sets of parametric images - as might be done between individuals in a group pharmacokinetic PET study. The method based on projections has, however, a higher computational efficiency, and is also shown to be more precise, as reflected in smooth variance distribution images when compared to the other methods. (author)

  6. Some variance reduction methods for numerical stochastic homogenization.

    Science.gov (United States)

    Blanc, X; Le Bris, C; Legoll, F

    2016-04-28

    We give an overview of a series of recent studies devoted to variance reduction techniques for numerical stochastic homogenization. Numerical homogenization requires that a set of problems is solved at the microscale, the so-called corrector problems. In a random environment, these problems are stochastic and therefore need to be repeatedly solved, for several configurations of the medium considered. An empirical average over all configurations is then performed using the Monte Carlo approach, so as to approximate the effective coefficients necessary to determine the macroscopic behaviour. Variance severely affects the accuracy and the cost of such computations. Variance reduction approaches, borrowed from other contexts in the engineering sciences, can be useful. Some of these variance reduction techniques are presented, studied and tested here. © 2016 The Author(s).

  7. Ischemic lesion volume determination on diffusion weighted images vs. apparent diffusion coefficient maps.

    Science.gov (United States)

    Bråtane, Bernt Tore; Bastan, Birgul; Fisher, Marc; Bouley, James; Henninger, Nils

    2009-07-07

    Though diffusion weighted imaging (DWI) is frequently used for identifying the ischemic lesion in focal cerebral ischemia, the understanding of the spatiotemporal evolution patterns observed with different analysis methods remains imprecise. DWI and calculated apparent diffusion coefficient (ADC) maps were serially obtained in rat stroke models (MCAO): permanent, 90 min, and 180 min temporary MCAO. Lesion volumes were analyzed in a blinded and randomized manner by two investigators using (i) a previously validated ADC threshold, (ii) visual determination of hypointense regions on ADC maps, and (iii) visual determination of hyperintense regions on DWI. Lesion volumes were correlated with 24 hour 2,3,5-triphenyltetrazolium chloride (TTC)-derived infarct volumes. TTC-derived infarct volumes were not significantly different from the ADC- and DWI-derived lesion volumes at the last imaging time points, except for significantly smaller DWI lesions in the pMCAO model (p=0.02). Volumetric calculation based on the TTC-derived infarct also correlated significantly more strongly with volumetric calculation based on lesions derived from ADC maps at the last imaging time point than with that based on DWI. Visually determined lesion volumes on ADC maps and DWI from both investigators correlated significantly with threshold-derived lesion volumes on ADC maps, with the former method demonstrating a stronger correlation. There was also a better interrater agreement for ADC map analysis than for DWI analysis. Ischemic lesion determination by ADC was more accurate in final infarct prediction, rater independent, and provided exclusive information on ischemic lesion reversibility.

  8. Myocardial T1 mapping and determination of partition coefficients at 3 tesla: comparison between gadobenate dimeglumine and gadofosveset trisodium

    Directory of Open Access Journals (Sweden)

    Marcelo Souto Nacif

    2018-01-01

    Objective: To compare an albumin-bound gadolinium chelate (gadofosveset trisodium) and an extracellular contrast agent (gadobenate dimeglumine) in terms of their effects on myocardial longitudinal (T1) relaxation time and partition coefficient. Materials and Methods: Study subjects underwent two imaging sessions for T1 mapping at 3 tesla with a modified Look-Locker inversion recovery (MOLLI) pulse sequence to obtain one pre-contrast T1 map and two post-contrast T1 maps (at a mean of 15 and 21 min, respectively). The partition coefficient was calculated as ΔR1_myocardium/ΔR1_blood, where R1 is 1/T1. Results: A total of 252 myocardial and blood pool T1 values were obtained in 21 healthy subjects. After gadolinium administration, the myocardial T1 was longer for gadofosveset than for gadobenate, the mean difference between the two contrast agents being −7.6 ± 60 ms (p = 0.41). The inverse was true for the blood pool T1, which was longer for gadobenate than for gadofosveset, the mean difference being 56.5 ± 67 ms (p < 0.001). The partition coefficient (λ) was higher for gadobenate than for gadofosveset (0.41 vs. 0.33), indicating slower blood pool washout for gadofosveset than for gadobenate. Conclusion: Myocardial T1 times did not differ significantly between gadobenate and gadofosveset. At typical clinical doses of the contrast agents, partition coefficients were significantly lower for the intravascular contrast agent than for the extravascular agent.
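
    The partition coefficient defined in the abstract, λ = ΔR1_myocardium/ΔR1_blood with R1 = 1/T1, reduces to a few lines of code; the T1 values in the example are invented placeholders, not study data.

      def partition_coefficient(t1_myo_pre, t1_myo_post, t1_blood_pre, t1_blood_post):
          # lambda = delta(R1_myocardium) / delta(R1_blood), with R1 = 1/T1 (T1 in ms).
          d_r1_myo = 1.0 / t1_myo_post - 1.0 / t1_myo_pre
          d_r1_blood = 1.0 / t1_blood_post - 1.0 / t1_blood_pre
          return d_r1_myo / d_r1_blood

      # Invented example T1 values (ms), purely for illustration.
      print(round(partition_coefficient(1200.0, 600.0, 1900.0, 400.0), 3))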

  9. Evaluation of Release-05 GRACE time-variable gravity coefficients over the ocean

    Directory of Open Access Journals (Sweden)

    D. P. Chambers

    2012-10-01

    The latest release of GRACE (Gravity Recovery and Climate Experiment) gravity field coefficients (Release-05, or RL05) is evaluated for ocean applications. Data have been processed using the current methodology for Release-04 (RL04) coefficients, and have been compared to output from two different ocean models. Results indicate that RL05 data from the three Science Data Centers – the Center for Space Research (CSR), GeoForschungsZentrum (GFZ), and Jet Propulsion Laboratory (JPL) – are more consistent among themselves than the previous RL04 data. Moreover, the variance of residuals with the output of an ocean model is 50–60% lower for RL05 data than for RL04 data. A more optimized destriping algorithm is also tested, which improves the results slightly. By comparing the GRACE maps with two different ocean models, we can better estimate the uncertainty in the RL05 maps. We find the standard error to be about 1 cm (equivalent water thickness) in the low- and mid-latitudes, and between 1.5 and 2 cm in the polar and subpolar oceans, which is comparable to the estimated uncertainty for the output from the ocean models.

  10. Variance estimation for complex indicators of poverty and inequality using linearization techniques

    Directory of Open Access Journals (Sweden)

    Guillaume Osier

    2009-12-01

    The paper presents the Eurostat experience in calculating measures of precision, including standard errors, confidence intervals and design effect coefficients - the ratio of the variance of a statistic under the actual sample design to the variance of that statistic under simple random sampling of the same size - for the "Laeken" indicators, a set of complex indicators of poverty and inequality which were set out in the framework of the EU-SILC project (European Statistics on Income and Living Conditions). The Taylor linearization method (Tepping, 1968; Woodruff, 1971; Wolter, 1985; Tille, 2000) is a well-established method for obtaining variance estimators for nonlinear statistics such as ratios, correlation or regression coefficients. It consists of approximating a nonlinear statistic with a linear function of the observations by using first-order Taylor series expansions. Then, an easily found variance estimator of the linear approximation is used as an estimator of the variance of the nonlinear statistic. Although the Taylor linearization method handles all nonlinear statistics which can be expressed as a smooth function of estimated totals, the approach fails to encompass the "Laeken" indicators, since the latter have more complex mathematical expressions. Consequently, a generalized linearization method (Deville, 1999), which relies on the concept of the influence function (Hampel, Ronchetti, Rousseeuw and Stahel, 1986), has been implemented. After presenting the EU-SILC instrument and the main target indicators for which variance estimates are needed, the paper elaborates on the main features of the linearization approach based on influence functions. Ultimately, estimated standard errors, confidence intervals and design effect coefficients obtained from this approach are presented and discussed.
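
    To make the linearization idea concrete, the sketch below applies it to the simplest nonlinear statistic, a ratio of totals, for an equal-weight single-stage sample: the ratio is replaced by the linearized variable z_k = (y_k − R·x_k)/X and the variance of its total is estimated. This is far simpler than the EU-SILC design and the "Laeken" indicators, but it shows the mechanics.

      import numpy as np

      def ratio_linearization_variance(y, x):
          # Taylor-linearization variance of R = sum(y)/sum(x) for an equal-weight,
          # single-stage sample: linearize with z_k = (y_k - R*x_k)/sum(x) and estimate
          # Var(R) ~ n * s_z^2. A sketch of the mechanics, not a full survey estimator.
          y = np.asarray(y, dtype=float)
          x = np.asarray(x, dtype=float)
          n = len(y)
          X = x.sum()
          R = y.sum() / X
          z = (y - R * x) / X                   # linearized (influence-type) values
          var_R = n * np.var(z, ddof=1)
          return R, var_R

      # Synthetic example data.
      rng = np.random.default_rng(3)
      x = rng.uniform(1.0, 5.0, 200)
      y = 2.0 * x + rng.normal(0.0, 0.5, 200)
      R, v = ratio_linearization_variance(y, x)
      print(R, np.sqrt(v))                      # ratio estimate and its standard error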

  11. Variance heterogeneity in Saccharomyces cerevisiae expression data: trans-regulation and epistasis.

    Science.gov (United States)

    Nelson, Ronald M; Pettersson, Mats E; Li, Xidan; Carlborg, Örjan

    2013-01-01

    Here, we describe the results from the first variance heterogeneity Genome Wide Association Study (VGWAS) on yeast expression data. Using this forward genetics approach, we show that the genetic regulation of gene-expression in the budding yeast, Saccharomyces cerevisiae, includes mechanisms that can lead to variance heterogeneity in the expression between genotypes. Additionally, we performed a mean effect association study (GWAS). Comparing the mean and variance heterogeneity analyses, we find that the mean expression level is under genetic regulation from a larger absolute number of loci but that a higher proportion of the variance controlling loci were trans-regulated. Both mean and variance regulating loci cluster in regulatory hotspots that affect a large number of phenotypes; a single variance-controlling locus, mapping close to DIA2, was found to be involved in more than 10% of the significant associations. It has been suggested in the literature that variance-heterogeneity between the genotypes might be due to genetic interactions. We therefore screened the multi-locus genotype-phenotype maps for several traits where multiple associations were found, for indications of epistasis. Several examples of two and three locus genetic interactions were found to involve variance-controlling loci, with reports from the literature corroborating the functional connections between the loci. By using a new analytical approach to re-analyze a powerful existing dataset, we are thus able to both provide novel insights to the genetic mechanisms involved in the regulation of gene-expression in budding yeast and experimentally validate epistasis as an important mechanism underlying genetic variance-heterogeneity between genotypes.

  12. Continuous-Time Mean-Variance Portfolio Selection under the CEV Process

    Directory of Open Access Journals (Sweden)

    Hui-qiang Ma

    2014-01-01

    We consider a continuous-time mean-variance portfolio selection model in which the stock price follows the constant elasticity of variance (CEV) process. The aim of this paper is to derive an optimal portfolio strategy and the efficient frontier. The mean-variance portfolio selection problem is formulated as a linearly constrained convex program. By employing the Lagrange multiplier method and stochastic optimal control theory, we obtain the optimal portfolio strategy and the mean-variance efficient frontier analytically. The results show that the mean-variance efficient frontier is still a parabola in the mean-variance plane, and the optimal strategies depend not only on the total wealth but also on the stock price. Moreover, some numerical examples are given to analyze the sensitivity of the efficient frontier with respect to the elasticity parameter and to illustrate the results presented in this paper. The numerical results show that the price of risk decreases as the elasticity coefficient increases.

  13. The Effect of Nonzero Autocorrelation Coefficients on the Distributions of Durbin-Watson Test Estimator: Three Autoregressive Models

    Directory of Open Access Journals (Sweden)

    Mei-Yu LEE

    2014-11-01

    This paper investigates the effect of nonzero autocorrelation coefficients on the sampling distributions of the Durbin-Watson test estimator in three time-series models that have different variance-covariance matrix assumptions, considered separately. We show that the expected values and variances of the Durbin-Watson test estimator are slightly different, but the skewness and kurtosis coefficients are considerably different, among the three models. The shapes of the four coefficients are similar between the Durbin-Watson model and our benchmark model, but not the same as those of the autoregressive model cut by one lagged period. Second, the large-sample case shows that the three models have the same expected values; however, the autoregressive model cut by one lagged period exhibits different shapes of the variance, skewness and kurtosis coefficients from the other two models. This implies that large samples lead to the same expected value, 2(1 − ρ0), whatever the variance-covariance matrix of the errors is assumed to be. Finally, comparing the two sample cases, the shape of each coefficient is almost the same; moreover, the autocorrelation coefficients are negatively related to the expected values, are inverted-U related to the variances, are cubic related to the skewness coefficients, and are U related to the kurtosis coefficients.
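
    For reference, the Durbin-Watson statistic itself is d = Σ(e_t − e_{t−1})²/Σe_t², and for large samples its expected value approaches 2(1 − ρ0), as noted above; the snippet below checks this on simulated AR(1) errors with an assumed autocorrelation coefficient of 0.5.

      import numpy as np

      def durbin_watson(residuals):
          # d = sum((e_t - e_{t-1})^2) / sum(e_t^2); values near 2(1 - rho) are expected.
          e = np.asarray(residuals, dtype=float)
          return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

      # Simulated AR(1) errors with an assumed autocorrelation coefficient rho = 0.5.
      rng = np.random.default_rng(4)
      rho, n = 0.5, 5000
      e = np.zeros(n)
      for t in range(1, n):
          e[t] = rho * e[t - 1] + rng.standard_normal()
      print(durbin_watson(e))   # close to 2*(1 - 0.5) = 1.0 for large samples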

  14. Assessment of histological differentiation in gastric cancers using whole-volume histogram analysis of apparent diffusion coefficient maps.

    Science.gov (United States)

    Zhang, Yujuan; Chen, Jun; Liu, Song; Shi, Hua; Guan, Wenxian; Ji, Changfeng; Guo, Tingting; Zheng, Huanhuan; Guan, Yue; Ge, Yun; He, Jian; Zhou, Zhengyang; Yang, Xiaofeng; Liu, Tian

    2017-02-01

    To investigate the efficacy of histogram analysis of the entire tumor volume in apparent diffusion coefficient (ADC) maps for differentiating between histological grades in gastric cancer. Seventy-eight patients with gastric cancer were enrolled in a retrospective 3.0T magnetic resonance imaging (MRI) study. ADC maps were obtained at two different b values (0 and 1000 s/mm²) for each patient. Tumors were delineated on each slice of the ADC maps, and a histogram for the entire tumor volume was subsequently generated. A series of histogram parameters (e.g., skew and kurtosis) were calculated and correlated with the histological grade of the surgical specimen. The diagnostic performance of each parameter for distinguishing poorly from moderately well-differentiated gastric cancers was assessed by using the area under the receiver operating characteristic curve (AUC). There were significant differences in the 5th, 10th, 25th, and 50th percentiles, skew, and kurtosis between poorly and well-differentiated gastric cancers. Histological grade also correlated with several histogram parameters, including the 10th percentile, skew, kurtosis, and max frequency; the correlation coefficients were 0.273, -0.361, -0.339, and -0.370, respectively. Among all the histogram parameters, the max frequency had the largest AUC value, which was 0.675. Histogram analysis of ADC maps on the basis of the entire tumor volume can be useful in differentiating between histological grades for gastric cancer. J. Magn. Reson. Imaging 2017;45:440-449. © 2016 International Society for Magnetic Resonance in Medicine.
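
    A whole-volume histogram analysis of this kind amounts to pooling all ADC voxels inside the tumor mask and computing percentiles, skewness, kurtosis and the relative frequency of the most populated bin; the sketch below assumes NumPy arrays for the ADC volume and mask, and the interpretation of "max frequency" as the peak histogram-bin fraction is an assumption.

      import numpy as np
      from scipy.stats import skew, kurtosis

      def adc_histogram_features(adc_volume, tumor_mask, percentiles=(5, 10, 25, 50, 75, 90, 95)):
          # Whole-volume histogram features of an ADC map restricted to a tumor mask.
          # adc_volume and tumor_mask are same-shaped 3-D arrays (mask is boolean/0-1).
          voxels = adc_volume[tumor_mask > 0].astype(float)
          hist, _ = np.histogram(voxels, bins=100)
          features = {f"p{p}": np.percentile(voxels, p) for p in percentiles}
          features["skew"] = skew(voxels)
          features["kurtosis"] = kurtosis(voxels)
          # "Max frequency" interpreted here as the fraction of voxels in the fullest bin.
          features["max_frequency"] = hist.max() / hist.sum()
          return features

      # Toy volume and mask, purely for illustration (ADC scale in mm^2/s is assumed).
      rng = np.random.default_rng(5)
      vol = rng.normal(1.2e-3, 2e-4, (32, 32, 16))
      mask = np.zeros(vol.shape, dtype=bool)
      mask[8:24, 8:24, 4:12] = True
      print(adc_histogram_features(vol, mask)["p10"])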

  15. Mapping Surface Water DOC in the Northern Gulf of Mexico Using CDOM Absorption Coefficients and Remote Sensing Imagery

    Science.gov (United States)

    Kelly, B.; Chelsky, A.; Bulygina, E.; Roberts, B. J.

    2017-12-01

    Remote sensing techniques have become valuable tools for researchers, providing the capability to measure and visualize important parameters without the need for time- or resource-intensive sampling trips. Relationships between dissolved organic carbon (DOC), colored dissolved organic matter (CDOM) and spectral data have been used to remotely sense DOC concentrations in riverine systems; however, this approach has not been applied to the northern Gulf of Mexico (GoM) and needs to be tested to determine how accurate these relationships are in riverine-dominated shelf systems. In April, July, and October 2017 we sampled surface water from 80+ sites over an area of 100,000 km² along the Louisiana-Texas shelf in the northern GoM. DOC concentrations were measured on filtered water samples using a Shimadzu TOC-VCSH analyzer and standard techniques. Additionally, DOC concentrations were estimated from CDOM absorption coefficients of filtered water samples on a UV-Vis spectrophotometer using a modification of the methods of Fichot and Benner (2011). These values were regressed against Landsat visible band spectral data for the same locations to establish a relationship between the spectral data and the CDOM absorption coefficients. This allowed us to spatially map CDOM absorption coefficients in the Gulf of Mexico using the Landsat spectral data in GIS. We then used a multiple linear regression model to derive DOC concentrations from the CDOM absorption coefficients and applied those to our map. This study provides an evaluation of the viability of scaling up CDOM absorption coefficient and remote-sensing derived estimates of DOC concentrations to the scale of the LA-TX shelf ecosystem.

  16. A mean-variance frontier in discrete and continuous time

    NARCIS (Netherlands)

    Bekker, Paul A.

    2004-01-01

    The paper presents a mean-variance frontier based on dynamic frictionless investment strategies in continuous time. The result applies to a finite number of risky assets whose price process is given by multivariate geometric Brownian motion with deterministically varying coefficients. The derivation

  17. Adjoint-based global variance reduction approach for reactor analysis problems

    International Nuclear Information System (INIS)

    Zhang, Qiong; Abdel-Khalik, Hany S.

    2011-01-01

    A new variant of a hybrid Monte Carlo-Deterministic approach for simulating particle transport problems is presented and compared to the SCALE FW-CADIS approach. The new approach, denoted by the Subspace approach, optimizes the selection of the weight windows for reactor analysis problems where detailed properties of all fuel assemblies are required everywhere in the reactor core. Like the FW-CADIS approach, the Subspace approach utilizes importance maps obtained from deterministic adjoint models to derive automatic weight-window biasing. In contrast to FW-CADIS, the Subspace approach identifies the correlations between weight window maps to minimize the computational time required for global variance reduction, i.e., when the solution is required everywhere in the phase space. The correlations are employed to reduce the number of maps required to achieve the same level of variance reduction that would be obtained with single-response maps. Numerical experiments, serving as proof of principle, are presented to compare the Subspace and FW-CADIS approaches in terms of the global reduction in standard deviation. (author)

  18. Linear-fitting-based similarity coefficient map for tissue dissimilarity analysis in T2*-w magnetic resonance imaging

    International Nuclear Information System (INIS)

    Yu Shao-De; Wu Shi-Bin; Xie Yao-Qin; Wang Hao-Yu; Wei Xin-Hua; Chen Xin; Pan Wan-Long; Hu Jiani

    2015-01-01

    Similarity coefficient mapping (SCM) aims to improve the morphological evaluation of T2*-weighted magnetic resonance imaging. However, how to interpret the generated SCM map is still an open question. Moreover, is it possible to extract tissue dissimilarity messages based on the theory behind SCM? The primary purpose of this paper is to address these two questions. First, the theory of SCM was interpreted from the perspective of linear fitting. Then, a term was embedded for tissue dissimilarity information. Finally, our method was validated with sixteen human brain image series from multi-echo T2*-weighted MRI. Generated maps were investigated in terms of signal-to-noise ratio (SNR) and perceived visual quality, and then interpreted from intra- and inter-tissue intensity. Experimental results show that both the perceptibility of anatomical structures and the tissue contrast are improved. More importantly, tissue similarity or dissimilarity can be quantified and cross-validated from pixel intensity analysis. This method benefits image enhancement, tissue classification, malformation detection and morphological evaluation. (paper)

  19. Markov switching mean-variance frontier dynamics: theory and international evidence

    OpenAIRE

    M. Guidolin; F. Ria

    2010-01-01

    It is well-known that regime switching models are able to capture the presence of rich non-linear patterns in the joint distribution of asset returns. After reviewing key concepts and technical issues related to specifying, estimating, and using multivariate Markov switching models in financial applications, in this paper we map the presence of regimes in means, variances, and covariances of asset returns into explicit dynamics of the Markowitz mean-variance frontier. In particular, we show b...

  20. Variance estimation in the analysis of microarray data

    KAUST Repository

    Wang, Yuedong

    2009-04-01

    Microarrays are one of the most widely used high throughput technologies. One of the main problems in the area is that conventional estimates of the variances that are required in the t-statistic and other statistics are unreliable owing to the small number of replications. Various methods have been proposed in the literature to overcome this lack of degrees of freedom problem. In this context, it is commonly observed that the variance increases proportionally with the intensity level, which has led many researchers to assume that the variance is a function of the mean. Here we concentrate on estimation of the variance as a function of an unknown mean in two models: the constant coefficient of variation model and the quadratic variance-mean model. Because the means are unknown and estimated with few degrees of freedom, naive methods that use the sample mean in place of the true mean are generally biased because of the errors-in-variables phenomenon. We propose three methods for overcoming this bias. The first two are variations on the theme of the so-called heteroscedastic simulation-extrapolation estimator, modified to estimate the variance function consistently. The third class of estimators is entirely different, being based on semiparametric information calculations. Simulations show the power of our methods and their lack of bias compared with the naive method that ignores the measurement error. The methodology is illustrated by using microarray data from leukaemia patients.

  1. Estimation of the simple correlation coefficient.

    Science.gov (United States)

    Shieh, Gwowen

    2010-11-01

    This article investigates some unfamiliar properties of the Pearson product-moment correlation coefficient for the estimation of simple correlation coefficient. Although Pearson's r is biased, except for limited situations, and the minimum variance unbiased estimator has been proposed in the literature, researchers routinely employ the sample correlation coefficient in their practical applications, because of its simplicity and popularity. In order to support such practice, this study examines the mean squared errors of r and several prominent formulas. The results reveal specific situations in which the sample correlation coefficient performs better than the unbiased and nearly unbiased estimators, facilitating recommendation of r as an effect size index for the strength of linear association between two variables. In addition, related issues of estimating the squared simple correlation coefficient are also considered.

  2. Implications of NGA for NEHRP site coefficients

    Science.gov (United States)

    Borcherdt, Roger D.

    2012-01-01

    Three proposals are provided to update tables 11.4-1 and 11.4-2 of Minimum Design Loads for Buildings and Other Structures (7-10), by the American Society of Civil Engineers (2010) (ASCE/SEI 7-10), with site coefficients implied directly by NGA (Next Generation Attenuation) ground motion prediction equations (GMPEs). Proposals include a recommendation to use straight-line interpolation to infer site coefficients at intermediate values of v̄s (average shear velocity). Site coefficients are recommended to ensure consistency with ASCE/SEI 7-10 MCER (Maximum Considered Earthquake) seismic-design maps and simplified site-specific design spectra procedures requiring site classes with associated tabulated site coefficients and a reference site class with unity site coefficients. Recommended site coefficients are confirmed by independent observations of average site amplification coefficients inferred with respect to an average ground condition consistent with that used for the MCER maps. The NGA coefficients recommended for consideration are implied directly by the NGA GMPEs and do not require introduction of additional models.

  3. Random effects coefficient of determination for mixed and meta-analysis models.

    Science.gov (United States)

    Demidenko, Eugene; Sargent, James; Onega, Tracy

    2012-01-01

    The key feature of a mixed model is the presence of random effects. We have developed a coefficient, called the random effects coefficient of determination, [Formula: see text], that estimates the proportion of the conditional variance of the dependent variable explained by random effects. This coefficient takes values from 0 to 1 and indicates how strong the random effects are. The difference from the earlier suggested fixed effects coefficient of determination is emphasized. If [Formula: see text] is close to 0, there is weak support for random effects in the model because the reduction of the variance of the dependent variable due to random effects is small; consequently, random effects may be ignored and the model simplifies to standard linear regression. The value of [Formula: see text] apart from 0 indicates the evidence of the variance reduction in support of the mixed model. If random effects coefficient of determination is close to 1 the variance of random effects is very large and random effects turn into free fixed effects-the model can be estimated using the dummy variable approach. We derive explicit formulas for [Formula: see text] in three special cases: the random intercept model, the growth curve model, and meta-analysis model. Theoretical results are illustrated with three mixed model examples: (1) travel time to the nearest cancer center for women with breast cancer in the U.S., (2) cumulative time watching alcohol related scenes in movies among young U.S. teens, as a risk factor for early drinking onset, and (3) the classic example of the meta-analysis model for combination of 13 studies on tuberculosis vaccine.

  4. CMB-S4 and the hemispherical variance anomaly

    Science.gov (United States)

    O'Dwyer, Márcio; Copi, Craig J.; Knox, Lloyd; Starkman, Glenn D.

    2017-09-01

    Cosmic microwave background (CMB) full-sky temperature data show a hemispherical asymmetry in power nearly aligned with the Ecliptic. In real space, this anomaly can be quantified by the temperature variance in the Northern and Southern Ecliptic hemispheres, with the Northern hemisphere displaying an anomalously low variance while the Southern hemisphere appears unremarkable [consistent with expectations from the best-fitting theory, Lambda Cold Dark Matter (ΛCDM)]. While this is a well-established result in temperature, the low signal-to-noise ratio in current polarization data prevents a similar comparison. This will change with a proposed ground-based CMB experiment, CMB-S4. With that in mind, we generate realizations of polarization maps constrained by the temperature data and predict the distribution of the hemispherical variance in polarization considering two different sky coverage scenarios possible in CMB-S4: full Ecliptic north coverage and just the portion of the North that can be observed from a ground-based telescope at the high Chilean Atacama plateau. We find that even in the set of realizations constrained by the temperature data, the low Northern hemisphere variance observed in temperature is not expected in polarization. Therefore, observing an anomalously low variance in polarization would make the hypothesis that the temperature anomaly is simply a statistical fluke more unlikely and thus increase the motivation for physical explanations. We show, within ΛCDM, how variance measurements in both sky coverage scenarios are related. We find that the variance makes for a good statistic in cases where the sky coverage is limited; however, full northern coverage is still preferable.

  5. Algebraic polynomials with random coefficients

    Directory of Open Access Journals (Sweden)

    K. Farahmand

    2002-01-01

    Full Text Available This paper provides an asymptotic value for the mathematical expected number of points of inflection of a random polynomial of the form $a_0(\omega)+a_1(\omega)\binom{n}{1}^{1/2}x+a_2(\omega)\binom{n}{2}^{1/2}x^2+\cdots+a_n(\omega)\binom{n}{n}^{1/2}x^n$ when n is large. The coefficients $\{a_j(\omega)\}_{j=0}^{n}$, $\omega\in\Omega$, are assumed to be a sequence of independent normally distributed random variables with means zero and variance one, each defined on a fixed probability space $(A,\Omega,\Pr)$. A special case of dependent coefficients is also studied.

  6. Bayesian Meta-Analysis of Coefficient Alpha

    Science.gov (United States)

    Brannick, Michael T.; Zhang, Nanhua

    2013-01-01

    The current paper describes and illustrates a Bayesian approach to the meta-analysis of coefficient alpha. Alpha is the most commonly used estimate of the reliability or consistency (freedom from measurement error) for educational and psychological measures. The conventional approach to meta-analysis uses inverse variance weights to combine…

  7. Comparison of field-measured radon diffusion coefficients with laboratory-measured coefficients

    International Nuclear Information System (INIS)

    Lepel, E.A.; Silker, W.B.; Thomas, V.W.; Kalkwarf, D.R.

    1983-04-01

    Experiments were conducted to compare radon diffusion coefficients determined for 0.1-m depths of soils by a steady-state method in the laboratory and diffusion coefficients evaluated from radon fluxes through several-fold greater depths of the same soils covering uranium-mill tailings. The coefficients referred to diffusion in the total pore volume of the soils and are equivalent to values for the quantity, D/P, in the Generic Environmental Impact Statement on Uranium Milling prepared by the US Nuclear Regulatory Commission. Two soils were tested: a well-graded sand and an inorganic clay of low plasticity. For the flux evaluations, radon was collected by adsorption on charcoal following passive diffusion from the soil surface and also from air recirculating through an aluminum tent over the soil surface. An analysis of variance in the flux evaluations showed no significant difference between these two collection methods. Radon diffusion coefficients evaluated from field data were statistically indistinguishable, at the 95% confidence level, from those measured in the laboratory; however, the low precision of the field data prevented a sensitive validation of the laboratory measurements. From the field data, the coefficients were calculated to be 0.03 ± 0.03 cm²/s for the sand cover and 0.0036 ± 0.0004 cm²/s for the clay cover. The low precision in the coefficients evaluated from field data was attributed to high variation in radon flux with time and surface location at the field site.

  8. Linkage disequilibrium and association mapping.

    Science.gov (United States)

    Weir, B S

    2008-01-01

    Linkage disequilibrium refers to the association between alleles at different loci. The standard definition applies to two alleles in the same gamete, and it can be regarded as the covariance of indicator variables for the states of those two alleles. The corresponding correlation coefficient ρ is the parameter that arises naturally in discussions of tests of association between markers and genetic diseases. A general treatment of association tests makes use of the additive and nonadditive components of variance for the disease gene. In almost all expressions that describe the behavior of association tests, additive variance components are modified by the squared correlation coefficient ρ² and the nonadditive variance components by ρ⁴, suggesting that nonadditive components have less influence than additive components on association tests.
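    A small illustration of the standard two-locus quantities referred to above, computed from gamete and allele frequencies; the function name and the example frequencies are made up for the sketch.

```python
# Sketch: linkage disequilibrium between two biallelic loci from gamete
# (haplotype) frequencies, following the standard definitions.
def ld_stats(p_ab, p_a, p_b):
    """p_ab: freq of gametes carrying A and B; p_a, p_b: allele frequencies."""
    d = p_ab - p_a * p_b                      # covariance of allele indicators
    denom = (p_a * (1 - p_a) * p_b * (1 - p_b)) ** 0.5
    rho = d / denom                           # correlation coefficient rho
    return d, rho, rho ** 2                   # rho**2 scales additive components

print(ld_stats(p_ab=0.30, p_a=0.50, p_b=0.40))   # D = 0.10, rho ~ 0.408, rho^2 ~ 0.167
```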

  9. Is fMRI "noise" really noise? Resting state nuisance regressors remove variance with network structure

    OpenAIRE

    Bright, Molly G.; Murphy, Kevin

    2015-01-01

    Noise correction is a critical step towards accurate mapping of resting state BOLD fMRI connectivity. Noise sources related to head motion or physiology are typically modelled by nuisance regressors, and a generalised linear model is applied to regress out the associated signal variance. In this study, we use independent component analysis (ICA) to characterise the data variance typically discarded in this pre-processing stage in a cohort of 12 healthy volunteers. The signal variance removed ...

  10. Fractal diffusion coefficient from dynamical zeta functions

    Energy Technology Data Exchange (ETDEWEB)

    Cristadoro, Giampaolo [Max Planck Institute for the Physics of Complex Systems, Noethnitzer Str. 38, D 01187 Dresden (Germany)

    2006-03-10

    Dynamical zeta functions provide a powerful method to analyse low-dimensional dynamical systems when the underlying symbolic dynamics is under control. On the other hand, even simple one-dimensional maps can show an intricate structure of the grammar rules that may lead to a non-smooth dependence of global observables on parameter changes. A paradigmatic example is the fractal diffusion coefficient arising in a simple piecewise linear one-dimensional map of the real line. Using the Baladi-Ruelle generalization of the Milnor-Thurston kneading determinant, we provide the exact dynamical zeta function for such a map and compute the diffusion coefficient from its smallest zero. (letter to the editor)

  11. Fractal diffusion coefficient from dynamical zeta functions

    International Nuclear Information System (INIS)

    Cristadoro, Giampaolo

    2006-01-01

    Dynamical zeta functions provide a powerful method to analyse low-dimensional dynamical systems when the underlying symbolic dynamics is under control. On the other hand, even simple one-dimensional maps can show an intricate structure of the grammar rules that may lead to a non-smooth dependence of global observables on parameter changes. A paradigmatic example is the fractal diffusion coefficient arising in a simple piecewise linear one-dimensional map of the real line. Using the Baladi-Ruelle generalization of the Milnor-Thurston kneading determinant, we provide the exact dynamical zeta function for such a map and compute the diffusion coefficient from its smallest zero. (letter to the editor)

  12. Analysis of a genetically structured variance heterogeneity model using the Box-Cox transformation

    DEFF Research Database (Denmark)

    Yang, Ye; Christensen, Ole Fredslund; Sorensen, Daniel

    2011-01-01

    Misleading results may arise due to skewness of the marginal distribution of the data. To investigate how the scale of measurement affects inferences, the genetically structured heterogeneous variance model is extended to accommodate the family of Box–Cox transformations. Litter size data in rabbits and pigs that had previously been analysed in the untransformed scale were reanalysed in a scale equal to the mode of the marginal posterior distribution of the Box–Cox parameter. In the rabbit data, the statistical evidence for a genetic component at the level of the environmental variance is considerably weaker than that resulting from an analysis in the original metric. In the pig data, the statistical evidence is stronger, but the coefficient of correlation between additive genetic effects affecting mean and variance changes sign, compared to the results in the untransformed scale. The study confirms that inferences on variances can be strongly affected by the presence of asymmetry in the distribution of data.

  13. Apparent Diffusion Coefficient Maps of Pediatric Mass Lesions with Free-Breathing Diffusion-Weighted Magnetic Resonance: Feasibility Study

    International Nuclear Information System (INIS)

    Olsen, Oe.E.; Sebire, N.J.

    2006-01-01

    Purpose: To assess the technical feasibility of apparent diffusion coefficient (ADC) mapping based on free-breathing diffusion-weighted magnetic resonance (DW-MR) outside the CNS in children. Material and Methods: Twelve children with mass lesions of varied histopathology were scanned with short-tau inversion recovery (STIR), contrast-enhanced T1-weighted (CE-T1W), and diffusion-weighted (b = 0, 500 and 1,000 s/mm²) sequences. ADC maps were calculated. Lesion-to-background signal intensity ratios were measured and compared between STIR/CE-T1W/ADC overall (Friedman test) and between viable embryonal tumors and other lesions (Kruskal-Wallis test). Results: ADC maps clearly depicted all lesions. Lesion-to-background signal intensity ratios of STIR (median 3.7), CE-T1W (median 1.4), and ADC (median 1.6) showed no overall difference (chi-square = 3.846; P = 0.146), and there was no difference between viable embryonal tumors and other lesions within STIR/CE-T1W/ADC (chi-square 1.118/0.669/<0.001; P = 0.290/0.414/1.000, respectively). Conclusion: ADC mapping is feasible in free-breathing imaging of pediatric mass lesions outside the CNS using standard clinical equipment. Keywords: Diffusion-weighted magnetic resonance imaging; infants and children; neoplasms

  14. WE-AB-202-12: Voxel-Wise Analysis of Apparent Diffusion Coefficient and Perfusion Maps in Multi-Parametric MRI of Prostate Cancer

    International Nuclear Information System (INIS)

    Engstroem, K; Casares-Magaz, O; Muren, L; Roervik, J; Andersen, E

    2016-01-01

    Purpose: Multi-parametric MRI (mp-MRI) is being introduced in radiotherapy (RT) of prostate cancer, including for tumour delineation in focal boosting strategies. We recently developed an image-based tumour control probability model, based on cell density distributions derived from apparent diffusion coefficient (ADC) maps. Beyond tumour volume and cell densities, tumour hypoxia is also an important determinant of RT response. Since tissue perfusion from mp-MRI has been related to hypoxia we have explored the patterns of ADC and perfusion maps, and the relations between them, inside and outside prostate index lesions. Methods: ADC and perfusion maps from 20 prostate cancer patients were used, with the prostate and index lesion delineated by a dedicated uro-radiologist. To reduce noise, the maps were averaged over a 3×3×3 voxel cube. Associations between different ADC and perfusion histogram parameters within the prostate, inside and outside the index lesion, were evaluated with the Pearson’s correlation coefficient. In the voxel-wise analysis, scatter plots of ADC vs perfusion were analysed for voxels in the prostate, inside and outside of the index lesion, again with the associations quantified with the Pearson’s correlation coefficient. Results: Overall ADC was lower inside the index lesion than in the normal prostate as opposed to ktrans that was higher inside the index lesion than outside. In the histogram analysis, the minimum ktrans was significantly correlated with the maximum ADC (Pearson=0.47; p=0.03). At the voxel level, 15 of the 20 cases had a statistically significant inverse correlation between ADC and perfusion inside the index lesion; ten of the cases had a Pearson < −0.4. Conclusion: The minimum value of ktrans across the tumour was correlated to the maximum ADC. However, on the voxel level, the ‘local’ ktrans in the index lesion is inversely (i.e. negatively) correlated to the ‘local’ ADC in most patients. Research agreement with

  15. A mean-variance frontier in discrete and continuous time

    OpenAIRE

    Bekker, Paul A.

    2004-01-01

    The paper presents a mean-variance frontier based on dynamic frictionless investment strategies in continuous time. The result applies to a finite number of risky assets whose price process is given by multivariate geometric Brownian motion with deterministically varying coefficients. The derivation is based on the solution for the frontier in discrete time. Using the same multiperiod framework as Li and Ng (2000), I provide an alternative derivation and an alternative formulation of the solu...

  16. Tests and Confidence Intervals for an Extended Variance Component Using the Modified Likelihood Ratio Statistic

    DEFF Research Database (Denmark)

    Christensen, Ole Fredslund; Frydenberg, Morten; Jensen, Jens Ledet

    2005-01-01

    The large deviation modified likelihood ratio statistic is studied for testing a variance component equal to a specified value. Formulas are presented in the general balanced case, whereas in the unbalanced case only the one-way random effects model is studied. Simulation studies are presented, showing that the normal approximation to the large deviation modified likelihood ratio statistic gives confidence intervals for variance components with coverage probabilities very close to the nominal confidence coefficient.

  17. Right on Target, or Is it? The Role of Distributional Shape in Variance Targeting

    Directory of Open Access Journals (Sweden)

    Stanislav Anatolyev

    2015-08-01

    Full Text Available Estimation of GARCH models can be simplified by augmenting quasi-maximum likelihood (QML) estimation with variance targeting, which reduces the degree of parameterization and facilitates estimation. We compare the two approaches and investigate, via simulations, how non-normality features of the return distribution affect the quality of estimation of the volatility equation and corresponding value-at-risk predictions. We find that most GARCH coefficients and associated predictions are more precisely estimated when no variance targeting is employed. Bias properties are exacerbated for a heavier-tailed distribution of standardized returns, while the distributional asymmetry has little or moderate impact, these phenomena tending to be more pronounced under variance targeting. Some effects further intensify if one uses ML based on a leptokurtic distribution in place of normal QML. The sample size also has a more favorable effect on estimation precision when no variance targeting is used. Thus, if computational costs are not prohibitive, variance targeting should probably be avoided.
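    The targeting idea can be sketched in a few lines: the GARCH(1,1) intercept is tied to the sample variance so that only the ARCH and GARCH coefficients are estimated by QML. This is a hand-rolled illustration under simple assumptions (Gaussian quasi-likelihood, a synthetic return series), not the estimator or any package API used in the study.

```python
# Sketch: Gaussian QML for a GARCH(1,1), with and without variance targeting.
# Under targeting, omega is tied to the sample variance: omega = s2*(1 - a - b),
# so only (a, b) are estimated. Hand-rolled illustration, not a library API.
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, r, target_var=None):
    if target_var is None:
        omega, a, b = params
    else:
        a, b = params
        omega = target_var * (1.0 - a - b)
    if omega <= 0 or a < 0 or b < 0 or a + b >= 1:
        return 1e10                                  # crude feasibility penalty
    n = r.size
    h = np.empty(n)
    h[0] = r.var()
    for t in range(1, n):
        h[t] = omega + a * r[t - 1] ** 2 + b * h[t - 1]
    return 0.5 * np.sum(np.log(h) + r ** 2 / h)

rng = np.random.default_rng(1)
r = rng.standard_t(df=6, size=2000) * 0.01           # stand-in return series

plain = minimize(neg_loglik, x0=[1e-6, 0.05, 0.90], args=(r,),
                 method="Nelder-Mead")
targeted = minimize(neg_loglik, x0=[0.05, 0.90], args=(r, r.var()),
                    method="Nelder-Mead")
print("no targeting  (omega, a, b):", plain.x)
print("targeting     (a, b):       ", targeted.x)
```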

  18. Crack diffusion coefficient - A candidate fracture toughness parameter for short fiber composites

    Science.gov (United States)

    Mull, M. A.; Chudnovsky, A.; Moet, A.

    1987-01-01

    In brittle matrix composites, crack propagation occurs along random trajectories reflecting the heterogeneous nature of the strength field. Considering the crack trajectory as a diffusive process, the 'crack diffusion coefficient' is introduced. From fatigue crack propagation experiments on a set of identical SEN polyester composite specimens, the variance of the crack tip position along the loading axis is found to be a linear function of the effective 'time'. The latter is taken as the effective crack length. The coefficient of proportionality between variance of the crack trajectory and the effective crack length defines the crack diffusion coefficient D which is found in the present study to be 0.165 mm. This parameter reflects the ability of the composite to deviate the crack from the energetically most efficient path and thus links fracture toughness to the microstructure.

  19. Genetic factors explain half of all variance in serum eosinophil cationic protein

    DEFF Research Database (Denmark)

    Elmose, Camilla; Sverrild, Asger; van der Sluis, Sophie

    2014-01-01

    with variation in serum ECP and to determine the relative proportion of the variation in ECP due to genetic and non-genetic factors, in an adult twin sample. METHODS: A sample of 575 twins, selected through a proband with self-reported asthma, had serum ECP, lung function, airway responsiveness to methacholine, exhaled nitric oxide, and skin test reactivity, measured. Linear regression analysis and variance component models were used to study factors associated with variation in ECP and the relative genetic influence on ECP levels. RESULTS: Sex (regression coefficient = -0.107, P ... was statistically non-significant (r = -0.11, P = 0.50). CONCLUSION: Around half of all variance in serum ECP is explained by genetic factors. Serum ECP is influenced by sex, BMI, and airway responsiveness. Serum ECP and airway responsiveness seem not to share genetic variance.

  20. Comparison of Absolute Apparent Diffusion Coefficient (ADC) Values in ADC Maps Generated Across Different Postprocessing Software: Reproducibility in Endometrial Carcinoma.

    Science.gov (United States)

    Ghosh, Adarsh; Singh, Tulika; Singla, Veenu; Bagga, Rashmi; Khandelwal, Niranjan

    2017-12-01

    Apparent diffusion coefficient (ADC) maps are usually generated by built-in software provided by the MRI scanner vendors; however, various open-source postprocessing software packages are available for image manipulation and parametric map generation. The purpose of this study is to establish the reproducibility of absolute ADC values obtained using different postprocessing software programs. DW images with three b values were obtained with a 1.5-T MRI scanner, and the trace images were obtained. ADC maps were automatically generated by the in-line software provided by the vendor during image generation and were also separately generated on postprocessing software. These ADC maps were compared on the basis of ROIs using paired t test, Bland-Altman plot, mountain plot, and Passing-Bablok regression plot. There was a statistically significant difference in the mean ADC values obtained from the different postprocessing software programs when the same baseline trace DW images were used for the ADC map generation. If ADC values are to be used as a quantitative cutoff for histologic characterization of tissues, the postprocessing algorithm must be standardized across software packages, especially in view of the implementation of vendor-neutral archiving.
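    For context, a minimal sketch of how an ADC map is commonly derived from trace DW images at several b-values with the mono-exponential model S(b) = S0·exp(-b·ADC); the log-linear fit and the clipping used here are illustrative assumptions, and choices at exactly this level are one reason different postprocessing packages can disagree.

```python
# Sketch: voxel-wise ADC map from diffusion-weighted images at several b-values,
# using the mono-exponential model S(b) = S0 * exp(-b * ADC) fitted as a
# log-linear least-squares problem.
import numpy as np

def adc_map(dwi, bvals, eps=1e-6):
    """dwi: array (..., n_b) of signals; bvals: 1-D array of b-values in s/mm^2."""
    b = np.asarray(bvals, dtype=float)
    logs = np.log(np.clip(dwi, eps, None))            # guard against zero signal
    # slope of log-signal vs b, per voxel: ADC = -d log(S) / d b
    b_mean = b.mean()
    num = ((b - b_mean) * (logs - logs.mean(axis=-1, keepdims=True))).sum(axis=-1)
    den = ((b - b_mean) ** 2).sum()
    return -num / den                                  # ADC in mm^2/s

dwi = np.stack([1000 * np.exp(-b * 0.9e-3) for b in (0, 500, 1000)], axis=-1)
print(adc_map(dwi[None, :], [0, 500, 1000]))           # ~0.9e-3 mm^2/s
```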

  1. Genetic control of residual variance of yearling weight in Nellore beef cattle.

    Science.gov (United States)

    Iung, L H S; Neves, H H R; Mulder, H A; Carvalheiro, R

    2017-04-01

    There is evidence for genetic variability in residual variance of livestock traits, which offers the potential for selection for increased uniformity of production. Different statistical approaches have been employed to study this topic; however, little is known about the concordance between them. The aim of our study was to investigate the genetic heterogeneity of residual variance on yearling weight (YW; 291.15 ± 46.67) in a Nellore beef cattle population; to compare the results of the statistical approaches, the two-step approach and the double hierarchical generalized linear model (DHGLM); and to evaluate the effectiveness of power transformation to accommodate scale differences. The comparison was based on genetic parameters, accuracy of EBV for residual variance, and cross-validation to assess predictive performance of both approaches. A total of 194,628 yearling weight records from 625 sires were used in the analysis. The results supported the hypothesis of genetic heterogeneity of residual variance on YW in Nellore beef cattle and the opportunity of selection, measured through the genetic coefficient of variation of residual variance (0.10 to 0.12 for the two-step approach and 0.17 for DHGLM, using an untransformed data set). However, low estimates of genetic variance associated with positive genetic correlations between mean and residual variance (about 0.20 for two-step and 0.76 for DHGLM for an untransformed data set) limit the genetic response to selection for uniformity of production while simultaneously increasing YW itself. Moreover, large sire families are needed to obtain accurate estimates of genetic merit for residual variance, as indicated by the low heritability estimates. Box-Cox transformation was able to decrease the dependence of the variance on the mean and decreased the estimates of genetic parameters for residual variance. The transformation reduced but did not eliminate all the genetic heterogeneity of residual variance, highlighting

  2. Coefficient of Variance as Quality Criterion for Evaluation of Advanced Hepatic Fibrosis Using 2D Shear-Wave Elastography.

    Science.gov (United States)

    Lim, Sanghyeok; Kim, Seung Hyun; Kim, Yongsoo; Cho, Young Seo; Kim, Tae Yeob; Jeong, Woo Kyoung; Sohn, Joo Hyun

    2018-02-01

    To compare the diagnostic performance for advanced hepatic fibrosis measured by 2D shear-wave elastography (SWE), using either the coefficient of variance (CV) or the interquartile range divided by the median value (IQR/M) as quality criteria. In this retrospective study, from January 2011 to December 2013, 96 patients, who underwent both liver stiffness measurement by 2D SWE and liver biopsy for hepatic fibrosis grading, were enrolled. The diagnostic performances of the CV and the IQR/M were analyzed using receiver operating characteristic curves with areas under the curves (AUCs) and were compared by Fisher's Z test, based on matching the cutoff points in an interactive dot diagram. All P values less than 0.05 were considered significant. When using the cutoff value IQR/M of 0.21, the matched cutoff point of CV was 20%. When a cutoff value of CV of 20% was used, the diagnostic performance for advanced hepatic fibrosis ( ≥ F3 grade) with CV of less than 20% was better than that in the group with CV greater than or equal to 20% (AUC 0.967 versus 0.786, z statistic = 2.23, P = .025), whereas when the matched cutoff value IQR/M of 0.21 showed no difference (AUC 0.918 versus 0.927, z statistic = -0.178, P = .859). The validity of liver stiffness measurements made by 2D SWE for assessing advanced hepatic fibrosis may be judged using CVs, and when the CV is less than 20% it can be considered "more reliable" than using IQR/M of less than 0.21. © 2017 by the American Institute of Ultrasound in Medicine.
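    The two quality criteria compared in the study are simple to compute from a set of repeated stiffness measurements; the following sketch applies the thresholds quoted in the abstract (CV < 20%, IQR/M < 0.21) to made-up values.

```python
# Sketch: the two quality criteria compared in the study, computed from a set of
# repeated 2D-SWE stiffness measurements (kPa). Thresholds follow the abstract:
# CV < 20% and IQR/M < 0.21 flag a "more reliable" examination.
import numpy as np

def swe_quality(measurements):
    m = np.asarray(measurements, dtype=float)
    cv = m.std(ddof=1) / m.mean()                       # coefficient of variation
    q75, q25 = np.percentile(m, [75, 25])
    iqr_over_median = (q75 - q25) / np.median(m)
    return {"CV": cv, "IQR/M": iqr_over_median,
            "reliable_by_CV": cv < 0.20,
            "reliable_by_IQR/M": iqr_over_median < 0.21}

print(swe_quality([7.8, 8.4, 9.1, 8.0, 8.8, 9.4, 8.2, 8.9, 9.0, 8.5]))
```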

  3. Towards molecular design using 2D-molecular contour maps obtained from PLS regression coefficients

    Science.gov (United States)

    Borges, Cleber N.; Barigye, Stephen J.; Freitas, Matheus P.

    2017-12-01

    The multivariate image analysis (MIA) descriptors used in quantitative structure-activity relationships are direct representations of chemical structures, as they are simply numerical decodings of the pixels forming the 2D chemical images. These descriptors have found great utility in the modeling of diverse properties of organic molecules. Given the multicollinearity and high dimensionality of the data matrices generated with the MIA-QSAR approach, modeling techniques that involve the projection of the data space onto orthogonal components, e.g. Partial Least Squares (PLS), have generally been used. However, the chemical interpretation of the PLS-based MIA-QSAR models, in terms of the structural moieties affecting the modeled bioactivity, has not been straightforward. This work describes the 2D-contour maps based on the PLS regression coefficients, as a means of assessing the relevance of single MIA predictors to the response variable, and thus allowing for the structural, electronic and physicochemical interpretation of the MIA-QSAR models. A sample study to demonstrate the utility of the 2D-contour maps to design novel drug-like molecules is performed using a dataset of some anti-HIV-1 2-amino-6-arylsulfonylbenzonitriles and derivatives, and the inferences obtained are consistent with other reports in the literature. In addition, the different schemes for encoding atomic properties in molecules are discussed and evaluated.

  4. Downside Variance Risk Premium

    OpenAIRE

    Feunou, Bruno; Jahan-Parvar, Mohammad; Okou, Cedric

    2015-01-01

    We propose a new decomposition of the variance risk premium in terms of upside and downside variance risk premia. The difference between upside and downside variance risk premia is a measure of skewness risk premium. We establish that the downside variance risk premium is the main component of the variance risk premium, and that the skewness risk premium is a priced factor with significant prediction power for aggregate excess returns. Our empirical investigation highlights the positive and s...

  5. On estimation of the noise variance in high-dimensional linear models

    OpenAIRE

    Golubev, Yuri; Krymova, Ekaterina

    2017-01-01

    We consider the problem of recovering the unknown noise variance in the linear regression model. To estimate the nuisance (a vector of regression coefficients) we use a family of spectral regularisers of the maximum likelihood estimator. The noise estimation is based on the adaptive normalisation of the squared error. We derive the upper bound for the concentration of the proposed method around the ideal estimator (the case of zero nuisance).

  6. Modeling the subfilter scalar variance for large eddy simulation in forced isotropic turbulence

    Science.gov (United States)

    Cheminet, Adam; Blanquart, Guillaume

    2011-11-01

    Static and dynamic models for the subfilter scalar variance in homogeneous isotropic turbulence are investigated using direct numerical simulations (DNS) of a linearly forced passive scalar field. First, we introduce a new scalar forcing technique conditioned only on the scalar field, which allows the fluctuating scalar field to reach a statistically stationary state. Statistical properties, including 2nd and 3rd statistical moments, spectra, and probability density functions of the scalar field have been analyzed. Using this technique, we performed constant density and variable density DNS of scalar mixing in isotropic turbulence. The results are used in an a priori study of scalar variance models. Emphasis is placed on further studying the dynamic model introduced by G. Balarac, H. Pitsch and V. Raman [Phys. Fluids 20, (2008)]. Scalar variance models based on Bedford and Yeo's expansion are accurate for small filter widths but errors arise in the inertial subrange. Results suggest that a constant coefficient computed from an assumed Kolmogorov spectrum is often sufficient to predict the subfilter scalar variance.

  7. Comparing confidence intervals for Goodman and Kruskal's gamma coefficient

    NARCIS (Netherlands)

    van der Ark, L.A.; van Aert, R.C.M.

    2015-01-01

    This study was motivated by the question which type of confidence interval (CI) one should use to summarize sample variance of Goodman and Kruskal's coefficient gamma. In a Monte-Carlo study, we investigated the coverage and computation time of the Goodman-Kruskal CI, the Cliff-consistent CI, the

  8. Analysis of a genetically structured variance heterogeneity model using the Box-Cox transformation.

    Science.gov (United States)

    Yang, Ye; Christensen, Ole F; Sorensen, Daniel

    2011-02-01

    Over recent years, statistical support for the presence of genetic factors operating at the level of the environmental variance has come from fitting a genetically structured heterogeneous variance model to field or experimental data in various species. Misleading results may arise due to skewness of the marginal distribution of the data. To investigate how the scale of measurement affects inferences, the genetically structured heterogeneous variance model is extended to accommodate the family of Box-Cox transformations. Litter size data in rabbits and pigs that had previously been analysed in the untransformed scale were reanalysed in a scale equal to the mode of the marginal posterior distribution of the Box-Cox parameter. In the rabbit data, the statistical evidence for a genetic component at the level of the environmental variance is considerably weaker than that resulting from an analysis in the original metric. In the pig data, the statistical evidence is stronger, but the coefficient of correlation between additive genetic effects affecting mean and variance changes sign, compared to the results in the untransformed scale. The study confirms that inferences on variances can be strongly affected by the presence of asymmetry in the distribution of data. We recommend that to avoid one important source of spurious inferences, future work seeking support for a genetic component acting on environmental variation using a parametric approach based on normality assumptions confirms that these are met.

  9. Temperature dependence of Kerr coefficient and quadratic polarized optical coefficient of a paraelectric Mn:Fe:KTN crystal

    Directory of Open Access Journals (Sweden)

    Qieni Lu

    2015-08-01

    Full Text Available We measure the temperature dependence of the Kerr coefficient and the quadratic polarized optical coefficient of a paraelectric Mn:Fe:KTN crystal simultaneously in this work, based on digital holographic interferometry (DHI). The spatial distribution of the field-induced refractive index change can also be visualized and estimated by numerically retrieving sequential phase maps of the Mn:Fe:KTN crystal from digital holograms recorded in different states. The refractive indices decrease with increasing temperature, and the quadratic polarized optical coefficient is insensitive to temperature. The experimental results suggest that the DHI method presented here is highly applicable both in visualizing the temporal and spatial behavior of the internal electric field and in accurately measuring the electro-optic coefficient of electro-optical media.

  10. A Whole-Tumor Histogram Analysis of Apparent Diffusion Coefficient Maps for Differentiating Thymic Carcinoma from Lymphoma.

    Science.gov (United States)

    Zhang, Wei; Zhou, Yue; Xu, Xiao-Quan; Kong, Ling-Yan; Xu, Hai; Yu, Tong-Fu; Shi, Hai-Bin; Feng, Qing

    2018-01-01

    To assess the performance of a whole-tumor histogram analysis of apparent diffusion coefficient (ADC) maps in differentiating thymic carcinoma from lymphoma, and to compare it with that of a commonly used hot-spot region-of-interest (ROI)-based ADC measurement. Diffusion weighted imaging data of 15 patients with thymic carcinoma and 13 patients with lymphoma were retrospectively collected and processed with a mono-exponential model. ADC measurements were performed using a histogram-based and a hot-spot-ROI-based approach. In the histogram-based approach, the following parameters were generated: mean ADC (ADCmean), median ADC (ADCmedian), 10th and 90th percentile of ADC (ADC10 and ADC90), kurtosis, and skewness. The difference in ADCs between thymic carcinoma and lymphoma was compared using a t test. Receiver operating characteristic analyses were conducted to determine and compare the differentiating performance of ADCs. Lymphoma demonstrated significantly lower ADCmean, ADCmedian, ADC10, ADC90, and hot-spot-ROI-based mean ADC than those found in thymic carcinoma (all p values < 0.05). A whole-tumor histogram analysis of ADC maps can improve the differentiating performance between thymic carcinoma and lymphoma.
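    A brief sketch of the histogram parameters named above, computed over all voxels of a tumour mask; the synthetic ADC values and the mask are placeholders, and scipy's sample skewness and kurtosis are used as stand-ins for whatever definitions the study employed.

```python
# Sketch: whole-tumor histogram parameters of an ADC map, computed from all
# voxels inside a tumour mask.
import numpy as np
from scipy import stats

def adc_histogram_features(adc_map, mask):
    vals = adc_map[mask > 0]
    return {
        "ADC_mean":   float(vals.mean()),
        "ADC_median": float(np.median(vals)),
        "ADC_10":     float(np.percentile(vals, 10)),
        "ADC_90":     float(np.percentile(vals, 90)),
        "skewness":   float(stats.skew(vals)),
        "kurtosis":   float(stats.kurtosis(vals)),
    }

rng = np.random.default_rng(2)
adc = rng.normal(1.2e-3, 0.2e-3, size=(64, 64))    # synthetic ADC values (mm^2/s)
mask = np.zeros((64, 64), dtype=int)
mask[20:40, 20:40] = 1                              # placeholder tumour mask
print(adc_histogram_features(adc, mask))
```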

  11. R package MVR for Joint Adaptive Mean-Variance Regularization and Variance Stabilization.

    Science.gov (United States)

    Dazard, Jean-Eudes; Xu, Hua; Rao, J Sunil

    2011-01-01

    We present an implementation in the R language for statistical computing of our recent non-parametric joint adaptive mean-variance regularization and variance stabilization procedure. The method is specifically suited for handling difficult problems posed by high-dimensional multivariate datasets (p ≫ n paradigm), such as in 'omics'-type data, among which are that the variance is often a function of the mean, variable-specific estimators of variances are not reliable, and test statistics have low power due to a lack of degrees of freedom. The implementation offers a complete set of features including: (i) normalization and/or variance stabilization function, (ii) computation of mean-variance-regularized t and F statistics, (iii) generation of diverse diagnostic plots, (iv) synthetic and real 'omics' test datasets, (v) computationally efficient implementation, using C interfacing, and an option for parallel computing, (vi) manual and documentation on how to set up a cluster. To make each feature as user-friendly as possible, only one subroutine per functionality is to be handled by the end-user. It is available as an R package, called MVR ('Mean-Variance Regularization'), downloadable from the CRAN.

  12. Estimation of noise-free variance to measure heterogeneity.

    Directory of Open Access Journals (Sweden)

    Tilo Winkler

    Full Text Available Variance is a statistical parameter used to characterize heterogeneity or variability in data sets. However, measurements commonly include noise, as random errors superimposed on the actual value, which may substantially increase the variance compared to a noise-free data set. Our aim was to develop and validate a method to estimate noise-free spatial heterogeneity of pulmonary perfusion using dynamic positron emission tomography (PET) scans. On theoretical grounds, we demonstrate a linear relationship between the total variance of a data set derived from averages of n multiple measurements, and the reciprocal of n. Using multiple measurements with varying n yields estimates of the linear relationship including the noise-free variance as the constant parameter. In PET images, n is proportional to the number of registered decay events, and the variance of the image is typically normalized by the square of its mean value, yielding a coefficient of variation squared (CV²). The method was evaluated with a Jaszczak phantom as reference spatial heterogeneity (CVr²) for comparison with our estimate of noise-free or 'true' heterogeneity (CVt²). We found that CVt² was only 5.4% higher than CVr². Additional evaluations were conducted on 38 PET scans of pulmonary perfusion using ¹³NN-saline injection. The mean CVt² was 0.10 (range: 0.03-0.30), while the mean CV² including noise was 0.24 (range: 0.10-0.59). CVt² was on average 41.5% of the CV² measured including noise (range: 17.8-71.2%). The reproducibility of CVt² was evaluated using three repeated PET scans from five subjects. Individual CVt² were within 16% of each subject's mean, and paired t-tests revealed no difference among the results from the three consecutive PET scans. In conclusion, our method provides reliable noise-free estimates of CVt² in PET scans, and may be useful for similar statistical problems in experimental data.
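    The core of the method, the linear relationship between total normalized variance and 1/n, can be illustrated with synthetic data: regressing the CV² of n-fold averaged measurements on 1/n gives the noise-free CVt² as the intercept. The numbers below are invented for the sketch.

```python
# Sketch: estimate the noise-free heterogeneity CV_t^2 as the intercept of the
# linear relationship between total CV^2 and 1/n, where n is the number of
# repeated measurements averaged together (synthetic data for illustration).
import numpy as np

rng = np.random.default_rng(3)
true_signal = rng.normal(100.0, 15.0, size=5000)      # spatial heterogeneity
cv_t2_true = true_signal.var() / true_signal.mean() ** 2

inv_n, cv2 = [], []
for n in (1, 2, 4, 8, 16):
    noise = rng.normal(0.0, 30.0, size=(n, true_signal.size)).mean(axis=0)
    noisy = true_signal + noise
    cv2.append(noisy.var() / noisy.mean() ** 2)        # total CV^2 at this n
    inv_n.append(1.0 / n)

slope, intercept = np.polyfit(inv_n, cv2, 1)           # CV^2 = CV_t^2 + k/n
print(f"true CV_t^2 = {cv_t2_true:.4f}, estimated intercept = {intercept:.4f}")
```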

  13. PET image reconstruction: mean, variance, and optimal minimax criterion

    International Nuclear Information System (INIS)

    Liu, Huafeng; Guo, Min; Gao, Fei; Shi, Pengcheng; Xue, Liying; Nie, Jing

    2015-01-01

    Given the noise nature of positron emission tomography (PET) measurements, it is critical to know the image quality and reliability as well as expected radioactivity map (mean image) for both qualitative interpretation and quantitative analysis. While existing efforts have often been devoted to providing only the reconstructed mean image, we present a unified framework for joint estimation of the mean and corresponding variance of the radioactivity map based on an efficient optimal min–max criterion. The proposed framework formulates the PET image reconstruction problem to be a transformation from system uncertainties to estimation errors, where the minimax criterion is adopted to minimize the estimation errors with possibly maximized system uncertainties. The estimation errors, in the form of a covariance matrix, express the measurement uncertainties in a complete way. The framework is then optimized by H∞-norm optimization and solved with the corresponding H∞ filter. Unlike conventional statistical reconstruction algorithms that rely on the statistical modeling methods of the measurement data or noise, the proposed joint estimation stands from the point of view of signal energies and can handle from imperfect statistical assumptions to even no a priori statistical assumptions. The performance and accuracy of reconstructed mean and variance images are validated using Monte Carlo simulations. Experiments on phantom scans with a small animal PET scanner and real patient scans are also conducted for assessment of clinical potential. (paper)

  14. Estimation of the mean of a univariate normal distribution when the variance is not known

    NARCIS (Netherlands)

    Danilov, Dmitri

    2005-01-01

    We consider the problem of estimating the first k coefficients in a regression equation with k+1 variables. For this problem with known variance of innovations, the neutral Laplace weighted-average least-squares estimator was introduced in Magnus (2002). We generalize this estimator to the case

  15. A new method of spatio-temporal topographic mapping by correlation coefficient of K-means cluster.

    Science.gov (United States)

    Li, Ling; Yao, Dezhong

    2007-01-01

    It would be of the utmost interest to map correlated sources in the working human brain by Event-Related Potentials (ERPs). This work develops a new method to map correlated neural sources based on the time courses of the scalp ERP waveforms. The ERP data are first classified by k-means cluster analysis, and then the Correlation Coefficients (CC) between the original data of each electrode channel and the time course of each cluster centroid are calculated and utilized as the mapping variable on the scalp surface. With a normalized 4-concentric-sphere head model with radius 1, the performance of the method is evaluated on simulated data. The CC between the four simulated sources (s1-s4) and the estimated cluster centroids (c1-c4), and the distances (Ds) between the scalp projection points of s1-s4 and those of c1-c4, are utilized as the evaluation indexes. Applied to four sources, two of them partially correlated (with maximum mutual CC = 0.4892), the CC (Ds) between s1-s4 and c1-c4 are larger (smaller) than 0.893 (0.108) for the noise levels tested. For the real data, the clusters are located at the left and right occipital and frontal areas. The estimated vectors of the contra-occipital area demonstrate that attention to the stimulus location produces increased amplitude of the P1 and N1 components over the contra-occipital scalp. The estimated vector in the frontal area displays two large processing negativity waves around 100 ms and 250 ms when subjects are attentive, and there is a small negative wave around 140 ms and a P300 when subjects are unattentive. The results of simulations and real Visual Evoked Potentials (VEPs) data demonstrate the validity of the method in mapping correlated sources. This method may be an objective, heuristic and important tool to study the properties of cerebral neural networks in cognitive and clinical neurosciences.
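    A compact sketch of the mapping scheme described above: k-means clustering of the channel time courses, followed by the correlation coefficient between each channel and each cluster centroid as the quantity mapped over the scalp. Channel count, number of time points, k, and the random data are arbitrary choices for illustration.

```python
# Sketch of the mapping idea: cluster the scalp channels' ERP time courses with
# k-means, then use the correlation coefficient between every channel and each
# cluster centroid as the value mapped over the scalp.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
n_channels, n_times, k = 32, 300, 4
erp = rng.normal(size=(n_channels, n_times))          # stand-in ERP waveforms

centroids = KMeans(n_clusters=k, n_init=10, random_state=0).fit(erp).cluster_centers_

# correlation coefficient (CC) between each channel and each centroid
cc = np.empty((n_channels, k))
for i, ch in enumerate(erp):
    for j, c in enumerate(centroids):
        cc[i, j] = np.corrcoef(ch, c)[0, 1]

print(cc.shape)          # (32, 4): one CC topography per cluster centroid
```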

  16. Optimal portfolio strategy with cross-correlation matrix composed by DCCA coefficients: Evidence from the Chinese stock market

    Science.gov (United States)

    Sun, Xuelian; Liu, Zixian

    2016-02-01

    In this paper, a new estimator of correlation matrix is proposed, which is composed of the detrended cross-correlation coefficients (DCCA coefficients), to improve portfolio optimization. In contrast to Pearson's correlation coefficients (PCC), DCCA coefficients acquired by the detrended cross-correlation analysis (DCCA) method can describe the nonlinear correlation between assets, and can be decomposed in different time scales. These properties of DCCA make it possible to improve the investment effect and more valuable to investigate the scale behaviors of portfolios. The minimum variance portfolio (MVP) model and the Mean-Variance (MV) model are used to evaluate the effectiveness of this improvement. Stability analysis shows the effect of two kinds of correlation matrices on the estimation error of portfolio weights. The observed scale behaviors are significant to risk management and could be used to optimize the portfolio selection.
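    A simplified sketch of the DCCA coefficient used to build the correlation matrix, ρ_DCCA(s) = F²_xy(s)/(F_x(s)·F_y(s)), with order-1 detrending in non-overlapping boxes; the box handling and the synthetic series are illustrative assumptions rather than the exact estimator of the paper.

```python
# Sketch: detrended cross-correlation (DCCA) coefficient at scale s, defined as
# rho_DCCA(s) = F2_xy(s) / (F_x(s) * F_y(s)), with linear detrending in
# non-overlapping boxes of the integrated profiles.
import numpy as np

def dcca_coefficient(x, y, s):
    xi = np.cumsum(x - np.mean(x))                     # integrated profiles
    yi = np.cumsum(y - np.mean(y))
    n_boxes = len(xi) // s
    f2x = f2y = f2xy = 0.0
    t = np.arange(s)
    for b in range(n_boxes):
        xs, ys = xi[b * s:(b + 1) * s], yi[b * s:(b + 1) * s]
        rx = xs - np.polyval(np.polyfit(t, xs, 1), t)  # residuals of linear trend
        ry = ys - np.polyval(np.polyfit(t, ys, 1), t)
        f2x += np.mean(rx ** 2)
        f2y += np.mean(ry ** 2)
        f2xy += np.mean(rx * ry)
    return f2xy / np.sqrt(f2x * f2y)

rng = np.random.default_rng(5)
common = rng.normal(size=2000)
a = common + rng.normal(size=2000)
b = common + rng.normal(size=2000)
print(round(dcca_coefficient(a, b, s=50), 3))          # positive, well below 1
```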

  17. Estimation of measurement variances

    International Nuclear Information System (INIS)

    Anon.

    1981-01-01

    In the previous two sessions, it was assumed that the measurement error variances were known quantities when the variances of the safeguards indices were calculated. These known quantities are actually estimates based on historical data and on data generated by the measurement program. Session 34 discusses how measurement error parameters are estimated for different situations. The various error types are considered. The purpose of the session is to enable participants to: (1) estimate systematic error variances from standard data; (2) estimate random error variances from replicate measurement data; (3) perform a simple analysis of variances to characterize the measurement error structure when biases vary over time.
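    As a small illustration of point (2), the random error variance can be estimated from duplicate measurements of the same items as half the mean squared difference; the numbers are invented.

```python
# Sketch: estimating a random error variance from replicate measurements, using
# the fact that for duplicate measurements of the same item the variance of the
# difference is twice the random error variance (assuming no relative bias).
import numpy as np

def random_error_variance(first, second):
    d = np.asarray(first, dtype=float) - np.asarray(second, dtype=float)
    return np.mean(d ** 2) / 2.0

m1 = [10.2, 9.8, 10.5, 10.0, 9.9, 10.3]                # first measurement per item
m2 = [10.0, 9.9, 10.7, 9.8, 10.1, 10.2]                # replicate of the same items
print(random_error_variance(m1, m2))
```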

  18. Simulation study on heterogeneous variance adjustment for observations with different measurement error variance

    DEFF Research Database (Denmark)

    Pitkänen, Timo; Mäntysaari, Esa A; Nielsen, Ulrik Sander

    2013-01-01

    The Nordic Holstein yield evaluation model describes all available milk, protein and fat test-day yields from Denmark, Finland and Sweden. In its current form all variance components are estimated from observations recorded under conventional milking systems. Also the model for heterogeneity of variance correction is developed for the same observations. As automated milking systems are becoming more popular the current evaluation model needs to be enhanced to account for the different measurement error variances of observations from automated milking systems. In this simulation study different models and different approaches to account for heterogeneous variance when observations have different measurement error variances were investigated. Based on the results we propose to upgrade the currently applied models and to calibrate the heterogeneous variance adjustment method to yield same genetic…

  19. Histogram analysis of apparent diffusion coefficient maps for differentiating primary CNS lymphomas from tumefactive demyelinating lesions.

    Science.gov (United States)

    Lu, Shan Shan; Kim, Sang Joon; Kim, Namkug; Kim, Ho Sung; Choi, Choong Gon; Lim, Young Min

    2015-04-01

    This study intended to investigate the usefulness of histogram analysis of apparent diffusion coefficient (ADC) maps for discriminating primary CNS lymphomas (PCNSLs), especially atypical PCNSLs, from tumefactive demyelinating lesions (TDLs). Forty-seven patients with PCNSLs and 18 with TDLs were enrolled in our study. Hyperintense lesions seen on T2-weighted images were defined as ROIs after ADC maps were registered to the corresponding T2-weighted image. ADC histograms were calculated from the ROIs containing the entire lesion on every section and on a voxel-by-voxel basis. The ADC histogram parameters were compared among all PCNSLs and TDLs as well as between the subgroup of atypical PCNSLs and TDLs. ROC curves were constructed to evaluate the diagnostic performance of the histogram parameters and to determine the optimum thresholds. The differences between the PCNSLs and TDLs were found in the minimum ADC values (ADCmin) and in the 5th and 10th percentiles (ADC5% and ADC10%) of the cumulative ADC histograms. However, no statistical significance was found in the mean ADC value or in the ADC value concerning the mode, kurtosis, and skewness. The ADCmin, ADC5%, and ADC10% were also lower in atypical PCNSLs than in TDLs. ADCmin was the best indicator for discriminating atypical PCNSLs from TDLs, with a threshold of 556 × 10⁻⁶ mm²/s (sensitivity, 81.3%; specificity, 88.9%).

  20. A more realistic estimate of the variances and systematic errors in spherical harmonic geomagnetic field models

    DEFF Research Database (Denmark)

    Lowes, F.J.; Olsen, Nils

    2004-01-01

    Most modern spherical harmonic geomagnetic models based on satellite data include estimates of the variances of the spherical harmonic coefficients of the model; these estimates are based on the geometry of the data and the fitting functions, and on the magnitude of the residuals. However...

  1. Comparing confidence intervals for Goodman and Kruskal’s gamma coefficient

    NARCIS (Netherlands)

    van der Ark, L.A.; van Aert, R.C.M.

    2015-01-01

    This study was motivated by the question which type of confidence interval (CI) one should use to summarize sample variance of Goodman and Kruskal's coefficient gamma. In a Monte-Carlo study, we investigated the coverage and computation time of the Goodman–Kruskal CI, the Cliff-consistent CI, the

  2. The summation of the matrix elements of Hamiltonian and transition operators. The variance of the emission spectrum

    International Nuclear Information System (INIS)

    Karaziya, R.I.; Rudzikajte, L.S.

    1988-01-01

    The general method to obtain the explicit expressions for sums of the matrix elements of Hamiltonian and transition operators has been extended. It can be used for determining the main characteristics of atomic spectra, such as the mean energy, the variance, the asymmetry coefficient, etc., as well as for the average quantities which describe the configuration mixing. By means of this method, the formula for the variance of the emission spectrum has been derived. It has been shown that this quantity of the emission spectrum can be expressed in terms of the variances of the energy spectra of the initial and final configurations and additional terms caused by the distribution of the intensity in the spectrum.

  3. Quantitative multi-parameter mapping of R1, PD*, MT and R2* at 3T: a multi-center validation

    Directory of Open Access Journals (Sweden)

    Nikolaus eWeiskopf

    2013-06-01

    Full Text Available Multi-center studies using magnetic resonance imaging facilitate studying small effect sizes, global population variance and rare diseases. The reliability and sensitivity of these multi-center studies crucially depend on the comparability of the data generated at different sites and time points. The level of inter-site comparability is still controversial for conventional anatomical T1-weighted MRI data. Quantitative multi-parameter mapping (MPM) was designed to provide MR parameter measures that are comparable across sites and time points, i.e., 1 mm high-resolution maps of the longitudinal relaxation rate (R1 = 1/T1), effective proton density (PD*), magnetization transfer saturation (MT) and effective transverse relaxation rate (R2* = 1/T2*). MPM was validated at 3T for use in multi-center studies by scanning five volunteers at three different sites. We determined the inter-site bias, inter-site and intra-site coefficient of variation (CoV) for typical morphometric measures (i.e., gray matter probability maps used in voxel-based morphometry) and the four quantitative parameters. The inter-site bias and CoV were smaller than 3.1% and 8%, respectively, except for the inter-site CoV of R2* (< 20%). The gray matter probability maps based on the MT parameter maps had a 14% higher inter-site reproducibility than maps based on conventional T1-weighted images. The low inter-site bias and variance in the parameters and derived gray matter probability maps confirm the high comparability of the quantitative maps across sites and time points. The reliability, short acquisition time, high resolution and the detailed insights into the brain microstructure provided by MPM make it an efficient tool for multi-center imaging studies.

  4. A COSMIC VARIANCE COOKBOOK

    International Nuclear Information System (INIS)

    Moster, Benjamin P.; Rix, Hans-Walter; Somerville, Rachel S.; Newman, Jeffrey A.

    2011-01-01

    Deep pencil beam surveys ( 2 ) are of fundamental importance for studying the high-redshift universe. However, inferences about galaxy population properties (e.g., the abundance of objects) are in practice limited by 'cosmic variance'. This is the uncertainty in observational estimates of the number density of galaxies arising from the underlying large-scale density fluctuations. This source of uncertainty can be significant, especially for surveys which cover only small areas and for massive high-redshift galaxies. Cosmic variance for a given galaxy population can be determined using predictions from cold dark matter theory and the galaxy bias. In this paper, we provide tools for experiment design and interpretation. For a given survey geometry, we present the cosmic variance of dark matter as a function of mean redshift z̄ and redshift bin size Δz. Using a halo occupation model to predict galaxy clustering, we derive the galaxy bias as a function of mean redshift for galaxy samples of a given stellar mass range. In the linear regime, the cosmic variance of these galaxy samples is the product of the galaxy bias and the dark matter cosmic variance. We present a simple recipe using a fitting function to compute cosmic variance as a function of the angular dimensions of the field, z̄, Δz, and stellar mass m*. We also provide tabulated values and a software tool. The accuracy of the resulting cosmic variance estimates (δσv/σv) is shown to be better than 20%. We find that for GOODS at z̄ = 2 and with Δz = 0.5, the relative cosmic variance of galaxies with m* > 10¹¹ M⊙ is ∼38%, while it is ∼27% for GEMS and ∼12% for COSMOS. For galaxies of m* ∼ 10¹⁰ M⊙, the relative cosmic variance is ∼19% for GOODS, ∼13% for GEMS, and ∼6% for COSMOS. This implies that cosmic variance is a significant source of uncertainty at z̄ = 2 for small fields and massive galaxies, while for larger fields and intermediate mass galaxies, cosmic

  5. Analyzing thematic maps and mapping for accuracy

    Science.gov (United States)

    Rosenfield, G.H.

    1982-01-01

    Two problems which exist while attempting to test the accuracy of thematic maps and mapping are: (1) evaluating the accuracy of thematic content, and (2) evaluating the effects of the variables on thematic mapping. Statistical analysis techniques are applicable to both these problems and include techniques for sampling the data and determining their accuracy. In addition, techniques for hypothesis testing, or inferential statistics, are used when comparing the effects of variables. A comprehensive and valid accuracy test of a classification project, such as thematic mapping from remotely sensed data, includes the following components of statistical analysis: (1) sample design, including the sample distribution, sample size, size of the sample unit, and sampling procedure; and (2) accuracy estimation, including estimation of the variance and confidence limits. Careful consideration must be given to the minimum sample size necessary to validate the accuracy of a given classification category. The results of an accuracy test are presented in a contingency table sometimes called a classification error matrix. Usually the rows represent the interpretation, and the columns represent the verification. The diagonal elements represent the correct classifications. The remaining elements of the rows represent errors by commission, and the remaining elements of the columns represent the errors of omission. For tests of hypothesis that compare variables, the general practice has been to use only the diagonal elements from several related classification error matrices. These data are arranged in the form of another contingency table. The columns of the table represent the different variables being compared, such as different scales of mapping. The rows represent the blocking characteristics, such as the various categories of classification. The values in the cells of the tables might be the counts of correct classification or the binomial proportions of these counts divided by
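    A short sketch of the quantities described above, computed from a made-up 3×3 classification error matrix with interpretation in rows and verification in columns.

```python
# Sketch: overall accuracy and per-class errors of commission/omission from a
# classification error (confusion) matrix. The counts are made up.
import numpy as np

error_matrix = np.array([[48,  3,  2],    # rows: interpreted class
                         [ 5, 40,  4],    # cols: verified (reference) class
                         [ 2,  6, 43]])

correct = np.trace(error_matrix)
total = error_matrix.sum()
overall_accuracy = correct / total
commission = 1 - np.diag(error_matrix) / error_matrix.sum(axis=1)   # row errors
omission = 1 - np.diag(error_matrix) / error_matrix.sum(axis=0)     # column errors

print(f"overall accuracy: {overall_accuracy:.2%}")
print("errors of commission per class:", np.round(commission, 3))
print("errors of omission per class:  ", np.round(omission, 3))
```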

  6. Feasibility of similarity coefficient map for improving morphological evaluation of T2* weighted MRI for renal cancer

    International Nuclear Information System (INIS)

    Wang Hao-Yu; Bao Shang-Lian; Jiani Hu; Meng Li; Haacke, E. M.; Xie Yao-Qin; Chen Jie; Amy Yu; Wei Xin-Hua; Dai Yong-Ming

    2013-01-01

    The purpose of this paper is to investigate the feasibility of using a similarity coefficient map (SCM) in improving the morphological evaluation of T2*-weighted (T2*W) magnetic resonance imaging (MRI) for renal cancer. Simulation studies and in vivo 12-echo T2*W experiments for renal cancers were performed for this purpose. The results of the first simulation study suggest that an SCM can reveal small structures which are hard to distinguish from the background tissue in T2*W images and the corresponding T2* map. The capability of improving the morphological evaluation is likely due to the improvement in the signal-to-noise ratio (SNR) and the carrier-to-noise ratio (CNR) by using the SCM technique. Compared with T2*W images, an SCM can improve the SNR by a factor ranging from 1.87 to 2.47. Compared with T2* maps, an SCM can improve the SNR by a factor ranging from 3.85 to 33.31. Compared with T2*W images, an SCM can improve the CNR by a factor ranging from 2.09 to 2.43. Compared with T2* maps, an SCM can improve the CNR by a factor ranging from 1.94 to 8.14. For a given noise level, the improvements of the SNR and the CNR depend mainly on the original SNRs and CNRs in T2*W images, respectively. In vivo experiments confirmed the results of the first simulation study. The results of the second simulation study suggest that when more echoes are used to generate the SCM, higher SNRs and CNRs can be achieved in SCMs. In conclusion, an SCM can provide improved morphological evaluation of T2*W MR images for renal cancer by unveiling fine structures which are ambiguous or invisible in the corresponding T2*W MR images and T2* maps. Furthermore, in practical applications, for a fixed total sampling time, one should increase the number of echoes as much as possible to achieve SCMs with better SNRs and CNRs.

  7. LETTER TO THE EDITOR: Fractal diffusion coefficient from dynamical zeta functions

    Science.gov (United States)

    Cristadoro, Giampaolo

    2006-03-01

    Dynamical zeta functions provide a powerful method to analyse low-dimensional dynamical systems when the underlying symbolic dynamics is under control. On the other hand, even simple one-dimensional maps can show an intricate structure of the grammar rules that may lead to a non-smooth dependence of global observables on parameter changes. A paradigmatic example is the fractal diffusion coefficient arising in a simple piecewise linear one-dimensional map of the real line. Using the Baladi-Ruelle generalization of the Milnor-Thurston kneading determinant, we provide the exact dynamical zeta function for such a map and compute the diffusion coefficient from its smallest zero.

  8. MCNP variance reduction overview

    International Nuclear Information System (INIS)

    Hendricks, J.S.; Booth, T.E.

    1985-01-01

    The MCNP code is rich in variance reduction features. Standard variance reduction methods found in most Monte Carlo codes are available as well as a number of methods unique to MCNP. We discuss the variance reduction features presently in MCNP as well as new ones under study for possible inclusion in future versions of the code

  9. Spectral Ambiguity of Allan Variance

    Science.gov (United States)

    Greenhall, C. A.

    1996-01-01

    We study the extent to which knowledge of Allan variance and other finite-difference variances determines the spectrum of a random process. The variance of first differences is known to determine the spectrum. We show that, in general, the Allan variance does not. A complete description of the ambiguity is given.

  10. Mapping grain boundary heterogeneity at the nanoscale in a positive temperature coefficient of resistivity ceramic

    Science.gov (United States)

    Holsgrove, Kristina M.; Kepaptsoglou, Demie M.; Douglas, Alan M.; Ramasse, Quentin M.; Prestat, Eric; Haigh, Sarah J.; Ward, Michael B.; Kumar, Amit; Gregg, J. Marty; Arredondo, Miryam

    2017-06-01

    Despite being of wide commercial use in devices, the orders of magnitude increase in resistance that can be seen in some semiconducting BaTiO3-based ceramics, on heating through the Curie temperature (TC), is far from well understood. Current understanding of the behavior hinges on the role of grain boundary resistance that can be modified by polarization discontinuities which develop in the ferroelectric state. However, direct nanoscale resistance mapping to verify this model has rarely been attempted, and the potential approach to engineer polarization states at the grain boundaries, that could lead to optimized positive temperature coefficient (PTC) behavior, is strongly underdeveloped. Here we present direct visualization and nanoscale mapping in a commercially optimized BaTiO3-PbTiO3-CaTiO3 PTC ceramic using Kelvin probe force microscopy, which shows that, even in the low resistance ferroelectric state, the potential drop at grain boundaries is significantly greater than in grain interiors. Aberration-corrected scanning transmission electron microscopy and electron energy loss spectroscopy reveal new evidence of Pb-rich grain boundaries symptomatic of a higher net polarization normal to the grain boundaries compared to the purer grain interiors. These results validate the critical link between optimized PTC performance and higher local polarization at grain boundaries in this specific ceramic system and suggest a novel route towards engineering devices where an interface layer of higher spontaneous polarization could lead to enhanced PTC functionality.

  11. Mapping grain boundary heterogeneity at the nanoscale in a positive temperature coefficient of resistivity ceramic

    Directory of Open Access Journals (Sweden)

    Kristina M. Holsgrove

    2017-06-01

    Full Text Available Despite being of wide commercial use in devices, the orders of magnitude increase in resistance that can be seen in some semiconducting BaTiO3-based ceramics, on heating through the Curie temperature (TC), is far from well understood. Current understanding of the behavior hinges on the role of grain boundary resistance that can be modified by polarization discontinuities which develop in the ferroelectric state. However, direct nanoscale resistance mapping to verify this model has rarely been attempted, and the potential approach to engineer polarization states at the grain boundaries, that could lead to optimized positive temperature coefficient (PTC) behavior, is strongly underdeveloped. Here we present direct visualization and nanoscale mapping in a commercially optimized BaTiO3–PbTiO3–CaTiO3 PTC ceramic using Kelvin probe force microscopy, which shows that, even in the low resistance ferroelectric state, the potential drop at grain boundaries is significantly greater than in grain interiors. Aberration-corrected scanning transmission electron microscopy and electron energy loss spectroscopy reveal new evidence of Pb-rich grain boundaries symptomatic of a higher net polarization normal to the grain boundaries compared to the purer grain interiors. These results validate the critical link between optimized PTC performance and higher local polarization at grain boundaries in this specific ceramic system and suggest a novel route towards engineering devices where an interface layer of higher spontaneous polarization could lead to enhanced PTC functionality.

  12. Utility of histogram analysis of apparent diffusion coefficient maps obtained using 3.0T MRI for distinguishing uterine carcinosarcoma from endometrial carcinoma.

    Science.gov (United States)

    Takahashi, Masahiro; Kozawa, Eito; Tanisaka, Megumi; Hasegawa, Kousei; Yasuda, Masanori; Sakai, Fumikazu

    2016-06-01

    We explored the role of histogram analysis of apparent diffusion coefficient (ADC) maps for discriminating uterine carcinosarcoma and endometrial carcinoma. We retrospectively evaluated findings in 13 patients with uterine carcinosarcoma and 50 patients with endometrial carcinoma who underwent diffusion-weighted imaging (b = 0, 500, 1000 s/mm²) at 3T with acquisition of corresponding ADC maps. We derived histogram data from regions of interest drawn on all slices of the ADC maps in which tumor was visualized, excluding areas of necrosis and hemorrhage in the tumor. We used the Mann-Whitney test to evaluate the capacity of histogram parameters (mean ADC value, 5th to 95th percentiles, skewness, kurtosis) to discriminate uterine carcinosarcoma and endometrial carcinoma and analyzed the receiver operating characteristic (ROC) curve to determine the optimum threshold value for each parameter and its corresponding sensitivity and specificity. Carcinosarcomas demonstrated significantly higher mean values of ADC, 95th, 90th, 75th, 50th, and 25th percentiles and kurtosis than endometrial carcinomas (P < ...). Histogram analysis of ADC maps might be helpful for discriminating uterine carcinosarcomas and endometrial carcinomas. J. Magn. Reson. Imaging 2016;43:1301-1307. © 2015 Wiley Periodicals, Inc.
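
    A minimal sketch of the histogram parameters listed in the abstract (mean, percentiles, skewness, kurtosis), computed from ADC values pooled over a tumour region of interest. The input array is hypothetical, and SciPy's default conventions for skewness and (excess) kurtosis are assumed.

```python
import numpy as np
from scipy import stats

def adc_histogram_features(adc_values):
    """adc_values: 1-D array of ADC values pooled from all ROI voxels,
    with necrotic/haemorrhagic areas already excluded."""
    v = np.asarray(adc_values, dtype=float)
    return {
        "mean": v.mean(),
        "p5":  np.percentile(v, 5),
        "p25": np.percentile(v, 25),
        "p50": np.percentile(v, 50),
        "p75": np.percentile(v, 75),
        "p90": np.percentile(v, 90),
        "p95": np.percentile(v, 95),
        "skewness": stats.skew(v),
        "kurtosis": stats.kurtosis(v),   # excess kurtosis by default
    }

# Example with synthetic ADC values (x10^-3 mm^2/s)
rng = np.random.default_rng(0)
print(adc_histogram_features(rng.normal(1.1, 0.2, size=5000)))
```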

  13. Is fMRI "noise" really noise? Resting state nuisance regressors remove variance with network structure.

    Science.gov (United States)

    Bright, Molly G; Murphy, Kevin

    2015-07-01

    Noise correction is a critical step towards accurate mapping of resting state BOLD fMRI connectivity. Noise sources related to head motion or physiology are typically modelled by nuisance regressors, and a generalised linear model is applied to regress out the associated signal variance. In this study, we use independent component analysis (ICA) to characterise the data variance typically discarded in this pre-processing stage in a cohort of 12 healthy volunteers. The signal variance removed by 24, 12, 6, or only 3 head motion parameters demonstrated network structure typically associated with functional connectivity, and certain networks were discernable in the variance extracted by as few as 2 physiologic regressors. Simulated nuisance regressors, unrelated to the true data noise, also removed variance with network structure, indicating that any group of regressors that randomly sample variance may remove highly structured "signal" as well as "noise." Furthermore, to support this we demonstrate that random sampling of the original data variance continues to exhibit robust network structure, even when as few as 10% of the original volumes are considered. Finally, we examine the diminishing returns of increasing the number of nuisance regressors used in pre-processing, showing that excessive use of motion regressors may do little better than chance in removing variance within a functional network. It remains an open challenge to understand the balance between the benefits and confounds of noise correction using nuisance regressors. Copyright © 2015. Published by Elsevier Inc.

  14. Non-linear Bayesian update of PCE coefficients

    KAUST Repository

    Litvinenko, Alexander

    2014-01-06

    Given: a physical system modeled by a PDE or ODE with uncertain coefficient q(ω), and a measurement operator Y(u(q), q), where u(q, ω) is the uncertain solution. Aim: to identify q(ω). The mapping from parameters to observations is usually not invertible, hence this inverse identification problem is generally ill-posed. To identify q(ω) we derived a non-linear Bayesian update from the variational problem associated with conditional expectation. To reduce the cost of the Bayesian update we offer a functional approximation, e.g. polynomial chaos expansion (PCE). New: We apply the Bayesian update to the PCE coefficients of the random coefficient q(ω) (not to the probability density function of q).

  15. Non-linear Bayesian update of PCE coefficients

    KAUST Repository

    Litvinenko, Alexander; Matthies, Hermann G.; Pojonk, Oliver; Rosic, Bojana V.; Zander, Elmar

    2014-01-01

    Given: a physical system modeled by a PDE or ODE with uncertain coefficient q(ω), and a measurement operator Y(u(q), q), where u(q, ω) is the uncertain solution. Aim: to identify q(ω). The mapping from parameters to observations is usually not invertible, hence this inverse identification problem is generally ill-posed. To identify q(ω) we derived a non-linear Bayesian update from the variational problem associated with conditional expectation. To reduce the cost of the Bayesian update we offer a functional approximation, e.g. polynomial chaos expansion (PCE). New: We apply the Bayesian update to the PCE coefficients of the random coefficient q(ω) (not to the probability density function of q).

  16. Correlation of spatial climate/weather maps and the advantages of using the Mahalanobis metric in predictions

    Science.gov (United States)

    Stephenson, D. B.

    1997-10-01

    The skill in predicting spatially varying weather/climate maps depends on the definition of the measure of similarity between the maps. Under the justifiable approximation that the anomaly maps are distributed multinormally, it is shown analytically that the choice of weighting metric, used in defining the anomaly correlation between spatial maps, can change the resulting probability distribution of the correlation coefficient. The estimate of the numbers of degrees of freedom based on the variance of the correlation distribution can vary from unity up to the number of grid points depending on the choice of weighting metric. The (pseudo-) inverse of the sample covariance matrix acts as a special choice for the metric in that it gives a correlation distribution which has minimal kurtosis and maximum dimension. Minimal kurtosis suggests that the average predictive skill might be improved due to the rarer occurrence of troublesome outlier patterns far from the mean state. Maximum dimension has a disadvantage for analogue prediction schemes in that it gives the minimum number of analogue states. This metric also has an advantage in that it allows one to powerfully test the null hypothesis of multinormality by examining the second and third moments of the correlation coefficient which were introduced by Mardia as invariant measures of multivariate kurtosis and skewness. For these reasons, it is suggested that this metric could be usefully employed in the prediction of weather/climate and in fingerprinting anthropogenic climate change. The ideas are illustrated using the bivariate example of the observed monthly mean sea-level pressures at Darwin and Tahiti from 1866 to 1995.
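
    As a hedged sketch of the metric-dependent anomaly correlation discussed above, the snippet below computes the correlation between two anomaly maps under an arbitrary weighting metric, using the (pseudo-)inverse of the sample covariance matrix as the special choice highlighted in the abstract. The anomaly data are synthetic.

```python
import numpy as np

def weighted_anomaly_correlation(x, y, W):
    """Correlation between anomaly vectors x and y under weighting metric W."""
    return (x @ W @ y) / np.sqrt((x @ W @ x) * (y @ W @ y))

rng = np.random.default_rng(0)
anomalies = rng.standard_normal((200, 10))       # 200 historical maps, 10 grid points
S = np.cov(anomalies, rowvar=False)              # sample covariance between grid points
W_mahalanobis = np.linalg.pinv(S)                # (pseudo-)inverse covariance metric

x, y = anomalies[0], anomalies[1]
r_identity = weighted_anomaly_correlation(x, y, np.eye(10))
r_mahal = weighted_anomaly_correlation(x, y, W_mahalanobis)
print(f"identity metric: {r_identity:.3f}, Mahalanobis metric: {r_mahal:.3f}")
```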

  17. Reduced Variance using ADVANTG in Monte Carlo Calculations of Dose Coefficients to Stylized Phantoms

    Science.gov (United States)

    Hiller, Mauritius; Bellamy, Michael; Eckerman, Keith; Hertel, Nolan

    2017-09-01

    The estimation of dose coefficients of external radiation sources to the organs in phantoms becomes increasingly difficult for lower photon source energies. This study focuses on the estimation for photon emitters around the phantom. The computer time needed to calculate a result within a certain precision can be lowered by several orders of magnitude using ADVANTG compared to a standard run. ADVANTG, which employs the DENOVO adjoint calculation package, enables the user to create a fully populated set of weight windows and source biasing instructions for an MCNP calculation.

  18. A special covariance structure for random coefficient models with both between and within covariates

    International Nuclear Information System (INIS)

    Riedel, K.S.

    1990-07-01

    We review random coefficient (RC) models in linear regression and propose a bias correction to the maximum likelihood (ML) estimator. Asymptotic expansions of the ML equations are given when the between-individual variance is much larger or smaller than the variance from within-individual fluctuations. The standard model assumes all but one covariate varies within each individual (we denote the within covariates by the vector χ1). We consider random coefficient models where some of the covariates do not vary in any single individual (we denote the between covariates by the vector χ0). The regression coefficients, vector βk, can only be estimated in the subspace Xk of X. Thus the number of individuals necessary to estimate vector β and the covariance matrix Δ of vector β increases significantly in the presence of more than one between covariate. When the number of individuals is sufficient to estimate vector β but not the entire matrix Δ, additional assumptions must be imposed on the structure of Δ. A simple reduced model is that the between component of vector β is fixed and only the within component varies randomly. This model fails because it is not invariant under linear coordinate transformations and it can significantly overestimate the variance of new observations. We propose a covariance structure for Δ without these difficulties by first projecting the within covariates onto the space perpendicular to the between covariates. (orig.)

  19. Least squares estimation in a simple random coefficient autoregressive model

    DEFF Research Database (Denmark)

    Johansen, S; Lange, T

    2013-01-01

    The question we discuss is whether a simple random coefficient autoregressive model with infinite variance can create the long swings, or persistence, which are observed in many macroeconomic variables. The model is defined by $y_t = s_t \rho y_{t-1} + \varepsilon_t$, $t = 1, \dots, n$, where $s_t$ is an i.i.d. binary variable with p… we prove the curious result that [formula not rendered]. The proof applies the notion of a tail index of sums of positive random variables with infinite variance to find the order of magnitude of [formulas not rendered] and hence the limit of [formula not rendered]…

  20. Is fMRI “noise” really noise? Resting state nuisance regressors remove variance with network structure

    Science.gov (United States)

    Bright, Molly G.; Murphy, Kevin

    2015-01-01

    Noise correction is a critical step towards accurate mapping of resting state BOLD fMRI connectivity. Noise sources related to head motion or physiology are typically modelled by nuisance regressors, and a generalised linear model is applied to regress out the associated signal variance. In this study, we use independent component analysis (ICA) to characterise the data variance typically discarded in this pre-processing stage in a cohort of 12 healthy volunteers. The signal variance removed by 24, 12, 6, or only 3 head motion parameters demonstrated network structure typically associated with functional connectivity, and certain networks were discernable in the variance extracted by as few as 2 physiologic regressors. Simulated nuisance regressors, unrelated to the true data noise, also removed variance with network structure, indicating that any group of regressors that randomly sample variance may remove highly structured “signal” as well as “noise.” Furthermore, to support this we demonstrate that random sampling of the original data variance continues to exhibit robust network structure, even when as few as 10% of the original volumes are considered. Finally, we examine the diminishing returns of increasing the number of nuisance regressors used in pre-processing, showing that excessive use of motion regressors may do little better than chance in removing variance within a functional network. It remains an open challenge to understand the balance between the benefits and confounds of noise correction using nuisance regressors. PMID:25862264

  1. The coefficient of determination R2 and intra-class correlation coefficient from generalized linear mixed-effects models revisited and expanded.

    Science.gov (United States)

    Nakagawa, Shinichi; Johnson, Paul C D; Schielzeth, Holger

    2017-09-01

    The coefficient of determination R2 quantifies the proportion of variance explained by a statistical model and is an important summary statistic of biological interest. However, estimating R2 for generalized linear mixed models (GLMMs) remains challenging. We have previously introduced a version of R2 (R2GLMM) for Poisson and binomial GLMMs, but not for other distributional families. Similarly, we earlier discussed how to estimate intra-class correlation coefficients (ICCs) using Poisson and binomial GLMMs. In this paper, we generalize our methods to all other non-Gaussian distributions, in particular to negative binomial and gamma distributions that are commonly used for modelling biological data. While expanding our approach, we highlight two useful concepts for biologists, Jensen's inequality and the delta method, both of which help us in understanding the properties of GLMMs. Jensen's inequality has important implications for biologically meaningful interpretation of GLMMs, whereas the delta method allows a general derivation of variance associated with non-Gaussian distributions. We also discuss some special considerations for binomial GLMMs with binary or proportion data. We illustrate the implementation of our extension by worked examples from the field of ecology and evolution in the R environment. However, our method can be used across disciplines and regardless of statistical environments. © 2017 The Author(s).

  2. Methodology update for determination of the erosion coefficient (Z)

    Directory of Open Access Journals (Sweden)

    Tošić Radislav

    2012-01-01

    Full Text Available Research on and mapping of the intensity of mechanical water erosion, begun with the empirical methodology of S. Gavrilović in the mid-twentieth century, has continued with varying intensity to the present day. Many decades of work on these issues pointed to some shortcomings of the existing methodology, and thus to the need for its innovation. In this sense, R. Lazarević made certain adjustments to the empirical methodology of S. Gavrilović by changing the tables for determination of the coefficients Φ, X and Y, that is, the tables for determining the mean erosion coefficient (Z). The main objective of this paper is to update the existing methodology for determining the erosion coefficient (Z) within the empirical methodology of S. Gavrilović and the amendments made by R. Lazarević (1985), while better adjusting it to information technologies and the needs of modern society. The proposed procedure, that is, the model to determine the erosion coefficient (Z) in this paper, is the result of ten years of scientific research and project work on mapping the intensity of mechanical water erosion and modelling it with various erosion models in the Republic of Srpska and Serbia. By analyzing the correlation between results obtained by regression models and results obtained during the mapping of erosion on the territory of the Republic of Srpska, a high degree of correlation (R² = 0.9963) was established, which is essentially a good assessment of the proposed models.

  3. A zero-variance-based scheme for variance reduction in Monte Carlo criticality

    Energy Technology Data Exchange (ETDEWEB)

    Christoforou, S.; Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands)

    2006-07-01

    A zero-variance scheme is derived and proven theoretically for criticality cases, and a simplified transport model is used for numerical demonstration. It is shown in practice that by appropriate biasing of the transition and collision kernels, a significant reduction in variance can be achieved. This is done using the adjoint forms of the emission and collision densities, obtained from a deterministic calculation, according to the zero-variance scheme. By using an appropriate algorithm, the figure of merit of the simulation increases by up to a factor of 50, with the possibility of an even larger improvement. In addition, it is shown that the biasing speeds up the convergence of the initial source distribution. (authors)

  4. A zero-variance-based scheme for variance reduction in Monte Carlo criticality

    International Nuclear Information System (INIS)

    Christoforou, S.; Hoogenboom, J. E.

    2006-01-01

    A zero-variance scheme is derived and proven theoretically for criticality cases, and a simplified transport model is used for numerical demonstration. It is shown in practice that by appropriate biasing of the transition and collision kernels, a significant reduction in variance can be achieved. This is done using the adjoint forms of the emission and collision densities, obtained from a deterministic calculation, according to the zero-variance scheme. By using an appropriate algorithm, the figure of merit of the simulation increases by up to a factor of 50, with the possibility of an even larger improvement. In addition, it is shown that the biasing speeds up the convergence of the initial source distribution. (authors)

  5. Stochasticity in the Josephson map

    International Nuclear Information System (INIS)

    Nomura, Y.; Ichikawa, Y.H.; Filippov, A.T.

    1996-04-01

    The Josephson map describes the nonlinear dynamics of systems characterized by the standard map with a uniform external bias superposed. The intricate structures of the phase space portrait of the Josephson map are examined on the basis of the tangent map associated with the Josephson map. Numerical observation of the stochastic diffusion in the Josephson map is examined in comparison with the renormalized diffusion coefficient calculated by the method of characteristic function. The global stochasticity of the Josephson map occurs at far smaller values of the stochastic parameter than in the case of the standard map. (author)

  6. A combinatorial interpretation of the $κ^{\\star}_{g}(n)$ coefficients

    DEFF Research Database (Denmark)

    Li, Thomas Jiaxian; M. Reidys, Christian

    2014-01-01

    Studying the virtual Euler characteristic of the moduli space of curves, Harer and Zagier compute the generating function $C_g(z)$ of unicellular maps of genus $g$. They furthermore identify coefficients, $\kappa^{\star}_{g}(n)$, which fully determine the series $C_g(z)$. The main result of this paper is a combinatorial interpretation of $\kappa^{\star}_{g}(n)$. We show that these enumerate a class of unicellular maps, which correspond $1$-to-$2^{2g}$ to a specific type of trees, referred to as O-trees. We furthermore prove a two term recursion for $\kappa^{\star}_{g}(n)$ and that for any fixed...

  7. Genetic variance in micro-environmental sensitivity for milk and milk quality in Walloon Holstein cattle.

    Science.gov (United States)

    Vandenplas, J; Bastin, C; Gengler, N; Mulder, H A

    2013-09-01

    Animals that are robust to environmental changes are desirable in the current dairy industry. Genetic differences in micro-environmental sensitivity can be studied through heterogeneity of residual variance between animals. However, residual variance between animals is usually assumed to be homogeneous in traditional genetic evaluations. The aim of this study was to investigate genetic heterogeneity of residual variance by estimating variance components in residual variance for milk yield, somatic cell score, contents in milk (g/dL) of 2 groups of milk fatty acids (i.e., saturated and unsaturated fatty acids), and the content in milk of one individual fatty acid (i.e., oleic acid, C18:1 cis-9), for first-parity Holstein cows in the Walloon Region of Belgium. A total of 146,027 test-day records from 26,887 cows in 747 herds were available. All cows had at least 3 records and a known sire. These sires had at least 10 cows with records and each herd × test-day had at least 5 cows. The 5 traits were analyzed separately based on fixed lactation curve and random regression test-day models for the mean. Estimation of variance components was performed by iteratively running an expectation-maximization REML algorithm through the implementation of double hierarchical generalized linear models. Based on fixed lactation curve test-day mean models, heritability for residual variances ranged between 1.01×10⁻³ and 4.17×10⁻³ for all traits. The genetic standard deviation in residual variance (i.e., approximately the genetic coefficient of variation of residual variance) ranged between 0.12 and 0.17. Therefore, some genetic variance in micro-environmental sensitivity existed in the Walloon Holstein dairy cattle for the 5 studied traits. The standard deviations due to herd × test-day and permanent environment in residual variance ranged between 0.36 and 0.45 for the herd × test-day effect and between 0.55 and 0.97 for the permanent environmental effect. Therefore, nongenetic effects also

  8. Regional income inequality model based on Theil index decomposition and weighted variance coefficient

    Science.gov (United States)

    Sitepu, H. R.; Darnius, O.; Tambunan, W. N.

    2018-03-01

    Regional income inequality is an important issue in the study of the economic development of a region. Rapid economic development may not be in accordance with people’s per capita income. Methods for measuring regional income inequality have been suggested by many experts. This research used the Theil index and the weighted variance coefficient to measure regional income inequality. The decomposition of regional income into workforce productivity and workforce participation, based on the Theil index, can be presented as a linear relation. Using sectoral income values and workforce rates for each sector j, the workforce productivity imbalance can be decomposed into between-sector and intra-sector components. Next, the weighted variation coefficient is defined for the income and productivity of the workforce. From the square of the weighted variation coefficient, the decomposition of the regional income imbalance shows how much each component contributes to the regional imbalance, which in this research was analyzed for nine sectors of economic activity.
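
    To make the two inequality measures concrete, here is a brief sketch, with made-up regional data, of the Theil index and a population-weighted coefficient of variation of per capita income; the exact sectoral decomposition used in the paper is not reproduced.

```python
import numpy as np

income = np.array([120.0, 80.0, 60.0, 40.0])     # hypothetical regional incomes
population = np.array([1.0, 2.0, 1.5, 0.5])      # hypothetical populations (millions)

income_share = income / income.sum()
pop_share = population / population.sum()

# Theil index: sum over regions of income share * log(income share / population share)
theil = np.sum(income_share * np.log(income_share / pop_share))

# Population-weighted coefficient of variation of per capita income
per_capita = income / population
mean_pc = np.sum(pop_share * per_capita)
weighted_var = np.sum(pop_share * (per_capita - mean_pc) ** 2)
weighted_cv = np.sqrt(weighted_var) / mean_pc

print(f"Theil index: {theil:.4f}, weighted CV: {weighted_cv:.4f}")
```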

  9. Joint Adaptive Mean-Variance Regularization and Variance Stabilization of High Dimensional Data.

    Science.gov (United States)

    Dazard, Jean-Eudes; Rao, J Sunil

    2012-07-01

    The paper addresses a common problem in the analysis of high-dimensional high-throughput "omics" data, which is parameter estimation across multiple variables in a set of data where the number of variables is much larger than the sample size. Among the problems posed by this type of data are that variable-specific estimators of variances are not reliable and variable-wise test statistics have low power, both due to a lack of degrees of freedom. In addition, it has been observed in this type of data that the variance increases as a function of the mean. We introduce a non-parametric adaptive regularization procedure that is innovative in that: (i) it employs a novel "similarity statistic"-based clustering technique to generate local-pooled or regularized shrinkage estimators of population parameters, (ii) the regularization is done jointly on population moments, benefiting from C. Stein's result on inadmissibility, which implies that the usual sample variance estimator is improved by a shrinkage estimator using information contained in the sample mean. From these joint regularized shrinkage estimators, we derived regularized t-like statistics and show in simulation studies that they offer more statistical power in hypothesis testing than their standard sample counterparts, or regular common value-shrinkage estimators, or when the information contained in the sample mean is simply ignored. Finally, we show that these estimators feature interesting properties of variance stabilization and normalization that can be used for preprocessing high-dimensional multivariate data. The method is available as an R package, called 'MVR' ('Mean-Variance Regularization'), downloadable from the CRAN website.

  10. Portfolio optimization using median-variance approach

    Science.gov (United States)

    Wan Mohd, Wan Rosanisah; Mohamad, Daud; Mohamed, Zulkifli

    2013-04-01

    Optimization models have been applied in many decision-making problems, particularly in portfolio selection. Since the introduction of Markowitz's theory of portfolio selection, various approaches based on mathematical programming have been introduced, such as mean-variance, mean-absolute deviation, mean-variance-skewness and conditional value-at-risk (CVaR), mainly to maximize return and minimize risk. However, most of the approaches assume that the distribution of data is normal, and this is not generally true. As an alternative, in this paper, we employ the median-variance approach to improve the portfolio optimization. This approach caters for both normal and non-normal distributions of data. With this representation, we analyze and compare the rate of return and risk between the mean-variance and the median-variance based portfolios, which consist of 30 stocks from Bursa Malaysia. The results in this study show that the median-variance approach is capable of producing a lower risk for each return earned compared to the mean-variance approach.

  11. Efficient Cardinality/Mean-Variance Portfolios

    OpenAIRE

    Brito, R. Pedro; Vicente, Luís Nunes

    2014-01-01

    International audience; We propose a novel approach to handle cardinality in portfolio selection, by means of a biobjective cardinality/mean-variance problem, allowing the investor to analyze the efficient tradeoff between return-risk and number of active positions. Recent progress in multiobjective optimization without derivatives allow us to robustly compute (in-sample) the whole cardinality/mean-variance efficient frontier, for a variety of data sets and mean-variance models. Our results s...

  12. Modeling Attitude Variance in Small UAS’s for Acoustic Signature Simplification Using Experimental Design in a Hardware-in-the-Loop Simulation

    Science.gov (United States)

    2015-03-26

    response. Additionally, choosing correlated levels for multiple factors results in multicollinearity, which can cause problems such as model misspecification or large variances and covariances for the regression coefficients. A good way to avoid multicollinearity is to use orthogonal, factorial

  13. Use of variance techniques to measure dry air-surface exchange rates

    Science.gov (United States)

    Wesely, M. L.

    1988-07-01

    The variances of fluctuations of scalar quantities can be measured and interpreted to yield indirect estimates of their vertical fluxes in the atmospheric surface layer. Strong correlations among scalar fluctuations indicate a similarity of transfer mechanisms, which is utilized in some of the variance techniques. The ratios of the standard deviations of two scalar quantities, for example, can be used to estimate the flux of one if the flux of the other is measured, without knowledge of atmospheric stability. This is akin to a modified Bowen ratio approach. Other methods such as the normalized standard-deviation technique and the correlation-coefficient technique can be utilized effectively if atmospheric stability is evaluated and certain semi-empirical functions are known. In these cases, iterative calculations involving measured variances of fluctuations of temperature and vertical wind velocity can be used in place of direct flux measurements. For a chemical sensor whose output is contaminated by non-atmospheric noise, covariances with fluctuations of scalar quantities measured with a very good signal-to-noise ratio can be used to extract the needed standard deviation. Field measurements have shown that many of these approaches are successful for gases such as ozone and sulfur dioxide, as well as for temperature and water vapor, and could be extended to other trace substances. In humid areas, it appears that water vapor fluctuations often have a higher degree of correlation to fluctuations of other trace gases than do temperature fluctuations; this makes water vapor a more reliable companion or “reference” scalar. These techniques provide some reliable research approaches but, for routine or operational measurement, they are limited by the need for fast-response sensors. Also, all variance approaches require some independent means to estimate the direction of the flux.
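
    A minimal sketch of the ratio-of-standard-deviations (modified Bowen ratio) idea described above: the flux of a trace gas is estimated from the measured flux of a reference scalar, scaled by the ratio of the standard deviations of their fluctuations. The data below are synthetic and the sign of the flux is assumed known.

```python
import numpy as np

def variance_ratio_flux(ref_flux, ref_fluct, trace_fluct, flux_sign=+1):
    """Estimate a trace-gas flux from a measured reference-scalar flux and the
    ratio of standard deviations of the two scalars' fluctuations."""
    return flux_sign * ref_flux * np.std(trace_fluct) / np.std(ref_fluct)

rng = np.random.default_rng(1)
q_fluct = rng.standard_normal(6000) * 0.15                     # water vapour fluctuations
c_fluct = 0.4 * q_fluct + rng.standard_normal(6000) * 0.02     # correlated trace-gas fluctuations
q_flux = 0.12                                                  # measured water vapour flux (example units)

c_flux = variance_ratio_flux(q_flux, q_fluct, c_fluct)
print(f"estimated trace-gas flux: {c_flux:.4f}")
```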

  14. Approximation errors during variance propagation

    International Nuclear Information System (INIS)

    Dinsmore, Stephen

    1986-01-01

    Risk and reliability analyses are often performed by constructing and quantifying large fault trees. The inputs to these models are component failure events whose probabilities of occurring are best represented as random variables. This paper examines the errors inherent in two approximation techniques used to calculate the top event's variance from the inputs' variances. Two sample fault trees are evaluated and several three-dimensional plots illustrating the magnitude of the error over a wide range of input means and variances are given
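
    As a hedged illustration of where such approximation error comes from, the snippet below compares first-order (Taylor) propagation of variance through a two-input AND gate (top event probability equal to the product of two independent input probabilities) with the exact variance of a product of independent random variables; the input means and variances are arbitrary examples, not taken from the paper.

```python
import numpy as np

# Independent basic-event probabilities treated as random variables
mean_x, var_x = 1e-3, (4e-4) ** 2
mean_y, var_y = 2e-3, (8e-4) ** 2

# First-order (Taylor) propagation for the top event T = X * Y
var_first_order = mean_y**2 * var_x + mean_x**2 * var_y

# Exact variance of a product of independent random variables
var_exact = var_x * var_y + mean_y**2 * var_x + mean_x**2 * var_y

rel_error = (var_exact - var_first_order) / var_exact
print(f"first-order: {var_first_order:.3e}, exact: {var_exact:.3e}, "
      f"relative error: {rel_error:.2%}")
```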

  15. Variability of apparently homogeneous soilscapes in São Paulo state, Brazil: II. quality of soil maps

    Directory of Open Access Journals (Sweden)

    M. van Den Berg

    2000-06-01

    Full Text Available The quality of semi-detailed (scale 1:100,000) soil maps and the utility of a taxonomically based legend were assessed by studying 33 apparently homogeneous fields with strongly weathered soils in two regions in São Paulo State: Araras and Assis. An independent data set of 395 auger sites was used to determine the purity of soil mapping units and analysis of variance within and between mapping units and soil classification units. Twenty-three soil profiles were studied in detail. The studied soil maps have a high purity for some legend criteria, such as B horizon type (> 90%) and soil texture class (> 80%). The purity for the "trophic character" (eutrophic, dystrophic, allic) was only 55% in Assis. It was 88% in Araras, where many soil units had been mapped as associations. In both regions, the base status of clay-textured soils was generally better than suggested by the maps. Analysis of variance showed that mapping was successful for "durable" soil characteristics such as clay content (> 80% of variance explained) and cation exchange capacity (≥ 50% of variance explained) of the 0-20 and 60-80 cm layers. For soil characteristics that are easily modified by management, such as base saturation of the 0-20 cm layer, the maps explained very little of the variance. […] (a) […] > 100 m; (b) taking advantage of correlations between easily measured soil characteristics and chemical soil properties; and (c) unbending the link between legend criteria and a taxonomic system. The maps are well suited to obtain an impression of land suitability for high-input farming. Additional field work and data on former land use/management are necessary for the evaluation of chemical properties of surface horizons.

  16. Application of Remote Sensing in Geological Mapping, Case Study al Maghrabah Area - Hajjah Region, Yemen

    Science.gov (United States)

    Al-Nahmi, F.; Saddiqi, O.; Hilali, A.; Rhinane, H.; Baidder, L.; El arabi, H.; Khanbari, K.

    2017-11-01

    Remote sensing technology plays an important role today in geological survey, mapping, analysis and interpretation, providing a unique opportunity to investigate the geological characteristics of remote areas of the earth's surface without the need to gain access to an area on the ground. The aim of this study is to produce a geological map of the study area. The data used are Sentinel-2 imagery. The processing methods used in this study are as follows. The Optimum Index Factor (OIF) is a statistical value that can be used to select the optimum combination of three bands in a satellite image; it is based on the total variance within bands and the correlation coefficients between bands. Independent component analysis (ICA, components 3, 4, 6) is a statistical and computational technique for revealing hidden factors that underlie sets of random variables, measurements, or signals. The Minimum Noise Fraction (MNF, components 1, 2, 3) is used to determine the inherent dimensionality of image data, to segregate noise in the data, and to reduce the computational requirements for subsequent processing. The Optimum Index Factor is a good method for choosing the best bands for lithological mapping; ICA and MNF are also practical ways to extract structural geology maps. The results in this paper indicate that the studied area can be divided into four main geological units: basement rocks (meta-volcanics, meta-sediments), sedimentary rocks, intrusive rocks, and volcanic rocks. The method used in this study offers great potential for lithological mapping using Sentinel-2 imagery; the results were compared with existing geologic maps, were superior, and could be used to update the existing maps.
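
    A brief sketch of the Optimum Index Factor for ranking three-band combinations, under the commonly used definition (sum of band standard deviations divided by the sum of absolute pairwise correlations); the band arrays below are synthetic placeholders, not actual Sentinel-2 data.

```python
import numpy as np
from itertools import combinations

def optimum_index_factor(bands):
    """bands: three 2-D arrays of equal shape.
    OIF = sum of band standard deviations / sum of |pairwise correlations|."""
    flat = [b.ravel() for b in bands]
    std_sum = sum(b.std() for b in flat)
    corr_sum = sum(abs(np.corrcoef(a, b)[0, 1]) for a, b in combinations(flat, 2))
    return std_sum / corr_sum

# Example with synthetic stand-ins for six image bands
rng = np.random.default_rng(0)
image = rng.random((6, 100, 100))
best = max(combinations(range(6), 3),
           key=lambda idx: optimum_index_factor([image[i] for i in idx]))
print("best band triplet:", best)
```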

  17. Evolution of Genetic Variance during Adaptive Radiation.

    Science.gov (United States)

    Walter, Greg M; Aguirre, J David; Blows, Mark W; Ortiz-Barrientos, Daniel

    2018-04-01

    Genetic correlations between traits can concentrate genetic variance into fewer phenotypic dimensions that can bias evolutionary trajectories along the axis of greatest genetic variance and away from optimal phenotypes, constraining the rate of evolution. If genetic correlations limit adaptation, rapid adaptive divergence between multiple contrasting environments may be difficult. However, if natural selection increases the frequency of rare alleles after colonization of new environments, an increase in genetic variance in the direction of selection can accelerate adaptive divergence. Here, we explored adaptive divergence of an Australian native wildflower by examining the alignment between divergence in phenotype mean and divergence in genetic variance among four contrasting ecotypes. We found divergence in mean multivariate phenotype along two major axes represented by different combinations of plant architecture and leaf traits. Ecotypes also showed divergence in the level of genetic variance in individual traits and the multivariate distribution of genetic variance among traits. Divergence in multivariate phenotypic mean aligned with divergence in genetic variance, with much of the divergence in phenotype among ecotypes associated with changes in trait combinations containing substantial levels of genetic variance. Overall, our results suggest that natural selection can alter the distribution of genetic variance underlying phenotypic traits, increasing the amount of genetic variance in the direction of natural selection and potentially facilitating rapid adaptive divergence during an adaptive radiation.

  18. Confidence Interval Approximation For Treatment Variance In ...

    African Journals Online (AJOL)

    In a random effects model with a single factor, variation is partitioned into two as residual error variance and treatment variance. While a confidence interval can be imposed on the residual error variance, it is not possible to construct an exact confidence interval for the treatment variance. This is because the treatment ...

  19. Argentine Population Genetic Structure: Large Variance in Amerindian Contribution

    Science.gov (United States)

    Seldin, Michael F.; Tian, Chao; Shigeta, Russell; Scherbarth, Hugo R.; Silva, Gabriel; Belmont, John W.; Kittles, Rick; Gamron, Susana; Allevi, Alberto; Palatnik, Simon A.; Alvarellos, Alejandro; Paira, Sergio; Caprarulo, Cesar; Guillerón, Carolina; Catoggio, Luis J.; Prigione, Cristina; Berbotto, Guillermo A.; García, Mercedes A.; Perandones, Carlos E.; Pons-Estel, Bernardo A.; Alarcon-Riquelme, Marta E.

    2011-01-01

    Argentine population genetic structure was examined using a set of 78 ancestry informative markers (AIMs) to assess the contributions of European, Amerindian, and African ancestry in 94 individuals members of this population. Using the Bayesian clustering algorithm STRUCTURE, the mean European contribution was 78%, the Amerindian contribution was 19.4%, and the African contribution was 2.5%. Similar results were found using weighted least mean square method: European, 80.2%; Amerindian, 18.1%; and African, 1.7%. Consistent with previous studies the current results showed very few individuals (four of 94) with greater than 10% African admixture. Notably, when individual admixture was examined, the Amerindian and European admixture showed a very large variance and individual Amerindian contribution ranged from 1.5 to 84.5% in the 94 individual Argentine subjects. These results indicate that admixture must be considered when clinical epidemiology or case control genetic analyses are studied in this population. Moreover, the current study provides a set of informative SNPs that can be used to ascertain or control for this potentially hidden stratification. In addition, the large variance in admixture proportions in individual Argentine subjects shown by this study suggests that this population is appropriate for future admixture mapping studies. PMID:17177183

  20. Portfolio optimization with mean-variance model

    Science.gov (United States)

    Hoe, Lam Weng; Siew, Lam Weng

    2016-06-01

    Investors wish to achieve the target rate of return at the minimum level of risk in their investment. Portfolio optimization is an investment strategy that can be used to minimize the portfolio risk and can achieve the target rate of return. The mean-variance model has been proposed in portfolio optimization. The mean-variance model is an optimization model that aims to minimize the portfolio risk which is the portfolio variance. The objective of this study is to construct the optimal portfolio using the mean-variance model. The data of this study consists of weekly returns of 20 component stocks of FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI). The results of this study show that the portfolio composition of the stocks is different. Moreover, investors can get the return at minimum level of risk with the constructed optimal mean-variance portfolio.
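
    To illustrate the mean-variance model described in the abstract, here is a short sketch that computes the minimum-variance portfolio for a given target return by solving the classical two-constraint Lagrangian (KKT) system; the weekly return matrix is synthetic, short selling is not restricted, and the target value is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(42)
returns = rng.normal(0.002, 0.03, size=(260, 5))   # synthetic weekly returns, 5 stocks
mu = returns.mean(axis=0)                          # expected weekly returns
Sigma = np.cov(returns, rowvar=False)              # covariance (risk) matrix
target = 0.0025                                    # target weekly portfolio return

# Minimise w' Sigma w  subject to  w'mu = target  and  w'1 = 1
n = len(mu)
ones = np.ones(n)
kkt = np.block([
    [2 * Sigma,   mu[:, None], ones[:, None]],
    [mu[None, :], np.zeros((1, 2))],
    [ones[None, :], np.zeros((1, 2))],
])
rhs = np.concatenate([np.zeros(n), [target, 1.0]])
w = np.linalg.solve(kkt, rhs)[:n]                  # drop the Lagrange multipliers

print("weights:", np.round(w, 3))
print("portfolio return:", w @ mu, "portfolio variance:", w @ Sigma @ w)
```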

  1. The influence of mean climate trends and climate variance on beaver survival and recruitment dynamics.

    Science.gov (United States)

    Campbell, Ruairidh D; Nouvellet, Pierre; Newman, Chris; Macdonald, David W; Rosell, Frank

    2012-09-01

    Ecologists are increasingly aware of the importance of environmental variability in natural systems. Climate change is affecting both the mean and the variability in weather and, in particular, the effect of changes in variability is poorly understood. Organisms are subject to selection imposed by both the mean and the range of environmental variation experienced by their ancestors. Changes in the variability in a critical environmental factor may therefore have consequences for vital rates and population dynamics. Here, we examine ≥90-year trends in different components of climate (precipitation mean and coefficient of variation (CV); temperature mean, seasonal amplitude and residual variance) and consider the effects of these components on survival and recruitment in a population of Eurasian beavers (n = 242) over 13 recent years. Within climatic data, no trends in precipitation were detected, but trends in all components of temperature were observed, with mean and residual variance increasing and seasonal amplitude decreasing over time. A higher survival rate was linked (in order of influence based on Akaike weights) to lower precipitation CV (kits, juveniles and dominant adults), lower residual variance of temperature (dominant adults) and lower mean precipitation (kits and juveniles). No significant effects were found on the survival of nondominant adults, although the sample size for this category was low. Greater recruitment was linked (in order of influence) to higher seasonal amplitude of temperature, lower mean precipitation, lower residual variance in temperature and higher precipitation CV. Both climate means and variance, thus proved significant to population dynamics; although, overall, components describing variance were more influential than those describing mean values. That environmental variation proves significant to a generalist, wide-ranging species, at the slow end of the slow-fast continuum of life histories, has broad implications for

  2. Analysis of covariance with pre-treatment measurements in randomized trials under the cases that covariances and post-treatment variances differ between groups.

    Science.gov (United States)

    Funatogawa, Takashi; Funatogawa, Ikuko; Shyr, Yu

    2011-05-01

    When primary endpoints of randomized trials are continuous variables, the analysis of covariance (ANCOVA) with pre-treatment measurements as a covariate is often used to compare two treatment groups. In the ANCOVA, equal slopes (coefficients of pre-treatment measurements) and equal residual variances are commonly assumed. However, random allocation guarantees only equal variances of pre-treatment measurements. Unequal covariances and variances of post-treatment measurements indicate unequal slopes and, usually, unequal residual variances. For non-normal data with unequal covariances and variances of post-treatment measurements, it is known that the ANCOVA with equal slopes and equal variances using an ordinary least-squares method provides an asymptotically normal estimator for the treatment effect. However, the asymptotic variance of the estimator differs from the variance estimated from a standard formula, and its property is unclear. Furthermore, the asymptotic properties of the ANCOVA with equal slopes and unequal variances using a generalized least-squares method are unclear. In this paper, we consider non-normal data with unequal covariances and variances of post-treatment measurements, and examine the asymptotic properties of the ANCOVA with equal slopes using the variance estimated from a standard formula. Analytically, we show that the actual type I error rate, thus the coverage, of the ANCOVA with equal variances is asymptotically at a nominal level under equal sample sizes. That of the ANCOVA with unequal variances using a generalized least-squares method is asymptotically at a nominal level, even under unequal sample sizes. In conclusion, the ANCOVA with equal slopes can be asymptotically justified under random allocation. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. Validation of consistency of Mendelian sampling variance.

    Science.gov (United States)

    Tyrisevä, A-M; Fikse, W F; Mäntysaari, E A; Jakobsen, J; Aamand, G P; Dürr, J; Lidauer, M H

    2018-03-01

    Experiences from international sire evaluation indicate that the multiple-trait across-country evaluation method is sensitive to changes in genetic variance over time. Top bulls from birth year classes with inflated genetic variance will benefit, hampering reliable ranking of bulls. However, none of the methods available today enable countries to validate their national evaluation models for heterogeneity of genetic variance. We describe a new validation method to fill this gap comprising the following steps: estimating within-year genetic variances using Mendelian sampling and its prediction error variance, fitting a weighted linear regression between the estimates and the years under study, identifying possible outliers, and defining a 95% empirical confidence interval for a possible trend in the estimates. We tested the specificity and sensitivity of the proposed validation method with simulated data using a real data structure. Moderate (M) and small (S) size populations were simulated under 3 scenarios: a control with homogeneous variance and 2 scenarios with yearly increases in phenotypic variance of 2 and 10%, respectively. Results showed that the new method was able to estimate genetic variance accurately enough to detect bias in genetic variance. Under the control scenario, the trend in genetic variance was practically zero in setting M. Testing cows with an average birth year class size of more than 43,000 in setting M showed that tolerance values are needed for both the trend and the outlier tests to detect only cases with a practical effect in larger data sets. Regardless of the magnitude (yearly increases in phenotypic variance of 2 or 10%) of the generated trend, it deviated statistically significantly from zero in all data replicates for both cows and bulls in setting M. In setting S with a mean of 27 bulls in a year class, the sampling error and thus the probability of a false-positive result clearly increased. Still, overall estimated genetic

  4. Least-squares variance component estimation

    NARCIS (Netherlands)

    Teunissen, P.J.G.; Amiri-Simkooei, A.R.

    2007-01-01

    Least-squares variance component estimation (LS-VCE) is a simple, flexible and attractive method for the estimation of unknown variance and covariance components. LS-VCE is simple because it is based on the well-known principle of LS; it is flexible because it works with a user-defined weight

  5. Online Estimation of ARW Coefficient of Fiber Optic Gyro

    Directory of Open Access Journals (Sweden)

    Yang Li

    2014-01-01

    Full Text Available As a standard method for noise analysis of fiber optic gyros (FOGs), the Allan variance has too large an offline computational burden and requires too much data storage to be applied to online estimation. To overcome these barriers, a state space model is first established for the FOG. Then the Sage-Husa adaptive Kalman filter (SHAKF) is introduced in this field. Through recursive calculation of the measurement noise covariance matrix, the SHAKF can avoid the storage of large amounts of historical data. However, the precision and stability of this method are still the primary matters that need to be addressed. Based on this point, a new online method for estimation of the angular random walk coefficient is proposed. In the method, an estimator of the measurement noise is constructed from the recursive form of the Allan variance at the shortest sampling time. The estimator is then embedded into the SHAKF framework, resulting in a new adaptive filter. The estimations of the measurement noise variance and the Kalman filter are independent of each other in this method. Therefore, it can effectively address the problems of filtering divergence and precision degradation. Test results of both digital simulation and experimental FOG data verify the validity and feasibility of the proposed method.
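
    For context on the quantity being estimated, here is a compact offline Allan-variance sketch that extracts an angular random walk (ARW) coefficient from the Allan deviation at an averaging time of 1 s; the simulated gyro record contains white noise only, and this is the conventional batch computation that the paper's recursive online estimator is designed to avoid.

```python
import numpy as np

def allan_variance(rate, m):
    """Non-overlapping Allan variance of a rate signal for clusters of m samples."""
    n_clusters = rate.size // m
    means = rate[:n_clusters * m].reshape(n_clusters, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(means) ** 2)

fs = 100.0                                           # sample rate (Hz)
rng = np.random.default_rng(0)
gyro = rng.standard_normal(int(3600 * fs)) * 0.01    # simulated white-noise gyro output (deg/s)

tau = 1.0                                            # averaging time in the white-noise region (s)
sigma = np.sqrt(allan_variance(gyro, int(tau * fs)))
arw = sigma * np.sqrt(tau)                           # sigma(tau) = N / sqrt(tau)  =>  N = sigma * sqrt(tau)
print(f"ARW coefficient: {arw:.5f} deg/sqrt(s)")
```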

  6. Genetic variants influencing phenotypic variance heterogeneity.

    Science.gov (United States)

    Ek, Weronica E; Rask-Andersen, Mathias; Karlsson, Torgny; Enroth, Stefan; Gyllensten, Ulf; Johansson, Åsa

    2018-03-01

    Most genetic studies identify genetic variants associated with disease risk or with the mean value of a quantitative trait. More rarely, genetic variants associated with variance heterogeneity are considered. In this study, we have identified such variance single-nucleotide polymorphisms (vSNPs) and examined if these represent biological gene × gene or gene × environment interactions or statistical artifacts caused by multiple linked genetic variants influencing the same phenotype. We have performed a genome-wide study, to identify vSNPs associated with variance heterogeneity in DNA methylation levels. Genotype data from over 10 million single-nucleotide polymorphisms (SNPs), and DNA methylation levels at over 430 000 CpG sites, were analyzed in 729 individuals. We identified vSNPs for 7195 CpG sites (P mean DNA methylation levels. We further showed that variance heterogeneity between genotypes mainly represents additional, often rare, SNPs in linkage disequilibrium (LD) with the respective vSNP and for some vSNPs, multiple low frequency variants co-segregating with one of the vSNP alleles. Therefore, our results suggest that variance heterogeneity of DNA methylation mainly represents phenotypic effects by multiple SNPs, rather than biological interactions. Such effects may also be important for interpreting variance heterogeneity of more complex clinical phenotypes.

  7. Texture analysis on the fluence map to evaluate the degree of modulation for volumetric modulated arc therapy

    International Nuclear Information System (INIS)

    Park, So-Yeon; Kim, Il Han; Ye, Sung-Joon; Carlson, Joel

    2014-01-01

    Purpose: Texture analysis on fluence maps was performed to evaluate the degree of modulation for volumetric modulated arc therapy (VMAT) plans. Methods: A total of six textural features including angular second moment, inverse difference moment, contrast, variance, correlation, and entropy were calculated for fluence maps generated from 20 prostate and 20 head and neck VMAT plans. For each of the textural features, particular displacement distances (d) of 1, 5, and 10 were adopted. To investigate the deliverability of each VMAT plan, gamma passing rates of pretreatment quality assurance, and differences in modulating parameters such as multileaf collimator (MLC) positions, gantry angles, and monitor units at each control point between VMAT plans and dynamic log files registered by the Linac control system during delivery were acquired. Furthermore, differences between the original VMAT plan and the plan reconstructed from the dynamic log files were also investigated. To test the performance of the textural features as indicators for the modulation degree of VMAT plans, Spearman's rank correlation coefficients (rs) with the plan deliverability were calculated. For comparison purposes, conventional modulation indices for VMAT including the modulation complexity score for VMAT, leaf travel modulation complexity score, and modulation index supporting station parameter optimized radiation therapy (MISPORT) were calculated, and their correlations were analyzed in the same way. Results: There was no particular textural feature which always showed superior correlations with every type of plan deliverability. Considering the results comprehensively, contrast (d = 1) and variance (d = 1) generally showed considerable correlations with every type of plan deliverability. These textural features always showed higher correlations to the plan deliverability than did the conventional modulation indices, except in the case of modulating parameter differences. The rs values
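
    A hedged, numpy-only sketch of the six grey-level co-occurrence (GLCM) features named in the abstract, computed for a quantised fluence map at displacement distance d along one direction; the quantisation depth and the toy fluence map are illustrative and this is not the authors' implementation.

```python
import numpy as np

def glcm_features(image, d=1, levels=16):
    """ASM, inverse difference moment, contrast, variance, correlation and
    entropy from a horizontal-offset grey-level co-occurrence matrix."""
    q = np.floor(image / image.max() * (levels - 1)).astype(int)   # quantise intensities
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-d].ravel(), q[:, d:].ravel()):
        glcm[a, b] += 1
    p = glcm / glcm.sum()

    i, j = np.indices(p.shape)
    mu_i, mu_j = (i * p).sum(), (j * p).sum()
    var_i = ((i - mu_i) ** 2 * p).sum()
    var_j = ((j - mu_j) ** 2 * p).sum()
    return {
        "ASM": (p ** 2).sum(),
        "IDM": (p / (1 + (i - j) ** 2)).sum(),
        "contrast": ((i - j) ** 2 * p).sum(),
        "variance": var_i,
        "correlation": ((i - mu_i) * (j - mu_j) * p).sum() / np.sqrt(var_i * var_j),
        "entropy": -(p[p > 0] * np.log2(p[p > 0])).sum(),
    }

fluence = np.random.default_rng(0).random((64, 64))   # toy fluence map
print(glcm_features(fluence, d=1))
```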

  8. Penerapan Model Multivariat Analisis of Variance dalam Mengukur Persepsi Destinasi Wisata

    Directory of Open Access Journals (Sweden)

    Robert Tang Herman

    2012-05-01

    Full Text Available The purpose of this research is to provide conceptual and infrastructure tools for Dinas Pariwisata DKI Jakarta to improve their capability to evaluate business performance based on market responsiveness. Capturing market responsiveness is the initial research needed to build an industry mapping. The research started with secondary research to build a data classification system, followed by primary research collecting data through a market survey. The secondary data were collected from Dinas Pariwisata DKI, while the primary data were collected by a survey method using questionnaires addressed to the whole market. The collected data were then analyzed with multivariate analysis of variance to develop the mapping. The result of the cluster analysis distinguishes the potential market based on their responses to the industry classification, builds the classification system, finds the gaps and how important they are, and addresses other issues related to the role of the mapping system. This mapping system will help Dinas Pariwisata DKI improve capabilities and business performance based on market responsiveness, identify the potential market for each specific classification, and know the needs, wants and demands of that classification. The contribution of this research is a set of recommendations to Dinas Pariwisata DKI on delivering what the market needs and wants to all tourism places based on the resulting classification, on developing market growth estimates, and, for the long term, on improving economic and market growth.

  9. Genome-scale cluster analysis of replicated microarrays using shrinkage correlation coefficient.

    Science.gov (United States)

    Yao, Jianchao; Chang, Chunqi; Salmi, Mari L; Hung, Yeung Sam; Loraine, Ann; Roux, Stanley J

    2008-06-18

    Currently, clustering with some form of correlation coefficient as the gene similarity metric has become a popular method for profiling genomic data. The Pearson correlation coefficient and the standard deviation (SD)-weighted correlation coefficient are the two most widely-used correlations as the similarity metrics in clustering microarray data. However, these two correlations are not optimal for analyzing replicated microarray data generated by most laboratories. An effective correlation coefficient is needed to provide statistically sufficient analysis of replicated microarray data. In this study, we describe a novel correlation coefficient, shrinkage correlation coefficient (SCC), that fully exploits the similarity between the replicated microarray experimental samples. The methodology considers both the number of replicates and the variance within each experimental group in clustering expression data, and provides a robust statistical estimation of the error of replicated microarray data. The value of SCC is revealed by its comparison with two other correlation coefficients that are currently the most widely-used (Pearson correlation coefficient and SD-weighted correlation coefficient) using statistical measures on both synthetic expression data as well as real gene expression data from Saccharomyces cerevisiae. Two leading clustering methods, hierarchical and k-means clustering were applied for the comparison. The comparison indicated that using SCC achieves better clustering performance. Applying SCC-based hierarchical clustering to the replicated microarray data obtained from germinating spores of the fern Ceratopteris richardii, we discovered two clusters of genes with shared expression patterns during spore germination. Functional analysis suggested that some of the genetic mechanisms that control germination in such diverse plant lineages as mosses and angiosperms are also conserved among ferns. This study shows that SCC is an alternative to the Pearson

  10. Speed Variance and Its Influence on Accidents.

    Science.gov (United States)

    Garber, Nicholas J.; Gadirau, Ravi

    A study was conducted to investigate the traffic engineering factors that influence speed variance and to determine to what extent speed variance affects accident rates. Detailed analyses were carried out to relate speed variance with posted speed limit, design speeds, and other traffic variables. The major factor identified was the difference…

  11. Hybrid biasing approaches for global variance reduction

    International Nuclear Information System (INIS)

    Wu, Zeyun; Abdel-Khalik, Hany S.

    2013-01-01

    A new variant of Monte Carlo—deterministic (DT) hybrid variance reduction approach based on Gaussian process theory is presented for accelerating convergence of Monte Carlo simulation and compared with Forward-Weighted Consistent Adjoint Driven Importance Sampling (FW-CADIS) approach implemented in the SCALE package from Oak Ridge National Laboratory. The new approach, denoted the Gaussian process approach, treats the responses of interest as normally distributed random processes. The Gaussian process approach improves the selection of the weight windows of simulated particles by identifying a subspace that captures the dominant sources of statistical response variations. Like the FW-CADIS approach, the Gaussian process approach utilizes particle importance maps obtained from deterministic adjoint models to derive weight window biasing. In contrast to the FW-CADIS approach, the Gaussian process approach identifies the response correlations (via a covariance matrix) and employs them to reduce the computational overhead required for global variance reduction (GVR) purpose. The effective rank of the covariance matrix identifies the minimum number of uncorrelated pseudo responses, which are employed to bias simulated particles. Numerical experiments, serving as a proof of principle, are presented to compare the Gaussian process and FW-CADIS approaches in terms of the global reduction in standard deviation of the estimated responses. - Highlights: ► Hybrid Monte Carlo Deterministic Method based on Gaussian Process Model is introduced. ► Method employs deterministic model to calculate responses correlations. ► Method employs correlations to bias Monte Carlo transport. ► Method compared to FW-CADIS methodology in SCALE code. ► An order of magnitude speed up is achieved for a PWR core model.

  12. Heritability and variance components of some morphological and agronomic traits in alfalfa

    International Nuclear Information System (INIS)

    Ates, E.; Tekeli, S.

    2005-01-01

    Four alfalfa cultivars were investigated using randomized complete-block design with three replications. Variance components, variance coefficients and heritability values of some morphological characters, herbage yield, dry matter yield and seed yield were determined. Maximum main stem height (78.69 cm), main stem diameter (4.85 mm), leaflet width (0.93 cm), seeds/pod (6.57), herbage yield (75.64 t ha⁻¹), dry matter yield (20.06 t ha⁻¹) and seed yield (0.49 t ha⁻¹) were obtained from cv. Marina. Leaflet length varied from 1.65 to 2.08 cm. The raceme length measured 3.15 to 4.38 cm in alfalfa cultivars. The highest 1000-seeds weight values (2.42-2.49 g) were found from Marina and Sitel cultivars. Heritability values of various traits were: 91.0% for main stem height, 97.6% for main stem diameter, 81.8% for leaflet length, 88.8% for leaflet width, 90.4% for leaf/stem ratio, 28.3% for racemes/main stem, 99.0% for raceme length, 99.2% for seeds/pod, 88.0% for 1000-seeds weight, 97.2% for herbage yield, 99.6% for dry matter yield and 95.4% for seed yield. (author)
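
    As an illustration of how such heritability values can be obtained, the sketch below estimates broad-sense heritability from a genotypes-by-replications table via one-way ANOVA expected mean squares (made-up numbers; the block effect of the authors' design is ignored for brevity, and this is not their code):

```python
import numpy as np

def broad_sense_heritability(values):
    """Estimate broad-sense heritability from a genotypes x replications array.

    values[i, j] = trait value of genotype i in replication j.
    Uses one-way ANOVA expected mean squares:
        MS_g = r * sigma2_g + sigma2_e,  MS_e = sigma2_e
    so sigma2_g = (MS_g - MS_e) / r and H2 = sigma2_g / (sigma2_g + sigma2_e / r).
    """
    g, r = values.shape
    grand_mean = values.mean()
    geno_means = values.mean(axis=1)

    ss_g = r * np.sum((geno_means - grand_mean) ** 2)
    ss_e = np.sum((values - geno_means[:, None]) ** 2)
    ms_g = ss_g / (g - 1)
    ms_e = ss_e / (g * (r - 1))

    sigma2_g = max((ms_g - ms_e) / r, 0.0)
    sigma2_p = sigma2_g + ms_e / r          # variance of genotype means
    h2 = sigma2_g / sigma2_p if sigma2_p > 0 else float("nan")
    return h2, sigma2_g, ms_e

# Example: 4 cultivars x 3 replications of dry matter yield (t/ha, made-up numbers)
yields = np.array([[20.1, 19.8, 20.3],
                   [17.5, 17.9, 17.2],
                   [18.8, 19.1, 18.6],
                   [16.9, 17.1, 16.8]])
print(broad_sense_heritability(yields))
```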

  13. Hydrodynamic Coefficients Identification and Experimental Investigation for an Underwater Vehicle

    Directory of Open Access Journals (Sweden)

    Shaorong XIE

    2014-02-01

    Full Text Available Hydrodynamic coefficients are the foundation of unmanned underwater vehicle modeling and controller design. In order to reduce identification complexity and acquire the hydrodynamic coefficients necessary for controller design, the motion of the unmanned underwater vehicle was separated into vertical and horizontal motion models. Hydrodynamic coefficients were regarded as mapping parameters from input forces and moments to output velocities and accelerations of the unmanned underwater vehicle. The motion models of the unmanned underwater vehicle are nonlinear, and a Genetic Algorithm was adopted to identify the hydrodynamic coefficients. To verify the identification quality, velocities and accelerations of the unmanned underwater vehicle were measured using an inertial sensor under the same conditions as the Genetic Algorithm identification. The similarity between the measured velocity and acceleration curves and those identified by the Genetic Algorithm was used as the optimization criterion. It was found that the curve similarity was high and that the identified hydrodynamic coefficients of the unmanned underwater vehicle reproduced the measured motion states well.
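
    As a rough illustration of the identification idea described above (a sketch under strongly simplified assumptions, not the authors' vehicle model or code), the snippet below uses a small genetic algorithm to recover the drag coefficients of a one-degree-of-freedom surge model from synthetic "measured" velocities:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_surge(coeffs, thrust, dt=0.1, m=50.0):
    """Integrate m*du/dt = thrust - a*u - b*u*|u| for drag coefficients (a, b)."""
    a, b = coeffs
    u = np.zeros(len(thrust))
    for k in range(1, len(thrust)):
        du = (thrust[k - 1] - a * u[k - 1] - b * u[k - 1] * abs(u[k - 1])) / m
        u[k] = u[k - 1] + dt * du
    return u

# Hypothetical "measured" data generated from known coefficients plus sensor noise.
thrust = 20.0 * np.ones(200)
true_coeffs = (8.0, 3.5)
measured = simulate_surge(true_coeffs, thrust) + rng.normal(0, 0.01, 200)

def fitness(coeffs):
    """Negative sum of squared velocity errors (higher is better)."""
    return -np.sum((simulate_surge(coeffs, thrust) - measured) ** 2)

def genetic_algorithm(pop_size=40, generations=60, bounds=(0.0, 20.0)):
    pop = rng.uniform(*bounds, size=(pop_size, 2))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        order = np.argsort(scores)[::-1]
        elite = pop[order[: pop_size // 2]]                  # selection: keep best half
        children = []
        for _ in range(pop_size - len(elite)):
            p1, p2 = elite[rng.integers(len(elite), size=2)]
            child = np.where(rng.random(2) < 0.5, p1, p2)    # uniform crossover
            child += rng.normal(0, 0.2, 2)                   # Gaussian mutation
            children.append(np.clip(child, *bounds))
        pop = np.vstack([elite, children])
    scores = np.array([fitness(ind) for ind in pop])
    return pop[np.argmax(scores)]

print("identified coefficients:", genetic_algorithm())  # should be close to (8.0, 3.5)
```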

  14. Volatility and variance swaps : A comparison of quantitative models to calculate the fair volatility and variance strike

    OpenAIRE

    Röring, Johan

    2017-01-01

    Volatility is a common risk measure in the field of finance that describes the magnitude of an asset’s up and down movement. From only being a risk measure, volatility has become an asset class of its own and volatility derivatives enable traders to get an isolated exposure to an asset’s volatility. Two kinds of volatility derivatives are volatility swaps and variance swaps. The problem with volatility swaps and variance swaps is that they require estimations of the future variance and volati...

  15. Local variances in biomonitoring

    International Nuclear Information System (INIS)

    Wolterbeek, H.T.

    1999-01-01

    The present study deals with the (larger-scaled) biomonitoring survey and specifically focuses on the sampling site. In most surveys, the sampling site is simply selected or defined as a spot of (geographical) dimensions which is small relative to the dimensions of the total survey area. Implicitly it is assumed that the sampling site is essentially homogeneous with respect to the investigated variation in survey parameters. As such, the sampling site is mostly regarded as 'the basic unit' of the survey. As a logical consequence, the local (sampling site) variance should also be seen as a basic and important characteristic of the survey. During the study, work is carried out to gain more knowledge of the local variance. Multiple sampling is carried out at a specific site (tree bark, mosses, soils), multi-elemental analyses are carried out by NAA, and local variances are investigated by conventional statistics, factor analytical techniques, and bootstrapping. Consequences of the outcomes are discussed in the context of sampling, sample handling and survey quality. (author)
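
    A minimal sketch of one way to characterize the local (within-site) variance from multiple samples taken at a single sampling site, using the bootstrap mentioned above (hypothetical single-element data; the survey's multi-element NAA workflow is more involved):

```python
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_local_variance(samples, n_boot=2000, ci=95):
    """Bootstrap the within-site variance of repeated measurements at one sampling site."""
    samples = np.asarray(samples, dtype=float)
    boot_vars = np.empty(n_boot)
    for b in range(n_boot):
        resample = rng.choice(samples, size=len(samples), replace=True)
        boot_vars[b] = resample.var(ddof=1)
    lo, hi = np.percentile(boot_vars, [(100 - ci) / 2, 100 - (100 - ci) / 2])
    return samples.var(ddof=1), (lo, hi)

# Example: ten moss samples from one site, element concentration in mg/kg (made-up values)
site_measurements = [3.1, 2.8, 3.4, 3.0, 2.9, 3.6, 3.2, 2.7, 3.3, 3.1]
var_hat, (ci_lo, ci_hi) = bootstrap_local_variance(site_measurements)
print(f"local variance = {var_hat:.3f}, 95% CI = ({ci_lo:.3f}, {ci_hi:.3f})")
```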

  16. A comparison of two indices for the intraclass correlation coefficient.

    Science.gov (United States)

    Shieh, Gwowen

    2012-12-01

    In the present study, we examined the behavior of two indices for measuring the intraclass correlation in the one-way random effects model: the prevailing ICC(1) (Fisher, 1938) and the corrected eta-squared (Bliese & Halverson, 1998). These two procedures differ both in their methods of estimating the variance components that define the intraclass correlation coefficient and in their performance of bias and mean squared error in the estimation of the intraclass correlation coefficient. In contrast with the natural unbiased principle used to construct ICC(1), in the present study it was analytically shown that the corrected eta-squared estimator is identical to the maximum likelihood estimator and the pairwise estimator under equal group sizes. Moreover, the empirical results obtained from the present Monte Carlo simulation study across various group structures revealed the mutual dominance relationship between their truncated versions for negative values. The corrected eta-squared estimator performs better than the ICC(1) estimator when the underlying population intraclass correlation coefficient is small. Conversely, ICC(1) has a clear advantage over the corrected eta-squared for medium and large magnitudes of population intraclass correlation coefficient. The conceptual description and numerical investigation provide guidelines to help researchers choose between the two indices for more accurate reliability analysis in multilevel research.
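
    For reference, ICC(1) in the one-way random effects model is computed from the between- and within-group mean squares. A short sketch of that standard calculation, assuming equal group sizes (the corrected eta-squared estimator of Bliese & Halverson is not reproduced here):

```python
import numpy as np

def icc1(groups):
    """ICC(1) for a list of equal-sized groups of observations.

    ICC(1) = (MSB - MSW) / (MSB + (k - 1) * MSW), where k is the group size.
    """
    groups = [np.asarray(g, dtype=float) for g in groups]
    k = len(groups[0])
    n = len(groups)
    grand_mean = np.mean(np.concatenate(groups))
    group_means = np.array([g.mean() for g in groups])

    msb = k * np.sum((group_means - grand_mean) ** 2) / (n - 1)
    msw = sum(np.sum((g - g.mean()) ** 2) for g in groups) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Example: 4 groups of 5 ratings each (made-up data)
data = [[4, 5, 4, 5, 5], [2, 3, 2, 2, 3], [4, 4, 5, 4, 4], [1, 2, 1, 2, 2]]
print(round(icc1(data), 3))
```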

  17. The link between response time and preference, variance and processing heterogeneity in stated choice experiments

    DEFF Research Database (Denmark)

    Campbell, Danny; Mørkbak, Morten Raun; Olsen, Søren Bøye

    2018-01-01

    In this article we utilize the time respondents require to answer a self-administered online stated preference survey. While the effects of response time have been previously explored, this article proposes a different approach that explicitly recognizes the highly equivocal relationship between response time and respondents' choices. In particular, we attempt to disentangle preference, variance and processing heterogeneity and explore whether response time helps to explain these three types of heterogeneity. For this, we divide the data (ordered by response time) into approximately equal-sized subsets, and then derive different class membership probabilities for each subset. We estimate a large number of candidate models and subsequently conduct a frequentist-based model averaging approach using information criteria to derive weights of evidence for each model. Our findings show a clear link between response time and utility coefficients, error variance and processing strategies. Our results thus emphasize the importance of considering response time when modeling stated choice data.

  18. A pattern recognition approach to transistor array parameter variance

    Science.gov (United States)

    da F. Costa, Luciano; Silva, Filipi N.; Comin, Cesar H.

    2018-06-01

    The properties of semiconductor devices, including bipolar junction transistors (BJTs), are known to vary substantially in terms of their parameters. In this work, an experimental approach, including pattern recognition concepts and methods such as principal component analysis (PCA) and linear discriminant analysis (LDA), was used to investigate the variation among BJTs belonging to integrated circuits known as transistor arrays. It was shown that a good deal of the device variance can be captured using only two PCA axes. It was also verified that, though substantially small variation of parameters is observed for BJTs from the same array, larger variation arises between BJTs from distinct arrays, suggesting the consideration of device characteristics in more critical analog designs. As a consequence of its supervised nature, LDA was able to provide a substantial separation of the BJTs into clusters, corresponding to each transistor array. In addition, the LDA mapping into two dimensions revealed a clear relationship between the considered measurements. Interestingly, a specific mapping suggested by the PCA, involving the total harmonic distortion variation expressed in terms of the average voltage gain, yielded an even better separation between the transistor array clusters. All in all, this work yielded interesting results from both semiconductor engineering and pattern recognition perspectives.
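
    As an illustration of the pattern-recognition pipeline described (hypothetical measurement matrix, not the authors' data), PCA projects the device parameters onto two axes while LDA uses the array labels to separate the devices into clusters:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(2)
# Hypothetical parameters (e.g. gain, V_BE, THD, ...) for BJTs from 3 transistor arrays.
arrays = np.repeat([0, 1, 2], 20)
X = rng.normal(0, 1, size=(60, 5)) + arrays[:, None] * 0.8   # small between-array shift

X_pca = PCA(n_components=2).fit_transform(X)                               # unsupervised projection
X_lda = LinearDiscriminantAnalysis(n_components=2).fit_transform(X, arrays)  # supervised projection

print("variance captured by 2 PCA axes:",
      PCA(n_components=2).fit(X).explained_variance_ratio_.sum())
print(X_pca.shape, X_lda.shape)
```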

  19. Calculating Higher-Order Moments of Phylogenetic Stochastic Mapping Summaries in Linear Time

    Science.gov (United States)

    Dhar, Amrit

    2017-01-01

    Abstract Stochastic mapping is a simulation-based method for probabilistically mapping substitution histories onto phylogenies according to continuous-time Markov models of evolution. This technique can be used to infer properties of the evolutionary process on the phylogeny and, unlike parsimony-based mapping, conditions on the observed data to randomly draw substitution mappings that do not necessarily require the minimum number of events on a tree. Most stochastic mapping applications simulate substitution mappings only to estimate the mean and/or variance of two commonly used mapping summaries: the number of particular types of substitutions (labeled substitution counts) and the time spent in a particular group of states (labeled dwelling times) on the tree. Fast, simulation-free algorithms for calculating the mean of stochastic mapping summaries exist. Importantly, these algorithms scale linearly in the number of tips/leaves of the phylogenetic tree. However, to our knowledge, no such algorithm exists for calculating higher-order moments of stochastic mapping summaries. We present one such simulation-free dynamic programming algorithm that calculates prior and posterior mapping variances and scales linearly in the number of phylogeny tips. Our procedure suggests a general framework that can be used to efficiently compute higher-order moments of stochastic mapping summaries without simulations. We demonstrate the usefulness of our algorithm by extending previously developed statistical tests for rate variation across sites and for detecting evolutionarily conserved regions in genomic sequences. PMID:28177780

  20. The genetic variance of resistance in M3 lines of rice against leaf blight disease

    International Nuclear Information System (INIS)

    Mugiono

    1979-01-01

    Seeds of Pelita I/1 rice variety were irradiated with 20, 30, 40 and 50 krad of gamma rays from a 60Co source. Plants of M3 lines were inoculated with bacterial leaf blight, Xanthomonas oryzae (Uzeda and Ishiyama) Downson, using the clipping method. The coefficient of genetic variability of resistance against leaf blight disease increased with increasing dose. Highly significant differences in the genetic variance of resistance were found between the treated samples and the control. A dose of 20 krad gave a good probability for selection of plants resistant against leaf blight disease. (author)

  1. The mean-variance relationship reveals two possible strategies for dynamic brain connectivity analysis in fMRI.

    Science.gov (United States)

    Thompson, William H; Fransson, Peter

    2015-01-01

    When studying brain connectivity using fMRI, signal intensity time-series are typically correlated with each other in time to compute estimates of the degree of interaction between different brain regions and/or networks. In the static connectivity case, the problem of defining which connections that should be considered significant in the analysis can be addressed in a rather straightforward manner by a statistical thresholding that is based on the magnitude of the correlation coefficients. More recently, interest has come to focus on the dynamical aspects of brain connectivity and the problem of deciding which brain connections that are to be considered relevant in the context of dynamical changes in connectivity provides further options. Since we, in the dynamical case, are interested in changes in connectivity over time, the variance of the correlation time-series becomes a relevant parameter. In this study, we discuss the relationship between the mean and variance of brain connectivity time-series and show that by studying the relation between them, two conceptually different strategies to analyze dynamic functional brain connectivity become available. Using resting-state fMRI data from a cohort of 46 subjects, we show that the mean of fMRI connectivity time-series scales negatively with its variance. This finding leads to the suggestion that magnitude- versus variance-based thresholding strategies will induce different results in studies of dynamic functional brain connectivity. Our assertion is exemplified by showing that the magnitude-based strategy is more sensitive to within-resting-state network (RSN) connectivity compared to between-RSN connectivity whereas the opposite holds true for a variance-based analysis strategy. The implications of our findings for dynamical functional brain connectivity studies are discussed.
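
    The two thresholding strategies can be illustrated with sliding-window correlations: compute a connectivity time-series for each region pair, then rank pairs either by the mean (magnitude) or by the variance of that series. A minimal sketch, assuming an already preprocessed time-by-regions array (synthetic data here):

```python
import numpy as np

def sliding_window_connectivity(ts, window=30, step=1):
    """Return correlation time-series of shape (n_windows, n_pairs) for a (time x regions) array."""
    t, r = ts.shape
    iu = np.triu_indices(r, k=1)
    series = []
    for start in range(0, t - window + 1, step):
        c = np.corrcoef(ts[start:start + window].T)
        series.append(c[iu])
    return np.array(series)

rng = np.random.default_rng(1)
ts = rng.standard_normal((300, 10))          # hypothetical preprocessed BOLD time-series
conn = sliding_window_connectivity(ts)

mean_conn = conn.mean(axis=0)                # magnitude-based ranking
var_conn = conn.var(axis=0)                  # variance-based ranking

# The two strategies generally select different edges as "relevant":
print("top edges by magnitude:", np.argsort(mean_conn)[::-1][:5])
print("top edges by variance: ", np.argsort(var_conn)[::-1][:5])
```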

  2. Variance-based Sensitivity Analysis of Large-scale Hydrological Model to Prepare an Ensemble-based SWOT-like Data Assimilation Experiments

    Science.gov (United States)

    Emery, C. M.; Biancamaria, S.; Boone, A. A.; Ricci, S. M.; Garambois, P. A.; Decharme, B.; Rochoux, M. C.

    2015-12-01

    Land Surface Models (LSM) coupled with River Routing schemes (RRM) are used in Global Climate Models (GCM) to simulate the continental part of the water cycle. They are key components of GCM as they provide boundary conditions to atmospheric and oceanic models. However, at global scale, errors arise mainly from simplified physics, atmospheric forcing, and input parameters. More particularly, those used in RRM, such as river width, depth and friction coefficients, are difficult to calibrate and are mostly derived from geomorphologic relationships, which may not always be realistic. In situ measurements are then used to calibrate these relationships and validate the model, but global in situ data are very sparse. Additionally, due to the lack of an existing global river geomorphology database and accurate forcing, models are run at coarse resolution. This is typically the case of the ISBA-TRIP model used in this study. A complementary alternative to in situ data are satellite observations. In this regard, the Surface Water and Ocean Topography (SWOT) satellite mission, jointly developed by NASA/CNES/CSA/UKSA and scheduled for launch around 2020, should be very valuable for calibrating RRM parameters. It will provide maps of water surface elevation for rivers wider than 100 meters over continental surfaces between 78°S and 78°N, as well as direct observation of river geomorphological parameters such as width and slope. Yet, before assimilating such data, it is necessary to analyze the temporal sensitivity of the RRM to time-constant parameters. This study presents such an analysis over large river basins for the TRIP RRM. Model output uncertainty, represented by unconditional variance, is decomposed into ordered contributions from each parameter. A time-dependent analysis then identifies the parameters to which modeled water level and discharge are most sensitive along a hydrological year. The results show that local parameters directly impact water levels, while

  3. Fixed Point Theory for Lipschitzian-type Mappings with Applications

    CERN Document Server

    Sahu, D R; Agarwal, Ravi P

    2009-01-01

    Offers a systematic presentation of Lipschitzian-type mappings in metric and Banach spaces. This book covers some basic properties of metric and Banach spaces. It also provides background in terms of convexity, smoothness and geometric coefficients of Banach spaces including duality mappings and metric projection mappings.

  4. Dynamic Mean-Variance Asset Allocation

    OpenAIRE

    Basak, Suleyman; Chabakauri, Georgy

    2009-01-01

    Mean-variance criteria remain prevalent in multi-period problems, and yet not much is known about their dynamically optimal policies. We provide a fully analytical characterization of the optimal dynamic mean-variance portfolios within a general incomplete-market economy, and recover a simple structure that also inherits several conventional properties of static models. We also identify a probability measure that incorporates intertemporal hedging demands and facilitates much tractability in ...

  5. The Variance Composition of Firm Growth Rates

    Directory of Open Access Journals (Sweden)

    Luiz Artur Ledur Brito

    2009-04-01

    Full Text Available Firms exhibit a wide variability in growth rates. This can be seen as another manifestation of the fact that firms are different from one another in several respects. This study investigated this variability using the variance components technique previously used to decompose the variance of financial performance. The main source of variation in growth rates, responsible for more than 40% of total variance, corresponds to individual, idiosyncratic firm aspects and not to industry, country, or macroeconomic conditions prevailing in specific years. Firm growth, similar to financial performance, is mostly unique to specific firms and not an industry or country related phenomenon. This finding also justifies using growth as an alternative outcome of superior firm resources and as a complementary dimension of competitive advantage. This also links this research with the resource-based view of strategy. Country was the second source of variation with around 10% of total variance. The analysis was done using the Compustat Global database with 80,320 observations, comprising 13,221 companies in 47 countries, covering the years of 1994 to 2002. It also compared the variance structure of growth to the variance structure of financial performance in the same sample.

  6. A bijection for tri-cellular maps

    DEFF Research Database (Denmark)

    Han, Hillary Siwei; Reidys, Christian

    2013-01-01

    In this paper we give a bijective proof for a relation between unicellular, bicellular and tricellular maps. These maps represent cell-complexes of orientable surfaces having one, two or three boundary components. The relation can formally be obtained using matrix theory \cite{Dyson} employing the Schwinger-Dyson equation \cite{Schwinger}. In this paper we present a bijective proof of the corresponding coefficient equation. Our result is a bijection that transforms a unicellular map of genus $g$ into unicellular, bicellular or tricellular maps of strictly lower genera. The bijection employs edge...

  7. Voice Disorder Classification Based on Multitaper Mel Frequency Cepstral Coefficients Features

    Directory of Open Access Journals (Sweden)

    Ömer Eskidere

    2015-01-01

    Full Text Available The Mel Frequency Cepstral Coefficients (MFCCs) are widely used to extract essential information from a voice signal and have become a popular feature extractor in audio processing. However, MFCC features are usually calculated from a single window (taper) characterized by large variance. This study investigates reducing this variance for the classification of two different voice qualities (normal voice and disordered voice) using multitaper MFCC features. We also compare their performance with newly proposed windowing techniques and the conventional single-taper technique. The results demonstrate that the adapted weighted Thomson multitaper method could distinguish between normal voice and disordered voice better than the conventional single-taper (Hamming window) technique and two newly proposed windowing methods. The multitaper MFCC features may be helpful in identifying voices at risk for a real pathology that has to be proven later.
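
    The variance reduction comes from averaging several orthogonal-taper spectra instead of using one Hamming-windowed periodogram. A minimal sketch of the multitaper spectrum of a single frame (uniform taper weights rather than Thomson's adaptive weighting; a mel filterbank and DCT applied on top would yield multitaper MFCCs):

```python
import numpy as np
from scipy.signal.windows import dpss

def multitaper_spectrum(frame, nw=4, n_tapers=6):
    """Variance-reduced power spectrum of one analysis frame using DPSS (Slepian) tapers.

    Averaging the eigenspectra from several orthogonal tapers reduces the variance
    relative to a single-window periodogram.
    """
    tapers = dpss(len(frame), nw, n_tapers)              # shape (n_tapers, frame_len)
    eigenspectra = np.abs(np.fft.rfft(tapers * frame, axis=1)) ** 2
    return eigenspectra.mean(axis=0)                     # uniform weights over tapers

rng = np.random.default_rng(5)
frame = np.sin(2 * np.pi * 0.05 * np.arange(400)) + 0.3 * rng.standard_normal(400)
print(multitaper_spectrum(frame).shape)
```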

  8. Texture analysis on the fluence map to evaluate the degree of modulation for volumetric modulated arc therapy

    Energy Technology Data Exchange (ETDEWEB)

    Park, So-Yeon [Department of Radiation Oncology, Seoul National University Hospital, Seoul 110-744 (Korea, Republic of); Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul 110-744 (Korea, Republic of); Biomedical Research Institute, Seoul National University College of Medicine, Seoul 110-744 (Korea, Republic of); Interdisciplinary Program in Radiation Applied Life Science, Seoul National University College of Medicine, Seoul 110-799 (Korea, Republic of); Kim, Il Han [Department of Radiation Oncology, Seoul National University Hospital, Seoul 110-744 (Korea, Republic of); Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul 110-744 (Korea, Republic of); Biomedical Research Institute, Seoul National University College of Medicine, Seoul 110-744 (Korea, Republic of); Department of Radiation Oncology, Seoul National University College of Medicine, Seoul 110-744 (Korea, Republic of); Ye, Sung-Joon [Department of Radiation Oncology, Seoul National University Hospital, Seoul 110-744, (Korea, Republic of); Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul 110-744 (Korea, Republic of); Biomedical Research Institute, Seoul National University College of Medicine, Seoul 110-744 (Korea, Republic of); Program in Biomedical Radiation Sciences, Department of Transdisciplinary Studies, Seoul National University Graduate School of Convergence Science and Technology, Suwon 433-270 (Korea, Republic of); Carlson, Joel [Biomedical Research Institute, Seoul National University College of Medicine, Seoul 110-744 (Korea, Republic of); Program in Biomedical Radiation Sciences, Department of Transdisciplinary Studies, Seoul National University Graduate School of Convergence Science and Technology, Suwon 433-270 (Korea, Republic of); and others

    2014-11-01

    Purpose: Texture analysis on fluence maps was performed to evaluate the degree of modulation for volumetric modulated arc therapy (VMAT) plans. Methods: A total of six textural features including angular second moment, inverse difference moment, contrast, variance, correlation, and entropy were calculated for fluence maps generated from 20 prostate and 20 head and neck VMAT plans. For each of the textural features, particular displacement distances (d) of 1, 5, and 10 were adopted. To investigate the deliverability of each VMAT plan, gamma passing rates of pretreatment quality assurance, and differences in modulating parameters such as multileaf collimator (MLC) positions, gantry angles, and monitor units at each control point between VMAT plans and dynamic log files registered by the Linac control system during delivery were acquired. Furthermore, differences between the original VMAT plan and the plan reconstructed from the dynamic log files were also investigated. To test the performance of the textural features as indicators for the modulation degree of VMAT plans, Spearman’s rank correlation coefficients (r_s) with the plan deliverability were calculated. For comparison purposes, conventional modulation indices for VMAT including the modulation complexity score for VMAT, leaf travel modulation complexity score, and modulation index supporting station parameter optimized radiation therapy (MI_SPORT) were calculated, and their correlations were analyzed in the same way. Results: There was no particular textural feature which always showed superior correlations with every type of plan deliverability. Considering the results comprehensively, contrast (d = 1) and variance (d = 1) generally showed considerable correlations with every type of plan deliverability. These textural features always showed higher correlations to the plan deliverability than did the conventional modulation indices, except in the case of modulating parameter differences. The r
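
    The textural features above are computed from a grey-level co-occurrence matrix (GLCM) of the fluence map at a given displacement distance d. A minimal sketch of the contrast and variance features for a horizontal displacement (hypothetical quantized fluence map; the displacement directions and grey-level binning used in the study are assumptions here):

```python
import numpy as np

def glcm(image, d=1, levels=16):
    """Grey-level co-occurrence matrix for horizontal displacement d (symmetric, normalized)."""
    q = np.floor(image / image.max() * (levels - 1)).astype(int)   # quantize to grey levels
    p = np.zeros((levels, levels))
    left, right = q[:, :-d], q[:, d:]
    for i, j in zip(left.ravel(), right.ravel()):
        p[i, j] += 1
        p[j, i] += 1                                               # symmetric counting
    return p / p.sum()

def contrast_and_variance(p):
    i, j = np.indices(p.shape)
    contrast = np.sum((i - j) ** 2 * p)
    mu = np.sum(i * p)                                             # GLCM mean grey level
    variance = np.sum((i - mu) ** 2 * p)
    return contrast, variance

rng = np.random.default_rng(7)
fluence = rng.random((64, 64)) * 100.0      # hypothetical fluence map
for d in (1, 5, 10):
    print(d, contrast_and_variance(glcm(fluence, d=d)))
```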

  9. Estimating the encounter rate variance in distance sampling

    Science.gov (United States)

    Fewster, R.M.; Buckland, S.T.; Burnham, K.P.; Borchers, D.L.; Jupp, P.E.; Laake, J.L.; Thomas, L.

    2009-01-01

    The dominant source of variance in line transect sampling is usually the encounter rate variance. Systematic survey designs are often used to reduce the true variability among different realizations of the design, but estimating the variance is difficult and estimators typically approximate the variance by treating the design as a simple random sample of lines. We explore the properties of different encounter rate variance estimators under random and systematic designs. We show that a design-based variance estimator improves upon the model-based estimator of Buckland et al. (2001, Introduction to Distance Sampling. Oxford: Oxford University Press, p. 79) when transects are positioned at random. However, if populations exhibit strong spatial trends, both estimators can have substantial positive bias under systematic designs. We show that poststratification is effective in reducing this bias. © 2008, The International Biometric Society.
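
    For reference, the estimator that treats the K lines as a simple random sample has a simple closed form. A sketch of one commonly used version for lines of unequal length (the poststratification and systematic-design corrections discussed in the paper are not shown):

```python
import numpy as np

def encounter_rate_variance(counts, lengths):
    """Variance of the encounter rate n/L over K transect lines of unequal length.

    One commonly used design-based form that treats the lines as a simple
    random sample:
        var(n/L) = K / (L^2 (K-1)) * sum_k l_k^2 (n_k/l_k - n/L)^2
    """
    counts = np.asarray(counts, dtype=float)
    lengths = np.asarray(lengths, dtype=float)
    K = len(counts)
    n, L = counts.sum(), lengths.sum()
    rate = n / L
    return K / (L ** 2 * (K - 1)) * np.sum(lengths ** 2 * (counts / lengths - rate) ** 2)

# Example: 5 transects (made-up detections and line lengths in km)
print(encounter_rate_variance([12, 7, 15, 3, 9], [4.0, 3.5, 5.0, 2.0, 3.0]))
```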

  10. Generating Importance Map for Geometry Splitting using Discrete Ordinates Code in Deep Shielding Problem

    International Nuclear Information System (INIS)

    Kim, Jong Woon; Lee, Young Ouk

    2016-01-01

    When we use the MCNP code for a deep shielding problem, we prefer to use variance reduction techniques such as geometry splitting, weight windows, or source biasing to keep the relative error within a reliable confidence interval. To generate an importance map for geometry splitting in an MCNP calculation, we need to know the number of tracks entering each cell and the previous importance of each cell, since a new importance is calculated from this information. For a deep shielding problem in which zero tracks enter a cell, we cannot generate a new importance map. In this case, a discrete ordinates code can provide the information needed to generate the importance map easily. In this paper, we use the AETIUS code as the discrete ordinates code. The importance map for MCNP is generated based on the zone-average flux of the AETIUS calculation. The discretization of space, angle, and energy is not necessary for the MCNP calculation; this is the big merit of the MCNP code compared to a deterministic code. However, a deterministic code (i.e., AETIUS) can provide a rough estimate of the flux throughout a problem relatively quickly. This can help MCNP by providing variance reduction parameters. Recently, the ADVANTG code was released. This is an automated tool for generating variance reduction parameters for fixed-source continuous-energy Monte Carlo simulations with MCNP5 v1.60
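
    A rough sketch of the idea of turning a deterministic zone-average flux into cell importances for geometry splitting (illustrative only; the actual AETIUS-to-MCNP normalization and the importance-ratio limits are assumptions): importances are taken inversely proportional to the forward flux and normalized to the source cell, so that particle populations stay roughly constant along the penetration direction.

```python
import numpy as np

def importance_map_from_flux(zone_flux, source_cell=0, max_ratio=4.0):
    """Derive cell importances for geometry splitting from zone-average forward fluxes.

    Importance is taken inversely proportional to the flux and normalized so the
    source cell has importance 1; the ratio between neighbouring cells is capped
    so that splitting/rouletting stays moderate (max_ratio is an illustrative choice).
    """
    flux = np.asarray(zone_flux, dtype=float)
    imp = flux[source_cell] / flux
    # Cap the importance ratio between neighbouring cells along the penetration path.
    for i in range(1, len(imp)):
        imp[i] = min(imp[i], imp[i - 1] * max_ratio)
    return imp

# Example: fluxes decreasing through a thick shield (arbitrary units)
zone_flux = [1.0, 2.1e-1, 4.5e-2, 9.7e-3, 2.1e-3, 4.4e-4]
print(importance_map_from_flux(zone_flux))
```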

  11. High resolution optical DNA mapping

    Science.gov (United States)

    Baday, Murat

    Many types of diseases including cancer and autism are associated with copy-number variations in the genome. Most of these variations could not be identified with existing sequencing and optical DNA mapping methods. We have developed a multi-color super-resolution technique, with potential for high throughput and low cost, which allows us to recognize more of these variations. Our technique achieves a 10-fold improvement in the resolution of optical DNA mapping. Using a 180 kb BAC clone as a model system, we resolved dense patterns from 108 fluorescent labels of two different colors representing two different sequence motifs. Overall, a detailed DNA map with 100 bp resolution was achieved, which has the potential to reveal detailed information about genetic variance and to facilitate medical diagnosis of genetic disease.

  12. Towards the ultimate variance-conserving convection scheme

    International Nuclear Information System (INIS)

    Os, J.J.A.M. van; Uittenbogaard, R.E.

    2004-01-01

    In the past various arguments have been used for applying kinetic energy-conserving advection schemes in numerical simulations of incompressible fluid flows. One argument is obeying the programmed dissipation by viscous stresses or by sub-grid stresses in Direct Numerical Simulation and Large Eddy Simulation, see e.g. [Phys. Fluids A 3 (7) (1991) 1766]. Another argument is that, according to e.g. [J. Comput. Phys. 6 (1970) 392; 1 (1966) 119], energy-conserving convection schemes are more stable i.e. by prohibiting a spurious blow-up of volume-integrated energy in a closed volume without external energy sources. In the above-mentioned references it is stated that nonlinear instability is due to spatial truncation rather than to time truncation and therefore these papers are mainly concerned with the spatial integration. In this paper we demonstrate that discretized temporal integration of a spatially variance-conserving convection scheme can induce non-energy conserving solutions. In this paper the conservation of the variance of a scalar property is taken as a simple model for the conservation of kinetic energy. In addition, the derivation and testing of a variance-conserving scheme allows for a clear definition of kinetic energy-conserving advection schemes for solving the Navier-Stokes equations. Consequently, we first derive and test a strictly variance-conserving space-time discretization for the convection term in the convection-diffusion equation. Our starting point is the variance-conserving spatial discretization of the convection operator presented by Piacsek and Williams [J. Comput. Phys. 6 (1970) 392]. In terms of its conservation properties, our variance-conserving scheme is compared to other spatially variance-conserving schemes as well as with the non-variance-conserving schemes applied in our shallow-water solver, see e.g. [Direct and Large-eddy Simulation Workshop IV, ERCOFTAC Series, Kluwer Academic Publishers, 2001, pp. 409-287

  13. On a numerical method for solving integro-differential equations with variable coefficients with applications in finance

    Science.gov (United States)

    Kudryavtsev, O.; Rodochenko, V.

    2018-03-01

    We propose a new general numerical method aimed at solving integro-differential equations with variable coefficients. The problem under consideration arises in finance in the context of pricing barrier options in a wide class of stochastic volatility models with jumps. To handle the effect of the correlation between the price and the variance, we use a suitable substitution for processes. Then we construct a Markov-chain approximation for the variation process on small time intervals and apply a maturity randomization technique. The result is a system of boundary problems for integro-differential equations with constant coefficients on the line at each vertex of the chain. We solve the arising problems using a numerical Wiener-Hopf factorization method. The approximate formulae for the factors are efficiently implemented by means of the Fast Fourier Transform. Finally, we use a recurrent procedure that moves backwards in time on the variance tree. We demonstrate the convergence of the method using Monte-Carlo simulations and compare our results with the results obtained by the Wiener-Hopf method with closed-form expressions of the factors.

  14. Mean-Variance Portfolio Selection with Margin Requirements

    Directory of Open Access Journals (Sweden)

    Yuan Zhou

    2013-01-01

    Full Text Available We study the continuous-time mean-variance portfolio selection problem in the situation when investors must pay margin for short selling. The problem is essentially a nonlinear stochastic optimal control problem because the coefficients of the positive and negative parts of the control variables are different. We cannot apply the results of the stochastic linear-quadratic (LQ) problem. Also, the solution of the corresponding Hamilton-Jacobi-Bellman (HJB) equation is not smooth. Li et al. (2002) studied the case when short selling is prohibited; therefore they only needed to consider the positive part of the control variables, whereas we need to handle both the positive part and the negative part of the control variables. The main difficulty is that the positive part and the negative part are not independent, so the previous results are not directly applicable. By decomposing the problem into several subproblems, we work out the solutions of the HJB equation in two disjoint regions and then prove that it is the viscosity solution of the HJB equation. Finally, we formulate the solution of the optimal portfolio and the efficient frontier. We also present two examples showing how different margin rates affect the optimal solutions and the efficient frontier.

  15. The Distribution of the Sample Minimum-Variance Frontier

    OpenAIRE

    Raymond Kan; Daniel R. Smith

    2008-01-01

    In this paper, we present a finite sample analysis of the sample minimum-variance frontier under the assumption that the returns are independent and multivariate normally distributed. We show that the sample minimum-variance frontier is a highly biased estimator of the population frontier, and we propose an improved estimator of the population frontier. In addition, we provide the exact distribution of the out-of-sample mean and variance of sample minimum-variance portfolios. This allows us t...

  16. Minimum variance and variance of outgoing quality limit MDS-1(c1, c2) plans

    Science.gov (United States)

    Raju, C.; Vidya, R.

    2016-06-01

    In this article, the outgoing quality (OQ) and total inspection (TI) of multiple deferred state sampling plans MDS-1(c1,c2) are studied. It is assumed that the inspection is rejection rectification. Procedures for designing MDS-1(c1,c2) sampling plans with minimum variance of OQ and TI are developed. A procedure for obtaining a plan for a designated upper limit for the variance of the OQ (VOQL) is outlined.

  17. RepExplore: addressing technical replicate variance in proteomics and metabolomics data analysis.

    Science.gov (United States)

    Glaab, Enrico; Schneider, Reinhard

    2015-07-01

    High-throughput omics datasets often contain technical replicates included to account for technical sources of noise in the measurement process. Although summarizing these replicate measurements by using robust averages may help to reduce the influence of noise on downstream data analysis, the information on the variance across the replicate measurements is lost in the averaging process and therefore typically disregarded in subsequent statistical analyses. We introduce RepExplore, a web service dedicated to exploiting the information captured in the technical replicate variance to provide more reliable and informative differential expression and abundance statistics for omics datasets. The software builds on previously published statistical methods, which have been applied successfully to biomedical omics data but are difficult to use without prior experience in programming or scripting. RepExplore facilitates the analysis by providing fully automated data processing and interactive ranking tables, whisker plots, heat maps and principal component analysis visualizations to interpret omics data and derived statistics. Freely available at http://www.repexplore.tk. Contact: enrico.glaab@uni.lu. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.
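
    As an illustration of why the replicate variance is worth keeping (this is a generic Welch-type statistic, not RepExplore's published algorithm), a per-feature difference score can use the technical replicate variance in its denominator instead of collapsing replicates to an average first:

```python
import numpy as np

def replicate_aware_statistic(cond_a, cond_b):
    """Per-feature difference statistic that keeps the technical replicate variance.

    cond_a, cond_b: arrays of shape (features x replicates). The replicate variance
    enters the denominator, so features with noisy replicates are ranked lower.
    """
    mean_a, mean_b = cond_a.mean(axis=1), cond_b.mean(axis=1)
    var_a = cond_a.var(axis=1, ddof=1) / cond_a.shape[1]
    var_b = cond_b.var(axis=1, ddof=1) / cond_b.shape[1]
    return (mean_a - mean_b) / np.sqrt(var_a + var_b + 1e-12)

rng = np.random.default_rng(3)
a = rng.normal(10.0, 1.0, size=(100, 3))      # hypothetical condition A, 3 technical replicates
b = rng.normal(10.5, 1.0, size=(100, 3))      # hypothetical condition B, 3 technical replicates
scores = replicate_aware_statistic(a, b)
print("features ranked by |score|:", np.argsort(-np.abs(scores))[:5])
```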

  18. Quantitative and Qualitative Responses to Topical Cold in Healthy Caucasians Show Variance between Individuals but High Test-Retest Reliability.

    Directory of Open Access Journals (Sweden)

    Penny Moss

    Full Text Available Increased sensitivity to cold may be a predictor of persistent pain, but cold pain threshold is often viewed as unreliable. This study aimed to determine the within-subject reliability and between-subject variance of cold response, measured comprehensively as cold pain threshold plus pain intensity and sensation quality at threshold. A test-retest design was used over three sessions, one day apart. Response to cold was assessed at four sites (thenar eminence, volar forearm, tibialis anterior, plantar foot). Cold pain threshold was measured using a Medoc thermode and standard method of limits. Intensity of pain at threshold was rated using a 10cm visual analogue scale. Quality of sensation at threshold was quantified with indices calculated from subjects' selection of descriptors from a standard McGill Pain Questionnaire. Within-subject reliability for each measure was calculated with intra-class correlation coefficients and between-subject variance was evaluated as group coefficient of variation percentage (CV%). Gender and site comparisons were also made. Forty-five healthy adults participated: 20 male, 25 female; mean age 29 (range 18-56) years. All measures at all four test sites showed high within-subject reliability: cold pain thresholds r = 0.92-0.95; pain rating r = 0.93-0.97; McGill pain quality indices r = 0.87-0.85. In contrast, all measures showed wide between-subject variance (CV% between 51.4% and 92.5%). Upper limb sites were consistently more sensitive than lower limb sites, but equally reliable. Females showed elevated cold pain thresholds, although similar pain intensity and quality to males. Females were also more reliable and showed lower variance for all measures. Thus, although there was clear population variation, response to cold for healthy individuals was found to be highly reliable, whether measured as pain threshold, pain intensity or sensation quality. A comprehensive approach to cold response testing therefore may add

  19. Quantitative and Qualitative Responses to Topical Cold in Healthy Caucasians Show Variance between Individuals but High Test-Retest Reliability.

    Science.gov (United States)

    Moss, Penny; Whitnell, Jasmine; Wright, Anthony

    2016-01-01

    Increased sensitivity to cold may be a predictor of persistent pain, but cold pain threshold is often viewed as unreliable. This study aimed to determine the within-subject reliability and between-subject variance of cold response, measured comprehensively as cold pain threshold plus pain intensity and sensation quality at threshold. A test-retest design was used over three sessions, one day apart. Response to cold was assessed at four sites (thenar eminence, volar forearm, tibialis anterior, plantar foot). Cold pain threshold was measured using a Medoc thermode and standard method of limits. Intensity of pain at threshold was rated using a 10cm visual analogue scale. Quality of sensation at threshold was quantified with indices calculated from subjects' selection of descriptors from a standard McGill Pain Questionnaire. Within-subject reliability for each measure was calculated with intra-class correlation coefficients and between-subject variance was evaluated as group coefficient of variation percentage (CV%). Gender and site comparisons were also made. Forty-five healthy adults participated: 20 male, 25 female; mean age 29 (range 18-56) years. All measures at all four test sites showed high within-subject reliability: cold pain thresholds r = 0.92-0.95; pain rating r = 0.93-0.97; McGill pain quality indices r = 0.87-0.85. In contrast, all measures showed wide between-subject variance (CV% between 51.4% and 92.5%). Upper limb sites were consistently more sensitive than lower limb sites, but equally reliable. Females showed elevated cold pain thresholds, although similar pain intensity and quality to males. Females were also more reliable and showed lower variance for all measures. Thus, although there was clear population variation, response to cold for healthy individuals was found to be highly reliable, whether measured as pain threshold, pain intensity or sensation quality. A comprehensive approach to cold response testing therefore may add validity and

  20. Genotypic-specific variance in Caenorhabditis elegans lifetime fecundity.

    Science.gov (United States)

    Diaz, S Anaid; Viney, Mark

    2014-06-01

    Organisms live in heterogeneous environments, so strategies that maximize fitness in such environments will evolve. Variation in traits is important because it is the raw material on which natural selection acts during evolution. Phenotypic variation is usually thought to be due to genetic variation and/or environmentally induced effects. Therefore, genetically identical individuals in a constant environment should have invariant traits. Clearly, genetically identical individuals do differ phenotypically, usually thought to be due to stochastic processes. It is now becoming clear, especially from studies of unicellular species, that phenotypic variance among genetically identical individuals in a constant environment can be genetically controlled and that therefore, in principle, this can be subject to selection. However, there has been little investigation of these phenomena in multicellular species. Here, we have studied the mean lifetime fecundity (thus a trait likely to be relevant to reproductive success), and variance in lifetime fecundity, in recently-wild isolates of the model nematode Caenorhabditis elegans. We found that these genotypes differed in their variance in lifetime fecundity: some had high variance in fecundity, others very low variance. We find that this variance in lifetime fecundity was negatively related to the mean lifetime fecundity of the lines, and that the variance of the lines was positively correlated between environments. We suggest that the variance in lifetime fecundity may be a bet-hedging strategy used by this species.

  1. Fracture mapping at the Spent Fuel Test-Climax

    International Nuclear Information System (INIS)

    Wilder, D.G.; Yow, J.L. Jr.

    1981-05-01

    Mapping of geologic discontinuities has been done in several phases at the Spent Fuel Test-Climax (SFT-C) in the granitic Climax stock at the Nevada Test Site. Mapping was carried out in the tail drift, access drift, canister drift, heater drifts, instrumentation alcove, and receiving room. The fractures mapped as intersecting a horizontal datum in the canister and heater drifts are shown on one figure. Fracture sketch maps have been compiled as additional figures. Geologic mapping efforts were scheduled around and significantly impacted by the excavation and construction schedules. Several people were involved in the mapping, and over 2500 geologic discontinuities were mapped, including joints, shears, and faults. Some variance between individuals' mapping efforts was noticed, and the effects of various magnetic influences upon a compass were examined. The examination of compass errors improved the credibility of the data. The compass analysis work is explained in Appendix A. Analysis of the fracture data will be presented in a future report

  2. Discrete and continuous time dynamic mean-variance analysis

    OpenAIRE

    Reiss, Ariane

    1999-01-01

    Contrary to static mean-variance analysis, very few papers have dealt with dynamic mean-variance analysis. Here, the mean-variance efficient self-financing portfolio strategy is derived for n risky assets in discrete and continuous time. In the discrete setting, the resulting portfolio is mean-variance efficient in a dynamic sense. It is shown that the optimal strategy for n risky assets may be dominated if the expected terminal wealth is constrained to exactly attain a certain goal instead o...

  3. Nonlinear Epigenetic Variance: Review and Simulations

    Science.gov (United States)

    Kan, Kees-Jan; Ploeger, Annemie; Raijmakers, Maartje E. J.; Dolan, Conor V.; van Der Maas, Han L. J.

    2010-01-01

    We present a review of empirical evidence that suggests that a substantial portion of phenotypic variance is due to nonlinear (epigenetic) processes during ontogenesis. The role of such processes as a source of phenotypic variance in human behaviour genetic studies is not fully appreciated. In addition to our review, we present simulation studies…

  4. A proxy for variance in dense matching over homogeneous terrain

    Science.gov (United States)

    Altena, Bas; Cockx, Liesbet; Goedemé, Toon

    2014-05-01

    Automation in photogrammetry and avionics have brought highly autonomous UAV mapping solutions on the market. These systems have great potential for geophysical research, due to their mobility and simplicity of work. Flight planning can be done on site and orientation parameters are estimated automatically. However, one major drawback is still present: if contrast is lacking, stereoscopy fails. Consequently, topographic information cannot be obtained precisely through photogrammetry for areas with low contrast. Even though more robustness is added in the estimation through multi-view geometry, a precise product is still lacking. For the greater part, interpolation is applied over these regions, where the estimation is constrained by uniqueness, its epipolar line and smoothness. Consequently, digital surface models are generated with an estimate of the topography, without holes but also without an indication of its variance. Every dense matching algorithm is based on a similarity measure. Our methodology uses this property to support the idea that if only noise is present, no correspondence can be detected. Therefore, the noise level is estimated in respect to the intensity signal of the topography (SNR) and this ratio serves as a quality indicator for the automatically generated product. To demonstrate this variance indicator, two different case studies were elaborated. The first study is situated at an open sand mine near the village of Kiezegem, Belgium. Two different UAV systems flew over the site. One system had automatic intensity regulation, and resulted in low contrast over the sandy interior of the mine. That dataset was used to identify the weak estimations of the topography and was compared with the data from the other UAV flight. In the second study a flight campaign with the X100 system was conducted along the coast near Wenduine, Belgium. The obtained images were processed through structure-from-motion software. Although the beach had a very low

  5. The mean–variance relationship reveals two possible strategies for dynamic brain connectivity analysis in fMRI

    Science.gov (United States)

    Thompson, William H.; Fransson, Peter

    2015-01-01

    When studying brain connectivity using fMRI, signal intensity time-series are typically correlated with each other in time to compute estimates of the degree of interaction between different brain regions and/or networks. In the static connectivity case, the problem of defining which connections that should be considered significant in the analysis can be addressed in a rather straightforward manner by a statistical thresholding that is based on the magnitude of the correlation coefficients. More recently, interest has come to focus on the dynamical aspects of brain connectivity and the problem of deciding which brain connections that are to be considered relevant in the context of dynamical changes in connectivity provides further options. Since we, in the dynamical case, are interested in changes in connectivity over time, the variance of the correlation time-series becomes a relevant parameter. In this study, we discuss the relationship between the mean and variance of brain connectivity time-series and show that by studying the relation between them, two conceptually different strategies to analyze dynamic functional brain connectivity become available. Using resting-state fMRI data from a cohort of 46 subjects, we show that the mean of fMRI connectivity time-series scales negatively with its variance. This finding leads to the suggestion that magnitude- versus variance-based thresholding strategies will induce different results in studies of dynamic functional brain connectivity. Our assertion is exemplified by showing that the magnitude-based strategy is more sensitive to within-resting-state network (RSN) connectivity compared to between-RSN connectivity whereas the opposite holds true for a variance-based analysis strategy. The implications of our findings for dynamical functional brain connectivity studies are discussed. PMID:26236216

  6. Per-pixel bias-variance decomposition of continuous errors in data-driven geospatial modeling: A case study in environmental remote sensing

    Science.gov (United States)

    Gao, Jing; Burt, James E.

    2017-12-01

    This study investigates the usefulness of a per-pixel bias-variance error decomposition (BVD) for understanding and improving spatially-explicit data-driven models of continuous variables in environmental remote sensing (ERS). BVD is a model evaluation method that originated in machine learning and has not been examined for ERS applications. Demonstrated with a showcase regression tree model mapping land imperviousness (0-100%) using Landsat images, our results showed that BVD can reveal sources of estimation errors, map how these sources vary across space, reveal the effects of various model characteristics on estimation accuracy, and enable in-depth comparison of different error metrics. Specifically, BVD bias maps can help analysts identify and delineate model spatial non-stationarity; BVD variance maps can indicate potential effects of ensemble methods (e.g. bagging), and inform efficient training sample allocation - training samples should capture the full complexity of the modeled process, and more samples should be allocated to regions with more complex underlying processes rather than regions covering larger areas. Through examining the relationships between model characteristics and their effects on estimation accuracy revealed by BVD for both absolute and squared errors (i.e. error is the absolute or the squared value of the difference between observation and estimate), we found that the two error metrics embody different diagnostic emphases, can lead to different conclusions about the same model, and may suggest different solutions for performance improvement. We emphasize BVD's strength in revealing the connection between model characteristics and estimation accuracy, as understanding this relationship empowers analysts to effectively steer performance through model adjustments.
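
    The decomposition itself is straightforward once an ensemble of estimates per pixel is available (e.g. from models trained on resampled training sets). A minimal sketch of the per-pixel squared-error decomposition, with hypothetical arrays standing in for the imperviousness maps:

```python
import numpy as np

def per_pixel_bias_variance(predictions, reference):
    """Decompose per-pixel squared error of an ensemble of estimates.

    predictions: array (n_models, rows, cols) of estimates from resampled trainings.
    reference:   array (rows, cols) of reference values.
    Returns bias^2, variance and their sum (expected squared error, noise ignored).
    """
    mean_pred = predictions.mean(axis=0)
    bias_sq = (mean_pred - reference) ** 2
    variance = predictions.var(axis=0)
    return bias_sq, variance, bias_sq + variance

rng = np.random.default_rng(11)
reference = rng.uniform(0, 100, size=(50, 50))                     # e.g. % imperviousness
ensemble = reference + rng.normal(0, 5, size=(20, 50, 50)) + 2.0   # 20 noisy, biased estimates
bias_sq, var, err = per_pixel_bias_variance(ensemble, reference)
print(bias_sq.mean(), var.mean(), err.mean())                      # per-pixel maps can be plotted
```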

  7. Patterns of accentuated grey-white differentiation on diffusion-weighted imaging or the apparent diffusion coefficient maps in comatose survivors after global brain injury

    International Nuclear Information System (INIS)

    Kim, E.; Sohn, C.-H.; Chang, K.-H.; Chang, H.-W.; Lee, D.H.

    2011-01-01

    Aim: To determine what disease entities show accentuated grey-white differentiation of the cerebral hemisphere on diffusion-weighted images (DWI) or apparent diffusion coefficient (ADC) maps, and whether there is a correlation between the different patterns and the cause of the brain injury. Methods and materials: The DWI and ADC maps of 19 patients with global brain injury were reviewed and evaluated to investigate whether there was a correlation between the different patterns seen on the DWI and ADC maps and the cause of global brain injury. The ADC values were measured for quantitative analysis. Results: There were three different patterns of ADC decrease: a predominant ADC decrease in only the cerebral cortex (n = 8; pattern I); an ADC decrease in both the cerebral cortex and white matter (WM) and a predominant decrease in the WM (n = 9; pattern II); and a predominant ADC decrease in only the WM (n = 3; pattern III). Conclusion: Pattern I is cerebral cortical injury, suggesting cortical laminar necrosis in hypoxic brain injury. Pattern II is cerebral cortical and WM injury, frequently seen in brain death, while pattern III is mainly WM injury, especially found in hypoglycaemic brain injury. It is likely that pattern I is decorticate injury and pattern II is decerebrate injury in hypoxic ischaemic encephalopathy. Patterns I and II are found in severe hypoxic brain injury, and pattern II is frequently shown in brain death, whereas pattern III was found in severe hypoglycaemic injury.

  8. Revision: Variance Inflation in Regression

    Directory of Open Access Journals (Sweden)

    D. R. Jensen

    2013-01-01

    the intercept; and (iv) variance deflation may occur, where ill-conditioned data yield smaller variances than their orthogonal surrogates. Conventional VIFs have all regressors linked, or none, which is often untenable in practice. Beyond these, our models enable the unlinking of regressors that can be unlinked, while preserving dependence among those intrinsically linked. Moreover, known collinearity indices are extended to encompass angles between subspaces of regressors. To reassess ill-conditioned data, we consider case studies ranging from elementary examples to data from the literature.
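
    For reference, conventional variance inflation factors can be computed as in the sketch below; this shows only the textbook definition VIF_j = 1/(1 - R_j^2) and does not reproduce the article's extended or "unlinked" indices.

```python
# Conventional variance inflation factors (textbook definition only).
import numpy as np

def vif(X):
    """VIF_j = 1 / (1 - R_j^2), regressing column j on the remaining columns."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    out = np.empty(p)
    for j in range(p):
        y = X[:, j]
        Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        r2 = 1.0 - resid.var() / y.var()
        out[j] = 1.0 / (1.0 - r2)
    return out
```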

  9. PET attenuation coefficients from CT images: experimental evaluation of the transformation of CT into PET 511-keV attenuation coefficients.

    Science.gov (United States)

    Burger, C; Goerres, G; Schoenes, S; Buck, A; Lonn, A H R; Von Schulthess, G K

    2002-07-01

    The CT data acquired in combined PET/CT studies provide a fast and essentially noiseless source for the correction of photon attenuation in PET emission data. To this end, the CT values relating to attenuation of photons in the range of 40-140 keV must be transformed into linear attenuation coefficients at the PET energy of 511 keV. As attenuation depends on photon energy and the absorbing material, an accurate theoretical relation cannot be devised. The transformation implemented in the Discovery LS PET/CT scanner (GE Medical Systems, Milwaukee, Wis.) uses a bilinear function based on the attenuation of water and cortical bone at the CT and PET energies. The purpose of this study was to compare this transformation with experimental CT values and corresponding PET attenuation coefficients. In 14 patients, quantitative PET attenuation maps were calculated from germanium-68 transmission scans, and resolution-matched CT images were generated. A total of 114 volumes of interest were defined and the average PET attenuation coefficients and CT values measured. From the CT values the predicted PET attenuation coefficients were calculated using the bilinear transformation. When the transformation was based on the narrow-beam attenuation coefficient of water at 511 keV (0.096 cm⁻¹), the predicted attenuation coefficients were higher in soft tissue than the measured values. This bias was reduced by replacing 0.096 cm⁻¹ in the transformation by the linear attenuation coefficient of 0.093 cm⁻¹ obtained from germanium-68 transmission scans. An analysis of the corrected emission activities shows that the resulting transformation is essentially equivalent to the transmission-based attenuation correction for human tissue. For non-human material, however, it may assign inaccurate attenuation coefficients which will also affect the correction in neighbouring tissue.
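
    A minimal sketch of such a bilinear mapping is given below; the breakpoint at 0 HU and the bone-segment slope are illustrative assumptions, and only the role of the water coefficient (0.096 versus 0.093 cm⁻¹) follows the abstract.

```python
# Sketch of a bilinear CT-number to 511-keV attenuation-coefficient mapping.
# Breakpoint and bone-segment slope are illustrative, not the scanner's values.
import numpy as np

def ct_to_mu511(hu, mu_water=0.093, bone_slope=5.1e-5):
    """Map CT numbers (HU) to linear attenuation coefficients (cm^-1) at 511 keV."""
    hu = np.asarray(hu, dtype=float)
    mu = np.where(
        hu <= 0,
        mu_water * (hu + 1000.0) / 1000.0,   # air (-1000 HU) up to water (0 HU)
        mu_water + bone_slope * hu,          # soft tissue up to dense bone
    )
    return np.clip(mu, 0.0, None)

print(ct_to_mu511([-1000, 0, 1000]))   # e.g. [0.0, 0.093, 0.144]
```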

  10. Male-biased recombination in odonates: insights from a linkage map ...

    Indian Academy of Sciences (India)

    2013-04-05

    Apr 5, 2013 ... Male-biased recombination in odonates: insights from a linkage map of the damselfly ... particular, odonates are emerging model systems for biotic effects of .... sex with highest variance in reproductive success (Trivers. 1988).

  11. Prognostic value of diffusion-weighted imaging summation scores or apparent diffusion coefficient maps in newborns with hypoxic-ischemic encephalopathy

    International Nuclear Information System (INIS)

    Cavalleri, Francesca; Todeschini, Alessandra; Lugli, Licia; Pugliese, Marisa; Della Casa, Elisa; Gallo, Claudio; Frassoldati, Rossella; Ferrari, Fabrizio; D'Amico, Roberto

    2014-01-01

    The diagnostic and prognostic assessment of newborn infants with hypoxic-ischemic encephalopathy (HIE) comprises, among other tools, diffusion-weighted imaging (DWI) and apparent diffusion coefficient (ADC) maps. To compare the ability of DWI and ADC maps in newborns with HIE to predict the neurodevelopmental outcome at 2 years of age. Thirty-four term newborns with HIE admitted to the Neonatal Intensive Care Unit of Modena University Hospital from 2004 to 2008 were consecutively enrolled in the study. All newborns received EEG, conventional MRI and DWI within the first week of life. DWI was analyzed by means of summation (S) score and regional ADC measurements. Neurodevelopmental outcome was assessed with a standard 1-4 scale and the Griffiths Mental Developmental Scales - Revised (GMDS-R). When the outcome was evaluated with a standard 1-4 scale, the DWI S scores showed very high area under the curve (AUC) (0.89) whereas regional ADC measurements in specific subregions had relatively modest predictive value. The lentiform nucleus was the region with the highest AUC (0.78). When GMDS-R were considered, DWI S scores were good to excellent predictors for some GMDS-R subscales. The predictive value of ADC measurements was both region- and subscale-specific. In particular, ADC measurements in some regions (basal ganglia, white matter or rolandic cortex) were excellent predictors for specific GMDS-R with AUCs up to 0.93. DWI S scores showed the highest prognostic value for the neurological outcome at 2 years of age. Regional ADC measurements in specific subregions proved to be highly prognostic for specific neurodevelopmental outcomes. (orig.)

  12. Prognostic value of diffusion-weighted imaging summation scores or apparent diffusion coefficient maps in newborns with hypoxic-ischemic encephalopathy

    Energy Technology Data Exchange (ETDEWEB)

    Cavalleri, Francesca; Todeschini, Alessandra [Azienda Unita Sanitaria Locale di Modena, Neuroradiology Unit, Department of Neuroscience, Nuovo Ospedale Civile S. Agostino Estense di Modena, Modena (Italy); Lugli, Licia; Pugliese, Marisa; Della Casa, Elisa; Gallo, Claudio; Frassoldati, Rossella; Ferrari, Fabrizio [Modena University Hospital, Institute of Pediatrics and Neonatal Medicine and NICU, Modena (Italy); D'Amico, Roberto [University of Modena and Reggio Emilia, Department of Clinical and Diagnostic Medicine and Public Health, Modena (Italy)

    2014-09-15

    The diagnostic and prognostic assessment of newborn infants with hypoxic-ischemic encephalopathy (HIE) comprises, among other tools, diffusion-weighted imaging (DWI) and apparent diffusion coefficient (ADC) maps. To compare the ability of DWI and ADC maps in newborns with HIE to predict the neurodevelopmental outcome at 2 years of age. Thirty-four term newborns with HIE admitted to the Neonatal Intensive Care Unit of Modena University Hospital from 2004 to 2008 were consecutively enrolled in the study. All newborns received EEG, conventional MRI and DWI within the first week of life. DWI was analyzed by means of summation (S) score and regional ADC measurements. Neurodevelopmental outcome was assessed with a standard 1-4 scale and the Griffiths Mental Developmental Scales - Revised (GMDS-R). When the outcome was evaluated with a standard 1-4 scale, the DWI S scores showed very high area under the curve (AUC) (0.89) whereas regional ADC measurements in specific subregions had relatively modest predictive value. The lentiform nucleus was the region with the highest AUC (0.78). When GMDS-R were considered, DWI S scores were good to excellent predictors for some GMDS-R subscales. The predictive value of ADC measurements was both region- and subscale-specific. In particular, ADC measurements in some regions (basal ganglia, white matter or rolandic cortex) were excellent predictors for specific GMDS-R with AUCs up to 0.93. DWI S scores showed the highest prognostic value for the neurological outcome at 2 years of age. Regional ADC measurements in specific subregions proved to be highly prognostic for specific neurodevelopmental outcomes. (orig.)

  13. Prognostic value of diffusion-weighted imaging summation scores or apparent diffusion coefficient maps in newborns with hypoxic-ischemic encephalopathy.

    Science.gov (United States)

    Cavalleri, Francesca; Lugli, Licia; Pugliese, Marisa; D'Amico, Roberto; Todeschini, Alessandra; Della Casa, Elisa; Gallo, Claudio; Frassoldati, Rossella; Ferrari, Fabrizio

    2014-09-01

    The diagnostic and prognostic assessment of newborn infants with hypoxic-ischemic encephalopathy (HIE) comprises, among other tools, diffusion-weighted imaging (DWI) and apparent diffusion coefficient (ADC) maps. To compare the ability of DWI and ADC maps in newborns with HIE to predict the neurodevelopmental outcome at 2 years of age. Thirty-four term newborns with HIE admitted to the Neonatal Intensive Care Unit of Modena University Hospital from 2004 to 2008 were consecutively enrolled in the study. All newborns received EEG, conventional MRI and DWI within the first week of life. DWI was analyzed by means of summation (S) score and regional ADC measurements. Neurodevelopmental outcome was assessed with a standard 1-4 scale and the Griffiths Mental Developmental Scales - Revised (GMDS-R). When the outcome was evaluated with a standard 1-4 scale, the DWI S scores showed very high area under the curve (AUC) (0.89) whereas regional ADC measurements in specific subregions had relatively modest predictive value. The lentiform nucleus was the region with the highest AUC (0.78). When GMDS-R were considered, DWI S scores were good to excellent predictors for some GMDS-R subscales. The predictive value of ADC measurements was both region- and subscale-specific. In particular, ADC measurements in some regions (basal ganglia, white matter or rolandic cortex) were excellent predictors for specific GMDS-R with AUCs up to 0.93. DWI S scores showed the highest prognostic value for the neurological outcome at 2 years of age. Regional ADC measurements in specific subregions proved to be highly prognostic for specific neurodevelopmental outcomes.

  14. Variance estimation for generalized Cavalieri estimators

    OpenAIRE

    Johanna Ziegel; Eva B. Vedel Jensen; Karl-Anton Dorph-Petersen

    2011-01-01

    The precision of stereological estimators based on systematic sampling is of great practical importance. This paper presents methods of data-based variance estimation for generalized Cavalieri estimators where errors in sampling positions may occur. Variance estimators are derived under perturbed systematic sampling, systematic sampling with cumulative errors and systematic sampling with random dropouts. Copyright 2011, Oxford University Press.

  15. Influence of Family Structure on Variance Decomposition

    DEFF Research Database (Denmark)

    Edwards, Stefan McKinnon; Sarup, Pernille Merete; Sørensen, Peter

    Partitioning genetic variance by sets of randomly sampled genes for complex traits in D. melanogaster and B. taurus, has revealed that population structure can affect variance decomposition. In fruit flies, we found that a high likelihood ratio is correlated with a high proportion of explained ge...... capturing pure noise. Therefore it is necessary to use both criteria, high likelihood ratio in favor of a more complex genetic model and proportion of genetic variance explained, to identify biologically important gene groups...

  16. Multiperiod Mean-Variance Portfolio Optimization via Market Cloning

    International Nuclear Information System (INIS)

    Ankirchner, Stefan; Dermoune, Azzouz

    2011-01-01

    The problem of finding the mean variance optimal portfolio in a multiperiod model cannot be solved directly by means of dynamic programming. In order to find a solution we therefore first introduce independent market clones having the same distributional properties as the original market, and we replace the portfolio mean and variance by their empirical counterparts. We then use dynamic programming to derive portfolios maximizing a weighted sum of the empirical mean and variance. By letting the number of market clones converge to infinity we are able to solve the original mean variance problem.

  17. Multiperiod Mean-Variance Portfolio Optimization via Market Cloning

    Energy Technology Data Exchange (ETDEWEB)

    Ankirchner, Stefan, E-mail: ankirchner@hcm.uni-bonn.de [Rheinische Friedrich-Wilhelms-Universitaet Bonn, Institut fuer Angewandte Mathematik, Hausdorff Center for Mathematics (Germany); Dermoune, Azzouz, E-mail: Azzouz.Dermoune@math.univ-lille1.fr [Universite des Sciences et Technologies de Lille, Laboratoire Paul Painleve UMR CNRS 8524 (France)

    2011-08-15

    The problem of finding the mean variance optimal portfolio in a multiperiod model cannot be solved directly by means of dynamic programming. In order to find a solution we therefore first introduce independent market clones having the same distributional properties as the original market, and we replace the portfolio mean and variance by their empirical counterparts. We then use dynamic programming to derive portfolios maximizing a weighted sum of the empirical mean and variance. By letting the number of market clones converge to infinity we are able to solve the original mean variance problem.

  18. Calibration of HEC-Ras hydrodynamic model using gauged discharge data and flood inundation maps

    Science.gov (United States)

    Tong, Rui; Komma, Jürgen

    2017-04-01

    The estimation of floods is essential for disaster alleviation. Hydrodynamic models are implemented to predict the occurrence and variance of floods at different scales. In practice, the calibration of hydrodynamic models aims to find the best possible parameters for representing the natural flow resistance. In recent years the calibration of hydrodynamic models has become more accurate and faster, following advances in earth observation products and computer-based optimization techniques. In this study, the Hydrologic Engineering River Analysis System (HEC-Ras) model was set up with a high-resolution digital elevation model from laser scanning for the river Inn in Tyrol, Austria. The 10 largest flood events, recorded at 19 hourly discharge gauges and in flood inundation maps, were selected to calibrate the HEC-Ras model. Manning roughness values and lateral inflow factors were the parameters automatically optimized with the Shuffled Complex with Principal Component Analysis (SP-UCI) algorithm, developed from the Shuffled Complex Evolution (SCE-UA) algorithm. Different objective functions (Nash-Sutcliffe model efficiency coefficient, timing of the peak, peak value and root-mean-square deviation) were used singly or in combination. It was found that the lateral inflow factor was the most sensitive parameter. The SP-UCI algorithm could avoid local optima and achieve efficient and effective parameters in the calibration of the HEC-Ras model using flood extent images. The results showed that calibration by means of gauged discharge data and flood inundation maps, together with the Nash-Sutcliffe model efficiency coefficient as the objective function, was very robust in obtaining more reliable flood simulations and in capturing the peak value and the timing of the peak.
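
    One of the objective functions named above, the Nash-Sutcliffe model efficiency coefficient, can be computed as in this generic sketch (not the study's calibration code):

```python
# Nash-Sutcliffe efficiency: 1 - SSE / total sum of squares of the observations.
import numpy as np

def nse(observed, simulated):
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)
```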

  19. The impact of grid and spectral nudging on the variance of the near-surface wind speed

    DEFF Research Database (Denmark)

    Vincent, Claire Louise; Hahmann, Andrea N.

    2015-01-01

    Grid and spectral nudging are effective ways of preventing drift from large scale weather patterns in regional climate models. However, the effect of nudging on the wind-speed variance is unclear. In this study, the impact of grid and spectral nudging on near-surface and upper boundary layer wind...... nudging at and above 1150 m above ground level (AGL). Nested 5 km simulations are not nudged directly, but inherit boundary conditions from the 15 km experiments. Spatial and temporal spectra show that grid nudging causes smoothing of the wind in the 15 km domain at all wavenumbers, both at 1150 m AGL...... and near the surface where nudging is not applied directly, while spectral nudging mainly affects longer wavenumbers. Maps of mesoscale variance show spatial smoothing for both grid and spectral nudging, although the effect is less pronounced for spectral nudging. On the inner, 5 km domain, an indirect...

  20. Minimum Variance Portfolios in the Brazilian Equity Market

    Directory of Open Access Journals (Sweden)

    Alexandre Rubesam

    2013-03-01

    We investigate minimum variance portfolios in the Brazilian equity market using different methods to estimate the covariance matrix, from the simple model of using the sample covariance to multivariate GARCH models. We compare the performance of the minimum variance portfolios to those of the following benchmarks: (i) the IBOVESPA equity index, (ii) an equally-weighted portfolio, (iii) the maximum Sharpe ratio portfolio and (iv) the maximum growth portfolio. Our results show that the minimum variance portfolio has higher returns with lower risk compared to the benchmarks. We also consider long-short 130/30 minimum variance portfolios and obtain similar results. The minimum variance portfolio invests in relatively few stocks with low βs measured with respect to the IBOVESPA index, being easily replicable by individual and institutional investors alike.
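
    For the simplest, unconstrained case, fully invested minimum-variance weights follow directly from the estimated covariance matrix, as in the sketch below; the long-only and 130/30 constraints considered in the article would require a quadratic-programming solver instead.

```python
# Unconstrained, fully invested minimum-variance portfolio: w = S^-1 1 / (1' S^-1 1).
import numpy as np

def min_variance_weights(cov):
    cov = np.asarray(cov, dtype=float)
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])
print(min_variance_weights(cov))
```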

  1. Why risk is not variance: an expository note.

    Science.gov (United States)

    Cox, Louis Anthony Tony

    2008-08-01

    Variance (or standard deviation) of return is widely used as a measure of risk in financial investment risk analysis applications, where mean-variance analysis is applied to calculate efficient frontiers and undominated portfolios. Why, then, do health, safety, and environmental (HS&E) and reliability engineering risk analysts insist on defining risk more flexibly, as being determined by probabilities and consequences, rather than simply by variances? This note suggests an answer by providing a simple proof that mean-variance decision making violates the principle that a rational decisionmaker should prefer higher to lower probabilities of receiving a fixed gain, all else being equal. Indeed, simply hypothesizing a continuous increasing indifference curve for mean-variance combinations at the origin is enough to imply that a decisionmaker must find unacceptable some prospects that offer a positive probability of gain and zero probability of loss. Unlike some previous analyses of limitations of variance as a risk metric, this expository note uses only simple mathematics and does not require the additional framework of von Neumann Morgenstern utility theory.
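
    The note's argument can be illustrated numerically: a prospect paying a fixed gain G with probability p (and nothing otherwise) has mean p*G and variance p*(1-p)*G^2, so its mean shrinks faster than its standard deviation as p decreases, and any mean-variance rule with a positive risk trade-off at the origin eventually rejects a prospect that can only gain and never lose. The trade-off constant below is an arbitrary illustrative choice, not taken from the note.

```python
# Mean-variance (here mean minus k times standard deviation) score of a
# prospect that pays G with probability p and 0 otherwise.
import numpy as np

G, k = 100.0, 0.5    # fixed gain and an assumed risk trade-off
for p in [0.5, 0.1, 0.01, 0.001]:
    mean = p * G
    sd = np.sqrt(p * (1.0 - p)) * G
    score = mean - k * sd          # negative score => prospect rejected vs. a sure 0
    print(f"p={p:<6} mean={mean:8.3f} sd={sd:8.3f} score={score:8.3f}")
```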

  2. The Prediction of the Expected Current Selection Coefficient of Single Nucleotide Polymorphism Associated with Holstein Milk Yield, Fat and Protein Contents

    Directory of Open Access Journals (Sweden)

    Young-Sup Lee

    2016-01-01

    Milk-related traits (milk yield, fat and protein) have been crucial to the selection of Holstein. It is essential to find the current selection trends of Holstein. Despite this, uncovering the current trends of selection has been ignored in previous studies. We suggest a new formula to detect the current selection trends based on single nucleotide polymorphisms (SNP). This suggestion is based on the best linear unbiased prediction (BLUP) and Fisher's fundamental theorem of natural selection, both of which are trait-dependent. Fisher's theorem links the additive genetic variance to the selection coefficient. For Holstein milk production traits, we estimated the additive genetic variance using SNP effects from BLUP and selection coefficients based on genetic variance to search for highly selective SNPs. Through these processes, we identified significantly selective SNPs. The number of genes containing highly selective SNPs with p-value <0.01 (nearly the top 1% of SNPs) in all traits and p-value <0.001 (nearly the top 0.1%) in any trait was 14. They are phosphodiesterase 4B (PDE4B), serine/threonine kinase 40 (STK40), collagen, type XI, alpha 1 (COL11A1), ephrin-A1 (EFNA1), netrin 4 (NTN4), neuron specific gene family member 1 (NSG1), estrogen receptor 1 (ESR1), neurexin 3 (NRXN3), spectrin, beta, non-erythrocytic 1 (SPTBN1), ADP-ribosylation factor interacting protein 1 (ARFIP1), mutL homolog 1 (MLH1), transmembrane channel-like 7 (TMC7), carboxypeptidase X, member 2 (CPXM2) and ADAM metallopeptidase domain 12 (ADAM12). These genes may be important for future artificial selection trends. Also, we found that the SNP effect predicted from BLUP was the key factor determining the expected current selection coefficient of a SNP. Under Hardy-Weinberg equilibrium of SNP markers in the current generation, the selection coefficient is equivalent to 2*SNP effect.

  3. Variance bias analysis for the Gelbard's batch method

    Energy Technology Data Exchange (ETDEWEB)

    Seo, Jae Uk; Shim, Hyung Jin [Seoul National Univ., Seoul (Korea, Republic of)

    2014-05-15

    In this paper, the variance and the bias are derived analytically when Gelbard's batch method is applied. The real variance estimated from this bias is then compared with the real variance calculated from replicas. The variance and the bias were derived analytically when the batch method was applied; when the batch method is used to calculate the sample variance, covariance terms between tallies within a batch are eliminated from the bias. With the 2 by 2 fission matrix problem, the real variance could be calculated regardless of whether the batch method was applied; however, as the batch size increased, the standard deviation of the real variance also increased. A Monte Carlo estimation yields a sample variance as its statistical uncertainty, but this value is smaller than the real variance because the sample variance is biased. To reduce this bias, Gelbard devised what is now called the Gelbard's batch method. It has been verified that the sample variance gets closer to the real variance, in other words the bias is reduced, when the batch method is applied. This fact is well known in the MC field; however, no analytical interpretation of it has been given so far.
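
    The idea can be illustrated with a generic batch-means estimator of the variance of a Monte Carlo mean, as sketched below; this is not the internal implementation of any particular MC code, and the tally sequence is assumed to be long enough to form at least two batches.

```python
# Batch-means estimate of the variance of a Monte Carlo mean. Grouping
# correlated cycle tallies into batches reduces the bias of the naive
# sample-variance estimate, which is the idea behind Gelbard's batch method.
import numpy as np

def batch_variance_of_mean(cycle_tallies, batch_size):
    x = np.asarray(cycle_tallies, dtype=float)
    n_batches = len(x) // batch_size
    batches = x[: n_batches * batch_size].reshape(n_batches, batch_size).mean(axis=1)
    return batches.var(ddof=1) / n_batches   # variance of the overall mean
```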

  4. Mean-Variance stochastic goal programming for sustainable mutual funds' portfolio selection.

    Directory of Open Access Journals (Sweden)

    García-Bernabeu, Ana

    2015-11-01

    Mean-Variance Stochastic Goal Programming models (MV-SGP) provide satisficing investment solutions in uncertain contexts. In this work, an MV-SGP model is proposed for portfolio selection which includes goals with regard to traditional and sustainable assets. The proposed approach is based on a two-step procedure. In the first step, sustainability and/or financial screens are applied to a set of assets (mutual funds) previously evaluated with TOPSIS to determine the opportunity set. In a second step, satisficing portfolios of assets are obtained using a Goal Programming approach. Two different goals are considered. The first goal reflects only the purely financial side of the target, while the second goal refers to the sustainable side. Aversion to Risk Absolute (ARA) coefficients are estimated and incorporated in our investment decision-making approach using two different approaches.

  5. NGA-West 2 GMPE average site coefficients for use in earthquake-resistant design

    Science.gov (United States)

    Borcherdt, Roger D.

    2015-01-01

    Site coefficients corresponding to those in tables 11.4–1 and 11.4–2 of Minimum Design Loads for Buildings and Other Structures published by the American Society of Civil Engineers (Standard ASCE/SEI 7-10) are derived from four of the Next Generation Attenuation West2 (NGA-W2) Ground-Motion Prediction Equations (GMPEs). The resulting coefficients are compared with those derived by other researchers and those derived from the NGA-West1 database. The derivation of the NGA-W2 average site coefficients provides a simple procedure to update site coefficients with each update in the Maximum Considered Earthquake Response (MCER) maps. The simple procedure yields average site coefficients consistent with those derived for site-specific design purposes. The NGA-W2 GMPEs provide simple scale factors to reduce conservatism in current simplified design procedures.

  6. Integrating Variances into an Analytical Database

    Science.gov (United States)

    Sanchez, Carlos

    2010-01-01

    For this project, I enrolled in numerous SATERN courses that taught the basics of database programming. These include: Basic Access 2007 Forms, Introduction to Database Systems, Overview of Database Design, and others. My main job was to create an analytical database that can handle many stored forms and make it easy to interpret and organize. Additionally, I helped improve an existing database and populate it with information. These databases were designed to be used with data from Safety Variances and DCR forms. The research consisted of analyzing the database and comparing the data to find out which entries were repeated the most. If an entry happened to be repeated several times in the database, that would mean that the rule or requirement targeted by that variance has been bypassed many times already and so the requirement may not really be needed, but rather should be changed to allow the variance's conditions permanently. This project did not only restrict itself to the design and development of the database system, but also worked on exporting the data from the database to a different format (e.g. Excel or Word) so it could be analyzed in a simpler fashion. Thanks to the change in format, the data was organized in a spreadsheet that made it possible to sort the data by categories or types and helped speed up searches. Once my work with the database was done, the records of variances could be arranged so that they were displayed in numerical order, or one could search for a specific document targeted by the variances and restrict the search to only include variances that modified a specific requirement. A great part that contributed to my learning was SATERN, NASA's resource for education. Thanks to the SATERN online courses I took over the summer, I was able to learn many new things about computers and databases and also go more in depth into topics I already knew about.

  7. Coefficients of productivity for Yellowstone's grizzly bear habitat

    Science.gov (United States)

    Mattson, David John; Barber, Kim; Maw, Ralene; Renkin, Roy

    2004-01-01

    This report describes methods for calculating coefficients used to depict habitat productivity for grizzly bears in the Yellowstone ecosystem. Calculations based on these coefficients are used in the Yellowstone Grizzly Bear Cumulative Effects Model to map the distribution of habitat productivity and account for the impacts of human facilities. The coefficients of habitat productivity incorporate detailed information that was collected over a 20-year period (1977-96) on the foraging behavior of Yellowstone's bears and include records of what bears were feeding on, when and where they fed, the extent of that feeding activity, and relative measures of the quantity consumed. The coefficients also incorporate information, collected primarily from 1986 to 1992, on the nutrient content of foods that were consumed, their digestibility, characteristic bite sizes, and the energy required to extract and handle each food. Coefficients were calculated for different time periods and different habitat types, specific to different parts of the Yellowstone ecosystem. Stratifications included four seasons of bear activity (spring, estrus, early hyperphagia, late hyperphagia), years when ungulate carrion and whitebark pine seed crops were abundant versus not, areas adjacent to (bear activity in each region, habitat type, and time period were incorporated into calculations, controlling for the effects of proximity to human facilities. The coefficients described in this report and associated estimates of grizzly bear habitat productivity are unique among many efforts to model the conditions of bear habitat because calculations include information on energetics derived from the observed behavior of radio-marked bears.

  8. Effect of Variable Manning Coefficients on Tsunami Inundation

    Science.gov (United States)

    Barberopoulou, A.; Rees, D.

    2017-12-01

    Numerical simulations are commonly used to help estimate tsunami hazard, improve evacuation plans, issue or cancel tsunami warnings, and inform forecasting and hazard assessments, and have therefore become an integral part of hazard mitigation in the tsunami community. Many numerical codes exist for simulating tsunamis, most of which have undergone extensive benchmarking and testing. Tsunami hazard or risk assessments employ these codes following a deterministic or probabilistic approach. Depending on their scope, these studies may or may not consider uncertainty in the numerical simulations, the effects of tides or variable friction, or estimate financial losses, none of which is trivial. Distributed Manning coefficients, the roughness coefficients used in hydraulic modeling, are commonly used in simulating both riverine and pluvial flood events; however, their use in tsunami hazard assessments is mostly confined to limited-scope studies and is, for the most part, not a standard practice. For this work, we investigate variations in Manning coefficients and their effects on tsunami inundation extent, pattern and financial loss. To assign Manning coefficients we use land use maps from the New Zealand Land Cover Database (LCDB) and more recent data from the Ministry of the Environment. More than 40 classes covering different types of land use are combined into major classes, such as cropland, grassland and wetland, representing common types of land use in New Zealand, each of which is assigned a unique Manning coefficient. By utilizing different data sources for variable Manning coefficients, we examine the impact of data sources and classification methodology on the accuracy of model outputs.
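
    A simple way to build a spatially variable roughness grid is to map each land-cover class to a Manning n value, as sketched below; the class names and n values are illustrative placeholders, not the LCDB classes or the coefficients used in this work.

```python
# Assign Manning roughness values to a land-cover grid (illustrative values only).
import numpy as np

MANNING_N = {"water": 0.025, "grassland": 0.035, "cropland": 0.040,
             "wetland": 0.060, "forest": 0.100, "urban": 0.120}

def manning_grid(landcover_codes, code_to_class, default_n=0.05):
    """Map a 2-D array of land-cover codes to a grid of Manning n values."""
    lookup = {code: MANNING_N[cls] for code, cls in code_to_class.items()}
    return np.vectorize(lambda c: lookup.get(c, default_n))(landcover_codes)
```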

  9. Regional sensitivity analysis using revised mean and variance ratio functions

    International Nuclear Information System (INIS)

    Wei, Pengfei; Lu, Zhenzhou; Ruan, Wenbin; Song, Jingwen

    2014-01-01

    The variance ratio function, derived from the contribution to sample variance (CSV) plot, is a regional sensitivity index for studying how much the output deviates from the original mean of the model output when the distribution range of one input is reduced, and for measuring the contribution of different distribution ranges of each input to the variance of the model output. In this paper, the revised mean and variance ratio functions are developed for quantifying the actual change of the model output mean and variance, respectively, when one reduces the range of one input. The connection between the revised variance ratio function and the original one is derived and discussed. It is shown that, compared with the classical variance ratio function, the revised one is more suitable for the evaluation of model output variance due to reduced ranges of model inputs. A Monte Carlo procedure, which needs only a single set of samples, is developed for efficiently computing the revised mean and variance ratio functions. The revised mean and variance ratio functions are compared with the classical ones by using the Ishigami function. Finally, they are applied to a planar 10-bar structure.
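
    A single-loop Monte Carlo sketch of a variance ratio function is given below: the output variance, conditional on one input lying in a reduced quantile range, is divided by the unconditional output variance. This illustrates the general idea only and does not reproduce the paper's revised estimators; the Ishigami-style test function is included purely as an example.

```python
# Variance ratio from one set of Monte Carlo samples: restrict one input to a
# quantile range and compare the conditional output variance to the full one.
import numpy as np

def variance_ratio(x_i, y, lower_q, upper_q):
    x_i, y = np.asarray(x_i, dtype=float), np.asarray(y, dtype=float)
    lo, hi = np.quantile(x_i, [lower_q, upper_q])
    mask = (x_i >= lo) & (x_i <= hi)
    return y[mask].var(ddof=1) / y.var(ddof=1)

rng = np.random.default_rng(1)
X = rng.uniform(-np.pi, np.pi, size=(10000, 3))
Y = np.sin(X[:, 0]) + 7.0 * np.sin(X[:, 1]) ** 2 + 0.1 * X[:, 2] ** 4 * np.sin(X[:, 0])
print(variance_ratio(X[:, 1], Y, 0.25, 0.75))   # effect of reducing the range of x2
```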

  10. Isotropy analyses of the Planck convergence map

    Science.gov (United States)

    Marques, G. A.; Novaes, C. P.; Bernui, A.; Ferreira, I. S.

    2018-01-01

    The presence of matter in the path of relic photons causes distortions in the angular pattern of the cosmic microwave background (CMB) temperature fluctuations, modifying their properties in a slight but measurable way. Recently, the Planck Collaboration released the estimated convergence map, an integrated measure of the large-scale matter distribution that produced the weak gravitational lensing (WL) phenomenon observed in Planck CMB data. We perform exhaustive analyses of this convergence map, calculating the variance in small and large regions of the sky (excluding the area masked due to Galactic contamination), and compare them with the features expected in the set of simulated convergence maps, also released by the Planck Collaboration. Our goal is to search for sky directions or regions where the WL imprints anomalous signatures to the variance estimator, revealed through a χ2 analysis at a statistically significant level. In the local analysis of the Planck convergence map, we identified eight patches of the sky in disagreement, at more than 2σ, with what is observed in the average of the simulations. In contrast, in the large regions analysis we found no statistically significant discrepancies, but, interestingly, the regions with the highest χ2 values surround the ecliptic poles. Thus, our results show a good agreement with the features expected by the Λ cold dark matter concordance model, as given by the simulations. Yet, the outlier regions found here could suggest that the data still contain residual contamination, like noise, due to over- or underestimation of systematic effects in the simulation data set.

  11. The genotype-environment interaction variance in rice-seed protein determination

    International Nuclear Information System (INIS)

    Ismachin, M.

    1976-01-01

    Many environmental factors influence the protein content of cereal seed. This fact causes difficulties in breeding for protein. Yield is another trait influenced by many environmental factors. The length of time required by the plant to reach maturity is also affected by environmental factors, even though the effect is less decisive. In this investigation, the genotypic variance and the genotype-environment interaction variance, which contribute to the total or phenotypic variance, were analysed with the purpose of giving the breeder an idea of how selection should be made. It was found that the genotype-environment interaction variance contributes more than the genotypic variance to the total variance of seed-protein determination or yield. In the analysis of the time required to reach maturity, it was found that the genotypic variance is larger than the genotype-environment interaction variance. It is therefore clear why selection for the time required to reach maturity is much easier than selection for protein or yield. Protein selected for in one location may differ from that in other locations. (author)

  12. Estimation of measurement variances

    International Nuclear Information System (INIS)

    Jaech, J.L.

    1984-01-01

    The estimation of measurement error parameters in safeguards systems is discussed. Both systematic and random errors are considered. A simple analysis of variances to characterize the measurement error structure with biases varying over time is presented

  13. Amplified fragment length polymorphism mapping of quantitative trait loci for malaria parasite susceptibility in the yellow fever mosquito Aedes aegypti.

    Science.gov (United States)

    Zhong, Daibin; Menge, David M; Temu, Emmanuel A; Chen, Hong; Yan, Guiyun

    2006-07-01

    The yellow fever mosquito Aedes aegypti has been the subject of extensive genetic research due to its medical importance and the ease with which it can be manipulated in the laboratory. A molecular genetic linkage map was constructed using 148 amplified fragment length polymorphism (AFLP) and six single-strand conformation polymorphism (SSCP) markers. Eighteen AFLP primer combinations were used to genotype two reciprocal F2 segregating populations. Each primer combination generated an average of 8.2 AFLP markers eligible for linkage mapping. The length of the integrated map was 180.9 cM, giving an average marker resolution of 1.2 cM. Composite interval mapping revealed a total of six QTL significantly affecting Plasmodium susceptibility in the two reciprocal crosses of Ae. aegypti. Two common QTL on linkage group 2 were identified in both crosses that had similar effects on the phenotype, and four QTL were unique to each cross. In one cross, the four main QTL accounted for 64% of the total phenotypic variance, and digenic epistasis explained 11.8% of the variance. In the second cross, the four main QTL explained 66% of the variance, and digenic epistasis accounted for 16% of the variance. The actions of these QTL were either dominance or underdominance. Our results indicated that at least three new QTL were mapped on chromosomes 1 and 3. The polygenic nature of susceptibility to P. gallinaceum and epistasis are important factors for significant variation within or among mosquito strains. The new map provides additional information useful for further genetic investigation, such as identification of new genes and positional cloning.

  14. 29 CFR 1905.5 - Effect of variances.

    Science.gov (United States)

    2010-07-01

    ...-STEIGER OCCUPATIONAL SAFETY AND HEALTH ACT OF 1970 General § 1905.5 Effect of variances. All variances... Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR... concerning a proposed penalty or period of abatement is pending before the Occupational Safety and Health...

  15. Realized range-based estimation of integrated variance

    DEFF Research Database (Denmark)

    Christensen, Kim; Podolskij, Mark

    2007-01-01

    We provide a set of probabilistic laws for estimating the quadratic variation of continuous semimartingales with the realized range-based variance, a statistic that replaces every squared return of the realized variance with a normalized squared range. If the entire sample path of the process is a...
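
    As a rough sketch of the statistic, the realized range-based variance replaces each squared intraday return with a squared high-low log range scaled by the classical constant 4 ln 2 (the second moment of the range of a standard Brownian motion); the paper's more general sampling schemes and asymptotics are not reproduced here.

```python
# Realized variance vs. realized range-based variance for one trading day.
import numpy as np

def realized_variance(closes):
    returns = np.diff(np.log(np.asarray(closes, dtype=float)))
    return np.sum(returns ** 2)

def realized_range_variance(highs, lows):
    ranges = np.log(np.asarray(highs, dtype=float) / np.asarray(lows, dtype=float))
    return np.sum(ranges ** 2) / (4.0 * np.log(2.0))   # 4*ln(2) range normalization
```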

  16. Variance Function Partially Linear Single-Index Models.

    Science.gov (United States)

    Lian, Heng; Liang, Hua; Carroll, Raymond J

    2015-01-01

    We consider heteroscedastic regression models where the mean function is a partially linear single index model and the variance function depends upon a generalized partially linear single index model. We do not insist that the variance function depend only upon the mean function, as happens in the classical generalized partially linear single index model. We develop efficient and practical estimation methods for the variance function and for the mean function. Asymptotic theory for the parametric and nonparametric parts of the model is developed. Simulations illustrate the results. An empirical example involving ozone levels is used to further illustrate the results, and is shown to be a case where the variance function does not depend upon the mean function.

  17. Genetic Gain Increases by Applying the Usefulness Criterion with Improved Variance Prediction in Selection of Crosses.

    Science.gov (United States)

    Lehermeier, Christina; Teyssèdre, Simon; Schön, Chris-Carolin

    2017-12-01

    A crucial step in plant breeding is the selection and combination of parents to form new crosses. Genome-based prediction guides the selection of high-performing parental lines in many crop breeding programs, which ensures a high mean performance of progeny. To warrant maximum selection progress, a new cross should also provide a large progeny variance. The usefulness concept, as a measure of the gain that can be obtained from a specific cross, accounts for variation in progeny variance. Here, it is shown that genetic gain can be considerably increased when crosses are selected based on their genomic usefulness criterion compared to selection based on mean genomic estimated breeding values. An efficient and improved method to predict the genetic variance of a cross based on Markov chain Monte Carlo samples of marker effects from a whole-genome regression model is suggested. In simulations representing selection procedures in crop breeding programs, the performance of this novel approach is compared with existing methods, like selection based on mean genomic estimated breeding values and optimal haploid values. In all cases, higher genetic gain was obtained compared with previously suggested methods. When 1% of progenies per cross were selected, the genetic gain based on the estimated usefulness criterion increased by 0.14 genetic standard deviation compared to a selection based on mean genomic estimated breeding values. Analytical derivations of the progeny genotypic variance-covariance matrix based on parental genotypes and genetic map information make simulations of progeny dispensable, and allow fast implementation in large-scale breeding programs. Copyright © 2017 by the Genetics Society of America.
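
    As a simplified sketch, the usefulness of a cross can be written as the predicted cross mean plus the selection intensity times the predicted progeny standard deviation; below, the progeny standard deviation is taken from Monte Carlo samples of progeny genetic values, which is a simplification of the paper's analytical variance prediction.

```python
# Usefulness criterion U = mu + i * sigma for truncation selection of a given
# fraction of progeny (sigma estimated from sampled progeny genetic values).
import numpy as np
from scipy.stats import norm

def usefulness(progeny_value_samples, selected_fraction=0.01):
    x = np.asarray(progeny_value_samples, dtype=float)
    z = norm.ppf(1.0 - selected_fraction)     # truncation point
    i = norm.pdf(z) / selected_fraction       # selection intensity
    return x.mean() + i * x.std(ddof=1)
```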

  18. Assessing land cover performance in Senegal, West Africa using 1-km integrated NDVI and local variance analysis

    Science.gov (United States)

    Budde, M.E.; Tappan, G.; Rowland, James; Lewis, J.; Tieszen, L.L.

    2004-01-01

    The researchers calculated seasonal integrated normalized difference vegetation index (NDVI) for each of 7 years using a time-series of 1-km data from the Advanced Very High Resolution Radiometer (AVHRR) (1992-93, 1995) and SPOT Vegetation (1998-2001) sensors. We used a local variance technique to identify each pixel as normal or either positively or negatively anomalous when compared to its surroundings. We then summarized the number of years that a given pixel was identified as an anomaly. The resulting anomaly maps were analysed using Landsat TM imagery and extensive ground knowledge to assess the results. This technique identified anomalies that can be linked to numerous anthropogenic impacts including agricultural and urban expansion, maintenance of protected areas and increased fallow. Local variance analysis is a reliable method for assessing vegetation degradation resulting from human pressures or increased land productivity from natural resource management practices. © 2004 Published by Elsevier Ltd.
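
    A minimal version of such a local variance test is sketched below: each pixel's seasonal integrated NDVI is compared with the mean and standard deviation of a moving window around it, and flagged as a positive or negative anomaly when it deviates by more than a chosen number of standard deviations. The window size and threshold are illustrative assumptions, not the values used in the study.

```python
# Flag pixels whose integrated NDVI deviates from the local neighbourhood.
import numpy as np
from scipy.ndimage import uniform_filter

def local_anomaly(ndvi, window=9, z_thresh=2.0):
    ndvi = np.asarray(ndvi, dtype=float)
    local_mean = uniform_filter(ndvi, size=window)
    local_sq_mean = uniform_filter(ndvi ** 2, size=window)
    local_std = np.sqrt(np.maximum(local_sq_mean - local_mean ** 2, 0.0))
    z = (ndvi - local_mean) / np.where(local_std > 0, local_std, np.inf)
    # +1 positive anomaly, -1 negative anomaly, 0 normal
    return np.where(z > z_thresh, 1, np.where(z < -z_thresh, -1, 0))
```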

  19. Comparison of gene-based rare variant association mapping methods for quantitative traits in a bovine population with complex familial relationships.

    Science.gov (United States)

    Zhang, Qianqian; Guldbrandtsen, Bernt; Calus, Mario P L; Lund, Mogens Sandø; Sahana, Goutam

    2016-08-17

    There is growing interest in the role of rare variants in the variation of complex traits due to increasing evidence that rare variants are associated with quantitative traits. However, association methods that are commonly used for mapping common variants are not effective to map rare variants. Besides, livestock populations have large half-sib families and the occurrence of rare variants may be confounded with family structure, which makes it difficult to disentangle their effects from family mean effects. We compared the power of methods that are commonly applied in human genetics to map rare variants in cattle using whole-genome sequence data and simulated phenotypes. We also studied the power of mapping rare variants using linear mixed models (LMM), which are the method of choice to account for both family relationships and population structure in cattle. We observed that the power of the LMM approach was low for mapping a rare variant (defined as those that have frequencies lower than 0.01) with a moderate effect (5 to 8 % of phenotypic variance explained by multiple rare variants that vary from 5 to 21 in number) contributing to a QTL with a sample size of 1000. In contrast, across the scenarios studied, statistical methods that are specialized for mapping rare variants increased power regardless of whether multiple rare variants or a single rare variant underlie a QTL. Different methods for combining rare variants in the test single nucleotide polymorphism set resulted in similar power irrespective of the proportion of total genetic variance explained by the QTL. However, when the QTL variance is very small (only 0.1 % of the total genetic variance), these specialized methods for mapping rare variants and LMM generally had no power to map the variants within a gene with sample sizes of 1000 or 5000. We observed that the methods that combine multiple rare variants within a gene into a meta-variant generally had greater power to map rare variants compared

  20. A SEARCH FOR CONCENTRIC CIRCLES IN THE 7 YEAR WILKINSON MICROWAVE ANISOTROPY PROBE TEMPERATURE SKY MAPS

    International Nuclear Information System (INIS)

    Wehus, I. K.; Eriksen, H. K.

    2011-01-01

    In this Letter, we search for concentric circles with low variance in cosmic microwave background sky maps. The detection of such circles would hint at new physics beyond the current cosmological concordance model, which states that the universe is isotropic and homogeneous, and filled with Gaussian fluctuations. We first describe a set of methods designed to detect such circles, based on matched filters and χ2 statistics, and then apply these methods to the best current publicly available data, the 7 year Wilkinson Microwave Anisotropy Probe (WMAP) temperature sky maps. We compare the observations with an ensemble of 1000 Gaussian ΛCDM simulations. Based on these tests, we conclude that the WMAP sky maps are fully compatible with the Gaussian and isotropic hypothesis as measured by low-variance ring statistics.

  1. Discrete time and continuous time dynamic mean-variance analysis

    OpenAIRE

    Reiss, Ariane

    1999-01-01

    Contrary to static mean-variance analysis, very few papers have dealt with dynamic mean-variance analysis. Here, the mean-variance efficient self-financing portfolio strategy is derived for n risky assets in discrete and continuous time. In the discrete setting, the resulting portfolio is mean-variance efficient in a dynamic sense. It is shown that the optimal strategy for n risky assets may be dominated if the expected terminal wealth is constrained to exactly attain a certain goal instead o...

  2. Steiner symmetrization and the initial coefficients of univalent functions

    International Nuclear Information System (INIS)

    Dubinin, Vladimir N

    2010-01-01

    We establish the inequality |a_1|^2 - Re a_1 a_{-1} ≥ |a_1^*|^2 - Re a_1^* a_{-1}^* for the initial coefficients of any function f(z) = a_1 z + a_0 + a_{-1}/z + ... that is meromorphic and univalent in the domain D = {z : |z| > 1}, where a_1^* and a_{-1}^* are the corresponding coefficients in the expansion of the function f^*(z) that maps the domain D conformally and univalently onto the exterior of the result of the Steiner symmetrization, with respect to the real axis, of the complement of the set f(D). The Polya-Szego inequality |a_1| ≥ |a_1^*| is already known. We describe some applications of our inequality to functions of class Σ.

  3. Dominance genetic variance for traits under directional selection in Drosophila serrata.

    Science.gov (United States)

    Sztepanacz, Jacqueline L; Blows, Mark W

    2015-05-01

    In contrast to our growing understanding of patterns of additive genetic variance in single- and multi-trait combinations, the relative contribution of nonadditive genetic variance, particularly dominance variance, to multivariate phenotypes is largely unknown. While mechanisms for the evolution of dominance genetic variance have been, and to some degree remain, subject to debate, the pervasiveness of dominance is widely recognized and may play a key role in several evolutionary processes. Theoretical and empirical evidence suggests that the contribution of dominance variance to phenotypic variance may increase with the correlation between a trait and fitness; however, direct tests of this hypothesis are few. Using a multigenerational breeding design in an unmanipulated population of Drosophila serrata, we estimated additive and dominance genetic covariance matrices for multivariate wing-shape phenotypes, together with a comprehensive measure of fitness, to determine whether there is an association between directional selection and dominance variance. Fitness, a trait unequivocally under directional selection, had no detectable additive genetic variance, but significant dominance genetic variance contributing 32% of the phenotypic variance. For single and multivariate morphological traits, however, no relationship was observed between trait-fitness correlations and dominance variance. A similar proportion of additive and dominance variance was found to contribute to phenotypic variance for single traits, and double the amount of additive compared to dominance variance was found for the multivariate trait combination under directional selection. These data suggest that for many fitness components a positive association between directional selection and dominance genetic variance may not be expected. Copyright © 2015 by the Genetics Society of America.

  4. Imaging and assessment of diffusion coefficients by magnetic resonance

    International Nuclear Information System (INIS)

    Tintera, J.; Dezortova, M.; Hajek, M.; Fitzek, C.

    1999-01-01

    The problem of assessing molecular diffusion by magnetic resonance is highlighted, and some typical applications of diffusion imaging in diagnosis, e.g., of cerebral ischemia, of changes in patients with phenylketonuria, or of multiple sclerosis, are discussed. The images were obtained using a diffusion-weighted spin-echo Echo-Planar Imaging sequence, with subsequent correction of the geometrical distortion of the images and calculation of the Apparent Diffusion Coefficient map.
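
    For a simple two-point acquisition, the ADC map follows directly from the signal ratio of the images acquired at two b-values, ADC = ln(S0/S1)/(b1 - b0); the sketch below is a generic calculation, not the authors' processing pipeline.

```python
# Two-point apparent diffusion coefficient (ADC) map (mm^2/s if b is in s/mm^2).
import numpy as np

def adc_map(s0, s1, b0=0.0, b1=1000.0, eps=1e-6):
    s0 = np.asarray(s0, dtype=float)
    s1 = np.asarray(s1, dtype=float)
    return np.log(np.maximum(s0, eps) / np.maximum(s1, eps)) / (b1 - b0)
```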

  5. Expected Stock Returns and Variance Risk Premia

    DEFF Research Database (Denmark)

    Bollerslev, Tim; Zhou, Hao

    risk premium with the P/E ratio results in an R2 for the quarterly returns of more than twenty-five percent. The results depend crucially on the use of "model-free", as opposed to standard Black-Scholes, implied variances, and realized variances constructed from high-frequency intraday, as opposed...

  6. Allowable variance set on left ventricular function parameter

    International Nuclear Information System (INIS)

    Zhou Li'na; Qi Zhongzhi; Zeng Yu; Ou Xiaohong; Li Lin

    2010-01-01

    Purpose: To evaluate the influence of allowable variance settings on left ventricular function parameters in arrhythmia patients during gated myocardial perfusion imaging. Method: 42 patients with evident arrhythmia underwent myocardial perfusion SPECT; three different allowable variance settings (20%, 60%, 100%) were set before acquisition for each patient, and the acquisitions were performed simultaneously. After reconstruction with Astonish, end-diastolic volume (EDV), end-systolic volume (ESV) and left ventricular ejection fraction (LVEF) were computed with Quantitative Gated SPECT (QGS). EDV, ESV and LVEF values were compared by analysis of variance using SPSS software. Result: There was no statistically significant difference between the three groups. Conclusion: When arrhythmia patients undergo gated myocardial perfusion imaging, the allowable variance setting has no statistically significant effect on the EDV, ESV and LVEF values. (authors)

  7. Deviation of the Variances of Classical Estimators and Negative Integer Moment Estimator from Minimum Variance Bound with Reference to Maxwell Distribution

    Directory of Open Access Journals (Sweden)

    G. R. Pasha

    2006-07-01

    In this paper, we show how much the variances of the classical estimators, namely the maximum likelihood estimator and the moment estimator, deviate from the minimum variance bound when estimating for the Maxwell distribution. We also sketch this difference for the negative integer moment estimator. We note the poor performance of the negative integer moment estimator in this respect, while the maximum likelihood estimator attains the minimum variance bound and becomes an attractive choice.

  8. Flow map and measurement of void fraction and heat transfer coefficient using an image analysis technique for flow boiling of water in a silicon microchannel

    International Nuclear Information System (INIS)

    Singh, S G; Duttagupta, S P; Jain, A; Sridharan, A; Agrawal, Amit

    2009-01-01

    The present work focuses on the generation of the flow regime map for two-phase water flow in microchannels of a hydraulic diameter of 140 µm. An image analysis algorithm has been developed and utilized to obtain the local void fraction. The image processing technique is also employed to identify and estimate the percentage of different flow regimes and heat transfer coefficient, as a function of position, heat flux and mass flow rate. Both void fraction and heat transfer coefficient are found to increase monotonically along the length of the microchannel. At low heat flux and low flow rates, bubbly, slug and annular flow regimes are apparent. However, the flow is predominantly annular at high heat flux and high flow rate. A breakup of the flow frequency suggests that the flow is bistable in the annular regime, in that at a fixed location, the flow periodically switches from single-phase liquid to annular and vice versa. Otherwise, three regimes (single-phase liquid, bubbly and slug) are observed. These results provide several useful insights about two-phase flow in microchannels besides being of fundamental interest.

  9. Towards a mathematical foundation of minimum-variance theory

    Energy Technology Data Exchange (ETDEWEB)

    Feng Jianfeng [COGS, Sussex University, Brighton (United Kingdom); Zhang Kewei [SMS, Sussex University, Brighton (United Kingdom); Wei Gang [Mathematical Department, Baptist University, Hong Kong (China)

    2002-08-30

    The minimum-variance theory, which accounts for arm and eye movements with noisy signal inputs, was proposed by Harris and Wolpert (1998 Nature 394 780-4). Here we present a detailed theoretical analysis of the theory, and analytical solutions are obtained. Furthermore, we propose a new version of the minimum-variance theory, which is more realistic for a biological system. For the new version we show numerically that the variance is considerably reduced. (author)

  10. Direct encoding of orientation variance in the visual system.

    Science.gov (United States)

    Norman, Liam J; Heywood, Charles A; Kentridge, Robert W

    2015-01-01

    Our perception of regional irregularity, an example of which is orientation variance, seems effortless when we view two patches of texture that differ in this attribute. Little is understood, however, of how the visual system encodes a regional statistic like orientation variance, but there is some evidence to suggest that it is directly encoded by populations of neurons tuned broadly to high or low levels. The present study shows that selective adaptation to low or high levels of variance results in a perceptual aftereffect that shifts the perceived level of variance of a subsequently viewed texture in the direction away from that of the adapting stimulus (Experiments 1 and 2). Importantly, the effect is durable across changes in mean orientation, suggesting that the encoding of orientation variance is independent of global first moment orientation statistics (i.e., mean orientation). In Experiment 3 it was shown that the variance-specific aftereffect did not show signs of being encoded in a spatiotopic reference frame, similar to the equivalent aftereffect of adaptation to the first moment orientation statistic (the tilt aftereffect), which is represented in the primary visual cortex and exists only in retinotopic coordinates. Experiment 4 shows that a neuropsychological patient with damage to ventral areas of the cortex but spared intact early areas retains sensitivity to orientation variance. Together these results suggest that orientation variance is encoded directly by the visual system and possibly at an early cortical stage.

  11. Network Structure and Biased Variance Estimation in Respondent Driven Sampling.

    Science.gov (United States)

    Verdery, Ashton M; Mouw, Ted; Bauldry, Shawn; Mucha, Peter J

    2015-01-01

    This paper explores bias in the estimation of sampling variance in Respondent Driven Sampling (RDS). Prior methodological work on RDS has focused on its problematic assumptions and the biases and inefficiencies of its estimators of the population mean. Nonetheless, researchers have given only slight attention to the topic of estimating sampling variance in RDS, despite the importance of variance estimation for the construction of confidence intervals and hypothesis tests. In this paper, we show that the estimators of RDS sampling variance rely on a critical assumption that the network is First Order Markov (FOM) with respect to the dependent variable of interest. We demonstrate, through intuitive examples, mathematical generalizations, and computational experiments that current RDS variance estimators will always underestimate the population sampling variance of RDS in empirical networks that do not conform to the FOM assumption. Analysis of 215 observed university and school networks from Facebook and Add Health indicates that the FOM assumption is violated in every empirical network we analyze, and that these violations lead to substantially biased RDS estimators of sampling variance. We propose and test two alternative variance estimators that show some promise for reducing biases, but which also illustrate the limits of estimating sampling variance with only partial information on the underlying population social network.

  12. Obtaining parton distribution functions from self-organizing maps

    International Nuclear Information System (INIS)

    Honkanen, H.; Liuti, S.; Loitiere, Y.C.; Brogan, D.; Reynolds, P.

    2007-01-01

    We present an alternative to global fitting procedures for constructing parametrizations of Parton Distribution Functions. The proposed algorithm uses Self-Organizing Maps which, at variance with standard Neural Networks, are based on competitive learning. Self-Organizing Maps generate a non-uniform projection from a high-dimensional data space onto a low-dimensional one (usually 1 or 2 dimensions) by clustering similar PDF representations together. The SOMs are trained on progressively narrower selections of data samples. The selection criterion is that of convergence towards a neighborhood of the experimental data. All available data sets on deep inelastic scattering in the kinematical region 0.001 ≤ x ≤ 0.75 and 1 ≤ Q² ≤ 100 GeV², with a cut on the final-state invariant mass, W² ≥ 10 GeV², were implemented. The proposed fitting procedure, at variance with standard neural network approaches, allows for increased control of the systematic bias by enabling the user to directly control the data selection procedure at various stages of the process. (author)
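
    To make the competitive-learning step concrete, here is a minimal one-dimensional SOM trained on toy two-dimensional data; it is not the authors' PDF-fitting code, and the map size, learning rate and neighbourhood schedule are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(500, 2))          # toy feature vectors (stand-in for PDF representations)
n_units = 10                              # size of the 1-D map
weights = rng.normal(size=(n_units, 2))   # codebook vectors

n_epochs = 20
for epoch in range(n_epochs):
    lr = 0.5 * (1 - epoch / n_epochs)                        # decaying learning rate
    sigma = max(1.0, n_units / 2 * (1 - epoch / n_epochs))   # shrinking neighbourhood
    for x in rng.permutation(data):
        bmu = np.argmin(np.linalg.norm(weights - x, axis=1))  # competitive step: best-matching unit
        d = np.arange(n_units) - bmu
        h = np.exp(-d**2 / (2 * sigma**2))                    # neighbourhood function
        weights += lr * h[:, None] * (x - weights)            # cooperative update

print(weights)   # similar inputs end up mapped to neighbouring units
```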

  13. Asymptotic properties of Pearson's rank-variate correlation coefficient under contaminated Gaussian model.

    Science.gov (United States)

    Ma, Rubao; Xu, Weichao; Zhang, Yun; Ye, Zhongfu

    2014-01-01

    This paper investigates the robustness properties of Pearson's rank-variate correlation coefficient (PRVCC) in scenarios where one channel is corrupted by impulsive noise and the other is impulsive noise-free. As shown in our previous work, these scenarios, which are frequently encountered in radar and/or sonar, can be well emulated by a particular bivariate contaminated Gaussian model (CGM). Under this CGM, we establish the asymptotic closed forms of the expectation and variance of PRVCC by means of the well known Delta method. To gain a deeper understanding, we also compare PRVCC with two other classical correlation coefficients, i.e., Spearman's rho (SR) and Kendall's tau (KT), in terms of the root mean squared error (RMSE). Monte Carlo simulations not only verify our theoretical findings, but also reveal the advantage of PRVCC by an example of estimating the time delay in the particular impulsive noise environment.
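
    PRVCC itself is not reproduced here, but the kind of RMSE comparison described in the abstract can be sketched for SR and KT under a contaminated Gaussian model. The sketch below contaminates one channel with impulsive noise and inverts the rank correlations to estimates of the underlying Gaussian correlation via the classical Greiner relations; the contamination fraction, impulse scale and sample size are assumptions.

```python
import numpy as np
from scipy.stats import spearmanr, kendalltau

rng = np.random.default_rng(1)
rho, n = 0.5, 200                 # true Gaussian correlation and sample size
eps, sigma_imp = 0.1, 10.0        # contamination fraction and impulse scale (one channel only)
n_rep = 1000
est_sr, est_kt = [], []

for _ in range(n_rep):
    z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
    x, y = z[:, 0], z[:, 1]
    mask = rng.random(n) < eps                      # impulsive noise in the second channel
    y = y + mask * rng.normal(0, sigma_imp, n)
    rs, _ = spearmanr(x, y)
    tau, _ = kendalltau(x, y)
    est_sr.append(2 * np.sin(np.pi * rs / 6))       # Greiner inversion for Spearman's rho
    est_kt.append(np.sin(np.pi * tau / 2))          # Greiner inversion for Kendall's tau

rmse = lambda e: np.sqrt(np.mean((np.asarray(e) - rho) ** 2))
print("RMSE via SR:", rmse(est_sr), "RMSE via KT:", rmse(est_kt))
```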

  14. Local variances in biomonitoring

    International Nuclear Information System (INIS)

    Wolterbeek, H.Th; Verburg, T.G.

    2001-01-01

    The present study was undertaken to explore possibilities to judge survey quality on the basis of a limited and restricted number of a-priori observations. Here, quality is defined as the ratio between survey and local variance (signal-to-noise ratio). The results indicate that the presented surveys do not permit such judgement; the discussion also suggests that the 5-fold local sampling strategies do not merit any sound judgement. As it stands, uncertainties in local determinations may largely obscure possibilities to judge survey quality. The results further imply that surveys will benefit from procedures, controls and approaches in sampling and sample handling that assess the average, the variance and the nature of the distribution of elemental concentrations at local sites. This reasoning is compatible with the idea of the site as a basic homogeneous survey unit, which is implicitly and conceptually underlying any survey performed. (author)

  15. Long time correlations in standard mapping

    International Nuclear Information System (INIS)

    Rolland, P.

    1985-09-01

    Using an original method based on run statistics, we have shown the existence of long time correlations in the Standard Mapping, as well as the role they play in the increase of the diffusion coefficient.
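
    For reference, the Chirikov standard map and a crude ensemble estimate of its momentum diffusion coefficient can be simulated in a few lines. This generic sketch does not reproduce the run-statistics method of the report; the kick strength, ensemble size and iteration count are assumptions.

```python
import numpy as np

def standard_map_diffusion(K=5.0, n_traj=2000, n_steps=1000, seed=0):
    """Estimate D = <(p_n - p_0)^2> / (2 n) for the Chirikov standard map."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0, 2 * np.pi, n_traj)
    p = rng.uniform(0, 2 * np.pi, n_traj)
    p0 = p.copy()
    for _ in range(n_steps):
        p = p + K * np.sin(theta)           # momentum kick
        theta = (theta + p) % (2 * np.pi)   # rotation (p left unbounded to track diffusion)
    return np.mean((p - p0) ** 2) / (2 * n_steps)

print(standard_map_diffusion())             # quasilinear estimate is roughly K**2 / 4
```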

  16. variance components and genetic parameters for live weight

    African Journals Online (AJOL)

    admin

    Against this background the present study estimated the (co)variance .... Starting values for the (co)variance components of two-trait models were ..... Estimates of genetic parameters for weaning weight of beef accounting for direct-maternal.

  17. Predicting railway wheel wear under uncertainty of wear coefficient, using universal kriging

    International Nuclear Information System (INIS)

    Cremona, Marzia A.; Liu, Binbin; Hu, Yang; Bruni, Stefano; Lewis, Roger

    2016-01-01

    Railway wheel wear prediction is essential for reliability and optimal maintenance strategies of railway systems. Indeed, an accurate wear prediction can have both economic and safety implications. In this paper we propose a novel methodology, based on Archard's equation and a local contact model, to forecast the volume of material worn and the corresponding wheel remaining useful life (RUL). A universal kriging estimate of the wear coefficient is embedded in our method. Exploiting the dependence among wear coefficient measurements with similar contact pressure and sliding speed, we construct a continuous wear coefficient map that proves to be more informative than the ones currently available in the literature. Moreover, this approach leads to an uncertainty analysis on the wear coefficient. As a consequence, we are able to construct wear prediction intervals that provide reasonable guidelines in practice. - Highlights: • Wear prediction is of utmost importance for reliability of railway systems. • Wear coefficient is essential in prediction through Archard's equation. • A novel methodology is developed to predict wear and RUL. • Universal kriging is used for wear coefficient and uncertainty estimation. • A simulation study and a real case application are provided.
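
    As a rough illustration of kriging a wear-coefficient map over (contact pressure, sliding speed), the sketch below uses a constant-mean Gaussian-process predictor with an RBF covariance; the paper's universal kriging additionally fits a non-constant trend, and the kernel, hyperparameters and measurement values here are assumptions.

```python
import numpy as np

def krige_predict(X, y, Xs, length=0.5, sig2=1.0, noise=1e-6):
    """Constant-mean Gaussian-process (kriging-like) predictor; returns mean and variance."""
    def cov(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return sig2 * np.exp(-0.5 * d2 / length**2)
    K = cov(X, X) + noise * np.eye(len(X))
    Ks = cov(Xs, X)
    mu0 = y.mean()                                   # constant mean, estimated from the data
    alpha = np.linalg.solve(K, y - mu0)
    mean = mu0 + Ks @ alpha
    var = sig2 - np.einsum("ij,ji->i", Ks, np.linalg.solve(K, Ks.T))
    return mean, np.maximum(var, 0.0)

# hypothetical wear-coefficient measurements indexed by (contact pressure, sliding speed)
X = np.array([[0.4, 0.1], [0.8, 0.2], [1.2, 0.5], [1.6, 0.9]])   # normalised inputs
y = np.array([2.0e-4, 3.5e-4, 6.0e-4, 9.0e-4])                   # measured wear coefficients
Xs = np.column_stack([np.linspace(0.4, 1.6, 5), np.linspace(0.1, 0.9, 5)])
mu, var = krige_predict(X, y, Xs)
print(mu)                                            # interpolated wear-coefficient map
print(mu - 2 * np.sqrt(var), mu + 2 * np.sqrt(var))  # band usable for prediction intervals
```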

  18. Trans-ethnic fine-mapping of lipid loci identifies population-specific signals and allelic heterogeneity that increases the trait variance explained.

    Directory of Open Access Journals (Sweden)

    Ying Wu

    2013-03-01

    Genome-wide association studies (GWAS) have identified ~100 loci associated with blood lipid levels, but much of the trait heritability remains unexplained, and at most loci the identities of the trait-influencing variants remain unknown. We conducted a trans-ethnic fine-mapping study at 18, 22, and 18 GWAS loci on the Metabochip for their association with triglycerides (TG), high-density lipoprotein cholesterol (HDL-C), and low-density lipoprotein cholesterol (LDL-C), respectively, in individuals of African American (n = 6,832), East Asian (n = 9,449), and European (n = 10,829) ancestry. We aimed to identify the variants with strongest association at each locus, identify additional and population-specific signals, refine association signals, and assess the relative significance of previously described functional variants. Among the 58 loci, 33 exhibited evidence of association at P < 1 × 10⁻⁴ in at least one ancestry group. Sequential conditional analyses revealed that ten, nine, and four loci in African Americans, Europeans, and East Asians, respectively, exhibited two or more signals. At these loci, accounting for all signals led to a 1.3- to 1.8-fold increase in the explained phenotypic variance compared to the strongest signals. Distinct signals across ancestry groups were identified at PCSK9 and APOA5. Trans-ethnic analyses narrowed the signals to smaller sets of variants at GCKR, PPP1R3B, ABO, LCAT, and ABCA1. Of 27 variants reported previously to have functional effects, 74% exhibited the strongest association at the respective signal. In conclusion, trans-ethnic high-density genotyping and analysis confirm the presence of allelic heterogeneity, allow the identification of population-specific variants, and limit the number of candidate SNPs for functional studies.

  19. Restricted Variance Interaction Effects

    DEFF Research Database (Denmark)

    Cortina, Jose M.; Köhler, Tine; Keeler, Kathleen R.

    2018-01-01

    Although interaction hypotheses are increasingly common in our field, many recent articles point out that authors often have difficulty justifying them. The purpose of this article is to describe a particular type of interaction: the restricted variance (RV) interaction. The essence of the RV int...

  20. Variance Swaps in BM&F: Pricing and Viability of Hedge

    Directory of Open Access Journals (Sweden)

    Richard John Brostowicz Junior

    2010-07-01

    A variance swap can theoretically be priced with an infinite set of vanilla call and put options, assuming that the realized variance follows a purely diffusive process with continuous monitoring. In this article we analyze the possible differences in pricing when the realized variance is monitored discretely. We analyze the pricing of variance swaps with payoff in dollars, since there is an OTC market that works this way and that can potentially serve as a hedge for the variance swaps traded on BM&F. Additionally, we test the feasibility of hedging variance swaps when there is liquidity in only a few exercise prices, as is the case for the FX options traded on BM&F. To this end, portfolios containing variance swaps and their replicating portfolios are assembled using the available exercise prices, as proposed in (DEMETERFI et al., 1999). With these portfolios, the effectiveness of the hedge was not robust in most of the tests conducted in this work.
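
    The static replication cited above (Demeterfi et al., 1999) values a variance swap from a strip of out-of-the-money options with weights proportional to ΔK/K². The sketch below uses the familiar discrete VIX-style approximation of that replication; the exact weights used in the article may differ, and the strike grid and option prices are made up.

```python
import numpy as np

def var_strike(strikes, otm_prices, forward, T, r=0.0):
    """Discrete replication of the fair variance-swap strike from OTM option prices."""
    strikes = np.asarray(strikes, float)
    otm_prices = np.asarray(otm_prices, float)
    dK = np.gradient(strikes)                       # strike spacing
    k0 = strikes[strikes <= forward].max()          # first strike at or below the forward
    contrib = (2.0 / T) * np.sum(dK / strikes**2 * np.exp(r * T) * otm_prices)
    correction = (1.0 / T) * (forward / k0 - 1.0) ** 2
    return contrib - correction                     # fair strike in variance units

# hypothetical sparse FX strike grid and OTM option mid prices (illustrative only)
strikes = [80, 90, 100, 110, 120]
prices = [0.5, 2.0, 6.0, 2.5, 0.8]
print(np.sqrt(var_strike(strikes, prices, forward=100.0, T=0.25)))  # in volatility units
```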

  1. Topology of unitary groups and the prime orders of binomial coefficients

    Science.gov (United States)

    Duan, HaiBao; Lin, XianZu

    2017-09-01

    Let $c:SU(n)\rightarrow PSU(n)=SU(n)/\mathbb{Z}_{n}$ be the quotient map of the special unitary group $SU(n)$ by its center subgroup $\mathbb{Z}_{n}$. We determine the induced homomorphism $c^{\ast}: H^{\ast}(PSU(n))\rightarrow H^{\ast}(SU(n))$ on cohomologies by computing with the prime orders of binomial coefficients.

  2. A comparison of confidence interval methods for the intraclass correlation coefficient in community-based cluster randomization trials with a binary outcome.

    Science.gov (United States)

    Braschel, Melissa C; Svec, Ivana; Darlington, Gerarda A; Donner, Allan

    2016-04-01

    Many investigators rely on previously published point estimates of the intraclass correlation coefficient rather than on their associated confidence intervals to determine the required size of a newly planned cluster randomized trial. Although confidence interval methods for the intraclass correlation coefficient that can be applied to community-based trials have been developed for a continuous outcome variable, fewer methods exist for a binary outcome variable. The aim of this study is to evaluate confidence interval methods for the intraclass correlation coefficient applied to binary outcomes in community intervention trials enrolling a small number of large clusters. Existing methods for confidence interval construction are examined and compared to a new ad hoc approach based on dividing clusters into a large number of smaller sub-clusters and subsequently applying existing methods to the resulting data. Monte Carlo simulation is used to assess the width and coverage of confidence intervals for the intraclass correlation coefficient based on Smith's large sample approximation of the standard error of the one-way analysis of variance estimator, an inverted modified Wald test for the Fleiss-Cuzick estimator, and intervals constructed using a bootstrap-t applied to a variance-stabilizing transformation of the intraclass correlation coefficient estimate. In addition, a new approach is applied in which clusters are randomly divided into a large number of smaller sub-clusters with the same methods applied to these data (with the exception of the bootstrap-t interval, which assumes large cluster sizes). These methods are also applied to a cluster randomized trial on adolescent tobacco use for illustration. When applied to a binary outcome variable in a small number of large clusters, existing confidence interval methods for the intraclass correlation coefficient provide poor coverage. However, confidence intervals constructed using the new approach combined with Smith
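
    For orientation, the sketch below computes the one-way ANOVA estimator of the intraclass correlation coefficient for clustered binary data together with a simple cluster-bootstrap percentile interval. It is not one of the specific interval methods evaluated in the paper (Smith's approximation, the Fleiss-Cuzick Wald test, or the bootstrap-t), and the simulated data (few large clusters) are an assumption.

```python
import numpy as np

def anova_icc(clusters):
    """One-way ANOVA estimator of the ICC from a list of per-cluster 0/1 arrays."""
    k = len(clusters)
    n_i = np.array([len(c) for c in clusters])
    N = n_i.sum()
    grand = np.concatenate(clusters).mean()
    msb = sum(n * (c.mean() - grand) ** 2 for n, c in zip(n_i, clusters)) / (k - 1)
    msw = sum(((c - c.mean()) ** 2).sum() for c in clusters) / (N - k)
    n0 = (N - (n_i**2).sum() / N) / (k - 1)          # "effective" cluster size
    return (msb - msw) / (msb + (n0 - 1) * msw)

rng = np.random.default_rng(2)
data = [rng.binomial(1, p, size=200).astype(float)    # 10 large clusters, varying prevalence
        for p in rng.beta(2, 8, size=10)]
boot = [anova_icc([data[i] for i in rng.integers(0, len(data), len(data))])
        for _ in range(1000)]                          # resample whole clusters with replacement
print(anova_icc(data), np.percentile(boot, [2.5, 97.5]))
```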

  3. Integrating mean and variance heterogeneities to identify differentially expressed genes.

    Science.gov (United States)

    Ouyang, Weiwei; An, Qiang; Zhao, Jinying; Qin, Huaizhen

    2016-12-06

    In functional genomics studies, tests on mean heterogeneity have been widely employed to identify differentially expressed genes with distinct mean expression levels under different experimental conditions. Variance heterogeneity (i.e., the difference between condition-specific variances) of gene expression levels is simply neglected or calibrated for as an impediment. The mean heterogeneity in the expression level of a gene reflects one aspect of its distribution alteration, and variance heterogeneity induced by condition change may reflect another aspect. Change in condition may alter both the mean and some higher-order characteristics of the distributions of expression levels of susceptible genes. In this report, we put forth the concept of mean-variance differentially expressed (MVDE) genes, whose expression means and variances are sensitive to the change in experimental condition. We mathematically proved the null independence of existent mean heterogeneity tests and variance heterogeneity tests. Based on this independence, we proposed an integrative mean-variance test (IMVT) to combine gene-wise mean heterogeneity and variance heterogeneity induced by condition change. The IMVT outperformed its competitors under comprehensive simulations of normality and Laplace settings. For moderate samples, the IMVT well controlled type I error rates, and so did the existent mean heterogeneity tests (i.e., the Welch t test (WT) and the moderated Welch t test (MWT)) and the procedure of separate tests on mean and variance heterogeneities (SMVT), but the likelihood ratio test (LRT) severely inflated type I error rates. In the presence of variance heterogeneity, the IMVT appeared noticeably more powerful than all the valid mean heterogeneity tests. Application to the gene profiles of peripheral circulating B cells raised solid evidence of informative variance heterogeneity. After adjusting for background data structure, the IMVT replicated previous discoveries and identified novel experiment
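
    The paper's IMVT statistic is not reproduced here. However, given the null independence of mean- and variance-heterogeneity tests that the abstract proves, one straightforward way to combine them per gene is Fisher's method applied to a Welch t test and a Levene test, as sketched below with simulated expression values.

```python
import numpy as np
from scipy import stats

def mean_variance_combined_p(x, y):
    """Combine a mean-heterogeneity and a variance-heterogeneity p-value via Fisher's method."""
    p_mean = stats.ttest_ind(x, y, equal_var=False).pvalue   # Welch t test (mean heterogeneity)
    p_var = stats.levene(x, y).pvalue                        # Levene test (variance heterogeneity)
    chi2 = -2.0 * (np.log(p_mean) + np.log(p_var))
    return stats.chi2.sf(chi2, df=4)                         # valid under null independence

rng = np.random.default_rng(3)
ctrl = rng.normal(0.0, 1.0, 30)     # expression under condition A
case = rng.normal(0.3, 2.0, 30)     # shifted mean and inflated variance under condition B
print(mean_variance_combined_p(ctrl, case))
```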

  4. Simultaneous Monte Carlo zero-variance estimates of several correlated means

    International Nuclear Information System (INIS)

    Booth, T.E.

    1998-01-01

    Zero-variance biasing procedures are normally associated with estimating a single mean or tally. In particular, a zero-variance solution occurs when every sampling is made proportional to the product of the true probability multiplied by the expected score (importance) subsequent to the sampling; i.e., the zero-variance sampling is importance weighted. Because every tally has a different importance function, a zero-variance biasing for one tally cannot be a zero-variance biasing for another tally (unless the tallies are perfectly correlated). The way to optimize the situation when the required tallies have positive correlation is shown

  5. Comparing estimates of genetic variance across different relationship models.

    Science.gov (United States)

    Legarra, Andres

    2016-02-01

    Use of relationships between individuals to estimate genetic variances and heritabilities via mixed models is standard practice in human, plant and livestock genetics. Different models or information for relationships may give different estimates of genetic variances. However, comparing these estimates across different relationship models is not straightforward as the implied base populations differ between relationship models. In this work, I present a method to compare estimates of variance components across different relationship models. I suggest referring genetic variances obtained using different relationship models to the same reference population, usually a set of individuals in the population. Expected genetic variance of this population is the estimated variance component from the mixed model times a statistic, Dk, which is the average self-relationship minus the average (self- and across-) relationship. For most typical models of relationships, Dk is close to 1. However, this is not true for very deep pedigrees, for identity-by-state relationships, or for non-parametric kernels, which tend to overestimate the genetic variance and the heritability. Using mice data, I show that heritabilities from identity-by-state and kernel-based relationships are overestimated. Weighting these estimates by Dk scales them to a base comparable to genomic or pedigree relationships, avoiding wrong comparisons, for instance, "missing heritabilities". Copyright © 2015 Elsevier Inc. All rights reserved.
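
    The scaling statistic Dk described above is easy to compute from any relationship matrix: the average self-relationship minus the average of all (self- and across-) relationships. The sketch below assumes a small genomic relationship matrix K and a variance component already estimated from a mixed model; the numbers are illustrative.

```python
import numpy as np

def dk_statistic(K):
    """Average self-relationship minus average (self- and across-) relationship."""
    return np.mean(np.diag(K)) - np.mean(K)

# hypothetical 4-individual relationship matrix and estimated genetic variance component
K = np.array([[1.00, 0.10, 0.05, 0.02],
              [0.10, 1.05, 0.20, 0.03],
              [0.05, 0.20, 0.98, 0.15],
              [0.02, 0.03, 0.15, 1.02]])
sigma2_u = 2.5
print(sigma2_u * dk_statistic(K))   # genetic variance referred to this reference population
```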

  6. On-line extraction of the variance caused by burn-up in in-core three-dimensional power distribution

    International Nuclear Information System (INIS)

    Wang Yaqi; Luo Zhengpei; Li Fu; Liu Wenfeng

    2001-01-01

    In most PWRs, the ex-core ion chambers are the sole real-time sensors that respond to the in-core power and its axial offset. However, the calibration coefficient of the ion chambers depends on the three-dimensional (3D) power distribution and varies with burn-up. One would like to know the variance in the distribution caused by burn-up directly from the ion-chamber signals. This expectation has not been realized as yet, because an ion chamber responds almost exclusively to its nearest fuel assemblies. The authors therefore developed a two-step method for burn-up characteristic extraction: the harmonics synthesis method and harmonics burn-up grouping. Using the extracted burn-up characteristics, the relationship between the readings of the ex-core ion chambers and the in-core 3D power distribution is set up. Through simulation on the heating reactor, the method of burn-up characteristic extraction is verified under engineering conditions. It is thus possible to extract on line the variance caused by burn-up in the 3D power distribution.

  7. Variance computations for functionals of absolute risk estimates.

    Science.gov (United States)

    Pfeiffer, R M; Petracci, E

    2011-07-01

    We present a simple influence function based approach to compute the variances of estimates of absolute risk and functions of absolute risk. We apply this approach to criteria that assess the impact of changes in the risk factor distribution on absolute risk for an individual and at the population level. As an illustration we use an absolute risk prediction model for breast cancer that includes modifiable risk factors in addition to standard breast cancer risk factors. Influence function based variance estimates for absolute risk and the criteria are compared to bootstrap variance estimates.

  8. 76 FR 78698 - Proposed Revocation of Permanent Variances

    Science.gov (United States)

    2011-12-19

    ... Administration (``OSHA'' or ``the Agency'') granted permanent variances to 24 companies engaged in the... DEPARTMENT OF LABOR Occupational Safety and Health Administration [Docket No. OSHA-2011-0054] Proposed Revocation of Permanent Variances AGENCY: Occupational Safety and Health Administration (OSHA...

  9. AFM friction and adhesion mapping of the substructures of human hair cuticles

    International Nuclear Information System (INIS)

    Smith, James R.; Tsibouklis, John; Nevell, Thomas G.; Breakspear, Steven

    2013-01-01

    Using atomic force microscopy, values of the microscale friction coefficient, the tip (silicon nitride) - surface adhesion force and the corresponding adhesion energy, for the substructures that constitute the surface of human hair (European brown hair) have been determined from Amonton plots. The values, mapped for comparison with surface topography, corresponded qualitatively with the substructures’ plane surface characteristics. Localised maps and values of the frictional coefficient, extracted avoiding scale edge effects, are likely to inform the formulation of hair-care products and treatments.

  10. Diagnostic checking in linear processes with infinite variance

    OpenAIRE

    Krämer, Walter; Runde, Ralf

    1998-01-01

    We consider empirical autocorrelations of residuals from infinite variance autoregressive processes. Unlike the finite-variance case, it emerges that the limiting distribution, after suitable normalization, is not always more concentrated around zero when residuals rather than true innovations are employed.

  11. Phase synchronization in inhomogeneous globally coupled map lattices

    International Nuclear Information System (INIS)

    Ho Mingchung; Hung Yaochen; Jiang, I-M.

    2004-01-01

    The study of inhomogeneously coupled chaotic systems has attracted a lot of attention recently. With a simple definition of phase, we present the phase-locking behavior in ensembles of globally coupled non-identical maps. The inhomogeneous globally coupled maps consist of logistic maps and tent maps simultaneously. Average phase synchronization ratios, which are used to characterize the phase-coherence phenomena, depend on the coupling coefficients and chaotic parameters. By using interdependence, the relationship between a single unit and the mean field is illustrated. Moreover, we take the effect of external noise and parameter mismatch into consideration and present the results by numerical simulation

  12. Increasing Efficiency of Soil Fertility Map for Rice Cultivation Using Fuzzy Logic, AHP and GIS

    Directory of Open Access Journals (Sweden)

    javad seyedmohammadi

    2017-02-01

    Introduction: With regard to the increasing population of the country, high agricultural production is essential. The most suitable approach is to increase production per unit area. Providing sufficient food and other environmental resources while conserving biotic resources for the future will be possible only with optimum exploitation of the soil. Among the factors affecting production, balanced addition of fertilizers increases crop production more than the others. With attention to this topic, determination of the soil fertility degree is essential for better use of fertilizers and proper exploitation of soils. Using fuzzy logic and the Analytic Hierarchy Process (AHP) can be useful for accurate determination of the soil fertility degree. Materials and Methods: The study area (east of Rasht city) is located between 49° 31' to 49° 45' E longitude and 37° 7' to 37° 27' N latitude in the north of Guilan Province, northern Iran, on the southern coast of the Caspian Sea. 117 soil samples were taken from the 0-30 cm depth in the study area. Air-dried soil samples were crushed and passed through a 2 mm sieve. Available phosphorus, potassium and organic carbon were determined by the sodium bicarbonate, normal ammonium acetate and corrected Walkley-Black methods, respectively. In the first stage, the data were interpolated by the kriging method in a GIS context. Then an S-shaped membership function was defined for each parameter and a fuzzy map was prepared. After determination of the membership functions, parameter weights were determined using the AHP technique, and finally the soil fertility map was prepared by overlaying the weighted fuzzy maps. Relative variance and correlation coefficient criteria were used to control group separation accuracy in the fuzzy fertility map. Results and Discussion: With regard to the minimum values of the parameters, some lands of the study area appear to have fertility problems. Therefore, the soil fertility map of the study area distinguishes these lands and presents soil
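
    The fuzzy-membership and AHP-weighting steps can be sketched as follows; the S-shaped membership breakpoints, the pairwise comparison matrix and the raster values below are illustrative assumptions, not the study's calibrated values.

```python
import numpy as np

def s_membership(x, a, b):
    """Standard S-shaped membership function rising from 0 at a to 1 at b."""
    x = np.asarray(x, float)
    m = np.clip((x - a) / (b - a), 0.0, 1.0)
    return np.where(m <= 0.5, 2 * m**2, 1 - 2 * (1 - m) ** 2)

def ahp_weights(pairwise):
    """Priority vector = normalised principal eigenvector of the comparison matrix."""
    vals, vecs = np.linalg.eig(np.asarray(pairwise, float))
    w = np.real(vecs[:, np.argmax(np.real(vals))])
    return w / w.sum()

# interpolated raster cells (flattened) for organic carbon, available P and available K
oc = np.array([0.8, 1.5, 2.3])
p = np.array([5.0, 12.0, 25.0])
k = np.array([120.0, 250.0, 400.0])
layers = np.vstack([s_membership(oc, 0.5, 2.5),
                    s_membership(p, 4.0, 20.0),
                    s_membership(k, 100.0, 350.0)])
pairwise = [[1, 2, 3], [1/2, 1, 2], [1/3, 1/2, 1]]   # hypothetical expert judgements (OC > P > K)
fertility = ahp_weights(pairwise) @ layers            # weighted fuzzy overlay per cell
print(fertility)
```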

  13. QTL mapping in white spruce: gene maps and genomic regions underlying adaptive traits across pedigrees, years and environments

    Science.gov (United States)

    2011-01-01

    Background The genomic architecture of bud phenology and height growth remains poorly known in most forest trees. In non model species, QTL studies have shown limited application because most often QTL data could not be validated from one experiment to another. The aim of our study was to overcome this limitation by basing QTL detection on the construction of genetic maps highly-enriched in gene markers, and by assessing QTLs across pedigrees, years, and environments. Results Four saturated individual linkage maps representing two unrelated mapping populations of 260 and 500 clonally replicated progeny were assembled from 471 to 570 markers, including from 283 to 451 gene SNPs obtained using a multiplexed genotyping assay. Thence, a composite linkage map was assembled with 836 gene markers. For individual linkage maps, a total of 33 distinct quantitative trait loci (QTLs) were observed for bud flush, 52 for bud set, and 52 for height growth. For the composite map, the corresponding numbers of QTL clusters were 11, 13, and 10. About 20% of QTLs were replicated between the two mapping populations and nearly 50% revealed spatial and/or temporal stability. Three to four occurrences of overlapping QTLs between characters were noted, indicating regions with potential pleiotropic effects. Moreover, some of the genes involved in the QTLs were also underlined by recent genome scans or expression profile studies. Overall, the proportion of phenotypic variance explained by each QTL ranged from 3.0 to 16.4% for bud flush, from 2.7 to 22.2% for bud set, and from 2.5 to 10.5% for height growth. Up to 70% of the total character variance could be accounted for by QTLs for bud flush or bud set, and up to 59% for height growth. Conclusions This study provides a basic understanding of the genomic architecture related to bud flush, bud set, and height growth in a conifer species, and a useful indicator to compare with Angiosperms. It will serve as a basic reference to functional and

  14. RR-Interval variance of electrocardiogram for atrial fibrillation detection

    Science.gov (United States)

    Nuryani, N.; Solikhah, M.; Nugoho, A. S.; Afdala, A.; Anzihory, E.

    2016-11-01

    Atrial fibrillation is a serious heart problem originating from the upper chambers of the heart. The common indication of atrial fibrillation is irregularity of the R-peak-to-R-peak time interval, called the RR interval for short. The irregularity can be represented using the variance, or spread, of the RR intervals. This article presents a system to detect atrial fibrillation using variances. Using clinical data from patients with atrial fibrillation attacks, it is shown that the variance of the electrocardiographic RR intervals is higher during atrial fibrillation, compared to normal rhythm. Utilizing a simple detection technique and the variances of RR intervals, we find good performance in atrial fibrillation detection.
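
    A minimal sketch of the idea: compute the variance of RR intervals in successive windows and flag windows whose spread exceeds a threshold. The window length, the threshold and the synthetic R-peak times are assumptions, not the paper's settings.

```python
import numpy as np

def rr_variance_flags(r_peak_times_s, window=10, threshold=0.01):
    """Flag windows of RR intervals whose variance exceeds a threshold (possible AF episode)."""
    rr = np.diff(r_peak_times_s)                       # RR intervals in seconds
    flags = []
    for start in range(0, len(rr) - window + 1, window):
        seg = rr[start:start + window]
        flags.append(seg.var() > threshold)            # spread of RR intervals in this window
    return np.array(flags)

# synthetic example: regular rhythm followed by an irregular episode
rng = np.random.default_rng(4)
regular = np.cumsum(rng.normal(0.8, 0.02, 40))         # ~75 bpm with small jitter
irregular = regular[-1] + np.cumsum(rng.uniform(0.4, 1.2, 40))   # highly irregular intervals
print(rr_variance_flags(np.concatenate([regular, irregular])))
```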

  15. Continuous-Time Mean-Variance Portfolio Selection under the CEV Process

    OpenAIRE

    Ma, Hui-qiang

    2014-01-01

    We consider a continuous-time mean-variance portfolio selection model when stock price follows the constant elasticity of variance (CEV) process. The aim of this paper is to derive an optimal portfolio strategy and the efficient frontier. The mean-variance portfolio selection problem is formulated as a linearly constrained convex program problem. By employing the Lagrange multiplier method and stochastic optimal control theory, we obtain the optimal portfolio strategy and mean-variance effici...

  16. Variance based OFDM frame synchronization

    Directory of Open Access Journals (Sweden)

    Z. Fedra

    2012-04-01

    The paper deals with a new frame synchronization scheme for OFDM systems and calculates the complexity of this scheme. The scheme is based on computing the variance of the detection window. The variance is computed at two delayed time instants, so a modified Early-Late loop is used for frame position detection. The proposed algorithm deals with different variants of OFDM parameters, including the guard interval and cyclic prefix, and has good properties regarding the choice of the algorithm's parameters, since the parameters may be chosen within a wide range without a strong influence on system performance. The functionality of the proposed algorithm has been verified in a development environment using universal software radio peripheral (USRP) hardware.
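
    A toy sketch of the core idea only: a sliding-window variance of the received samples and an early-late style error formed from two delayed variance values. Real OFDM framing, the guard interval handling and the loop filter of the paper are not modelled; the signal, window length and delay are assumptions.

```python
import numpy as np

def sliding_variance(x, win):
    """Variance of the samples inside a sliding window of length win (valid positions)."""
    m1 = np.convolve(x, np.ones(win) / win, mode="valid")
    m2 = np.convolve(x**2, np.ones(win) / win, mode="valid")
    return m2 - m1**2

def early_late_error(var_trace, pos, delay):
    """Early-late style discriminator evaluated on the variance trace."""
    return var_trace[pos + delay] - var_trace[pos - delay]

# toy received stream: low-power noise, then a higher-variance OFDM-like frame
rng = np.random.default_rng(5)
rx = np.concatenate([0.1 * rng.normal(size=300), rng.normal(size=300)])
v = sliding_variance(rx, win=32)
pos = int(np.argmax(np.diff(v)))            # coarse guess of the frame start
print(pos, early_late_error(v, pos, delay=8))
```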

  17. Experiment of flow regime map and local condensing heat transfer coefficients inside three dimensional inner microfin tubes

    Science.gov (United States)

    Du, Yang; Xin, Ming Dao

    1999-03-01

    This paper develops a new type of three-dimensional inner microfin tube. The experimental results for the flow patterns during horizontal condensation inside these tubes are reported. Within the test conditions, the flow patterns for horizontal condensation inside the newly made tubes are divided into annular flow, stratified flow and intermittent flow. Experiments on the local heat transfer coefficients for the different flow patterns have been carried out systematically, as have experiments on the variation of the local heat transfer coefficients with the vapor dryness fraction. Compared with the heat transfer coefficients of two-dimensional inner microfin tubes, those of the three-dimensional inner microfin tubes increase by 47-127% for the annular flow region, 38-183% for the stratified flow and 15-75% for the intermittent flow, respectively. The enhancement factor of the local heat transfer coefficients ranges from 1.8 to 6.9 for vapor dryness fractions from 0.05 to 1.

  18. Means and Variances without Calculus

    Science.gov (United States)

    Kinney, John J.

    2005-01-01

    This article gives a method of finding discrete approximations to continuous probability density functions and shows examples of its use, allowing students without calculus access to the calculation of means and variances.

  19. Beyond the Mean: Sensitivities of the Variance of Population Growth.

    Science.gov (United States)

    Trotter, Meredith V; Krishna-Kumar, Siddharth; Tuljapurkar, Shripad

    2013-03-01

    Populations in variable environments are described by both a mean growth rate and a variance of stochastic population growth. Increasing variance will increase the width of confidence bounds around estimates of population size, growth, probability of and time to quasi-extinction. However, traditional sensitivity analyses of stochastic matrix models only consider the sensitivity of the mean growth rate. We derive an exact method for calculating the sensitivity of the variance in population growth to changes in demographic parameters. Sensitivities of the variance also allow a new sensitivity calculation for the cumulative probability of quasi-extinction. We apply this new analysis tool to an empirical dataset on at-risk polar bears to demonstrate its utility in conservation biology. We find that in many cases a change in life history parameters will increase both the mean and variance of population growth of polar bears. This counterintuitive behaviour of the variance complicates predictions about overall population impacts of management interventions. Sensitivity calculations for cumulative extinction risk factor in changes to both mean and variance, providing a highly useful quantitative tool for conservation management. The mean stochastic growth rate and its sensitivities do not fully describe the dynamics of population growth. The use of variance sensitivities gives a more complete understanding of population dynamics and facilitates the calculation of new sensitivities for extinction processes.

  20. A high-density linkage map and QTL mapping of fruit-related traits in pumpkin (Cucurbita moschata Duch.).

    Science.gov (United States)

    Zhong, Yu-Juan; Zhou, Yang-Yang; Li, Jun-Xing; Yu, Ting; Wu, Ting-Quan; Luo, Jian-Ning; Luo, Shao-Bo; Huang, He-Xun

    2017-10-06

    Pumpkin (Cucurbita moschata) is an economically important crop worldwide. Few quantitative trait loci (QTLs) were reported previously due to the lack of genomic and genetic resources. In this study, a high-density linkage map of C. moschata was constructed by double-digest restriction site-associated DNA sequencing, using 200 F2 individuals of CMO-1 × CMO-97. By filtering 74,899 SNPs, a total of 3,470 high quality SNP markers were assigned to the map, spanning a total genetic distance of 3087.03 cM on 20 linkage groups (LGs) with an average genetic distance of 0.89 cM. Based on this map, both pericarp color and stripe were fine-mapped to a novel single locus on LG8 in the same region of 0.31 cM, with phenotypic variance explained (PVE) of 93.6% and 90.2%, respectively. QTL analysis was also performed on carotenoids, sugars, tuberculate fruit, fruit diameter, thickness and chamber width, for a total of 12 traits. 29 QTLs distributed in 9 LGs were detected, with PVE from 9.6% to 28.6%. This is the first high-density SNP linkage map for C. moschata, and it proved to be a valuable tool for gene or QTL mapping. This information will serve as a significant basis for map-based gene cloning, draft genome assembly and molecular breeding.

  1. Evaluation of Mean and Variance Integrals without Integration

    Science.gov (United States)

    Joarder, A. H.; Omar, M. H.

    2007-01-01

    The mean and variance of some continuous distributions, in particular the exponentially decreasing probability distribution and the normal distribution, are considered. Since they involve integration by parts, many students do not feel comfortable. In this note, a technique is demonstrated for deriving mean and variance through differential…

  2. Prediction of octanol-air partition coefficients for polychlorinated biphenyls (PCBs) using 3D-QSAR models.

    Science.gov (United States)

    Chen, Ying; Cai, Xiaoyu; Jiang, Long; Li, Yu

    2016-02-01

    Based on the experimental data of octanol-air partition coefficients (KOA) for 19 polychlorinated biphenyl (PCB) congeners, two types of QSAR methods, comparative molecular field analysis (CoMFA) and comparative molecular similarity indices analysis (CoMSIA), are used to establish 3D-QSAR models using the structural parameters as independent variables and using logKOA values as the dependent variable with the Sybyl software to predict the KOA values of the remaining 190 PCB congeners. The whole data set (19 compounds) was divided into a training set (15 compounds) for model generation and a test set (4 compounds) for model validation. As a result, the cross-validation correlation coefficient (q²) obtained by the CoMFA and CoMSIA models (shuffled 12 times) was in the range of 0.825-0.969 (>0.5), the correlation coefficient (r²) obtained was in the range of 0.957-1.000 (>0.9), and the SEP (standard error of prediction) of the test set was within the range of 0.070-0.617, indicating that the models were robust and predictive. Randomly selected from a set of models, CoMFA analysis revealed that the corresponding percentages of the variance explained by steric and electrostatic fields were 23.9% and 76.1%, respectively, while CoMSIA analysis by steric, electrostatic and hydrophobic fields were 0.6%, 92.6%, and 6.8%, respectively. The electrostatic field was determined as a primary factor governing the logKOA. The correlation analysis of the relationship between the number of Cl atoms and the average logKOA values of PCBs indicated that logKOA values gradually increased as the number of Cl atoms increased. Simultaneously, related studies on PCB detection in the Arctic and Antarctic areas revealed that higher logKOA values indicate a stronger PCB migration ability. From CoMFA and CoMSIA contour maps, logKOA decreased when substituents possessed electropositive groups at the 2-, 3-, 3'-, 5- and 6- positions, which could reduce the PCB migration ability. These results are

  3. Approximate zero-variance Monte Carlo estimation of Markovian unreliability

    International Nuclear Information System (INIS)

    Delcoux, J.L.; Labeau, P.E.; Devooght, J.

    1997-01-01

    Monte Carlo simulation has become an important tool for the estimation of reliability characteristics, since conventional numerical methods are no longer efficient when the size of the system to solve increases. However, evaluating by simulation the probability of occurrence of very rare events means playing a very large number of histories of the system, which leads to unacceptable computation times. Acceleration and variance reduction techniques have to be worked out. We show in this paper how to write the equations of Markovian reliability as a transport problem, and how the well known zero-variance scheme can be adapted to this application. But such a method is always specific to the estimation of one quantity, while a Monte Carlo simulation allows one to perform estimations of diverse quantities simultaneously. Therefore, the estimation of one of them could be made more accurate while degrading at the same time the variance of other estimations. We propound here a method to reduce simultaneously the variance for several quantities, by using probability laws that would lead to zero variance in the estimation of a mean of these quantities. Just like the zero-variance one, the method we propound is impossible to perform exactly. However, we show that simple approximations of it may be very efficient. (author)

  4. Robust Likelihoods for Inflationary Gravitational Waves from Maps of Cosmic Microwave Background Polarization

    Science.gov (United States)

    Switzer, Eric Ryan; Watts, Duncan J.

    2016-01-01

    The B-mode polarization of the cosmic microwave background provides a unique window into tensor perturbations from inflationary gravitational waves. Survey effects complicate the estimation and description of the power spectrum on the largest angular scales. The pixel-space likelihood yields parameter distributions without the power spectrum as an intermediate step, but it does not have the large suite of tests available to power spectral methods. Searches for primordial B-modes must rigorously reject and rule out contamination. Many forms of contamination vary or are uncorrelated across epochs, frequencies, surveys, or other data treatment subsets. The cross power and the power spectrum of the difference of subset maps provide approaches to reject and isolate excess variance. We develop an analogous joint pixel-space likelihood. Contamination not modeled in the likelihood produces parameter-dependent bias and complicates the interpretation of the difference map. We describe a null test that consistently weights the difference map. Excess variance should either be explicitly modeled in the covariance or be removed through reprocessing the data.

  5. In core system mapping reactor power distribution

    International Nuclear Information System (INIS)

    Yoriyaz, H.; Moreira, J.M.L.

    1989-01-01

    Based on the signals of SPNDs (Self-Powered Neutron Detectors) distributed inside the core, the spatial power distribution is obtained using the MAP program developed in this work. The methodology applied in the MAP program uses a least-mean-square technique to calculate expansion coefficients that depend on the SPND signals. The final power or neutron flux distribution is obtained by a combination of certain functions, or expansion modes, that are provided by a diffusion calculation with the CITATION code. The MAP program is written in the PASCAL language and will be used in the IEA-R1 reactor to assist its operation. (author)
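
    The least-squares step described above can be illustrated with made-up quantities: expansion coefficients are fitted to the detector readings and the full distribution is rebuilt from precomputed expansion modes. The mode shapes, detector positions, "true" coefficients and noise level below are all assumptions, not values from the MAP program.

```python
import numpy as np

# hypothetical expansion modes from a diffusion (CITATION-like) calculation,
# tabulated on 50 core nodes, and the rows corresponding to 8 SPND locations
rng = np.random.default_rng(6)
modes_full = rng.normal(size=(50, 4))            # 4 expansion modes on the full mesh
detector_rows = np.sort(rng.choice(50, 8, replace=False))
A = modes_full[detector_rows]                    # modes evaluated at the SPND positions

true_c = np.array([1.0, 0.3, -0.2, 0.05])        # "true" coefficients for this demo only
signals = A @ true_c + 0.01 * rng.normal(size=8)  # noisy SPND readings

c, *_ = np.linalg.lstsq(A, signals, rcond=None)   # least-mean-square fit of the coefficients
power_map = modes_full @ c                        # reconstructed power distribution on the mesh
print(c)
```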

  6. Variance in binary stellar population synthesis

    Science.gov (United States)

    Breivik, Katelyn; Larson, Shane L.

    2016-03-01

    In the years preceding LISA, Milky Way compact binary population simulations can be used to inform the science capabilities of the mission. Galactic population simulation efforts generally focus on high fidelity models that require extensive computational power to produce a single simulated population for each model. Each simulated population represents an incomplete sample of the functions governing compact binary evolution, thus introducing variance from one simulation to another. We present a rapid Monte Carlo population simulation technique that can simulate thousands of populations in less than a week, thus allowing a full exploration of the variance associated with a binary stellar evolution model.

  7. A Mean variance analysis of arbitrage portfolios

    Science.gov (United States)

    Fang, Shuhong

    2007-03-01

    Based on the careful analysis of the definition of arbitrage portfolio and its return, the author presents a mean-variance analysis of the return of arbitrage portfolios, which implies that Korkie and Turtle's results ( B. Korkie, H.J. Turtle, A mean-variance analysis of self-financing portfolios, Manage. Sci. 48 (2002) 427-443) are misleading. A practical example is given to show the difference between the arbitrage portfolio frontier and the usual portfolio frontier.

  8. Mean-Variance Optimization in Markov Decision Processes

    OpenAIRE

    Mannor, Shie; Tsitsiklis, John N.

    2011-01-01

    We consider finite horizon Markov decision processes under performance measures that involve both the mean and the variance of the cumulative reward. We show that either randomized or history-based policies can improve performance. We prove that the complexity of computing a policy that maximizes the mean reward under a variance constraint is NP-hard for some cases, and strongly NP-hard for others. We finally offer pseudo-polynomial exact and approximation algorithms.

  9. Capturing Option Anomalies with a Variance-Dependent Pricing Kernel

    DEFF Research Database (Denmark)

    Christoffersen, Peter; Heston, Steven; Jacobs, Kris

    2013-01-01

    We develop a GARCH option model with a new pricing kernel allowing for a variance premium. While the pricing kernel is monotonic in the stock return and in variance, its projection onto the stock return is nonmonotonic. A negative variance premium makes it U shaped. We present new semiparametric evidence to confirm this U-shaped relationship between the risk-neutral and physical probability densities. The new pricing kernel substantially improves our ability to reconcile the time-series properties of stock returns with the cross-section of option prices. It provides a unified explanation for the implied volatility puzzle, the overreaction of long-term options to changes in short-term variance, and the fat tails of the risk-neutral return distribution relative to the physical distribution.

  10. Gender Variance and Educational Psychology: Implications for Practice

    Science.gov (United States)

    Yavuz, Carrie

    2016-01-01

    The area of gender variance appears to be more visible in both the media and everyday life. Within educational psychology literature gender variance remains underrepresented. The positioning of educational psychologists working across the three levels of child and family, school or establishment and education authority/council, means that they are…

  11. Clustering stock market companies via chaotic map synchronization

    Science.gov (United States)

    Basalto, N.; Bellotti, R.; De Carlo, F.; Facchi, P.; Pascazio, S.

    2005-01-01

    A pairwise clustering approach is applied to the analysis of the Dow Jones index companies, in order to identify similar temporal behavior of the traded stock prices. To this end, the chaotic map clustering algorithm is used, where a map is associated to each company and the correlation coefficients of the financial time series to the coupling strengths between maps. The simulation of a chaotic map dynamics gives rise to a natural partition of the data, as companies belonging to the same industrial branch are often grouped together. The identification of clusters of companies of a given stock market index can be exploited in the portfolio optimization strategies.

  12. Adaptive Green-Kubo estimates of transport coefficients from molecular dynamics based on robust error analysis

    Science.gov (United States)

    Jones, Reese E.; Mandadapu, Kranthi K.

    2012-04-01

    We present a rigorous Green-Kubo methodology for calculating transport coefficients based on on-the-fly estimates of: (a) statistical stationarity of the relevant process, and (b) error in the resulting coefficient. The methodology uses time samples efficiently across an ensemble of parallel replicas to yield accurate estimates, which is particularly useful for estimating the thermal conductivity of semi-conductors near their Debye temperatures where the characteristic decay times of the heat flux correlation functions are large. Employing and extending the error analysis of Zwanzig and Ailawadi [Phys. Rev. 182, 280 (1969)], 10.1103/PhysRev.182.280 and Frenkel [in Proceedings of the International School of Physics "Enrico Fermi", Course LXXV (North-Holland Publishing Company, Amsterdam, 1980)] to the integral of correlation, we are able to provide tight theoretical bounds for the error in the estimate of the transport coefficient. To demonstrate the performance of the method, four test cases of increasing computational cost and complexity are presented: the viscosity of Ar and water, and the thermal conductivity of Si and GaN. In addition to producing accurate estimates of the transport coefficients for these materials, this work demonstrates precise agreement of the computed variances in the estimates of the correlation and the transport coefficient with the extended theory based on the assumption that fluctuations follow a Gaussian process. The proposed algorithm in conjunction with the extended theory enables the calculation of transport coefficients with the Green-Kubo method accurately and efficiently.
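
    A minimal Green-Kubo estimate can be sketched as follows: average the flux autocorrelation over an ensemble of replicas and integrate it in time. The adaptive stationarity checks and error bounds of the paper are not reproduced; the synthetic exponentially correlated "flux" stands in for MD output and is an assumption.

```python
import numpy as np

def green_kubo(flux, dt, n_corr):
    """Transport coefficient ~ time integral of the ensemble-averaged flux autocorrelation.

    flux: array (n_replicas, n_steps), assumed already scaled so that the physical
    prefactor (e.g. V / k_B T^2 for thermal conductivity) is absorbed.
    """
    n_rep, n_steps = flux.shape
    acf = np.zeros(n_corr)
    for lag in range(n_corr):
        acf[lag] = np.mean(flux[:, :n_steps - lag] * flux[:, lag:])
    return np.trapz(acf, dx=dt), acf

# synthetic exponentially correlated flux with correlation time tau = 0.5
rng = np.random.default_rng(7)
tau, dt = 0.5, 0.01
a = np.exp(-dt / tau)
x = rng.normal(size=(16, 20000))
for t in range(1, x.shape[1]):
    x[:, t] = a * x[:, t - 1] + np.sqrt(1 - a**2) * x[:, t]   # AR(1) with unit variance

coeff, acf = green_kubo(x, dt, n_corr=500)
print(coeff)          # should approach tau = 0.5 for this synthetic process
```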

  13. Demonstration of a zero-variance based scheme for variance reduction to a mini-core Monte Carlo calculation

    Energy Technology Data Exchange (ETDEWEB)

    Christoforou, Stavros, E-mail: stavros.christoforou@gmail.com [Kirinthou 17, 34100, Chalkida (Greece); Hoogenboom, J. Eduard, E-mail: j.e.hoogenboom@tudelft.nl [Department of Applied Sciences, Delft University of Technology (Netherlands)

    2011-07-01

    A zero-variance based scheme is implemented and tested in the MCNP5 Monte Carlo code. The scheme is applied to a mini-core reactor using the adjoint function obtained from a deterministic calculation for biasing the transport kernels. It is demonstrated that the variance of the k_eff estimate is halved compared to a standard criticality calculation. In addition, the biasing does not affect source distribution convergence of the system. However, since the code lacked optimisations for speed, we were not able to demonstrate an appropriate increase in the efficiency of the calculation, because of the higher CPU time cost. (author)

  14. Variance-in-Mean Effects of the Long Forward-Rate Slope

    DEFF Research Database (Denmark)

    Christiansen, Charlotte

    2005-01-01

    This paper contains an empirical analysis of the dependence of the long forward-rate slope on the long-rate variance. The long forward-rate slope and the long rate are described by a bivariate GARCH-in-mean model. In accordance with theory, a negative long-rate variance-in-mean effect for the long forward-rate slope is documented. Thus, the greater the long-rate variance, the steeper the long forward-rate curve slopes downward (the long forward-rate slope is negative). The variance-in-mean effect is both statistically and economically significant.

  15. Variance-based sensitivity indices for models with dependent inputs

    International Nuclear Information System (INIS)

    Mara, Thierry A.; Tarantola, Stefano

    2012-01-01

    Computational models are intensively used in engineering for risk analysis or prediction of future outcomes. Uncertainty and sensitivity analyses are of great help in these purposes. Although several methods exist to perform variance-based sensitivity analysis of model output with independent inputs only a few are proposed in the literature in the case of dependent inputs. This is explained by the fact that the theoretical framework for the independent case is set and a univocal set of variance-based sensitivity indices is defined. In the present work, we propose a set of variance-based sensitivity indices to perform sensitivity analysis of models with dependent inputs. These measures allow us to distinguish between the mutual dependent contribution and the independent contribution of an input to the model response variance. Their definition relies on a specific orthogonalisation of the inputs and ANOVA-representations of the model output. In the applications, we show the interest of the new sensitivity indices for model simplification setting. - Highlights: ► Uncertainty and sensitivity analyses are of great help in engineering. ► Several methods exist to perform variance-based sensitivity analysis of model output with independent inputs. ► We define a set of variance-based sensitivity indices for models with dependent inputs. ► Inputs mutual contributions are distinguished from their independent contributions. ► Analytical and computational tests are performed and discussed.

  16. Simultaneous Monte Carlo zero-variance estimates of several correlated means

    International Nuclear Information System (INIS)

    Booth, T.E.

    1997-08-01

    Zero variance procedures have been in existence since the dawn of Monte Carlo. Previous works all treat the problem of zero variance solutions for a single tally. One often wants to get low variance solutions to more than one tally. When the sets of random walks needed for two tallies are similar, it is more efficient to do zero variance biasing for both tallies in the same Monte Carlo run, instead of two separate runs. The theory presented here correlates the random walks of particles by the similarity of their tallies. Particles with dissimilar tallies rapidly become uncorrelated whereas particles with similar tallies will stay correlated through most of their random walk. The theory herein should allow practitioners to make efficient use of zero-variance biasing procedures in practical problems

  17. A low-dimensional tool for predicting force decomposition coefficients for varying inflow conditions

    KAUST Repository

    Ghommem, Mehdi

    2013-01-01

    We develop a low-dimensional tool to predict the effects of unsteadiness in the inflow on force coefficients acting on a circular cylinder using proper orthogonal decomposition (POD) modes from steady flow simulations. The approach is based on combining POD and linear stochastic estimator (LSE) techniques. We use POD to derive a reduced-order model (ROM) to reconstruct the velocity field. To overcome the difficulty of developing a ROM using Poisson's equation, we relate the pressure field to the velocity field through a mapping function based on LSE. The use of this approach to derive force decomposition coefficients (FDCs) under unsteady mean flow from basis functions of the steady flow is illustrated. For both steady and unsteady cases, the final outcome is a representation of the lift and drag coefficients in terms of velocity and pressure temporal coefficients. Such a representation could serve as the basis for implementing control strategies or conducting uncertainty quantification. Copyright © 2013 Inderscience Enterprises Ltd.

  18. A low-dimensional tool for predicting force decomposition coefficients for varying inflow conditions

    KAUST Repository

    Ghommem, Mehdi; Akhtar, Imran; Hajj, M. R.

    2013-01-01

    We develop a low-dimensional tool to predict the effects of unsteadiness in the inflow on force coefficients acting on a circular cylinder using proper orthogonal decomposition (POD) modes from steady flow simulations. The approach is based on combining POD and linear stochastic estimator (LSE) techniques. We use POD to derive a reduced-order model (ROM) to reconstruct the velocity field. To overcome the difficulty of developing a ROM using Poisson's equation, we relate the pressure field to the velocity field through a mapping function based on LSE. The use of this approach to derive force decomposition coefficients (FDCs) under unsteady mean flow from basis functions of the steady flow is illustrated. For both steady and unsteady cases, the final outcome is a representation of the lift and drag coefficients in terms of velocity and pressure temporal coefficients. Such a representation could serve as the basis for implementing control strategies or conducting uncertainty quantification. Copyright © 2013 Inderscience Enterprises Ltd.

  19. Determination of air-loop volume and radon partition coefficient for measuring radon in water sample.

    Science.gov (United States)

    Lee, Kil Yong; Burnett, William C

    A simple method for the direct determination of the air-loop volume in a RAD7 system as well as the radon partition coefficient was developed allowing for an accurate measurement of the radon activity in any type of water. The air-loop volume may be measured directly using an external radon source and an empty bottle with a precisely measured volume. The partition coefficient and activity of radon in the water sample may then be determined via the RAD7 using the determined air-loop volume. Activity ratios instead of absolute activities were used to measure the air-loop volume and the radon partition coefficient. In order to verify this approach, we measured the radon partition coefficient in deionized water in the temperature range of 10-30 °C and compared the values to those calculated from the well-known Weigel equation. The results were within 5 % variance throughout the temperature range. We also applied the approach for measurement of the radon partition coefficient in synthetic saline water (0-75 ppt salinity) as well as tap water. The radon activity of the tap water sample was determined by this method as well as the standard RAD-H2O and BigBottle RAD-H2O. The results have shown good agreement between this method and the standard methods.
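
    A rough numerical illustration of the two quantities discussed here: a partition coefficient obtained from an activity balance over the measured air-loop and water-sample volumes, and the Weigel temperature fit used as the reference. The functional form quoted for the Weigel equation is the commonly cited one, and all volumes and activities below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def weigel_partition(temp_c):
    """Commonly quoted Weigel (1978) fit for the radon water/air partition coefficient."""
    return 0.105 + 0.405 * np.exp(-0.0502 * temp_c)

def measured_partition(c_air, v_air, total_activity, v_water):
    """K = C_water / C_air, with C_water inferred from a simple activity balance."""
    c_water = (total_activity - c_air * v_air) / v_water
    return c_water / c_air

temps = np.array([10.0, 20.0, 30.0])
print(weigel_partition(temps))     # roughly 0.35 at 10 C down to about 0.20 at 30 C
# illustrative measurement: air-loop 1.064 L, water sample 0.25 L, activities in arbitrary units
print(measured_partition(c_air=60.0, v_air=1.064, total_activity=67.6, v_water=0.25))
```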

  20. Determination of air-loop volume and radon partition coefficient for measuring radon in water sample

    International Nuclear Information System (INIS)

    Kil Yong Lee; Burnett, W.C.

    2013-01-01

    A simple method for the direct determination of the air-loop volume in a RAD7 system as well as the radon partition coefficient was developed allowing for an accurate measurement of the radon activity in any type of water. The air-loop volume may be measured directly using an external radon source and an empty bottle with a precisely measured volume. The partition coefficient and activity of radon in the water sample may then be determined via the RAD7 using the determined air-loop volume. Activity ratios instead of absolute activities were used to measure the air-loop volume and the radon partition coefficient. In order to verify this approach, we measured the radon partition coefficient in deionized water in the temperature range of 10-30 deg C and compared the values to those calculated from the well-known Weigel equation. The results were within 5 % variance throughout the temperature range. We also applied the approach for measurement of the radon partition coefficient in synthetic saline water (0-75 ppt salinity) as well as tap water. The radon activity of the tap water sample was determined by this method as well as the standard RAD-H2O and BigBottle RAD-H2O. The results have shown good agreement between this method and the standard methods. (author)

  1. Variance swap payoffs, risk premia and extreme market conditions

    DEFF Research Database (Denmark)

    Rombouts, Jeroen V.K.; Stentoft, Lars; Violante, Francesco

    This paper estimates the Variance Risk Premium (VRP) directly from synthetic variance swap payoffs. Since variance swap payoffs are highly volatile, we extract the VRP by using signal extraction techniques based on a state-space representation of our model in combination with a simple economic....... The latter variables and the VRP generate different return predictability on the major US indices. A factor model is proposed to extract a market VRP which turns out to be priced when considering Fama and French portfolios....

  2. Estimating quadratic variation using realized variance

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Shephard, N.

    2002-01-01

    with a rather general SV model - which is a special case of the semimartingale model. Then QV is integrated variance and we can derive the asymptotic distribution of the RV and its rate of convergence. These results do not require us to specify a model for either the drift or volatility functions, although we have to impose some weak regularity assumptions. We illustrate the use of the limit theory on some exchange rate data and some stock data. We show that even with large values of M the RV is sometimes a quite noisy estimator of integrated variance. Copyright © 2002 John Wiley & Sons, Ltd.

  3. Dynamics of Variance Risk Premia, Investors' Sentiment and Return Predictability

    DEFF Research Database (Denmark)

    Rombouts, Jerome V.K.; Stentoft, Lars; Violante, Francesco

    We develop a joint framework linking the physical variance and its risk neutral expectation implying variance risk premia that are persistent, appropriately reacting to changes in level and variability of the variance and naturally satisfying the sign constraint. Using option market data and real … events and only marginally by the premium associated with normal price fluctuations…

  4. A note on minimum-variance theory and beyond

    Energy Technology Data Exchange (ETDEWEB)

    Feng Jianfeng [Department of Informatics, Sussex University, Brighton, BN1 9QH (United Kingdom); Tartaglia, Giangaetano [Physics Department, Rome University 'La Sapienza', Rome 00185 (Italy); Tirozzi, Brunello [Physics Department, Rome University 'La Sapienza', Rome 00185 (Italy)

    2004-04-30

    We revisit the minimum-variance theory proposed by Harris and Wolpert (1998 Nature 394 780-4), discuss the implications of the theory on modelling the firing patterns of single neurons and analytically find the optimal control signals, trajectories and velocities. Under the rate coding assumption, input control signals employed in the minimum-variance theory should be Fitts processes rather than Poisson processes. Only if information is coded by interspike intervals, Poisson processes are in agreement with the inputs employed in the minimum-variance theory. For the integrate-and-fire model with Fitts process inputs, interspike intervals of efferent spike trains are very irregular. We introduce diffusion approximations to approximate neural models with renewal process inputs and present theoretical results on calculating moments of interspike intervals of the integrate-and-fire model. Results in Feng, et al (2002 J. Phys. A: Math. Gen. 35 7287-304) are generalized. In conclusion, we present a complete picture on the minimum-variance theory ranging from input control signals, to model outputs, and to its implications on modelling firing patterns of single neurons.

  5. A note on minimum-variance theory and beyond

    International Nuclear Information System (INIS)

    Feng Jianfeng; Tartaglia, Giangaetano; Tirozzi, Brunello

    2004-01-01

    We revisit the minimum-variance theory proposed by Harris and Wolpert (1998 Nature 394 780-4), discuss the implications of the theory on modelling the firing patterns of single neurons and analytically find the optimal control signals, trajectories and velocities. Under the rate coding assumption, input control signals employed in the minimum-variance theory should be Fitts processes rather than Poisson processes. Only if information is coded by interspike intervals, Poisson processes are in agreement with the inputs employed in the minimum-variance theory. For the integrate-and-fire model with Fitts process inputs, interspike intervals of efferent spike trains are very irregular. We introduce diffusion approximations to approximate neural models with renewal process inputs and present theoretical results on calculating moments of interspike intervals of the integrate-and-fire model. Results in Feng, et al (2002 J. Phys. A: Math. Gen. 35 7287-304) are generalized. In conclusion, we present a complete picture on the minimum-variance theory ranging from input control signals, to model outputs, and to its implications on modelling firing patterns of single neurons

  6. Quantitative Assessment of Tendon Healing by Using MR T2 Mapping in a Rabbit Achilles Tendon Transection Model Treated with Platelet-rich Plasma.

    Science.gov (United States)

    Fukawa, Taisuke; Yamaguchi, Satoshi; Watanabe, Atsuya; Sasho, Takahisa; Akagi, Ryuichiro; Muramatsu, Yuta; Akatsu, Yorikazu; Katsuragi, Joe; Endo, Jun; Osone, Fumio; Sato, Yasunori; Okubo, Toshiyuki; Takahashi, Kazuhisa

    2015-09-01

    To determine if magnetic resonance (MR) imaging T2 mapping can be used to quantify histologic tendon healing by using a rabbit Achilles tendon transection model treated with platelet-rich plasma (PRP). Experiments were approved by the Institutional Animal Care and Use Committee. The Achilles tendons of 24 New Zealand white rabbits (48 limbs) were surgically transected, and PRP (in the test group) or saline (in the control group) was injected into the transection site. The rabbits were sacrificed 2, 4, 8, and 12 weeks after surgery. Thereafter, T2 mapping and histologic evaluations were performed by using the Bonar scale. A mixed-model multivariate analysis of variance was used to test the effects of time and PRP treatment on the T2 value and Bonar grade, respectively. The correlation between the T2 value and Bonar grade was also assessed by using the Spearman correlation coefficient. The Bonar scale values decreased in both groups during tendon healing. The T2 value also shortened over time (P tendon healing. While T2 and Bonar grade were lower at all time points in tendons treated with PRP, there was no significant difference between the treatment and control tendons.

  7. Estimating High-Frequency Based (Co-) Variances: A Unified Approach

    DEFF Research Database (Denmark)

    Voev, Valeri; Nolte, Ingmar

    We propose a unified framework for estimating integrated variances and covariances based on simple OLS regressions, allowing for a general market microstructure noise specification. We show that our estimators can outperform, in terms of the root mean squared error criterion, the most recent … and commonly applied estimators, such as the realized kernels of Barndorff-Nielsen, Hansen, Lunde & Shephard (2006), the two-scales realized variance of Zhang, Mykland & Aït-Sahalia (2005), the Hayashi & Yoshida (2005) covariance estimator, and the realized variance and covariance with the optimal sampling…

  8. Partitioning and mapping uncertainties in ensembles of forecasts of species turnover under climate change

    DEFF Research Database (Denmark)

    Diniz-Filho, José Alexandre F.; Bini, Luis Mauricio; Rangel, Thiago Fernando

    2009-01-01

    Forecasts of species range shifts under climate change are fraught with uncertainties and ensemble forecasting may provide a framework to deal with such uncertainties. Here, a novel approach to partition the variance among modeled attributes, such as richness or turnover, and map sources of uncertainty in ensembles of forecasts is presented. We model the distributions of 3837 New World birds and project them into 2080. We then quantify and map the relative contribution of different sources of uncertainty from alternative methods for niche modeling, general circulation models (AOGCM), and emission scenarios. The greatest source of uncertainty in forecasts of species range shifts arises from using alternative methods for niche modeling, followed by AOGCM, and their interaction. Our results concur with previous studies that discovered that projections from alternative models can be extremely…

  9. Demonstration of a zero-variance based scheme for variance reduction to a mini-core Monte Carlo calculation

    International Nuclear Information System (INIS)

    Christoforou, Stavros; Hoogenboom, J. Eduard

    2011-01-01

    A zero-variance based scheme is implemented and tested in the MCNP5 Monte Carlo code. The scheme is applied to a mini-core reactor using the adjoint function obtained from a deterministic calculation for biasing the transport kernels. It is demonstrated that the variance of the k_eff estimate is halved compared to a standard criticality calculation. In addition, the biasing does not affect source distribution convergence of the system. However, since the code lacked optimisations for speed, we were not able to demonstrate an appropriate increase in the efficiency of the calculation, because of the higher CPU time cost. (author)

  10. hydrogeological map of kabo sheet 80 nw topographical sheet 1

    African Journals Online (AJOL)

    DR. AMINU

    … runoff average of 216,240,192 m³/a and mean base flow of 114,455 m³/a, and surface runoff mean of 159,228,113 m³/a, also … Key words: Hydrogeological maps, Configuration maps, Hydro years, Base flow, Coefficient of base flow and Hydraulic … impounding reservoirs of four earth fill dams (colloquially called dams) …

  11. Beyond CMB cosmic variance limits on reionization with the polarized Sunyaev-Zel'dovich effect

    Science.gov (United States)

    Meyers, Joel; Meerburg, P. Daniel; van Engelen, Alexander; Battaglia, Nicholas

    2018-05-01

    Upcoming cosmic microwave background (CMB) surveys will soon make the first detection of the polarized Sunyaev-Zel'dovich effect, the linear polarization generated by the scattering of CMB photons on the free electrons present in collapsed objects. Measurement of this polarization along with knowledge of the electron density of the objects allows a determination of the quadrupolar temperature anisotropy of the CMB as viewed from the space-time location of the objects. Maps of these remote temperature quadrupoles have several cosmological applications. Here we propose a new application: the reconstruction of the cosmological reionization history. We show that with quadrupole measurements out to redshift 3, constraints on the mean optical depth can be improved by an order of magnitude beyond the CMB cosmic variance limit.

  12. The Genealogical Consequences of Fecundity Variance Polymorphism

    Science.gov (United States)

    Taylor, Jesse E.

    2009-01-01

    The genealogical consequences of within-generation fecundity variance polymorphism are studied using coalescent processes structured by genetic backgrounds. I show that these processes have three distinctive features. The first is that the coalescent rates within backgrounds are not jointly proportional to the infinitesimal variance, but instead depend only on the frequencies and traits of genotypes containing each allele. Second, the coalescent processes at unlinked loci are correlated with the genealogy at the selected locus; i.e., fecundity variance polymorphism has a genomewide impact on genealogies. Third, in diploid models, there are infinitely many combinations of fecundity distributions that have the same diffusion approximation but distinct coalescent processes; i.e., in this class of models, ancestral processes and allele frequency dynamics are not in one-to-one correspondence. Similar properties are expected to hold in models that allow for heritable variation in other traits that affect the coalescent effective population size, such as sex ratio or fecundity and survival schedules. PMID:19433628

  13. Multi-soliton solutions to the modified nonlinear Schrödinger equation with variable coefficients in inhomogeneous fibers

    International Nuclear Information System (INIS)

    Dai, Chao-Qing; Qin, Zhen-Yun; Zheng, Chun-Long

    2012-01-01

    Multi-soliton solutions to the modified nonlinear Schrödinger equation (MNLSE) with variable coefficients (VCs) in inhomogeneous fibers are obtained with the help of mapping transformation, which reduces the VC MNLSE into a constant-coefficient MNLSE. Based on the analytical solutions, one- and two-soliton transmissions in the proper dispersion management systems are discussed. The sustainment of solitons and the disappearance of breathers for the VC MNLSE are first reported here. (paper)

  14. VMF3/GPT3: refined discrete and empirical troposphere mapping functions

    Science.gov (United States)

    Landskron, Daniel; Böhm, Johannes

    2018-04-01

    Incorrect modeling of troposphere delays is one of the major error sources for space geodetic techniques such as Global Navigation Satellite Systems (GNSS) or Very Long Baseline Interferometry (VLBI). Over the years, many approaches have been devised which aim at mapping the delay of radio waves from zenith direction down to the observed elevation angle, so-called mapping functions. This paper contains a new approach intended to refine the currently most important discrete mapping function, the Vienna Mapping Functions 1 (VMF1), which is successively referred to as Vienna Mapping Functions 3 (VMF3). It is designed in such a way as to eliminate shortcomings in the empirical coefficients b and c and in the tuning for the specific elevation angle of 3°. Ray-traced delays of the ray-tracer RADIATE serve as the basis for the calculation of new mapping function coefficients. Comparisons of modeled slant delays demonstrate the ability of VMF3 to approximate the underlying ray-traced delays more accurately than VMF1 does, in particular at low elevation angles. In other words, when requiring highest precision, VMF3 is to be preferable to VMF1. Aside from revising the discrete form of mapping functions, we also present a new empirical model named Global Pressure and Temperature 3 (GPT3) on a 5°× 5° as well as a 1°× 1° global grid, which is generally based on the same data. Its main components are hydrostatic and wet empirical mapping function coefficients derived from special averaging techniques of the respective (discrete) VMF3 data. In addition, GPT3 also contains a set of meteorological quantities which are adopted as they stand from their predecessor, Global Pressure and Temperature 2 wet. Thus, GPT3 represents a very comprehensive troposphere model which can be used for a series of geodetic as well as meteorological and climatological purposes and is fully consistent with VMF3.
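    As background for how such coefficients are used, VMF-type mapping functions are built on the standard three-term continued fraction in the sine of the elevation angle, normalized to 1 at zenith. The sketch below evaluates that generic form; the coefficients a, b, c are placeholders of roughly hydrostatic magnitude, not actual VMF3 or GPT3 values.

    ```python
    import numpy as np

    def mapping_function(elev_rad, a, b, c):
        """Three-term continued-fraction mapping function (Marini/Herring form),
        normalized to 1 at zenith, as used by VMF-type troposphere models."""
        s = np.sin(elev_rad)
        top = 1.0 + a / (1.0 + b / (1.0 + c))
        bot = s + a / (s + b / (s + c))
        return top / bot

    # Placeholder coefficients of hydrostatic magnitude (illustrative only).
    a, b, c = 1.2e-3, 2.9e-3, 62.6e-3
    for elev_deg in (90, 30, 10, 5, 3):
        m = mapping_function(np.radians(elev_deg), a, b, c)
        print(f"elevation {elev_deg:2d} deg: mapping factor ≈ {m:6.2f}")
    ```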

  15. On Mean-Variance Analysis

    OpenAIRE

    Li, Yang; Pirvu, Traian A

    2011-01-01

    This paper considers the mean variance portfolio management problem. We examine portfolios which contain both primary and derivative securities. The challenge in this context is due to the portfolio's nonlinearities. The delta-gamma approximation is employed to overcome it. Thus, the optimization problem is reduced to a well posed quadratic program. The methodology developed in this paper can also be applied to pricing and hedging in incomplete markets.

  16. Good genes and sexual selection in dung beetles (Onthophagus taurus): genetic variance in egg-to-adult and adult viability.

    Directory of Open Access Journals (Sweden)

    Francisco Garcia-Gonzalez

    2011-01-01

    Full Text Available Whether species exhibit significant heritable variation in fitness is central for sexual selection. According to good genes models there must be genetic variation in males leading to variation in offspring fitness if females are to obtain genetic benefits from exercising mate preferences, or by mating multiply. However, sexual selection based on genetic benefits is controversial, and there is limited unambiguous support for the notion that choosy or polyandrous females can increase the chances of producing offspring with high viability. Here we examine the levels of additive genetic variance in two fitness components in the dung beetle Onthophagus taurus. We found significant sire effects on egg-to-adult viability and on son, but not daughter, survival to sexual maturity, as well as moderate coefficients of additive variance in these traits. Moreover, we do not find evidence for sexual antagonism influencing genetic variation for fitness. Our results are consistent with good genes sexual selection, and suggest that both pre- and postcopulatory mate choice, and male competition could provide indirect benefits to females.

  17. Modelling volatility by variance decomposition

    DEFF Research Database (Denmark)

    Amado, Cristina; Teräsvirta, Timo

    In this paper, we propose two parametric alternatives to the standard GARCH model. They allow the variance of the model to have a smooth time-varying structure of either additive or multiplicative type. The suggested parameterisations describe both nonlinearity and structural change in the condit...

  18. Genetic Variance in Homophobia: Evidence from Self- and Peer Reports.

    Science.gov (United States)

    Zapko-Willmes, Alexandra; Kandler, Christian

    2018-01-01

    The present twin study combined self- and peer assessments of twins' general homophobia targeting gay men in order to replicate previous behavior genetic findings across different rater perspectives and to disentangle self-rater-specific variance from common variance in self- and peer-reported homophobia (i.e., rater-consistent variance). We hypothesized rater-consistent variance in homophobia to be attributable to genetic and nonshared environmental effects, and self-rater-specific variance to be partially accounted for by genetic influences. A sample of 869 twins and 1329 peer raters completed a seven item scale containing cognitive, affective, and discriminatory homophobic tendencies. After correction for age and sex differences, we found most of the genetic contributions (62%) and significant nonshared environmental contributions (16%) to individual differences in self-reports on homophobia to be also reflected in peer-reported homophobia. A significant genetic component, however, was self-report-specific (38%), suggesting that self-assessments alone produce inflated heritability estimates to some degree. Different explanations are discussed.

  19. Decomposition of Variance for Spatial Cox Processes.

    Science.gov (United States)

    Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus

    2013-03-01

    Spatial Cox point processes are a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models with additive or log linear random intensity functions. We moreover consider a new and flexible class of pair correlation function models given in terms of normal variance mixture covariance functions. The proposed methodology is applied to point pattern data sets of locations of tropical rain forest trees.

  20. Genetic mapping and identification of QTL for earliness in the globe artichoke/cultivated cardoon complex.

    Science.gov (United States)

    Portis, Ezio; Scaglione, Davide; Acquadro, Alberto; Mauromicale, Giovanni; Mauro, Rosario; Knapp, Steven J; Lanteri, Sergio

    2012-05-23

    The Asteraceae species Cynara cardunculus (2n = 2x = 34) includes the two fully cross-compatible domesticated taxa globe artichoke (var. scolymus L.) and cultivated cardoon (var. altilis DC). As both are out-pollinators and suffer from marked inbreeding depression, linkage analysis has focussed on the use of a two way pseudo-test cross approach. A set of 172 microsatellite (SSR) loci derived from expressed sequence tag DNA sequence were integrated into the reference C. cardunculus genetic maps, based on segregation among the F1 progeny of a cross between a globe artichoke and a cultivated cardoon. The resulting maps each detected 17 major linkage groups, corresponding to the species' haploid chromosome number. A consensus map based on 66 co-dominant shared loci (64 SSRs and two SNPs) assembled 694 loci, with a mean inter-marker spacing of 2.5 cM. When the maps were used to elucidate the pattern of inheritance of head production earliness, a key commercial trait, seven regions were shown to harbour relevant quantitative trait loci (QTL). Together, these QTL accounted for up to 74% of the overall phenotypic variance. The newly developed consensus as well as the parental genetic maps can accelerate the process of tagging and eventually isolating the genes underlying earliness in both the domesticated C. cardunculus forms. The largest single effect mapped to the same linkage group in each parental maps, and explained about one half of the phenotypic variance, thus representing a good candidate for marker assisted selection.

  1. Grammatical and lexical variance in English

    CERN Document Server

    Quirk, Randolph

    2014-01-01

    Written by one of Britain's most distinguished linguists, this book is concerned with the phenomenon of variance in English grammar and vocabulary across regional, social, stylistic and temporal space.

  2. Stochastic perturbations in open chaotic systems: random versus noisy maps.

    Science.gov (United States)

    Bódai, Tamás; Altmann, Eduardo G; Endler, Antonio

    2013-04-01

    We investigate the effects of random perturbations on fully chaotic open systems. Perturbations can be applied to each trajectory independently (white noise) or simultaneously to all trajectories (random map). We compare these two scenarios by generalizing the theory of open chaotic systems and introducing a time-dependent conditionally-map-invariant measure. For the same perturbation strength we show that the escape rate of the random map is always larger than that of the noisy map. In random maps we show that the escape rate κ and dimensions D of the relevant fractal sets often depend nonmonotonically on the intensity of the random perturbation. We discuss the accuracy (bias) and precision (variance) of finite-size estimators of κ and D, and show that the improvement of the precision of the estimations with the number of trajectories N is extremely slow (∝ 1/ln N). We also argue that the finite-size D estimators are typically biased. General theoretical results are combined with analytical calculations and numerical simulations in area-preserving baker maps.

  3. Variance decomposition in stochastic simulators.

    Science.gov (United States)

    Le Maître, O P; Knio, O M; Moraes, A

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
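    A minimal sketch of the channel-wise decomposition idea: drive a birth-death simulator with one independent unit-rate Poisson stream per reaction channel (Anderson-style next-reaction stepping) and estimate the first-order, variance-based sensitivity of the birth channel with a pick-freeze estimator that reuses the birth stream while redrawing the death stream. Model parameters, seeds and the pick-freeze estimator are illustrative assumptions, not the authors' implementation.

    ```python
    import numpy as np

    def birth_death(t_end, x0, birth, death, rngs):
        """Birth-death process via a modified next-reaction scheme, driven by one
        independent unit-rate Poisson stream (rng) per reaction channel."""
        x, t = x0, 0.0
        Tk = np.zeros(2)                                    # internal channel clocks
        Pk = np.array([rng.exponential() for rng in rngs])  # next internal firing times
        nu = np.array([+1, -1])                             # state change per channel
        while True:
            a = np.array([birth, death * x])                # channel propensities
            dt = np.where(a > 0, (Pk - Tk) / np.where(a > 0, a, 1.0), np.inf)
            mu = int(np.argmin(dt))
            if not np.isfinite(dt[mu]) or t + dt[mu] > t_end:
                return x
            t += dt[mu]
            Tk += a * dt[mu]
            x += nu[mu]
            Pk[mu] += rngs[mu].exponential()

    def sobol_birth_channel(n_pairs=2000, seed=0):
        """Pick-freeze estimate of the first-order Sobol index of the birth channel:
        S_birth = Cov(Y, Y') / Var(Y), where Y and Y' share the birth stream but
        use independent death streams."""
        ss = np.random.SeedSequence(seed)
        y, y_frozen = np.empty(n_pairs), np.empty(n_pairs)
        args = dict(t_end=8.0, x0=20, birth=3.0, death=0.1)
        for i in range(n_pairs):
            s_b, s_d1, s_d2 = ss.spawn(3)
            y[i] = birth_death(rngs=[np.random.default_rng(s_b),
                                     np.random.default_rng(s_d1)], **args)
            y_frozen[i] = birth_death(rngs=[np.random.default_rng(s_b),
                                            np.random.default_rng(s_d2)], **args)
        return np.cov(y, y_frozen)[0, 1] / np.var(y, ddof=1)

    print("estimated Sobol index of the birth channel:", round(sobol_birth_channel(), 3))
    ```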

  4. Variance decomposition in stochastic simulators

    Science.gov (United States)

    Le Maître, O. P.; Knio, O. M.; Moraes, A.

    2015-06-01

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  5. Variance decomposition in stochastic simulators

    Energy Technology Data Exchange (ETDEWEB)

    Le Maître, O. P., E-mail: olm@limsi.fr [LIMSI-CNRS, UPR 3251, Orsay (France); Knio, O. M., E-mail: knio@duke.edu [Department of Mechanical Engineering and Materials Science, Duke University, Durham, North Carolina 27708 (United States); Moraes, A., E-mail: alvaro.moraesgutierrez@kaust.edu.sa [King Abdullah University of Science and Technology, Thuwal (Saudi Arabia)

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  6. Variance-based Salt Body Reconstruction

    KAUST Repository

    Ovcharenko, Oleg

    2017-05-26

    Seismic inversions of salt bodies are challenging when updating velocity models based on Born approximation-inspired gradient methods. We propose a variance-based method for velocity model reconstruction in regions complicated by massive salt bodies. The novel idea lies in retrieving useful information from simultaneous updates corresponding to different single frequencies. Instead of the commonly used averaging of single-iteration monofrequency gradients, our algorithm iteratively reconstructs salt bodies in an outer loop based on updates from a set of multiple frequencies after a few iterations of full-waveform inversion. The variance among these updates is used to identify areas where considerable cycle-skipping occurs. In such areas, we update velocities by interpolating maximum velocities within a certain region. The result of several recursive interpolations is later used as a new starting model to improve results of conventional full-waveform inversion. An application on part of the BP 2004 model highlights the evolution of the proposed approach and demonstrates its effectiveness.
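    A rough sketch of the variance-among-updates idea: stack several (hypothetical) single-frequency velocity updates, compute the cell-wise variance across the stack, and flag cells above a quantile threshold as likely cycle-skipped before overwriting them with a high velocity. Array sizes, the threshold and the fill rule are placeholders, not the BP 2004 workflow.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    nz, nx, nfreq = 120, 200, 6
    # Hypothetical stack of single-frequency FWI updates (one 2-D update per frequency).
    updates = rng.normal(size=(nfreq, nz, nx))
    # Make the single-frequency updates disagree strongly inside a "salt" region.
    updates[:, 40:80, 60:140] += rng.normal(scale=4.0, size=(nfreq, 1, 1))

    update_variance = updates.var(axis=0)                        # cell-wise variance across frequencies
    mask = update_variance > np.quantile(update_variance, 0.90)  # flag likely cycle-skipped cells

    velocity = np.full((nz, nx), 2500.0)   # placeholder starting model (m/s)
    # Inside the flagged region, overwrite with a high velocity, mimicking the
    # "interpolate maximum velocities" step in spirit only.
    velocity[mask] = velocity.max()
    print("flagged cells:", int(mask.sum()))
    ```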

  7. Variance decomposition in stochastic simulators

    KAUST Repository

    Le Maître, O. P.; Knio, O. M.; Moraes, Alvaro

    2015-01-01

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  8. Apparent diffusion coefficient maps obtained from high b value diffusion-weighted imaging in the preoperative evaluation of gliomas at 3T: comparison with standard b value diffusion-weighted imaging

    Energy Technology Data Exchange (ETDEWEB)

    Zeng, Qiang; Ling, Chenhan; Zhang, Jianmin [Second Affiliated Hospital of Zhejiang University School of Medicine, Department of Neurosurgery, Hangzhou, Zhejiang (China); Dong, Fei; Jiang, Biao [Second Affiliated Hospital of Zhejiang University School of Medicine, Department of Radiology, Hangzhou, Zhejiang (China); Shi, Feina [Second Affiliated Hospital of Zhejiang University School of Medicine, Department of Neurology, Hangzhou, Zhejiang (China)

    2017-12-15

    To assess whether ADC maps obtained from high b value DWI were more valuable in preoperatively evaluating the grade, Ki-67 index and outcome of gliomas. Sixty-three patients with gliomas, who underwent preoperative multi b value DWI at 3 T, were enrolled. The ADC1000, ADC2000 and ADC3000 maps were generated. Receiver operating characteristic analyses were conducted to determine the area under the curve (AUC) in differentiating high-grade gliomas (HGG) from low-grade gliomas (LGG). Pearson correlation coefficients (R value) were calculated to investigate the correlation between the parameters and the Ki-67 proliferation index. Survival analysis was conducted by using Cox regression. The AUC of the mean ADC1000 value (0.820) was lower than that of the mean ADC2000 value (0.847) and mean ADC3000 value (0.875) in differentiating HGG from LGG. The R value of the mean ADC1000 value (-0.499) was less negative than that of the mean ADC2000 value (-0.530) and mean ADC3000 value (-0.567). The mean ADC3000 value was an independent prognosis factor for gliomas (p = 0.008), while the mean ADC1000 and ADC2000 values were not. ADC maps obtained from high b value DWI might be a better imaging biomarker in the preoperative evaluation of gliomas. (orig.)
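    For orientation, an ADC map is obtained voxel-wise from the monoexponential relation S_b = S_0 exp(-b·ADC), i.e. ADC = ln(S_0/S_b)/b. The sketch below applies this generic two-point formula to synthetic images for b = 1000, 2000 and 3000 s/mm²; it is not the vendor fitting routine used in the study.

    ```python
    import numpy as np

    def adc_map(s0, sb, b):
        """Voxel-wise ADC (mm^2/s) from a b=0 image s0 and a diffusion-weighted image sb."""
        s0 = np.clip(s0, 1e-6, None)
        sb = np.clip(sb, 1e-6, None)
        return np.log(s0 / sb) / b

    rng = np.random.default_rng(0)
    s0 = rng.uniform(400, 800, size=(64, 64))   # synthetic b=0 image
    true_adc = 0.8e-3                           # tumour-like ADC in mm^2/s (illustrative)
    for b in (1000, 2000, 3000):
        sb = s0 * np.exp(-b * true_adc) + rng.normal(0, 2, size=s0.shape)
        print(f"b={b}: mean ADC = {adc_map(s0, sb, b).mean():.2e} mm^2/s")
    ```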

  9. A Mathematical Model for Storage and Recall of Images using Targeted Synchronization of Coupled Maps.

    Science.gov (United States)

    Palaniyandi, P; Rangarajan, Govindan

    2017-08-21

    We propose a mathematical model for storage and recall of images using coupled maps. We start by theoretically investigating targeted synchronization in coupled map systems wherein only a desired (partial) subset of the maps is made to synchronize. A simple method is introduced to specify coupling coefficients such that targeted synchronization is ensured. The principle of this method is extended to storage/recall of images using coupled Rulkov maps. The process of adjusting coupling coefficients between Rulkov maps (often used to model neurons) for the purpose of storing a desired image mimics the process of adjusting synaptic strengths between neurons to store memories. Our method uses both synchronisation and synaptic weight modification, as the human brain is thought to do. The stored image can be recalled by providing an initial random pattern to the dynamical system. The storage and recall of the standard image of Lena is explicitly demonstrated.
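    A deliberately simplified illustration of targeted synchronization, using logistic maps instead of Rulkov maps: only a chosen subset of maps is coupled (here to a designated drive map) with a coupling strength strong enough to contract differences, so that subset synchronizes while the uncoupled maps stay incoherent. The coupling rule and parameters are assumptions for illustration, not the image storage/recall scheme of the paper.

    ```python
    import numpy as np

    r, eps = 3.9, 0.8                   # logistic parameter; coupling strength, needs (1-eps)*r < 1
    f = lambda x: r * x * (1.0 - x)

    rng = np.random.default_rng(0)
    n, steps = 10, 200
    x = rng.uniform(0.1, 0.9, size=n)
    target = np.array([0, 1, 2, 3])     # subset we want to synchronize; map 0 acts as the drive

    for _ in range(steps):
        drive = f(x[0])
        fx = f(x)
        x_new = fx.copy()
        x_new[target] = (1.0 - eps) * fx[target] + eps * drive   # coupling only inside the subset
        x = x_new

    print("spread inside the target subset:", np.ptp(x[target]))  # ~1e-16: synchronized
    print("spread among the uncoupled maps:", np.ptp(x[4:]))      # order 1: not synchronized
    ```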

  10. Minimum variance Monte Carlo importance sampling with parametric dependence

    International Nuclear Information System (INIS)

    Ragheb, M.M.H.; Halton, J.; Maynard, C.W.

    1981-01-01

    An approach for Monte Carlo Importance Sampling with parametric dependence is proposed. It depends upon obtaining, by proper weighting over a single stage, the overall functional dependence of the variance on the importance function parameter over a broad range of its values. Results corresponding to minimum variance are adapted and other results rejected. Numerical calculations for the estimation of integrals are compared to crude Monte Carlo. Results explain the occurrence of effective biases (even though the theoretical bias is zero) and infinite variances which arise in calculations involving severe biasing and a moderate number of histories. Extension to particle transport applications is briefly discussed. The approach constitutes an extension of a theory on the application of Monte Carlo for the calculation of functional dependences, introduced by Frolov and Chentsov, to biasing, or importance sampling, calculations; and is a generalization which avoids nonconvergence to the optimal values in some cases of a multistage method for variance reduction introduced by Spanier. (orig.) [de]
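    The dependence of the variance on the importance-function parameter can be illustrated on a toy integral: estimate ∫₀¹ eˣ dx with the one-parameter density family g_θ(x) = θ e^{θx}/(e^θ − 1), scan θ, and compare per-sample variances with crude Monte Carlo. The integrand and the family are illustrative choices, not those of the paper; note that θ = 1 attains (near-)zero variance while severe biasing (large θ) inflates it.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    f = np.exp                              # integrand on [0, 1]; exact integral is e - 1
    n = 50_000

    def is_estimate(theta):
        """Importance sampling of the integral with density g_theta(x) = theta*e^(theta*x)/(e^theta - 1)."""
        u = rng.random(n)
        x = np.log1p(u * np.expm1(theta)) / theta                  # inverse-CDF sampling from g_theta
        w = f(x) * np.expm1(theta) / (theta * np.exp(theta * x))   # weights f(x) / g_theta(x)
        return w.mean(), w.var(ddof=1)

    crude = f(rng.random(n))
    print(f"crude MC       : mean={crude.mean():.4f}, var per sample={crude.var(ddof=1):.4f}")
    for theta in (0.25, 0.5, 1.0, 2.0, 4.0):
        m, v = is_estimate(theta)
        print(f"IS, theta={theta:4.2f}: mean={m:.4f}, var per sample={v:.2e}")
    ```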

  11. Host nutrition alters the variance in parasite transmission potential.

    Science.gov (United States)

    Vale, Pedro F; Choisy, Marc; Little, Tom J

    2013-04-23

    The environmental conditions experienced by hosts are known to affect their mean parasite transmission potential. How different conditions may affect the variance of transmission potential has received less attention, but is an important question for disease management, especially if specific ecological contexts are more likely to foster a few extremely infectious hosts. Using the obligate-killing bacterium Pasteuria ramosa and its crustacean host Daphnia magna, we analysed how host nutrition affected the variance of individual parasite loads, and, therefore, transmission potential. Under low food, individual parasite loads showed similar mean and variance, following a Poisson distribution. By contrast, among well-nourished hosts, parasite loads were right-skewed and overdispersed, following a negative binomial distribution. Abundant food may, therefore, yield individuals causing potentially more transmission than the population average. Measuring both the mean and variance of individual parasite loads in controlled experimental infections may offer a useful way of revealing risk factors for potential highly infectious hosts.
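    A quick numerical illustration of the Poisson-versus-negative-binomial contrast described above: sample loads with the same mean under both distributions and compare variance-to-mean ratios. The parameter values are arbitrary stand-ins for the "low food" and "high food" conditions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    mu, k = 8.0, 2.0                       # common mean; negative-binomial dispersion parameter
    low_food = rng.poisson(lam=mu, size=10_000)                             # variance ≈ mean
    high_food = rng.negative_binomial(n=k, p=k / (k + mu), size=10_000)     # variance = mu + mu^2/k = 40

    for name, x in [("low food ", low_food), ("high food", high_food)]:
        print(f"{name}: mean={x.mean():.2f}, var={x.var():.2f}, variance-to-mean ratio={x.var()/x.mean():.2f}")
    ```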

  12. Exploring variance in residential electricity consumption: Household features and building properties

    International Nuclear Information System (INIS)

    Bartusch, Cajsa; Odlare, Monica; Wallin, Fredrik; Wester, Lars

    2012-01-01

    Highlights: ► Statistical analyses of variance are of considerable value in identifying key indicators for policy update. ► Variance in residential electricity use is partly explained by household features. ► Variance in residential electricity use is partly explained by building properties. ► Household behavior has a profound impact on individual electricity use. -- Abstract: Improved means of controlling electricity consumption plays an important part in boosting energy efficiency in the Swedish power market. Developing policy instruments to that end requires more in-depth statistics on electricity use in the residential sector, among other things. The aim of the study has accordingly been to assess the extent of variance in annual electricity consumption in single-family homes as well as to estimate the impact of household features and building properties in this respect using independent samples t-tests and one-way as well as univariate independent samples analyses of variance. Statistically significant variances associated with geographic area, heating system, number of family members, family composition, year of construction, electric water heater and electric underfloor heating have been established. The overall result of the analyses is nevertheless that variance in residential electricity consumption cannot be fully explained by independent variables related to household and building characteristics alone. As for the methodological approach, the results further suggest that methods for statistical analysis of variance are of considerable value in identifying key indicators for policy update and development.

  13. Discussion on variance reduction technique for shielding

    Energy Technology Data Exchange (ETDEWEB)

    Maekawa, Fujio [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1998-03-01

    As a task within the engineering design activity of the International Thermonuclear Experimental Reactor (ITER), a shielding experiment on 316-type stainless steel (SS316) and on the compound system of SS316 and water was carried out using the D-T neutron source of FNS at the Japan Atomic Energy Research Institute. In these analyses, however, enormous working time and computing time were required for determining the Weight Window parameters, and limitations and complications were encountered when variance reduction with the Weight Window method of the MCNP code was carried out. To avoid this difficulty, the effectiveness of variance reduction by the cell importance method was investigated. The conditions of calculation in all cases are shown. As the results, the distribution of the fractional standard deviation (FSD) of the neutron and gamma-ray fluxes along the shield depth is reported. There is an optimal importance change: when the importance is increased at the same rate as the attenuation of the neutron or gamma-ray flux, optimal variance reduction is achieved. (K.I.)

  14. Depressive status explains a significant amount of the variance in COPD assessment test (CAT) scores.

    Science.gov (United States)

    Miravitlles, Marc; Molina, Jesús; Quintano, José Antonio; Campuzano, Anna; Pérez, Joselín; Roncero, Carlos

    2018-01-01

    COPD assessment test (CAT) is a short, easy-to-complete health status tool that has been incorporated into the multidimensional assessment of COPD in order to guide therapy; therefore, it is important to understand the factors determining CAT scores. This is a post hoc analysis of a cross-sectional, observational study conducted in respiratory medicine departments and primary care centers in Spain with the aim of identifying the factors determining CAT scores, focusing particularly on the cognitive status measured by the Mini-Mental State Examination (MMSE) and levels of depression measured by the short Beck Depression Inventory (BDI). A total of 684 COPD patients were analyzed; 84.1% were men, the mean age of patients was 68.7 years, and the mean forced expiratory volume in 1 second (%) was 55.1%. Mean CAT score was 21.8. CAT scores correlated with the MMSE score (Pearson's coefficient r =-0.371) and the BDI ( r =0.620), both p CAT scores and explained 45% of the variability. However, a model including only MMSE and BDI scores explained up to 40% and BDI alone explained 38% of the CAT variance. CAT scores are associated with clinical variables of severity of COPD. However, cognitive status and, in particular, the level of depression explain a larger percentage of the variance in the CAT scores than the usual COPD clinical severity variables.

  15. Capturing option anomalies with a variance-dependent pricing kernel

    NARCIS (Netherlands)

    Christoffersen, P.; Heston, S.; Jacobs, K.

    2013-01-01

    We develop a GARCH option model with a variance premium by combining the Heston-Nandi (2000) dynamic with a new pricing kernel that nests Rubinstein (1976) and Brennan (1979). While the pricing kernel is monotonic in the stock return and in variance, its projection onto the stock return is

  16. 29 CFR 1904.38 - Variances from the recordkeeping rule.

    Science.gov (United States)

    2010-07-01

    ..., DEPARTMENT OF LABOR RECORDING AND REPORTING OCCUPATIONAL INJURIES AND ILLNESSES Other OSHA Injury and Illness... he or she finds appropriate. (iv) If the Assistant Secretary grants your variance petition, OSHA will... Secretary is reviewing your variance petition. (4) If I have already been cited by OSHA for not following...

  17. Discharge Coefficient of Rectangular Short-Crested Weir with Varying Slope Coefficients

    Directory of Open Access Journals (Sweden)

    Yuejun Chen

    2018-02-01

    Full Text Available Rectangular short-crested weirs are widely used for their simple structure and high discharge capacity. As one of the most important and influential factors of discharge capacity, the side slope can improve the hydraulic characteristics of weirs under special conditions. In order to systematically study the effects of the upstream and downstream slope coefficients S1 and S2 on the overflow discharge coefficient of a rectangular short-crested weir, the Volume of Fluid (VOF) method and the Renormalization Group (RNG) κ-ε turbulence model are used. In this study, the slope coefficient ranges from vertical to 3H:1V and each model corresponds to five total energy heads H0 ranging from 8.0 to 24.0 cm. Comparisons of discharge coefficients and free surface profiles between simulated and laboratory results display a good agreement. The simulated results show that the difference of discharge coefficients will decrease with upstream slopes and increase with downstream slopes as H0 increases. For a given H0, the discharge coefficient has a convex parabolic relation with S1 and a piecewise linear relation with S2. The maximum discharge coefficient is always obtained at S2 = 0.8. There exists a difference between upstream and downstream slope coefficients in the influence range of free surface curvatures. Furthermore, a discharge coefficient equation obtained by nonlinear regression is proposed as a function of the upstream and downstream slope coefficients.

  18. Analysis of ulnar variance as a risk factor for developing scaphoid nonunion.

    Science.gov (United States)

    Lirola-Palmero, S; Salvà-Coll, G; Terrades-Cladera, F J

    2015-01-01

    Ulnar variance may be a risk factor for developing scaphoid nonunion. A review was made of the posteroanterior wrist radiographs of 95 patients diagnosed with scaphoid fracture. All fractures with displacement of less than 1 mm treated conservatively were included, and ulnar variance was measured on standard posteroanterior wrist radiographs in all 95 patients. Eighteen patients (19%) developed scaphoid nonunion, with a mean ulnar variance of -1.34 (±0.85) mm (CI -2.25 to 0.41). Seventy-seven patients (81%) healed correctly, with a mean ulnar variance of -0.04 (±1.85) mm (CI -0.46 to 0.38). A significant difference was observed in the distribution of ulnar variance between patients with ulnar variance less than -1 mm and those with ulnar variance greater than -1 mm. Patients with ulnar variance less than -1 mm had a greater risk of developing scaphoid nonunion, OR 4.58 (CI 1.51 to 13.89), p<.007. Copyright © 2014 SECOT. Published by Elsevier Espana. All rights reserved.

  19. Decomposition of variance in terms of conditional means

    Directory of Open Access Journals (Sweden)

    Alessandro Figà Talamanca

    2013-05-01

    Full Text Available Two different sets of data are used to test an apparently new approach to the analysis of the variance of a numerical variable which depends on qualitative variables. We suggest that this approach be used to complement other existing techniques to study the interdependence of the variables involved. According to our method, the variance is expressed as a sum of orthogonal components, obtained as differences of conditional means, with respect to the qualitative characters. The resulting expression for the variance depends on the ordering in which the characters are considered. We suggest an algorithm which leads to an ordering which is deemed natural. The first set of data concerns the score achieved by a population of students on an entrance examination based on a multiple choice test with 30 questions. In this case the qualitative characters are dyadic and correspond to correct or incorrect answers to each question. The second set of data concerns the delay in obtaining the degree for a population of graduates of Italian universities. The variance in this case is analyzed with respect to a set of seven specific qualitative characters of the population studied (gender, previous education, working condition, parents' educational level, field of study, etc.).
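    A small sketch of this sequential decomposition on synthetic data with two qualitative characters: successive conditional means are computed for a chosen ordering, the variance components are the mean squared differences between successive conditional means (plus a within-cell residual), they sum exactly to the total variance, and they change with the ordering. The data-generating model is invented for illustration.

    ```python
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)
    n = 20_000
    df = pd.DataFrame({
        "gender": rng.choice(["F", "M"], n),
        "field":  rng.choice(["sci", "hum", "eng"], n),
    })
    # Synthetic "delay to degree" depending on both characters plus noise (illustrative only).
    base = {"sci": 1.0, "hum": 2.0, "eng": 1.5}
    df["delay"] = df["field"].map(base) + (df["gender"] == "M") * 0.4 + rng.normal(0, 1.0, n)

    def components(df, order, y="delay"):
        """Orthogonal variance components as squared differences of successive conditional means."""
        m_prev = np.full(len(df), df[y].mean())
        comps = []
        for k in range(1, len(order) + 1):
            m_k = df.groupby(order[:k])[y].transform("mean").to_numpy()
            comps.append(np.mean((m_k - m_prev) ** 2))
            m_prev = m_k
        comps.append(np.mean((df[y].to_numpy() - m_prev) ** 2))  # residual (within-cell) component
        return comps

    for order in (["gender", "field"], ["field", "gender"]):
        c = components(df, order)
        print(order, [round(v, 4) for v in c],
              "sum =", round(sum(c), 4), "total var =", round(df["delay"].var(ddof=0), 4))
    ```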

  20. 42 CFR 456.522 - Content of request for variance.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Content of request for variance. 456.522 Section 456.522 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN... perform UR within the time requirements for which the variance is requested and its good faith efforts to...

  1. Assessment of ulnar variance: a radiological investigation in a Dutch population

    Energy Technology Data Exchange (ETDEWEB)

    Schuurman, A.H. [Dept. of Plastic, Reconstructive and Hand Surgery, University Medical Centre, Utrecht (Netherlands); Dept. of Plastic Surgery, University Medical Centre, Utrecht (Netherlands); Maas, M.; Dijkstra, P.F. [Dept. of Radiology, Univ. of Amsterdam (Netherlands); Kauer, J.M.G. [Dept. of Anatomy and Embryology, Univ. of Nijmegen (Netherlands)

    2001-11-01

    Objective: A radiological study was performed to evaluate ulnar variance in 68 Dutch patients using an electronic digitizer compared with Palmer's concentric circle method. Using the digitizer method only, the effect of different wrist positions and grip on ulnar variance was then investigated. Finally the distribution of ulnar variance in the selected patients was investigated also using the digitizer method. Design and patients: All radiographs were performed with the wrist in a standard zero-rotation position (posteroanterior) and in supination (anteroposterior). Palmer's concentric circle method and an electronic digitizer connected to a personal computer were used to measure ulnar variance. The digitizer consists of a Plexiglas plate with an electronically activated grid beneath it. A radiograph is placed on the plate and a cursor activates a point on the grid. Three plots are marked on the radius and one plot on the most distal part of the ulnar head. The digitizer then determines the difference between a radius passing through the radius plots and the ulnar plot. Results and conclusions: Using the concentric circle method we found an ulna plus predominance, but an ulna minus predominance when using the digitizer method. Overall the ulnar variance distribution for Palmer's method was 41.9% ulna plus, 25.7% neutral and 32.4% ulna minus variance, and for the digitizer method was 40.4% ulna plus, 1.5% neutral and 58.1% ulna minus. The percentage ulnar variance greater than 1 mm on standard radiographs increased from 23% to 58% using the digitizer, with maximum grip, clearly demonstrating the (dynamic) effect of grip on ulnar variance. This almost threefold increase was found to be a significant difference. Significant differences were found between ulnar variance when different wrist positions were compared. (orig.)

  2. Variance and covariance calculations for nuclear materials accounting using ''MAVARIC''

    International Nuclear Information System (INIS)

    Nasseri, K.K.

    1987-07-01

    Determination of the detection sensitivity of a materials accounting system to the loss of special nuclear material (SNM) requires (1) obtaining a relation for the variance of the materials balance by propagation of the instrument errors for the measured quantities that appear in the materials balance equation and (2) substituting measured values and their error standard deviations into this relation and calculating the variance of the materials balance. MAVARIC (Materials Accounting VARIance Calculations) is a custom spreadsheet, designed using the second release of Lotus 1-2-3, that significantly reduces the effort required to make the necessary variance (and covariance) calculations needed to determine the detection sensitivity of a materials accounting system. Predefined macros within the spreadsheet allow the user to carry out long, tedious procedures with only a few keystrokes. MAVARIC requires that the user enter the following data into one of four data tables, depending on the type of the term in the materials balance equation; the SNM concentration, the bulk mass (or solution volume), the measurement error standard deviations, and the number of measurements made during an accounting period. The user can also specify if there are correlations between transfer terms. Based on these data entries, MAVARIC can calculate the variance of the materials balance and the square root of this variance, from which the detection sensitivity of the accounting system can be determined
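    The kind of propagation MAVARIC automates can be sketched in a few lines: for a materials balance made of transfer and inventory terms, each an SNM concentration times a bulk mass measured several times, first-order error propagation turns the per-measurement error standard deviations into a balance variance. The numbers below are invented, and correlations between transfer terms (which MAVARIC supports) are ignored in this sketch.

    ```python
    import numpy as np

    def term_variance(conc, mass, sd_conc, sd_mass, n_meas):
        """First-order variance of a transfer/inventory term that is the sum of n_meas
        independent (concentration x bulk mass) measurements."""
        return n_meas * ((mass * sd_conc) ** 2 + (conc * sd_mass) ** 2)

    # term: (concentration kg SNM/kg bulk, bulk mass kg, sd_conc, sd_mass, number of measurements)
    terms = {
        "receipts":            (0.0400, 250.0, 0.0004, 1.0, 20),
        "shipments":           (0.0400, 250.0, 0.0004, 1.0, 18),
        "beginning inventory": (0.0395, 300.0, 0.0005, 1.5, 10),
        "ending inventory":    (0.0395, 300.0, 0.0005, 1.5, 10),
    }

    var_mb = sum(term_variance(*t) for t in terms.values())
    print(f"materials-balance variance = {var_mb:.3f} kg^2, sigma_MB = {np.sqrt(var_mb):.3f} kg")
    ```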

  3. A versatile omnibus test for detecting mean and variance heterogeneity.

    Science.gov (United States)

    Cao, Ying; Wei, Peng; Bailey, Matthew; Kauwe, John S K; Maxwell, Taylor J

    2014-01-01

    Recent research has revealed loci that display variance heterogeneity through various means such as biological disruption, linkage disequilibrium (LD), gene-by-gene (G × G), or gene-by-environment interaction. We propose a versatile likelihood ratio test that allows joint testing for mean and variance heterogeneity (LRT(MV)) or either effect alone (LRT(M) or LRT(V)) in the presence of covariates. Using extensive simulations for our method and others, we found that all parametric tests were sensitive to nonnormality regardless of any trait transformations. Coupling our test with the parametric bootstrap solves this issue. Using simulations and empirical data from a known mean-only functional variant, we demonstrate how LD can produce variance-heterogeneity loci (vQTL) in a predictable fashion based on differential allele frequencies, high D', and relatively low r² values. We propose that a joint test for mean and variance heterogeneity is more powerful than a variance-only test for detecting vQTL. This takes advantage of loci that also have mean effects without sacrificing much power to detect variance only effects. We discuss using vQTL as an approach to detect G × G interactions and also how vQTL are related to relationship loci, and how both can create prior hypothesis for each other and reveal the relationships between traits and possibly between components of a composite trait.
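    A bare-bones version of a joint mean-and-variance likelihood ratio test under normality: the full model fits a separate mean and variance per genotype group, the null fits a single mean and variance, and twice the log-likelihood difference is referred to a chi-square with 2(g-1) degrees of freedom. Covariates and the parametric bootstrap recommended above for non-normal traits are omitted; the data are simulated.

    ```python
    import numpy as np
    from scipy import stats

    def normal_loglik(y):
        """Maximized normal log-likelihood of a sample (MLE mean and variance)."""
        mu, var = y.mean(), y.var()          # MLE variance uses ddof=0
        return stats.norm.logpdf(y, loc=mu, scale=np.sqrt(var)).sum()

    def lrt_mean_variance(y, genotype):
        """Joint LRT for mean and variance heterogeneity across genotype groups."""
        groups = [y[genotype == g] for g in np.unique(genotype)]
        ll_full = sum(normal_loglik(g) for g in groups)
        ll_null = normal_loglik(y)
        stat = 2.0 * (ll_full - ll_null)
        df = 2 * (len(groups) - 1)
        return stat, stats.chi2.sf(stat, df)

    rng = np.random.default_rng(1)
    genotype = rng.choice([0, 1, 2], size=3000, p=[0.49, 0.42, 0.09])
    y = rng.normal(loc=0.1 * genotype, scale=1.0 + 0.3 * genotype)   # both mean and variance shift
    print("LRT_MV statistic and p-value:", lrt_mean_variance(y, genotype))
    ```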

  4. Variance and covariance calculations for nuclear materials accounting using 'MAVARIC'

    International Nuclear Information System (INIS)

    Nasseri, K.K.

    1987-01-01

    Determination of the detection sensitivity of a materials accounting system to the loss of special nuclear material (SNM) requires (1) obtaining a relation for the variance of the materials balance by propagation of the instrument errors for the measured quantities that appear in the materials balance equation and (2) substituting measured values and their error standard deviations into this relation and calculating the variance of the materials balance. MAVARIC (Materials Accounting VARIance Calculations) is a custom spreadsheet, designed using the second release of Lotus 1-2-3, that significantly reduces the effort required to make the necessary variance (and covariance) calculations needed to determine the detection sensitivity of a materials accounting system. Predefined macros within the spreadsheet allow the user to carry out long, tedious procedures with only a few keystrokes. MAVARIC requires that the user enter the following data into one of four data tables, depending on the type of the term in the materials balance equation; the SNM concentration, the bulk mass (or solution volume), the measurement error standard deviations, and the number of measurements made during an accounting period. The user can also specify if there are correlations between transfer terms. Based on these data entries, MAVARIC can calculate the variance of the materials balance and the square root of this variance, from which the detection sensitivity of the accounting system can be determined

  5. Global Variance Risk Premium and Forex Return Predictability

    OpenAIRE

    Aloosh, Arash

    2014-01-01

    In a long-run risk model with stochastic volatility and frictionless markets, I express expected forex returns as a function of consumption growth variances and stock variance risk premiums (VRPs)—the difference between the risk-neutral and statistical expectations of market return variation. This provides a motivation for using the forward-looking information available in stock market volatility indices to predict forex returns. Empirically, I find that stock VRPs predict forex returns at a ...

  6. Bayesian estimation of the discrete coefficient of determination.

    Science.gov (United States)

    Chen, Ting; Braga-Neto, Ulisses M

    2016-12-01

    The discrete coefficient of determination (CoD) measures the nonlinear interaction between discrete predictor and target variables and has had far-reaching applications in Genomic Signal Processing. Previous work has addressed the inference of the discrete CoD using classical parametric and nonparametric approaches. In this paper, we introduce a Bayesian framework for the inference of the discrete CoD. We derive analytically the optimal minimum mean-square error (MMSE) CoD estimator, as well as a CoD estimator based on the Optimal Bayesian Predictor (OBP). For the latter estimator, exact expressions for its bias, variance, and root-mean-square (RMS) are given. The accuracy of both Bayesian CoD estimators with non-informative and informative priors, under fixed or random parameters, is studied via analytical and numerical approaches. We also demonstrate the application of the proposed Bayesian approach in the inference of gene regulatory networks, using gene-expression data from a previously published study on metastatic melanoma.
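    For orientation, the (non-Bayesian) discrete CoD is (ε0 − ε)/ε0, where ε0 is the error of the best constant predictor of the target and ε is the Bayes error of the optimal predictor given the discrete predictors. The sketch computes it from a known joint pmf; the pmf is an arbitrary example, and the paper's MMSE and OBP Bayesian estimators are not reproduced here.

    ```python
    import numpy as np

    # Joint pmf p[x1, x2, y] of two binary predictors and a binary target (illustrative numbers).
    p = np.array([[[0.10, 0.02], [0.05, 0.08]],
                  [[0.03, 0.22], [0.07, 0.43]]])
    assert abs(p.sum() - 1.0) < 1e-12

    p_y = p.sum(axis=(0, 1))                             # marginal of the target Y
    eps0 = 1.0 - p_y.max()                               # error of the best constant predictor
    p_xy = p.reshape(-1, 2)                              # rows: predictor configurations, columns: y
    eps = (p_xy.sum(axis=1) - p_xy.max(axis=1)).sum()    # Bayes error of the optimal predictor given X
    cod = (eps0 - eps) / eps0
    print(f"eps0={eps0:.3f}, eps={eps:.3f}, CoD={cod:.3f}")
    ```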

  7. Variance components for body weight in Japanese quails (Coturnix japonica)

    Directory of Open Access Journals (Sweden)

    RO Resende

    2005-03-01

    Full Text Available The objective of this study was to estimate the variance components for body weight in Japanese quails by Bayesian procedures. The body weight at hatch (BWH) and at 7 (BW07), 14 (BW14), 21 (BW21) and 28 days of age (BW28) of 3,520 quails was recorded from August 2001 to June 2002. A multiple-trait animal model with additive genetic, maternal environment and residual effects was implemented by Gibbs sampling methodology. A single Gibbs sampling chain with 80,000 rounds was generated by the program MTGSAM (Multiple Trait Gibbs Sampling in Animal Model). Normal and inverted Wishart distributions were used as prior distributions for the random effects and the variance components, respectively. Variance components were estimated based on the 500 samples that were left after elimination of 30,000 rounds in the burn-in period and 100 rounds of each thinning interval. The posterior means of additive genetic variance components were 0.15; 4.18; 14.62; 27.18 and 32.68; the posterior means of maternal environment variance components were 0.23; 1.29; 2.76; 4.12 and 5.16; and the posterior means of residual variance components were 0.084; 6.43; 22.66; 31.21 and 30.85, at hatch, 7, 14, 21 and 28 days old, respectively. The posterior means of heritability were 0.33; 0.35; 0.36; 0.43 and 0.47 at hatch, 7, 14, 21 and 28 days old, respectively. These results indicate that heritability increased with age. On the other hand, after hatch there was a marked reduction in the maternal environment variance proportion of the phenotypic variance, whose estimates were 0.50; 0.11; 0.07; 0.07 and 0.08 for BWH, BW07, BW14, BW21 and BW28, respectively. The genetic correlation between weights at different ages was high, except for those estimates between BWH and weight at other ages. Changes in body weight of quails can be efficiently achieved by selection.
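    As a sanity check on the quoted components, heritability at each age is the additive genetic variance divided by the total phenotypic variance (additive + maternal environment + residual); recomputing from the posterior means listed above closely reproduces the reported heritabilities (0.33-0.47) and maternal-environment proportions (0.50 down to 0.08).

    ```python
    va = [0.15, 4.18, 14.62, 27.18, 32.68]      # additive genetic variance (hatch, 7, 14, 21, 28 d)
    vm = [0.23, 1.29,  2.76,  4.12,  5.16]      # maternal environment variance
    ve = [0.084, 6.43, 22.66, 31.21, 30.85]     # residual variance

    for age, a, m, e in zip(["hatch", "7 d", "14 d", "21 d", "28 d"], va, vm, ve):
        total = a + m + e
        print(f"{age:>5}: h2 = {a / total:.2f}, maternal share = {m / total:.2f}")
    ```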

  8. The noise analysis and the BWR operation map

    International Nuclear Information System (INIS)

    Blazquez, J.; Ballestrin, J.

    1996-01-01

    An analytical expression for the Decay Ratio is obtained: DR = exp(-bW/√P). The physics behind it is also explained. It applies to a commercial BWR Operation Map, in the vicinity of the power instability. This functional form fits the structure of the Operation Map. The power P and the coolant flow W are measured directly; the Decay Ratio is obtained by neutron noise analysis techniques. The parameter b, which depends on the void reactivity coefficient, is then calculated on line during reactor operation. A new DR value is predicted for each new displacement on the Map, so unexpected instability events are more likely to be avoided. (authors)
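    The quoted functional form can be used directly: one noise-analysis measurement of DR at a known power P and coolant flow W calibrates b, after which DR can be predicted for any displacement on the operation map. The operating-point numbers below are invented for illustration.

    ```python
    import numpy as np

    def decay_ratio(power, flow, b):
        """DR = exp(-b * W / sqrt(P)) on the BWR power/flow operation map."""
        return np.exp(-b * flow / np.sqrt(power))

    # Calibrate b from one noise-analysis measurement (illustrative numbers: % power, % flow).
    P0, W0, DR0 = 55.0, 38.0, 0.72
    b = -np.sqrt(P0) * np.log(DR0) / W0

    # Predict DR after displacements on the operation map.
    for P, W in [(60.0, 40.0), (52.0, 33.0), (48.0, 30.0)]:
        print(f"P={P:.0f}%, W={W:.0f}%: predicted DR = {decay_ratio(P, W, b):.2f}")
    ```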

  9. 29 CFR 1920.2 - Variances.

    Science.gov (United States)

    2010-07-01

    ...) PROCEDURE FOR VARIATIONS FROM SAFETY AND HEALTH REGULATIONS UNDER THE LONGSHOREMEN'S AND HARBOR WORKERS...) or 6(d) of the Williams-Steiger Occupational Safety and Health Act of 1970 (29 U.S.C. 655). The... under the Williams-Steiger Occupational Safety and Health Act of 1970, and any variance from §§ 1910.13...

  10. Zero-intelligence realized variance estimation

    NARCIS (Netherlands)

    Gatheral, J.; Oomen, R.C.A.

    2010-01-01

    Given a time series of intra-day tick-by-tick price data, how can realized variance be estimated? The obvious estimator—the sum of squared returns between trades—is biased by microstructure effects such as bid-ask bounce and so in the past, practitioners were advised to drop most of the data and
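
    A minimal sketch of the naive estimator and the classical sparse-sampling workaround mentioned above; the tick series is synthetic and the bid-ask bounce magnitude is invented purely for illustration:

```python
import numpy as np

def realized_variance(prices, every=1):
    """Sum of squared log-returns; sampling only every k-th tick is the
    classical 'sparse sampling' remedy for microstructure noise."""
    p = np.log(np.asarray(prices, dtype=float)[::every])
    return np.sum(np.diff(p) ** 2)

# Toy tick series: efficient log-price plus bid-ask bounce (illustrative only).
rng = np.random.default_rng(1)
efficient = np.cumsum(rng.normal(0.0, 1e-4, 20_000))
ticks = 100.0 * np.exp(efficient) + rng.choice([-0.005, 0.005], 20_000)
print("all ticks      :", realized_variance(ticks))
print("every 20th tick:", realized_variance(ticks, every=20))
```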

  11. A hierarchical estimator development for estimation of tire-road friction coefficient.

    Directory of Open Access Journals (Sweden)

    Xudong Zhang

    Full Text Available The effect of vehicle active safety systems is subject to the friction force arising from the contact of tires and the road surface. Therefore, an adequate knowledge of the tire-road friction coefficient is of great importance to achieve a good performance of these control systems. This paper presents a tire-road friction coefficient estimation method for an advanced vehicle configuration, four-motorized-wheel electric vehicles, in which the longitudinal tire force is easily obtained. A hierarchical structure is adopted for the proposed estimation design. An upper estimator is developed based on unscented Kalman filter to estimate vehicle state information, while a hybrid estimation method is applied as the lower estimator to identify the tire-road friction coefficient using general regression neural network (GRNN and Bayes' theorem. GRNN aims at detecting road friction coefficient under small excitations, which are the most common situations in daily driving. GRNN is able to accurately create a mapping from input parameters to the friction coefficient, avoiding storing an entire complex tire model. As for large excitations, the estimation algorithm is based on Bayes' theorem and a simplified "magic formula" tire model. The integrated estimation method is established by the combination of the above-mentioned estimators. Finally, the simulations based on a high-fidelity CarSim vehicle model are carried out on different road surfaces and driving maneuvers to verify the effectiveness of the proposed estimation method.
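
    As a rough illustration of the GRNN component, the network reduces to a Gaussian-kernel weighted average of training targets (Nadaraya-Watson form). The sketch below maps a single hypothetical input (slip ratio) to a utilized-friction value; the paper's estimator uses several vehicle-state inputs and is combined with a Bayes/"magic formula" branch, which is not reproduced here:

```python
import numpy as np

def grnn_predict(x_train, y_train, x_query, sigma=0.05):
    """General regression neural network in its Nadaraya-Watson form: the
    prediction is a Gaussian-kernel weighted average of training targets.
    Training pairs and query values below are hypothetical."""
    d2 = (x_query[:, None] - x_train[None, :]) ** 2
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

# Hypothetical small-excitation training data: slip ratio -> utilized friction.
slip = np.array([0.01, 0.02, 0.03, 0.05, 0.08, 0.10])
mu   = np.array([0.10, 0.19, 0.27, 0.40, 0.55, 0.62])
print(grnn_predict(slip, mu, np.array([0.04, 0.07])))
```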

  12. A hierarchical estimator development for estimation of tire-road friction coefficient.

    Science.gov (United States)

    Zhang, Xudong; Göhlich, Dietmar

    2017-01-01

    The effect of vehicle active safety systems is subject to the friction force arising from the contact of tires and the road surface. Therefore, an adequate knowledge of the tire-road friction coefficient is of great importance to achieve a good performance of these control systems. This paper presents a tire-road friction coefficient estimation method for an advanced vehicle configuration, four-motorized-wheel electric vehicles, in which the longitudinal tire force is easily obtained. A hierarchical structure is adopted for the proposed estimation design. An upper estimator is developed based on unscented Kalman filter to estimate vehicle state information, while a hybrid estimation method is applied as the lower estimator to identify the tire-road friction coefficient using general regression neural network (GRNN) and Bayes' theorem. GRNN aims at detecting road friction coefficient under small excitations, which are the most common situations in daily driving. GRNN is able to accurately create a mapping from input parameters to the friction coefficient, avoiding storing an entire complex tire model. As for large excitations, the estimation algorithm is based on Bayes' theorem and a simplified "magic formula" tire model. The integrated estimation method is established by the combination of the above-mentioned estimators. Finally, the simulations based on a high-fidelity CarSim vehicle model are carried out on different road surfaces and driving maneuvers to verify the effectiveness of the proposed estimation method.

  13. COVAR: Computer Program for Multifactor Relative Risks and Tests of Hypotheses Using a Variance-Covariance Matrix from Linear and Log-Linear Regression

    Directory of Open Access Journals (Sweden)

    Leif E. Peterson

    1997-11-01

    Full Text Available A computer program for multifactor relative risks, confidence limits, and tests of hypotheses using regression coefficients and a variance-covariance matrix obtained from a previous additive or multiplicative regression analysis is described in detail. Data used by the program can be stored and input from an external disk-file or entered via the keyboard. The output contains a list of the input data, point estimates of single or joint effects, confidence intervals and tests of hypotheses based on a minimum modified chi-square statistic. Availability of the program is also discussed.

  14. High-resolution global grids of revised Priestley-Taylor and Hargreaves-Samani coefficients for assessing ASCE-standardized reference crop evapotranspiration and solar radiation

    Science.gov (United States)

    Aschonitis, Vassilis G.; Papamichail, Dimitris; Demertzi, Kleoniki; Colombani, Nicolo; Mastrocicco, Micol; Ghirardini, Andrea; Castaldelli, Giuseppe; Fano, Elisa-Anna

    2017-08-01

    The objective of the study is to provide global grids (0.5°) of revised annual coefficients for the Priestley-Taylor (P-T) and Hargreaves-Samani (H-S) evapotranspiration methods after calibration based on the ASCE (American Society of Civil Engineers)-standardized Penman-Monteith method (the ASCE method includes two reference crops: short-clipped grass and tall alfalfa). The analysis also includes the development of a global grid of revised annual coefficients for solar radiation (Rs) estimations using the respective Rs formula of H-S. The analysis was based on global gridded climatic data of the period 1950-2000. The method for deriving annual coefficients of the P-T and H-S methods was based on partial weighted averages (PWAs) of their mean monthly values. This method estimates the annual values considering the amplitude of the parameter under investigation (ETo and Rs) giving more weight to the monthly coefficients of the months with higher ETo values (or Rs values for the case of the H-S radiation formula). The method also eliminates the effect of unreasonably high or low monthly coefficients that may occur during periods where ETo and Rs fall below a specific threshold. The new coefficients were validated based on data from 140 stations located in various climatic zones of the USA and Australia with expanded observations up to 2016. The validation procedure for ETo estimations of the short reference crop showed that the P-T and H-S methods with the new revised coefficients outperformed the standard methods reducing the estimated root mean square error (RMSE) in ETo values by 40 and 25 %, respectively. The estimations of Rs using the H-S formula with revised coefficients reduced the RMSE by 28 % in comparison to the standard H-S formula. Finally, a raster database was built consisting of (a) global maps for the mean monthly ETo values estimated by ASCE-standardized method for both reference crops, (b) global maps for the revised annual coefficients of the P
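
    The partial weighted average idea can be illustrated with a small sketch: monthly coefficients are averaged with weights proportional to monthly ETo (or Rs), and months below a threshold are dropped. The exact weighting and threshold used in the paper may differ; the numbers below are invented:

```python
import numpy as np

def annual_coefficient(monthly_coeff, monthly_eto, threshold=0.0):
    """Partial weighted average (PWA) of monthly coefficients: months with
    higher reference evapotranspiration get more weight, and months where
    ETo falls below a threshold are excluded."""
    c = np.asarray(monthly_coeff, dtype=float)
    w = np.asarray(monthly_eto, dtype=float)
    keep = w > threshold
    return np.sum(c[keep] * w[keep]) / np.sum(w[keep])

# Invented monthly ETo (mm/month) and monthly Priestley-Taylor-type coefficients.
eto   = [20, 25, 45, 70, 110, 150, 170, 160, 110, 60, 30, 20]
alpha = [1.9, 1.7, 1.4, 1.3, 1.25, 1.2, 1.2, 1.2, 1.3, 1.4, 1.6, 1.9]
print(round(annual_coefficient(alpha, eto, threshold=40), 3))
```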

  15. A Variance Distribution Model of Surface EMG Signals Based on Inverse Gamma Distribution.

    Science.gov (United States)

    Hayashi, Hideaki; Furui, Akira; Kurita, Yuichi; Tsuji, Toshio

    2017-11-01

    Objective: This paper describes the formulation of a surface electromyogram (EMG) model capable of representing the variance distribution of EMG signals. Methods: In the model, EMG signals are handled based on a Gaussian white noise process with a mean of zero for each variance value. EMG signal variance is taken as a random variable that follows inverse gamma distribution, allowing the representation of noise superimposed onto this variance. Variance distribution estimation based on marginal likelihood maximization is also outlined in this paper. The procedure can be approximated using rectified and smoothed EMG signals, thereby allowing the determination of distribution parameters in real time at low computational cost. Results: A simulation experiment was performed to evaluate the accuracy of distribution estimation using artificially generated EMG signals, with results demonstrating that the proposed model's accuracy is higher than that of maximum-likelihood-based estimation. Analysis of variance distribution using real EMG data also suggested a relationship between variance distribution and signal-dependent noise. Conclusion: The study reported here was conducted to examine the performance of a proposed surface EMG model capable of representing variance distribution and a related distribution parameter estimation method. Experiments using artificial and real EMG data demonstrated the validity of the model. Significance: Variance distribution estimated using the proposed model exhibits potential in the estimation of muscle force.

  16. The mean and variance of phylogenetic diversity under rarefaction.

    Science.gov (United States)

    Nipperess, David A; Matsen, Frederick A

    2013-06-01

    Phylogenetic diversity (PD) depends on sampling depth, which complicates the comparison of PD between samples of different depth. One approach to dealing with differing sample depth for a given diversity statistic is to rarefy, which means to take a random subset of a given size of the original sample. Exact analytical formulae for the mean and variance of species richness under rarefaction have existed for some time, but no such solution exists for PD. We have derived exact formulae for the mean and variance of PD under rarefaction. We confirm that these formulae are correct by comparing the exact mean and variance to those calculated by repeated random (Monte Carlo) subsampling of a dataset of stem counts of woody shrubs of Toohey Forest, Queensland, Australia. We also demonstrate the application of the method using two examples: identifying hotspots of mammalian diversity in Australasian ecoregions, and characterising the human vaginal microbiome. There is a very high degree of correspondence between the analytical and random subsampling methods for calculating the mean and variance of PD under rarefaction, although the Monte Carlo method requires a large number of random draws to converge on the exact solution for the variance. Rarefaction of mammalian PD of ecoregions in Australasia to a common standard of 25 species reveals very different rank orderings of ecoregions, indicating quite different hotspots of diversity than those obtained for unrarefied PD. The application of these methods to the vaginal microbiome shows that a classical score used to quantify bacterial vaginosis is correlated with the shape of the rarefaction curve. The analytical formulae for the mean and variance of PD under rarefaction are both exact and more efficient than repeated subsampling. Rarefaction of PD allows for many applications where comparisons of samples of different depth are required.
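
    A minimal Monte Carlo sketch of PD under rarefaction, against which the exact formulae can be checked. The toy tree, branch lengths and stem counts are hypothetical; PD of a subsample is computed as the total length of branches whose descendant tips intersect the subsample:

```python
import numpy as np

# Hypothetical tree: each branch has a length and the set of tips below it.
branches = [
    (2.0, {"a", "b", "c", "d"}),
    (1.0, {"a", "b"}), (1.5, {"c", "d"}),
    (0.5, {"a"}), (0.7, {"b"}), (0.4, {"c"}), (0.9, {"d"}),
]
counts = {"a": 30, "b": 12, "c": 5, "d": 2}   # stem counts per species

def pd_of(tips):
    """Phylogenetic diversity: total length of branches spanned by 'tips'."""
    return sum(length for length, below in branches if below & tips)

def rarefied_pd(counts, m, n_draws=10_000, seed=0):
    """Monte Carlo mean and variance of PD when m individuals are drawn at
    random without replacement (cf. the exact formulae of the paper)."""
    rng = np.random.default_rng(seed)
    pool = np.array([sp for sp, c in counts.items() for _ in range(c)])
    vals = [pd_of(set(rng.choice(pool, size=m, replace=False)))
            for _ in range(n_draws)]
    return np.mean(vals), np.var(vals)

print(rarefied_pd(counts, m=5))
```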

  17. Using variances to comply with resource conservation and recovery act treatment standards

    International Nuclear Information System (INIS)

    Ranek, N.L.

    2002-01-01

    When a waste generated, treated, or disposed of at a site in the United States is classified as hazardous under the Resource Conservation and Recovery Act and is destined for land disposal, the waste manager responsible for that site must select an approach to comply with land disposal restrictions (LDR) treatment standards. This paper focuses on the approach of obtaining a variance from existing, applicable LDR treatment standards. It describes the types of available variances, which include (1) determination of equivalent treatment (DET); (2) treatability variance; and (3) treatment variance for contaminated soil. The process for obtaining each type of variance is also described. Data are presented showing that historically the U.S. Environmental Protection Agency (EPA) processed DET petitions within one year of their date of submission. However, a 1999 EPA policy change added public participation to the DET petition review, which may lengthen processing time in the future. Regarding site-specific treatability variances, data are presented showing an EPA processing time of between 10 and 16 months. Only one generically applicable treatability variance has been granted, which took 30 months to process. No treatment variances for contaminated soil, which were added to the federal LDR program in 1998, are identified as having been granted.

  18. Mapping QTL for Omega-3 Content in Hybrid Saline Tilapia.

    Science.gov (United States)

    Lin, Grace; Wang, Le; Ngoh, Si Te; Ji, Lianghui; Orbán, Laszlo; Yue, Gen Hua

    2018-02-01

    Tilapia is one of the most important foodfish species. The low omega-3 to omega-6 fatty acid ratio in freshwater tilapia meat is disadvantageous for human health. Increasing omega-3 content is an important task in breeding to increase the nutritional value of tilapia. However, conventional breeding to increase omega-3 content is difficult and slow. To accelerate the increase of omega-3 through marker-assisted selection (MAS), we conducted QTL mapping for fatty acid contents and profiles in an F2 family of saline tilapia generated by crossing red tilapia and Mozambique tilapia. The total omega-3 content in F2 hybrid tilapia was 2.5 ± 1.0 mg/g, higher than that (2.00 mg/g) in freshwater tilapia. Genotyping by sequencing (GBS) technology was used to discover and genotype SNP markers, and microsatellites were also genotyped. We constructed a linkage map with 784 markers (151 microsatellites and 633 SNPs). The linkage map was 2076.7 cM long and consisted of 22 linkage groups. Significant and suggestive QTL for total lipid content were mapped on six linkage groups (LG3, -4, -6, -8, -13, and -15) and explained 5.8-8.3% of the phenotypic variance. QTL for omega-3 fatty acids were located on four LGs (LG11, -18, -19, and -20) and explained 5.0 to 7.5% of the phenotypic variance. Our data suggest that total lipid and omega-3 fatty acid content are determined by multiple genes in tilapia. The markers flanking the QTL for omega-3 fatty acids can be used in MAS to accelerate the genetic improvement of these traits in salt-tolerant tilapia.

  19. On local and global aspects of the 1:4 resonance in the conservative cubic Hénon maps

    Science.gov (United States)

    Gonchenko, M.; Gonchenko, S. V.; Ovsyannikov, I.; Vieiro, A.

    2018-04-01

    We study the 1:4 resonance for the conservative cubic Hénon maps C± with positive and negative cubic terms. These maps exhibit different bifurcation structures both for fixed points with eigenvalues ±i and for 4-periodic orbits. While for C- the 1:4 resonance unfolding has the so-called Arnold degeneracy [the first Birkhoff twist coefficient equals (in absolute value) the first resonant term coefficient], the map C+ has a different type of degeneracy because the resonant term can vanish. In the latter case, non-symmetric points are created and destroyed at pitchfork bifurcations and, as a result of global bifurcations, the 1:4 resonant chain of islands rotates by π/4. For both maps, several bifurcations are detected and illustrated.

  20. Circle Maps and C*-algebras

    DEFF Research Database (Denmark)

    Schmidt, Thomas Lundsgaard

    such a map, generalising the transformation groupoid of a local homeomorphism first introduced by Renault in \cite{re}. We conduct a detailed study of the relationship between the dynamics of $\phi$, the properties of these groupoids, the structure of their corresponding reduced groupoid $C^*$-algebras and, for certain classes of maps, the K-theory of these algebras. When the map $\phi$ is transitive, we show that the algebras $C^*_r(\Gamma_\phi)$ and $C^*_r(\Gamma_\phi^+)$ are purely infinite and satisfy the Universal Coefficient Theorem. Furthermore, we find necessary and sufficient conditions for simplicity of these algebras in terms of dynamical properties of $\phi$. We proceed to consider the situation when the algebras are non-simple, and describe the primitive ideal spectrum in this case. We prove that any irreducible representation factors through the $C^*$-algebra of the reduction of the groupoid to the orbit...

  1. Histogram analysis of apparent diffusion coefficient maps for the differentiation between lymphoma and metastatic lymph nodes of squamous cell carcinoma in head and neck region.

    Science.gov (United States)

    Wang, Yan-Jun; Xu, Xiao-Quan; Hu, Hao; Su, Guo-Yi; Shen, Jie; Shi, Hai-Bin; Wu, Fei-Yun

    2018-06-01

    Background To clarify the nature of cervical malignant lymphadenopathy is highly important for the diagnosis and differential diagnosis of head and neck tumors. Purpose To investigate the role of first-order apparent diffusion coefficient (ADC) histogram analysis for differentiating lymphoma from metastatic lymph nodes of squamous cell carcinoma (SCC) in the head and neck region. Material and Methods Diffusion-weighted imaging (DWI) data of 67 patients (lymphoma, n = 20; SCC, n = 47) with malignant lymphadenopathy were retrospectively analyzed. The SCC group was divided into nasopharyngeal SCC and non-nasopharyngeal SCC groups. The ADC histogram features (ADC10, ADC25, ADCmean, ADCmedian, ADC75, ADC90, skewness, and kurtosis) were derived and then compared by independent-samples t-test and one-way analysis of variance test, respectively. Receiver operating characteristic curve analyses were employed to investigate the diagnostic performance of the significant parameters. Results Lymphoma showed significantly lower ADCmean, ADCmedian, ADC75, and ADC90 than SCC (all P < 0.05). Lymphoma also showed significantly lower ADC25, ADCmean, ADCmedian, ADC75, and ADC90 than non-nasopharyngeal SCC (all P < 0.05). Conclusion ADC histogram analysis is capable of differentiating lymphoma from metastatic lymph nodes of SCC, especially those of non-nasopharyngeal SCC.

  2. Covariance and correlation estimation in electron-density maps.

    Science.gov (United States)

    Altomare, Angela; Cuocci, Corrado; Giacovazzo, Carmelo; Moliterni, Anna; Rizzi, Rosanna

    2012-03-01

    Quite recently two papers have been published [Giacovazzo & Mazzone (2011). Acta Cryst. A67, 210-218; Giacovazzo et al. (2011). Acta Cryst. A67, 368-382] which calculate the variance in any point of an electron-density map at any stage of the phasing process. The main aim of the papers was to associate a standard deviation to each pixel of the map, in order to obtain a better estimate of the map reliability. This paper deals with the covariance estimate between points of an electron-density map in any space group, centrosymmetric or non-centrosymmetric, no matter the correlation between the model and target structures. The aim is as follows: to verify if the electron density in one point of the map is amplified or depressed as an effect of the electron density in one or more other points of the map. High values of the covariances are usually connected with undesired features of the map. The phases are the primitive random variables of our probabilistic model; the covariance changes with the quality of the model and therefore with the quality of the phases. The conclusive formulas show that the covariance is also influenced by the Patterson map. Uncertainty on measurements may influence the covariance, particularly in the final stages of the structure refinement; a general formula is obtained taking into account both phase and measurement uncertainty, valid at any stage of the crystal structure solution.

  3. Standard symmetric operators in Pontryagin spaces : a generalized von Neumann formula and minimality of boundary coefficients

    NARCIS (Netherlands)

    Azizov, Tomas; Ćurgus, Branko; Dijksma, Aad

    2003-01-01

    Certain meromorphic matrix valued functions on C\\R, the so-called boundary coefficients, are characterized in terms of a standard symmetric operator S in a Pontryagin space with finite (not necessarily equal) defect numbers, a meromorphic mapping into the defect subspaces of S, and a boundary

  4. Variance analysis of forecasted streamflow maxima in a wet temperate climate

    Science.gov (United States)

    Al Aamery, Nabil; Fox, James F.; Snyder, Mark; Chandramouli, Chandra V.

    2018-05-01

    Coupling global climate models, hydrologic models and extreme value analysis provides a method to forecast streamflow maxima; however, the elusive variance structure of the results hinders confidence in application. Directly correcting the bias of forecasts using the relative change between forecast and control simulations has been shown to marginalize hydrologic uncertainty, reduce model bias, and remove systematic variance when predicting mean monthly and mean annual streamflow, prompting our investigation of streamflow maxima. We assess the variance structure of streamflow maxima using realizations of emission scenario, global climate model type and project phase, downscaling methods, bias correction, extreme value methods, and hydrologic model inputs and parameterization. Results show that the relative change of streamflow maxima did not depend on systematic variance from the annual maxima versus peak over threshold method applied, although we stress that researchers should strictly adhere to the rules of extreme value theory when applying the peak over threshold method. Regardless of which method is applied, extreme value model fitting does add variance to the projection, and the variance is an increasing function of the return period. Unlike the relative change of mean streamflow, results show that the variance of the maxima's relative change depended on all climate model factors tested as well as on hydrologic model inputs and calibration. Ensemble projections forecast an increase of streamflow maxima for 2050 with pronounced forecast standard error, including increases of +30(±21), +38(±34) and +51(±85)% for 2, 20 and 100 year streamflow events for the wet temperate region studied. The variance of maxima projections was dominated by climate model factors and extreme value analyses.

  5. Stability Analysis of Periodic Systems by Truncated Point Mappings

    Science.gov (United States)

    Guttalu, R. S.; Flashner, H.

    1996-01-01

    An approach is presented for deriving analytical stability and bifurcation conditions for systems with periodically varying coefficients. The method is based on a point mapping (period-to-period mapping) representation of the system's dynamics. An algorithm is employed to obtain an analytical expression for the point mapping and its dependence on the system's parameters. The algorithm is devised to derive the coefficients of a multinomial expansion of the point mapping up to an arbitrary order in terms of the state variables and of the parameters. Analytical stability and bifurcation conditions are then formulated and expressed as functional relations between the parameters. To demonstrate the application of the method, the parametric stability of Mathieu's equation and of a two-degree-of-freedom system are investigated. The results obtained by the proposed approach are compared to those obtained by perturbation analysis and by direct integration, which we considered to be the "exact solution". It is shown that, unlike perturbation analysis, the proposed method provides a very accurate solution even for large values of the parameters. If an expansion of the point mapping in terms of a small parameter is performed, the method is equivalent to perturbation analysis. Moreover, it is demonstrated that the method can be easily applied to multiple-degree-of-freedom systems using the same framework. This feature is an important advantage since most of the existing analysis methods apply mainly to single-degree-of-freedom systems and their extension to higher dimensions is difficult and computationally cumbersome.
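
    As a complementary numerical check (not the analytical point-mapping algorithm of the paper), the period-to-period map of Mathieu's equation x'' + (δ + ε·cos t)x = 0 can be built by direct integration over one period, and stability read off the monodromy matrix; the parameter values below are chosen to fall outside and inside the first instability tongue near δ = 1/4:

```python
import numpy as np
from scipy.integrate import solve_ivp

def mathieu_stable(delta, eps):
    """Numerical Floquet check for x'' + (delta + eps*cos(t)) x = 0: build
    the monodromy (period-to-period) matrix by integrating over one period
    2*pi and test whether both eigenvalues lie on the unit circle, which
    for this area-preserving map reduces to |trace| <= 2."""
    def rhs(t, y):
        return [y[1], -(delta + eps * np.cos(t)) * y[0]]
    cols = []
    for y0 in ([1.0, 0.0], [0.0, 1.0]):
        sol = solve_ivp(rhs, (0.0, 2 * np.pi), y0, rtol=1e-10, atol=1e-12)
        cols.append(sol.y[:, -1])
    M = np.column_stack(cols)
    return abs(np.trace(M)) <= 2.0

print(mathieu_stable(0.35, 0.10))   # outside the first tongue: stable
print(mathieu_stable(0.25, 0.10))   # centre of the first tongue: unstable
```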

  6. Accelerated whole-brain multi-parameter mapping using blind compressed sensing.

    Science.gov (United States)

    Bhave, Sampada; Lingala, Sajan Goud; Johnson, Casey P; Magnotta, Vincent A; Jacob, Mathews

    2016-03-01

    To introduce a blind compressed sensing (BCS) framework to accelerate multi-parameter MR mapping, and demonstrate its feasibility in high-resolution, whole-brain T1ρ and T2 mapping. BCS models the evolution of magnetization at every pixel as a sparse linear combination of bases in a dictionary. Unlike compressed sensing, the dictionary and the sparse coefficients are jointly estimated from undersampled data. Large number of non-orthogonal bases in BCS accounts for more complex signals than low rank representations. The low degree of freedom of BCS, attributed to sparse coefficients, translates to fewer artifacts at high acceleration factors (R). From 2D retrospective undersampling experiments, the mean square errors in T1ρ and T2 maps were observed to be within 0.1% up to R = 10. BCS was observed to be more robust to patient-specific motion as compared to other compressed sensing schemes and resulted in minimal degradation of parameter maps in the presence of motion. Our results suggested that BCS can provide an acceleration factor of 8 in prospective 3D imaging with reasonable reconstructions. BCS considerably reduces scan time for multiparameter mapping of the whole brain with minimal artifacts, and is more robust to motion-induced signal changes compared to current compressed sensing and principal component analysis-based techniques. © 2015 Wiley Periodicals, Inc.

  7. Phenotypic variance explained by local ancestry in admixed African Americans.

    Science.gov (United States)

    Shriner, Daniel; Bentley, Amy R; Doumatey, Ayo P; Chen, Guanjie; Zhou, Jie; Adeyemo, Adebowale; Rotimi, Charles N

    2015-01-01

    We surveyed 26 quantitative traits and disease outcomes to understand the proportion of phenotypic variance explained by local ancestry in admixed African Americans. After inferring local ancestry as the number of African-ancestry chromosomes at hundreds of thousands of genotyped loci across all autosomes, we used a linear mixed effects model to estimate the variance explained by local ancestry in two large independent samples of unrelated African Americans. We found that local ancestry at major and polygenic effect genes can explain up to 20 and 8% of phenotypic variance, respectively. These findings provide evidence that most but not all additive genetic variance is explained by genetic markers undifferentiated by ancestry. These results also inform the proportion of health disparities due to genetic risk factors and the magnitude of error in association studies not controlling for local ancestry.

  8. Continuous-Time Mean-Variance Portfolio Selection: A Stochastic LQ Framework

    International Nuclear Information System (INIS)

    Zhou, X.Y.; Li, D.

    2000-01-01

    This paper is concerned with a continuous-time mean-variance portfolio selection model that is formulated as a bicriteria optimization problem. The objective is to maximize the expected terminal return and minimize the variance of the terminal wealth. By putting weights on the two criteria one obtains a single objective stochastic control problem which is however not in the standard form due to the variance term involved. It is shown that this nonstandard problem can be 'embedded' into a class of auxiliary stochastic linear-quadratic (LQ) problems. The stochastic LQ control model proves to be an appropriate and effective framework to study the mean-variance problem in light of the recent development on general stochastic LQ problems with indefinite control weighting matrices. This gives rise to the efficient frontier in a closed form for the original portfolio selection problem

  9. Replica approach to mean-variance portfolio optimization

    Science.gov (United States)

    Varga-Haszonits, Istvan; Caccioli, Fabio; Kondor, Imre

    2016-12-01

    We consider the problem of mean-variance portfolio optimization for a generic covariance matrix subject to the budget constraint and the constraint for the expected return, with the application of the replica method borrowed from the statistical physics of disordered systems. We find that the replica symmetry of the solution does not need to be assumed, but emerges as the unique solution of the optimization problem. We also check the stability of this solution and find that the eigenvalues of the Hessian are positive for r = N/T < 1. The optimal in-sample variance is found to vanish at the critical point, inversely proportional to the divergent estimation error.

  10. A comparison of roughness parameters and friction coefficients of aesthetic archwires.

    Science.gov (United States)

    Rudge, Philippa; Sherriff, Martyn; Bister, Dirk

    2015-02-01

    To compare the surface roughness of 'aesthetic' nickel-titanium (NiTi) archwires with their dynamic frictional properties. Archwires investigated were: four fully coated tooth coloured [Forestadent: Biocosmetic (FB) and Titanol Cosmetic (FT); TOC Tooth Tone (TT); and Hawley Russell Coated Superelastic NiTi (HRC)]; two partially coated tooth coloured [DB Euroline Microcoated (DB) and TP Aesthetic NiTi (TP)]; two rhodium coated [TOC Sentalloy (TS) and Hawley Russell Rhodium Coated Superelastic NiTi (HRR)]; and two controls: stainless steel [Forestadent Steel (FS)] and NiTi archwire [Forestadent Titanol Superelastic (FN)]. Surface roughness [profilometry (Rugosurf)] was compared with frictional coefficients for archwire/bracket/ligature combinations (n = 10). Analysis of variance, Sidak's multiple comparison of means, and Spearman's correlation coefficient were used for analysis. Roughness coefficients were, from low to high: FB; FN; TT; FS; TS; HRR; FT; DB; TP; HRC. Friction coefficients were, from low to high: TP; FS; FN; HRR; FT; DB; FB; HRC; TS; TT. Coated archwires generally exhibited higher friction than uncoated controls. TP had the lowest friction, but this was not statistically significant (P > 0.05). Friction of tooth coloured coated archwires was significantly different for some wires. Spearman's correlation did not demonstrate consistency between surface roughness (Ra) and dynamic friction. The aesthetic archwires investigated had either low surface roughness or low frictional resistance, but not both properties simultaneously. Causes of friction are likely to be multifactorial and do not appear to be solely determined by surface roughness (as measured by profilometry). For selecting the most appropriate aligning archwire, both surface roughness and frictional resistance need to be considered. © The Author 2014. Published by Oxford University Press on behalf of the European Orthodontic Society.

  11. Estimation of Inbreeding Coefficient and Its Effects on Lamb Survival in Sheep

    Directory of Open Access Journals (Sweden)

    mohammad almasi

    2016-04-01

    Full Text Available Introduction The mating of related individuals produces inbred offspring and leads to increased homozygosity in the progeny, a decrease of genetic variance within families and an increase between families. The proportion of homozygosity for an individual is measured by the inbreeding coefficient. Inbred individuals may carry two alleles at a locus that are replicated from one gene in previous generations, called identical by descent. The inbreeding coefficient should be monitored in a breeding program, since it plays an important role in the decline of homeostasis, performance, reproduction and viability. The trend of inbreeding is an indicator of the inbreeding level in a herd. Inbreeding affects both the phenotypic means of traits and the genetic variances within a population, and thus is an important factor limiting genetic progress in a population. Reports showed that an increase in inbreeding led to a decrease of phenotypic value in some productive and reproductive traits. Materials and Methods In the current study, the pedigree data of 14030 and 6215 records of Baluchi and Iranblack lambs collected from 1984 to 2011 at the Abbasabad Sheep Breeding Station in Mashhad, Iran, 3588 records of Makoei lambs collected from 1994 to 2011 at the Makoei sheep breeding station, and 6140 records of Zandi lambs collected from 1991 to 2011 at the Khejir Sheep Breeding Station in Tehran, Iran, were used to estimate the inbreeding coefficient and its effects on lamb survival in these breeds. Lamb survival was scored as 1 and 0 for lambs surviving and not surviving to weaning, respectively. The inbreeding coefficient was estimated by the relationship matrix algorithm (A = TDT') using the CFC software program. Effects of the inbreeding coefficient on lamb survival were estimated by the restricted maximum likelihood (REML) method under 12 different animal models using the ASReml 3.0 computer program. Coefficient of inbreeding for each
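
    For illustration, inbreeding coefficients can be obtained from the diagonal of the numerator relationship matrix built by the standard tabular method (the A = TDT' factorization used by CFC is an equivalent route). The pedigree below is hypothetical:

```python
import numpy as np

def inbreeding(pedigree):
    """Inbreeding coefficients via the tabular method for the numerator
    relationship matrix A (F_i = A[i, i] - 1). 'pedigree' maps animal id
    -> (sire, dam), with None for unknown parents; ids must be ordered so
    that parents precede offspring."""
    ids = list(pedigree)
    idx = {a: i for i, a in enumerate(ids)}
    n = len(ids)
    A = np.zeros((n, n))
    for i, a in enumerate(ids):
        s, d = (idx.get(p) for p in pedigree[a])
        # Diagonal: 1 plus half the relationship between the two parents.
        A[i, i] = 1.0 + (0.5 * A[s, d] if s is not None and d is not None else 0.0)
        # Off-diagonals: half the sum of relationships to the parents.
        for j in range(i):
            val = 0.0
            if s is not None:
                val += 0.5 * A[j, s]
            if d is not None:
                val += 0.5 * A[j, d]
            A[i, j] = A[j, i] = val
    return {a: A[idx[a], idx[a]] - 1.0 for a in ids}

# Hypothetical pedigree: z comes from a full-sib mating, so F_z = 0.25.
ped = {"s1": (None, None), "d1": (None, None),
       "x": ("s1", "d1"), "y": ("s1", "d1"), "z": ("x", "y")}
print(inbreeding(ped))
```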

  12. Estimation of Esfarayen Farmers Risk Aversion Coefficient and Its Influencing Factors (Nonparametric Approach

    Directory of Open Access Journals (Sweden)

    Z. Nematollahi

    2016-03-01

    Full Text Available Introduction: Due to the existence of risk and uncertainty in agriculture, risk management is crucial for agricultural management. The present study was therefore designed to determine the risk aversion coefficient of Esfarayen farmers. Materials and Methods: The following approaches have been used to assess risk attitudes: (1) direct elicitation of utility functions, (2) experimental procedures in which individuals are presented with hypothetical questionnaires regarding risky alternatives with or without real payments, and (3) inference from observation of economic behavior. In this paper, we focused on approach (3), inference from observation of economic behavior, based on the assumption of a relationship between the actual behavior of a decision maker and the behavior predicted from empirically specified models. A new non-parametric method and the QP method were used to calculate the coefficient of risk aversion. We maximized the decision maker's expected utility with the E-V formulation (Freund, 1956). Ideally, in constructing a QP model, the variance-covariance matrix should be formed for each individual farmer. For this purpose, a sample of 100 farmers was selected using random sampling and their data on 14 products for the years 2008-2012 were assembled. The lowlands of Esfarayen were used since, within this area, production possibilities are rather homogeneous. Results and Discussion: The results of this study showed that there was low correlation between some of the activities, which implies opportunities for income stabilization through diversification. With respect to transitory income, Ra varied from 0.000006 to 0.000361, and the absolute coefficient of risk aversion in our sample was 0.00005. The estimated Ra values vary considerably from farm to farm. The results showed that the estimated Ra for the subsample of 'non-wealthy' farmers was 0.00010. The subsample with farmers in the 'wealthy' group had an
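
    A minimal sketch of the E-V formulation: choose activity shares that maximize expected income minus (Ra/2) times the income variance, subject to the shares summing to one. The gross margins, covariance matrix and Ra values below are hypothetical and only illustrate how the optimal plan shifts with the risk aversion coefficient:

```python
import numpy as np
from scipy.optimize import minimize

def ev_plan(mean_income, cov, risk_aversion):
    """E-V (Freund, 1956-style) plan: maximize x'mu - (Ra/2) x'Cx over
    activity shares x >= 0 that sum to one."""
    n = len(mean_income)

    def neg_utility(x):
        return -(x @ mean_income - 0.5 * risk_aversion * x @ cov @ x)

    res = minimize(neg_utility, np.full(n, 1.0 / n), method="SLSQP",
                   bounds=[(0.0, 1.0)] * n,
                   constraints=[{"type": "eq", "fun": lambda x: x.sum() - 1.0}])
    return res.x

mu  = np.array([900.0, 1200.0, 1500.0])              # hypothetical gross margins
cov = np.array([[1e4, 2e3, 1e3],
                [2e3, 9e4, 3e4],
                [1e3, 3e4, 2.5e5]])                   # hypothetical covariances
for ra in (0.00001, 0.0001, 0.001):                   # risk aversion levels
    print(ra, np.round(ev_plan(mu, cov, ra), 2))
```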

  13. Realized Variance and Market Microstructure Noise

    DEFF Research Database (Denmark)

    Hansen, Peter R.; Lunde, Asger

    2006-01-01

    We study market microstructure noise in high-frequency data and analyze its implications for the realized variance (RV) under a general specification for the noise. We show that kernel-based estimators can unearth important characteristics of market microstructure noise and that a simple kernel-based estimator dominates the RV for the estimation of integrated variance (IV). An empirical analysis of the Dow Jones Industrial Average stocks reveals that market microstructure noise is time-dependent and correlated with increments in the efficient price. This has important implications for volatility estimation based on high-frequency data. Finally, we apply cointegration techniques to decompose transaction prices and bid-ask quotes into an estimate of the efficient price and noise. This framework enables us to study the dynamic effects on transaction prices and quotes caused by changes in the efficient price.

  14. QTL MAPPING FOR GRAIN QUALITY TRAITS IN TESTCROSSES OF A MAIZE BIPARENTAL POPULATION USING GENOTYPING-BY-SEQUENCING DATA

    Directory of Open Access Journals (Sweden)

    Mario Franić

    2017-01-01

    Full Text Available We performed QTL mapping in testcrosses of maize population IBMSyn4 for three grain quality traits: oil and protein contents and test weight. 191 phenotyped and genotyped lines were used as a training set, while 85 genotyped-only lines comprised a validation set used to calculate best linear unbiased predictions (BLUP), making a total of 276 phenotypes for the QTL analysis. 92000 filtered Genotyping-By-Sequencing (GBS) SNP markers were used to calculate BLUPs, while a set of 2178 genetically mapped SSRs was used in QTL analysis. By simple QTL scan, we scored several minor effect QTLs: one for oil content (chromosome 1), one for protein content (chromosome 10) and four for test weight (chromosomes 1, 3, 5 and 10). QTLs associated with test weight were found to be additive, and 18.25% of phenotypic variance was explained by their joint effect. Only one QTL for test weight was found to be significant in composite interval mapping, and it was mapped on chromosome 5. This QTL accounted for 9.97% of phenotypic variance. QTLs detected in this study represent monitoring of the commercially most successful elite maize germplasm for grain quality traits.

  15. Spot Variance Path Estimation and its Application to High Frequency Jump Testing

    NARCIS (Netherlands)

    Bos, C.S.; Janus, P.; Koopman, S.J.

    2012-01-01

    This paper considers spot variance path estimation from datasets of intraday high-frequency asset prices in the presence of diurnal variance patterns, jumps, leverage effects, and microstructure noise. We rely on parametric and nonparametric methods. The estimated spot variance path can be used to

  16. Resampled efficient frontier portfolio analysis based on mean-variance optimization

    OpenAIRE

    Abdurakhman, Abdurakhman

    2008-01-01

    An appropriate asset allocation decision in portfolio investment can maximize return and/or minimize risk. The method most often used in portfolio optimization is the Markowitz mean-variance method. In practice, this method has the weakness of being rather unstable: a small change in the estimated input parameters causes a large change in the portfolio composition. For this reason, a portfolio optimization method was developed that can overcome the instability of the mean-variance method ...
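
    A small sketch of the instability the abstract refers to, and of the resampling remedy, restricted to the global minimum-variance point of the frontier (the full resampled efficient frontier also averages weights across target returns). Returns are simulated and all numbers are illustrative:

```python
import numpy as np

def min_variance_weights(cov):
    """Global minimum-variance weights w = C^{-1} 1 / (1' C^{-1} 1)."""
    inv = np.linalg.inv(cov)
    w = inv @ np.ones(cov.shape[0])
    return w / w.sum()

rng = np.random.default_rng(7)
true_cov = np.array([[0.040, 0.018, 0.006],
                     [0.018, 0.090, 0.012],
                     [0.006, 0.012, 0.160]])
returns = rng.multivariate_normal(np.zeros(3), true_cov, size=120)

# Bootstrap the return history, recompute the weights each time, and average.
resampled = np.array([
    min_variance_weights(np.cov(returns[rng.integers(0, 120, 120)].T))
    for _ in range(500)
])
print("weights from full sample :", np.round(min_variance_weights(np.cov(returns.T)), 3))
print("std of resampled weights :", np.round(resampled.std(axis=0), 3))
print("resampled-average weights:", np.round(resampled.mean(axis=0), 3))
```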

  17. Mean-variance Optimal Reinsurance-investment Strategy in Continuous Time

    OpenAIRE

    Daheng Peng; Fang Zhang

    2017-01-01

    In this paper, Lagrange method is used to solve the continuous-time mean-variance reinsurance-investment problem. Proportional reinsurance, multiple risky assets and risk-free asset are considered synthetically in the optimal strategy for insurers. By solving the backward stochastic differential equation for the Lagrange multiplier, we get the mean-variance optimal reinsurance-investment strategy and its effective frontier in explicit forms.

  18. The asymptotic variance of departures in critically loaded queues

    NARCIS (Netherlands)

    Al Hanbali, Ahmad; Mandjes, M.R.H.; Nazarathy, Y.; Whitt, W.

    2011-01-01

    We consider the asymptotic variance of the departure counting process D(t) of the GI/G/1 queue; D(t) denotes the number of departures up to time t. We focus on the case where the system load ϱ equals 1, and prove that the asymptotic variance rate satisfies lim_{t→∞} var D(t)/t = λ(1 − 2/π)(c_a² +

  19. Negative Correlation between the Diffusion Coefficient and Transcriptional Activity of the Glucocorticoid Receptor.

    Science.gov (United States)

    Mikuni, Shintaro; Yamamoto, Johtaro; Horio, Takashi; Kinjo, Masataka

    2017-08-25

    The glucocorticoid receptor (GR) is a transcription factor which interacts with DNA and other cofactors to regulate gene transcription. Binding to other partners in the cell nucleus alters the diffusion properties of GR. Raster image correlation spectroscopy (RICS) was applied to quantitatively characterize the diffusion properties of EGFP-labeled human GR (EGFP-hGR) and its mutants in the cell nucleus. RICS is an image correlation technique that evaluates the spatial distribution of the diffusion coefficient as a diffusion map. Interestingly, we observed that the averaged diffusion coefficient of EGFP-hGR correlated strongly and negatively with transcriptional activity, as assessed by comparing EGFP-hGR wild type and mutants with various transcriptional activities. This result suggests that the decrease in the diffusion coefficient of hGR reflects high-affinity binding to DNA. Moreover, hyper-phosphorylation of hGR can enhance transcriptional activity by reducing the interaction between hGR and nuclear corepressors.

  20. Coupled bias-variance tradeoff for cross-pose face recognition.

    Science.gov (United States)

    Li, Annan; Shan, Shiguang; Gao, Wen

    2012-01-01

    Subspace-based face representation can be viewed as a regression problem. From this viewpoint, we first revisit the problem of recognizing faces across pose differences, which is a bottleneck in face recognition. We then propose a new approach for cross-pose face recognition using a regressor with a coupled bias-variance tradeoff. We found that striking a coupled balance between bias and variance in regression for different poses could improve the regressor-based cross-pose face representation, i.e., the regressor can be made more stable against a pose difference. Based on this idea, ridge regression and lasso regression are explored. Experimental results on the CMU PIE, FERET, and Multi-PIE face databases show that the proposed bias-variance tradeoff can achieve considerable reinforcement in recognition performance.
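
    The bias-variance tradeoff underlying the ridge regressor can be sketched directly: over repeated small training sets, a larger ridge penalty increases the (squared) bias of the estimated coefficients while reducing their variance. The dimensions and noise level below are arbitrary:

```python
import numpy as np

def ridge(X, y, lam):
    """Closed-form ridge regression: w = (X'X + lam*I)^{-1} X'y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

rng = np.random.default_rng(3)
w_true = rng.normal(size=20)
ws = {lam: [] for lam in (0.0, 1.0, 100.0)}
for _ in range(200):                       # repeated small training sets
    X = rng.normal(size=(30, 20))
    y = X @ w_true + rng.normal(0.0, 2.0, size=30)
    for lam in ws:
        ws[lam].append(ridge(X, y, lam))

for lam, w in ws.items():
    w = np.array(w)
    bias2 = np.sum((w.mean(axis=0) - w_true) ** 2)   # squared bias of the fit
    var = np.sum(w.var(axis=0))                      # variance of the fit
    print(f"lambda={lam:6.1f}  bias^2={bias2:6.2f}  variance={var:6.2f}")
```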

  1. Monte Carlo variance reduction approaches for non-Boltzmann tallies

    International Nuclear Information System (INIS)

    Booth, T.E.

    1992-12-01

    Quantities that depend on the collective effects of groups of particles cannot be obtained from the standard Boltzmann transport equation. Monte Carlo estimates of these quantities are called non-Boltzmann tallies and have become increasingly important recently. Standard Monte Carlo variance reduction techniques were designed for tallies based on individual particles rather than groups of particles. Experience with non-Boltzmann tallies and analog Monte Carlo has demonstrated the severe limitations of analog Monte Carlo for many non-Boltzmann tallies. In fact, many calculations absolutely require variance reduction methods to achieve practical computation times. Three different approaches to variance reduction for non-Boltzmann tallies are described and shown to be unbiased. The advantages and disadvantages of each of the approaches are discussed

  2. Stable Parameter Estimation for Autoregressive Equations with Random Coefficients

    Directory of Open Access Journals (Sweden)

    V. B. Goryainov

    2014-01-01

    Full Text Available In recent years there has been a growing interest in non-linear time series models. They are more flexible than traditional linear models and allow a more adequate description of real data. Among these models, the autoregressive model with random coefficients plays an important role. It is widely used in various fields of science and technology, for example in physics, biology, economics and finance. The model parameters are the mean values of the autoregressive coefficients, and their evaluation is the main task of model identification. The basic method of estimation is still the least squares method, which gives good results for Gaussian time series but is quite sensitive to even small disturbances in the assumption of Gaussian observations. In this paper we propose estimates which generalize the least squares estimate in the sense that the quadratic objective function is replaced by an arbitrary convex and even function. A reasonable choice of objective function makes it possible to keep the benefits of the least squares estimate and eliminate its shortcomings; in particular, the estimates can be made almost as efficient as the least squares estimate in the Gaussian case, while losing almost no accuracy under small deviations of the probability distribution of the observations from the Gaussian distribution. The main result is the proof of consistency and asymptotic normality of the proposed estimates in the particular case of a one-parameter model describing a stationary process with finite variance. Another important result is the derivation of the asymptotic relative efficiency of the proposed estimates relative to the least squares estimate. This allows the two estimates to be compared depending on the probability distribution of the innovation process and of the autoregressive coefficients. The results can be used to identify an autoregressive process, especially one of a non-Gaussian nature, and/or of autoregressive processes observed with gross
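
    A minimal sketch of the idea of replacing the quadratic objective with another convex even function, here for a random-coefficient AR(1) series with heavy-tailed innovations; the Huber loss and all parameter values are illustrative choices, not those analysed in the paper:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def m_estimate_ar1(x, loss):
    """Estimate the mean autoregressive coefficient b in the RCA(1) model
    x_t = (b + u_t) x_{t-1} + e_t by minimizing sum(loss(x_t - b*x_{t-1})),
    where 'loss' is a convex even function (least squares uses r**2)."""
    res = minimize_scalar(lambda b: np.sum(loss(x[1:] - b * x[:-1])),
                          bounds=(-0.999, 0.999), method="bounded")
    return res.x

huber = lambda r, c=1.0: np.where(np.abs(r) <= c, 0.5 * r**2,
                                  c * (np.abs(r) - 0.5 * c))

rng = np.random.default_rng(5)
n, b_true = 2000, 0.5
x = np.zeros(n)
for t in range(1, n):                       # heavy-tailed (non-Gaussian) noise
    x[t] = (b_true + 0.1 * rng.normal()) * x[t - 1] + rng.standard_t(df=3)

print("least squares:", round(m_estimate_ar1(x, lambda r: r**2), 3))
print("Huber        :", round(m_estimate_ar1(x, huber), 3))
```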

  3. An elementary components of variance analysis for multi-center quality control

    International Nuclear Information System (INIS)

    Munson, P.J.; Rodbard, D.

    1977-01-01

    The serious variability of RIA results from different laboratories indicates the need for multi-laboratory collaborative quality control (QC) studies. Statistical analysis methods for such studies using an 'analysis of variance with components of variance estimation' are discussed. This technique allocates the total variance into components corresponding to between-laboratory, between-assay, and residual or within-assay variability. Components of variance analysis also provides an intelligent way to combine the results of several QC samples run at different levels, from which we may decide if any component varies systematically with dose level; if not, pooling of estimates becomes possible. We consider several possible relationships of standard deviation to the laboratory mean. Each relationship corresponds to an underlying statistical model, and an appropriate analysis technique. Tests for homogeneity of variance may be used to determine if an appropriate model has been chosen, although the exact functional relationship of standard deviation to lab mean may be difficult to establish. Appropriate graphical display of the data aids in visual understanding of the data. A plot of the ranked standard deviation vs. ranked laboratory mean is a convenient way to summarize a QC study. This plot also allows determination of the rank correlation, which indicates a net relationship of variance to laboratory mean. (orig.)

  4. Quantitative X-ray mapping, scatter diagrams and the generation of correction maps to obtain more information about your material

    Science.gov (United States)

    Wuhrer, R.; Moran, K.

    2014-03-01

    Quantitative X-ray mapping with silicon drift detectors and multi-EDS detector systems have become an invaluable analysis technique and one of the most useful methods of X-ray microanalysis today. The time to perform an X-ray map has reduced considerably with the ability to map minor and trace elements very accurately due to the larger detector area and higher count rate detectors. Live X-ray imaging can now be performed with a significant amount of data collected in a matter of minutes. A great deal of information can be obtained from X-ray maps. This includes: elemental relationship or scatter diagram creation, elemental ratio mapping, chemical phase mapping (CPM) and quantitative X-ray maps. In obtaining quantitative X-ray maps, we are able to easily generate atomic number (Z), absorption (A), fluorescence (F), theoretical back scatter coefficient (η), and quantitative total maps from each pixel in the image. This allows us to generate an image corresponding to each factor (for each element present). These images allow the user to predict and verify where they are likely to have problems in our images, and are especially helpful to look at possible interface artefacts. The post-processing techniques to improve the quantitation of X-ray map data and the development of post processing techniques for improved characterisation are covered in this paper.

  5. Quantitative X-ray mapping, scatter diagrams and the generation of correction maps to obtain more information about your material

    International Nuclear Information System (INIS)

    Wuhrer, R; Moran, K

    2014-01-01

    Quantitative X-ray mapping with silicon drift detectors and multi-EDS detector systems have become an invaluable analysis technique and one of the most useful methods of X-ray microanalysis today. The time to perform an X-ray map has reduced considerably with the ability to map minor and trace elements very accurately due to the larger detector area and higher count rate detectors. Live X-ray imaging can now be performed with a significant amount of data collected in a matter of minutes. A great deal of information can be obtained from X-ray maps. This includes: elemental relationship or scatter diagram creation, elemental ratio mapping, chemical phase mapping (CPM) and quantitative X-ray maps. In obtaining quantitative X-ray maps, we are able to easily generate atomic number (Z), absorption (A), fluorescence (F), theoretical back scatter coefficient (η), and quantitative total maps from each pixel in the image. This allows us to generate an image corresponding to each factor (for each element present). These images allow the user to predict and verify where they are likely to have problems in our images, and are especially helpful to look at possible interface artefacts. The post-processing techniques to improve the quantitation of X-ray map data and the development of post processing techniques for improved characterisation are covered in this paper.

  6. Explicit formulas for the variance of discounted life-cycle cost

    International Nuclear Information System (INIS)

    Noortwijk, Jan M. van

    2003-01-01

    In life-cycle costing analyses, optimal design is usually achieved by minimising the expected value of the discounted costs. As well as the expected value, the corresponding variance may be useful for estimating, for example, the uncertainty bounds of the calculated discounted costs. However, general explicit formulas for calculating the variance of the discounted costs over an unbounded time horizon are not yet available. In this paper, explicit formulas for this variance are presented. They can be easily implemented in software to optimise structural design and maintenance management. The use of the mathematical results is illustrated with some examples

  7. How does variance in fertility change over the demographic transition?

    Science.gov (United States)

    Hruschka, Daniel J; Burger, Oskar

    2016-04-19

    Most work on the human fertility transition has focused on declines in mean fertility. However, understanding changes in the variance of reproductive outcomes can be equally important for evolutionary questions about the heritability of fertility, individual determinants of fertility and changing patterns of reproductive skew. Here, we document how variance in completed fertility among women (45-49 years) differs across 200 surveys in 72 low- to middle-income countries where fertility transitions are currently in progress at various stages. Nearly all (91%) of the samples exhibit variance consistent with a Poisson process of fertility, which places systematic, and often severe, theoretical upper bounds on the proportion of variance that can be attributed to individual differences. In contrast to the pattern of total variance, these upper bounds increase from high- to mid-fertility samples, then decline again as samples move from mid to low fertility. Notably, the lowest fertility samples often deviate from a Poisson process. This suggests that as populations move to low fertility their reproduction shifts from a rate-based process to a focus on an ideal number of children. We discuss the implications of these findings for predicting completed fertility from individual-level variables. © 2016 The Author(s).

  8. Mean-variance Optimal Reinsurance-investment Strategy in Continuous Time

    Directory of Open Access Journals (Sweden)

    Daheng Peng

    2017-10-01

    Full Text Available In this paper, Lagrange method is used to solve the continuous-time mean-variance reinsurance-investment problem. Proportional reinsurance, multiple risky assets and risk-free asset are considered synthetically in the optimal strategy for insurers. By solving the backward stochastic differential equation for the Lagrange multiplier, we get the mean-variance optimal reinsurance-investment strategy and its effective frontier in explicit forms.

  9. Increased gender variance in autism spectrum disorders and attention deficit hyperactivity disorder.

    Science.gov (United States)

    Strang, John F; Kenworthy, Lauren; Dominska, Aleksandra; Sokoloff, Jennifer; Kenealy, Laura E; Berl, Madison; Walsh, Karin; Menvielle, Edgardo; Slesaransky-Poe, Graciela; Kim, Kyung-Eun; Luong-Tran, Caroline; Meagher, Haley; Wallace, Gregory L

    2014-11-01

    Evidence suggests over-representation of autism spectrum disorders (ASDs) and behavioral difficulties among people referred for gender issues, but rates of the wish to be the other gender (gender variance) among different neurodevelopmental disorders are unknown. This chart review study explored rates of gender variance as reported by parents on the Child Behavior Checklist (CBCL) in children with different neurodevelopmental disorders: ASD (N = 147, 24 females and 123 males), attention deficit hyperactivity disorder (ADHD; N = 126, 38 females and 88 males), or a medical neurodevelopmental disorder (N = 116, 57 females and 59 males), were compared with two non-referred groups [control sample (N = 165, 61 females and 104 males) and non-referred participants in the CBCL standardization sample (N = 1,605, 754 females and 851 males)]. Significantly greater proportions of participants with ASD (5.4%) or ADHD (4.8%) had parent reported gender variance than in the combined medical group (1.7%) or non-referred comparison groups (0-0.7%). As compared to non-referred comparisons, participants with ASD were 7.59 times more likely to express gender variance; participants with ADHD were 6.64 times more likely to express gender variance. The medical neurodevelopmental disorder group did not differ from non-referred samples in likelihood to express gender variance. Gender variance was related to elevated emotional symptoms in ADHD, but not in ASD. After accounting for sex ratio differences between the neurodevelopmental disorder and non-referred comparison groups, gender variance occurred equally in females and males.

  10. An elementary components of variance analysis for multi-centre quality control

    International Nuclear Information System (INIS)

    Munson, P.J.; Rodbard, D.

    1978-01-01

    The serious variability of RIA results from different laboratories indicates the need for multi-laboratory collaborative quality-control (QC) studies. Simple graphical display of data in the form of histograms is useful but insufficient. The paper discusses statistical analysis methods for such studies using an ''analysis of variance with components of variance estimation''. This technique allocates the total variance into components corresponding to between-laboratory, between-assay, and residual or within-assay variability. Problems with RIA data, e.g. severe non-uniformity of variance and/or departure from a normal distribution violate some of the usual assumptions underlying analysis of variance. In order to correct these problems, it is often necessary to transform the data before analysis by using a logarithmic, square-root, percentile, ranking, RIDIT, ''Studentizing'' or other transformation. Ametric transformations such as ranks or percentiles protect against the undue influence of outlying observations, but discard much intrinsic information. Several possible relationships of standard deviation to the laboratory mean are considered. Each relationship corresponds to an underlying statistical model and an appropriate analysis technique. Tests for homogeneity of variance may be used to determine whether an appropriate model has been chosen, although the exact functional relationship of standard deviation to laboratory mean may be difficult to establish. Appropriate graphical display aids visual understanding of the data. A plot of the ranked standard deviation versus ranked laboratory mean is a convenient way to summarize a QC study. This plot also allows determination of the rank correlation, which indicates a net relationship of variance to laboratory mean

  11. Using variance structure to quantify responses to perturbation in fish catches

    Science.gov (United States)

    Vidal, Tiffany E.; Irwin, Brian J.; Wagner, Tyler; Rudstam, Lars G.; Jackson, James R.; Bence, James R.

    2017-01-01

    We present a case study evaluation of gill-net catches of Walleye Sander vitreus to assess potential effects of large-scale changes in Oneida Lake, New York, including the disruption of trophic interactions by double-crested cormorants Phalacrocorax auritus and invasive dreissenid mussels. We used the empirical long-term gill-net time series and a negative binomial linear mixed model to partition the variability in catches into spatial and coherent temporal variance components, hypothesizing that variance partitioning can help quantify spatiotemporal variability and determine whether variance structure differs before and after large-scale perturbations. We found that the mean catch and the total variability of catches decreased following perturbation but that not all sampling locations responded in a consistent manner. There was also evidence of some spatial homogenization concurrent with a restructuring of the relative productivity of individual sites. Specifically, offshore sites generally became more productive following the estimated break point in the gill-net time series. These results provide support for the idea that variance structure is responsive to large-scale perturbations; therefore, variance components have potential utility as statistical indicators of response to a changing environment more broadly. The modeling approach described herein is flexible and would be transferable to other systems and metrics. For example, variance partitioning could be used to examine responses to alternative management regimes, to compare variability across physiographic regions, and to describe differences among climate zones. Understanding how individual variance components respond to perturbation may yield finer-scale insights into ecological shifts than focusing on patterns in the mean responses or total variability alone.

  12. Computation of Clebsch-Gordan and Gaunt coefficients using binomial coefficients

    International Nuclear Information System (INIS)

    Guseinov, I.I.; Oezmen, A.; Atav, Ue

    1995-01-01

    Using binomial coefficients, the Clebsch-Gordan and Gaunt coefficients were calculated for extremely large quantum numbers. The main advantage of this approach is that these coefficients are calculated directly, instead of using recursion relations. Accuracy of the results is quite high for quantum numbers l₁ and l₂ up to 100. Despite direct calculation, the CPU times are found comparable with those given in the related literature. 11 refs., 1 fig., 2 tabs
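
    The binomial-coefficient algorithm of the paper is not reproduced here; as a readily available point of comparison, SymPy can evaluate Clebsch-Gordan and Gaunt coefficients symbolically (this assumes SymPy is installed; the quantum numbers below are arbitrary examples, not values from the paper).

```python
from sympy import S
from sympy.physics.quantum.cg import CG
from sympy.physics.wigner import gaunt

# Clebsch-Gordan coefficient <j1 m1; j2 m2 | j3 m3>
cg = CG(S(3) / 2, S(1) / 2, S(1) / 2, S(1) / 2, 2, 1)
print(cg.doit())                 # exact symbolic value

# Gaunt coefficient: integral of a product of three spherical harmonics
print(gaunt(10, 12, 20, 2, -3, 1))
```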

  13. A mean–variance objective for robust production optimization in uncertain geological scenarios

    DEFF Research Database (Denmark)

    Capolei, Andrea; Suwartadi, Eka; Foss, Bjarne

    2014-01-01

    directly. In the mean–variance bi-criterion objective function risk appears directly, it also considers an ensemble of reservoir models, and has robust optimization as a special extreme case. The mean–variance objective is common for portfolio optimization problems in finance. The Markowitz portfolio...... optimization problem is the original and simplest example of a mean–variance criterion for mitigating risk. Risk is mitigated in oil production by including both the expected NPV (mean of NPV) and the risk (variance of NPV) for the ensemble of possible reservoir models. With the inclusion of the risk...
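
    A minimal sketch of the bi-criterion idea, assuming the ensemble of reservoir models has already been evaluated to a set of NPVs for a given control strategy; the risk-aversion weight is a hypothetical tuning parameter, not a value from the paper.

```python
import numpy as np

def mean_variance_objective(npv_ensemble, risk_aversion=0.5):
    """Bi-criterion objective J = E[NPV] - lambda * Var[NPV] over an ensemble of
    reservoir realizations; risk_aversion (lambda) is a hypothetical tuning weight."""
    npv = np.asarray(npv_ensemble, dtype=float)
    return npv.mean() - risk_aversion * npv.var(ddof=1)

# Hypothetical NPVs (e.g. in $MM) of one control strategy on five reservoir models
print(mean_variance_objective([310.0, 295.0, 330.0, 280.0, 305.0]))
```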

  14. A landslide susceptibility map of Africa

    Science.gov (United States)

    Broeckx, Jente; Vanmaercke, Matthias; Duchateau, Rica; Poesen, Jean

    2017-04-01

    preparatory factor for landslides. This finding concurs with several other recent studies. Rainfall explains a significant, but limited part of the observed landslide pattern and becomes insignificant when also rockfalls are considered. This may be explained by the fact that a significant fraction of the mapped rockfalls occurred in the Sahara desert. Overall, both maps perform well in predicting intra-continental patterns of mass movements in Africa and explain about 80% of the observed variance in landslide occurrence. As a result, these maps may be a valuable tool for planning and risk reduction strategies.

  15. Asymptotic variance of grey-scale surface area estimators

    DEFF Research Database (Denmark)

    Svane, Anne Marie

    Grey-scale local algorithms have been suggested as a fast way of estimating surface area from grey-scale digital images. Their asymptotic mean has already been described. In this paper, the asymptotic behaviour of the variance is studied in isotropic and sufficiently smooth settings, resulting...... in a general asymptotic bound. For compact convex sets with nowhere vanishing Gaussian curvature, the asymptotics can be described more explicitly. As in the case of volume estimators, the variance is decomposed into a lattice sum and an oscillating term of at most the same magnitude....

  16. Prediction-error variance in Bayesian model updating: a comparative study

    Science.gov (United States)

    Asadollahi, Parisa; Li, Jian; Huang, Yong

    2017-04-01

    In Bayesian model updating, the likelihood function is commonly formulated by stochastic embedding, in which the maximum information entropy probability model of the prediction error variances plays an important role; it is a Gaussian distribution subject to the first two moments as constraints. The selection of prediction error variances can be formulated as a model class selection problem, which automatically involves a trade-off between the average data-fit of the model class and the information it extracts from the data. Therefore, it is critical for the robustness of the updating of the structural model, especially in the presence of modeling errors. To date, three ways of considering prediction error variances have been seen in the literature: 1) setting constant values empirically, 2) estimating them based on the goodness-of-fit of the measured data, and 3) updating them as uncertain parameters by applying Bayes' Theorem at the model class level. In this paper, the effect of different strategies to deal with the prediction error variances on the model updating performance is investigated explicitly. A six-story shear building model with six uncertain stiffness parameters is employed as an illustrative example. Transitional Markov Chain Monte Carlo is used to draw samples of the posterior probability density function of the structure model parameters as well as the uncertain prediction variances. The different levels of modeling uncertainty and complexity are modeled through three FE models, including a true model, a model with more complexity, and a model with modeling error. Bayesian updating is performed for the three FE models considering the three aforementioned treatments of the prediction error variances. The effect of the number of measurements on the model updating performance is also examined in the study. The results are compared based on model class assessment and indicate that updating the prediction error variances as uncertain parameters at the model

  17. A characterization of optimal portfolios under the tail mean-variance criterion

    OpenAIRE

    Owadally, I.; Landsman, Z.

    2013-01-01

    The tail mean–variance model was recently introduced for use in risk management and portfolio choice; it involves a criterion that focuses on the risk of rare but large losses, which is particularly important when losses have heavy-tailed distributions. If returns or losses follow a multivariate elliptical distribution, the use of risk measures that satisfy certain well-known properties is equivalent to risk management in the classical mean–variance framework. The tail mean–variance criterion...

  18. Gender variance in childhood and sexual orientation in adulthood: a prospective study.

    Science.gov (United States)

    Steensma, Thomas D; van der Ende, Jan; Verhulst, Frank C; Cohen-Kettenis, Peggy T

    2013-11-01

    Several retrospective and prospective studies have reported on the association between childhood gender variance and sexual orientation and gender discomfort in adulthood. In most of the retrospective studies, samples were drawn from the general population. The samples in the prospective studies consisted of clinically referred children. To understand the extent to which the association applies to the general population, prospective studies using random samples are needed. This prospective study examined the association between childhood gender variance and sexual orientation and gender discomfort in adulthood in the general population. In 1983, we measured childhood gender variance in 406 boys and 473 girls. In 2007, sexual orientation and gender discomfort were assessed. Childhood gender variance was measured with two items from the Child Behavior Checklist/4-18. Sexual orientation was measured for four parameters of sexual orientation (attraction, fantasy, behavior, and identity). Gender discomfort was assessed by four questions (unhappiness and/or uncertainty about one's gender, wish or desire to be of the other gender, and consideration of living in the role of the other gender). For both men and women, the presence of childhood gender variance was associated with homosexuality for all four parameters of sexual orientation, but not with bisexuality. The report of adulthood homosexuality was 8 to 15 times higher for participants with a history of gender variance (10.2% to 12.2%), compared to participants without a history of gender variance (1.2% to 1.7%). The presence of childhood gender variance was not significantly associated with gender discomfort in adulthood. This study clearly showed a significant association between childhood gender variance and a homosexual sexual orientation in adulthood in the general population. In contrast to the findings in clinically referred gender-variant children, the presence of a homosexual sexual orientation in

  19. 29 CFR 1926.2 - Variances from safety and health standards.

    Science.gov (United States)

    2010-07-01

    Section 1926.2 (Labor Regulations Relating to Labor, Continued; Occupational Safety and Health Administration) covers variances from safety and health standards. Paragraph (a) addresses variances from standards which are, or may be, published in this...

  20. Allowing variance may enlarge the safe operating space for exploited ecosystems.

    Science.gov (United States)

    Carpenter, Stephen R; Brock, William A; Folke, Carl; van Nes, Egbert H; Scheffer, Marten

    2015-11-17

    Variable flows of food, water, or other ecosystem services complicate planning. Management strategies that decrease variability and increase predictability may therefore be preferred. However, actions to decrease variance over short timescales (2-4 y), when applied continuously, may lead to long-term ecosystem changes with adverse consequences. We investigated the effects of managing short-term variance in three well-understood models of ecosystem services: lake eutrophication, harvest of a wild population, and yield of domestic herbivores on a rangeland. In all cases, actions to decrease variance can increase the risk of crossing critical ecosystem thresholds, resulting in less desirable ecosystem states. Managing to decrease short-term variance creates ecosystem fragility by changing the boundaries of safe operating spaces, suppressing information needed for adaptive management, cancelling signals of declining resilience, and removing pressures that may build tolerance of stress. Thus, the management of variance interacts strongly and inseparably with the management of resilience. By allowing for variation, learning, and flexibility while observing change, managers can detect opportunities and problems as they develop while sustaining the capacity to deal with them.

  1. Temperature variance study in Monte-Carlo photon transport theory

    International Nuclear Information System (INIS)

    Giorla, J.

    1985-10-01

    We study different Monte-Carlo methods for solving radiative transfer problems, and particularly Fleck's Monte-Carlo method. We first give the different time-discretization schemes and the corresponding stability criteria. Then we write the temperature variance as a function of the variances of temperature and absorbed energy at the previous time step. Finally we obtain some stability criteria for the Monte-Carlo method in the stationary case [fr

  2. Study of the variance of a Monte Carlo calculation. Application to weighting; Etude de la variance d'un calcul de Monte Carlo. Application a la ponderation

    Energy Technology Data Exchange (ETDEWEB)

    Lanore, Jeanne-Marie [Commissariat a l' Energie Atomique - CEA, Centre d' Etudes Nucleaires de Fontenay-aux-Roses, Direction des Piles Atomiques, Departement des Etudes de Piles, Service d' Etudes de Protections de Piles (France)

    1969-04-15

    One of the main difficulties in Monte Carlo computations is the estimation of the results variance. Generally, only an apparent variance can be observed over a few calculations, often very different from the actual variance. By studying a large number of short calculations, the authors have tried to evaluate the real variance, and then to apply the obtained results to the optimization of the computations. The program used is the Poker one-dimensional Monte Carlo program. Calculations are performed in two types of fictitious environments: a body with constant cross section, without absorption, where all shocks are elastic and isotropic; a body with variable cross section (presenting a very pronounced peak and hole), with an anisotropy for high energy elastic shocks, and with the possibility of inelastic shocks (this body presents all the features that can appear in a real case)
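
    The idea of comparing the apparent variance reported by a single run with the real variance observed across many short calculations can be illustrated on a toy estimator; the integrand, run length and number of runs below are hypothetical stand-ins, not the Poker program.

```python
import numpy as np

rng = np.random.default_rng(1)

def short_run(n_histories):
    """One short Monte Carlo calculation of a toy quantity E[exp(-X)], X ~ Exp(1)."""
    x = rng.exponential(scale=1.0, size=n_histories)
    scores = np.exp(-x)                               # per-history score
    mean = scores.mean()
    apparent_var = scores.var(ddof=1) / n_histories   # variance reported by one run
    return mean, apparent_var

results = np.array([short_run(200) for _ in range(500)])
means, apparent = results[:, 0], results[:, 1]

print("apparent variance (average over runs):", apparent.mean())
print("real variance (spread of run means):  ", means.var(ddof=1))
```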

  3. Adjustment of heterogeneous variances and a calving year effect in ...

    African Journals Online (AJOL)

    Data at the beginning and at the end of the lactation period have higher variances than tests in the middle of the lactation. Furthermore, first lactations have lower means and variances compared to second and third lactations. This is a deviation from the basic assumptions required for the application of repeatability models.

  4. Estimating Predictive Variance for Statistical Gas Distribution Modelling

    International Nuclear Information System (INIS)

    Lilienthal, Achim J.; Asadi, Sahar; Reggente, Matteo

    2009-01-01

    Recent publications in statistical gas distribution modelling have proposed algorithms that model both the mean and the variance of a distribution. This paper argues that estimating the predictive concentration variance is not merely a gradual improvement but a significant step for advancing the field. First, such models much better fit the particular structure of gas distributions, which exhibit strong fluctuations with considerable spatial variation as a result of the intermittent character of gas dispersal. Second, estimating the predictive variance allows the model quality to be evaluated in terms of the data likelihood. This offers a solution to the problem of ground-truth evaluation, which has always been a critical issue for gas distribution modelling. It also enables solid comparisons of different modelling approaches, and provides the means to learn meta-parameters of the model, to determine when the model should be updated or re-initialised, or to suggest new measurement locations based on the current model. We also point out directions of related ongoing or potential future research work.
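
    The kernel-based gas distribution models discussed in the paper are not reimplemented here; a Gaussian process regressor serves as a generic stand-in for any model that returns both a predictive mean and a predictive variance, on synthetic one-dimensional data.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(2)

# Synthetic "gas concentration" measurements along one spatial axis (hypothetical)
X = rng.uniform(0.0, 10.0, size=(60, 1))
y = np.exp(-0.5 * (X[:, 0] - 4.0) ** 2) + 0.1 * rng.standard_normal(60)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(0.01),
                              normalize_y=True).fit(X, y)

X_test = np.linspace(0.0, 10.0, 5).reshape(-1, 1)
mean, std = gp.predict(X_test, return_std=True)

# The predictive variance is what makes likelihood-based evaluation of held-out data possible.
print(np.c_[X_test, mean, std ** 2])
```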

  5. Estimating integrated variance in the presence of microstructure noise using linear regression

    Science.gov (United States)

    Holý, Vladimír

    2017-07-01

    Using financial high-frequency data to estimate the integrated variance of asset prices is beneficial, but with an increasing number of observations, so-called microstructure noise occurs. This noise can significantly bias the realized variance estimator. We propose a method for estimating the integrated variance that is robust to microstructure noise, as well as a test for the presence of the noise. Our method utilizes linear regression in which realized variances estimated from different data subsamples act as the dependent variable while the number of observations acts as the explanatory variable. We compare the proposed estimator with other methods on simulated data for several microstructure noise structures.
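
    A sketch of the regression idea under the common assumption of i.i.d. additive noise, where the expected realized variance grows linearly with the number of returns: the regression intercept then estimates the integrated variance and the slope reflects the noise variance. All simulation settings are hypothetical, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(3)

# One day of an efficient log-price plus i.i.d. microstructure noise (hypothetical 1 s grid)
n = 23400
sigma2_true = 1e-4                       # integrated variance of the day
noise_sd = 5e-4
efficient = np.cumsum(rng.normal(0.0, np.sqrt(sigma2_true / n), size=n))
observed = efficient + rng.normal(0.0, noise_sd, size=n)

# Realized variance on sparser and sparser subsamples
ks = [1, 2, 5, 10, 30, 60, 120, 300]
counts, rvs = [], []
for k in ks:
    sub = observed[::k]
    rvs.append(np.sum(np.diff(sub) ** 2))
    counts.append(len(sub) - 1)

# Under i.i.d. noise, E[RV_m] ~ IV + 2 * m * noise_var: regress RV on the number of returns m
slope, intercept = np.polyfit(counts, rvs, 1)
print("estimated integrated variance:", intercept)
print("implied noise variance:       ", slope / 2, "(true:", noise_sd ** 2, ")")
```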

  6. Noise variance analysis using a flat panel x-ray detector: A method for additive noise assessment with application to breast CT applications

    Energy Technology Data Exchange (ETDEWEB)

    Yang Kai; Huang, Shih-Ying; Packard, Nathan J.; Boone, John M. [Department of Radiology, University of California, Davis Medical Center, 4860 Y Street, Suite 3100 Ellison Building, Sacramento, California 95817 (United States); Department of Radiology, University of California, Davis Medical Center, 4860 Y Street, Suite 3100 Ellison Building, Sacramento, California 95817 (United States) and Department of Biomedical Engineering, University of California, Davis, Davis, California, 95616 (United States)

    2010-07-15

    Purpose: A simplified linear model approach was proposed to accurately model the response of a flat panel detector used for breast CT (bCT). Methods: Individual detector pixel mean and variance were measured from bCT projection images acquired both in air and with a polyethylene cylinder, with the detector operating in both fixed low gain and dynamic gain mode. Once the coefficients of the linear model are determined, the fractional additive noise can be used as a quantitative metric to evaluate the system's efficiency in utilizing x-ray photons, including the performance of different gain modes of the detector. Results: Fractional additive noise increases as the object thickness increases or as the radiation dose to the detector decreases. For bCT scan techniques on the UC Davis prototype scanner (80 kVp, 500 views total, 30 frames/s), in the low gain mode, additive noise contributes 21% of the total pixel noise variance for a 10 cm object and 44% for a 17 cm object. With the dynamic gain mode, additive noise only represents approximately 2.6% of the total pixel noise variance for a 10 cm object and 7.3% for a 17 cm object. Conclusions: The existence of the signal-independent additive noise is the primary cause for a quadratic relationship between bCT noise variance and the inverse of radiation dose at the detector. With the knowledge of the additive noise contribution to experimentally acquired images, system modifications can be made to reduce the impact of additive noise and improve the quantum noise efficiency of the bCT system.
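
    A minimal sketch of the general idea of separating signal-dependent (quantum) and signal-independent (additive) variance with a linear mean-variance fit; the numbers are synthetic and the model is a simplification, not the authors' calibration procedure.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic detector calibration: pixel variance = gain * mean_signal + additive_variance
g_true, additive_true = 0.8, 40.0
mean_signal = np.linspace(50, 2000, 25)                    # hypothetical detector units
variance = g_true * mean_signal + additive_true + rng.normal(0, 5, mean_signal.size)

# Fit the linear mean-variance model and report the fractional additive noise
g_fit, additive_fit = np.polyfit(mean_signal, variance, 1)
fraction_additive = additive_fit / (g_fit * mean_signal + additive_fit)
print("gain ~", g_fit, " additive variance ~", additive_fit)
print("fractional additive noise at low / high signal:",
      fraction_additive[0], fraction_additive[-1])
```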

  7. Individual and collective bodies: using measures of variance and association in contextual epidemiology.

    Science.gov (United States)

    Merlo, J; Ohlsson, H; Lynch, K F; Chaix, B; Subramanian, S V

    2009-12-01

    Social epidemiology investigates both individuals and their collectives. Although the limits that define the individual bodies are very apparent, the collective body's geographical or cultural limits (eg "neighbourhood") are more difficult to discern. Also, epidemiologists normally investigate causation as changes in group means. However, many variables of interest in epidemiology may cause a change in the variance of the distribution of the dependent variable. In spite of that, variance is normally considered a measure of uncertainty or a nuisance rather than a source of substantive information. This reasoning is also true in many multilevel investigations, whereas understanding the distribution of variance across levels should be fundamental. This means-centric reductionism is mostly concerned with risk factors and creates a paradoxical situation, as social medicine is not only interested in increasing the (mean) health of the population, but also in understanding and decreasing inappropriate health and health care inequalities (variance). Critical essay and literature review. The present study promotes (a) the application of measures of variance and clustering to evaluate the boundaries one uses in defining collective levels of analysis (eg neighbourhoods), (b) the combined use of measures of variance and means-centric measures of association, and (c) the investigation of causes of health variation (variance-altering causation). Both measures of variance and means-centric measures of association need to be included when performing contextual analyses. The variance approach, a new aspect of contextual analysis that cannot be interpreted in means-centric terms, allows perspectives to be expanded.

  8. Genetic heterogeneity of within-family variance of body weight in Atlantic salmon (Salmo salar).

    Science.gov (United States)

    Sonesson, Anna K; Odegård, Jørgen; Rönnegård, Lars

    2013-10-17

    Canalization is defined as the stability of a genotype against minor variations in both environment and genetics. Genetic variation in degree of canalization causes heterogeneity of within-family variance. The aims of this study are twofold: (1) quantify genetic heterogeneity of (within-family) residual variance in Atlantic salmon and (2) test whether the observed heterogeneity of (within-family) residual variance can be explained by simple scaling effects. Analysis of body weight in Atlantic salmon using a double hierarchical generalized linear model (DHGLM) revealed substantial heterogeneity of within-family variance. The 95% prediction interval for within-family variance ranged from ~0.4 to 1.2 kg², implying that the within-family variance of the most extreme high families is expected to be approximately three times larger than the extreme low families. For cross-sectional data, DHGLM with an animal mean sub-model resulted in severe bias, while a corresponding sire-dam model was appropriate. Heterogeneity of variance was not sensitive to Box-Cox transformations of phenotypes, which implies that heterogeneity of variance exists beyond what would be expected from simple scaling effects. Substantial heterogeneity of within-family variance was found for body weight in Atlantic salmon. A tendency towards higher variance with higher means (scaling effects) was observed, but heterogeneity of within-family variance existed beyond what could be explained by simple scaling effects. For cross-sectional data, using the animal mean sub-model in the DHGLM resulted in biased estimates of variance components, which differed substantially both from a standard linear mean animal model and a sire-dam DHGLM model. Although genetic differences in canalization were observed, selection for increased canalization is difficult, because there is limited individual information for the variance sub-model, especially when based on cross-sectional data. Furthermore, potential macro

  9. The derivative based variance sensitivity analysis for the distribution parameters and its computation

    International Nuclear Information System (INIS)

    Wang, Pan; Lu, Zhenzhou; Ren, Bo; Cheng, Lei

    2013-01-01

    The output variance is an important measure of the performance of a structural system, and it is always influenced by the distribution parameters of the inputs. In order to identify the influential distribution parameters and make clear how those distribution parameters influence the output variance, this work presents a derivative-based variance sensitivity decomposition according to Sobol′s variance decomposition, and proposes derivative-based main and total sensitivity indices. By transforming the derivatives of the various-order variance contributions into expectations via a kernel function, the proposed main and total sensitivity indices can be seen as a "by-product" of Sobol′s variance-based sensitivity analysis without any additional output evaluations. Since Sobol′s variance-based sensitivity indices can be computed efficiently by the sparse grid integration method, this work also employs the sparse grid integration method to compute the derivative-based main and total sensitivity indices. Several examples are used to demonstrate the rationality of the proposed sensitivity indices and the accuracy of the applied method
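
    The derivative-based indices proposed in the paper are not reimplemented here; for orientation, the underlying Sobol' main and total variance-based indices can be estimated with a plain Monte Carlo pick-freeze scheme (hypothetical test function; Saltelli and Jansen estimators).

```python
import numpy as np

rng = np.random.default_rng(5)

def model(x):
    # Hypothetical test function; columns are x1, x2, x3 on [0, 1]
    return x[:, 0] + 2.0 * x[:, 1] ** 2 + 0.1 * x[:, 2]

n, d = 100_000, 3
A, B = rng.random((n, d)), rng.random((n, d))
yA, yB = model(A), model(B)
var_y = np.var(np.concatenate([yA, yB]), ddof=1)

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                                 # "pick-freeze" column i
    yABi = model(ABi)
    S_i  = np.mean(yB * (yABi - yA)) / var_y            # first-order index (Saltelli 2010)
    ST_i = 0.5 * np.mean((yA - yABi) ** 2) / var_y      # total-effect index (Jansen 1999)
    print(f"x{i + 1}: S = {S_i:.3f}   ST = {ST_i:.3f}")
```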

  10. A Mean-Variance Criterion for Economic Model Predictive Control of Stochastic Linear Systems

    DEFF Research Database (Denmark)

    Sokoler, Leo Emil; Dammann, Bernd; Madsen, Henrik

    2014-01-01

    , the tractability of the resulting optimal control problem is addressed. We use a power management case study to compare different variations of the mean-variance strategy with EMPC based on the certainty equivalence principle. The certainty equivalence strategy is much more computationally efficient than the mean......-variance strategies, but it does not account for the variance of the uncertain parameters. Openloop simulations suggest that a single-stage mean-variance approach yields a significantly lower operating cost than the certainty equivalence strategy. In closed-loop, the single-stage formulation is overly conservative...... be modified to perform almost as well as the two-stage mean-variance formulation. Nevertheless, we argue that the mean-variance approach can be used both as a strategy for evaluating less computational demanding methods such as the certainty equivalence method, and as an individual control strategy when...

  11. Investigating the minimum achievable variance in a Monte Carlo criticality calculation

    Energy Technology Data Exchange (ETDEWEB)

    Christoforou, Stavros; Eduard Hoogenboom, J. [Delft University of Technology, Mekelweg 15, 2629 JB Delft (Netherlands)

    2008-07-01

    The sources of variance in a Monte Carlo criticality calculation are identified and their contributions analyzed. A zero-variance configuration is initially simulated using analytically calculated adjoint functions for biasing. From there, the various sources are analyzed. It is shown that the minimum threshold comes from the fact that the fission source is approximated. In addition, the merits of a simple variance reduction method, such as implicit capture, are shown when compared to an analog simulation. Finally, it is shown that when non-exact adjoint functions are used for biasing, the variance reduction is rather insensitive to the quality of the adjoints, suggesting that the generation of the adjoints should have as low a CPU cost as possible, in order to offset the CPU cost of implementing the biasing in a simulation. (authors)
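
    Implicit capture, mentioned above as a simple variance reduction method, can be illustrated on a toy one-dimensional slab transmission problem; the cross sections, slab thickness and isotropic-scattering physics below are hypothetical simplifications, not the authors' configuration.

```python
import numpy as np

rng = np.random.default_rng(6)
sigma_t, sigma_a, thickness = 1.0, 0.3, 5.0   # hypothetical slab data (lengths in mean free paths)
n_hist = 20_000

def transmission(implicit_capture):
    """Estimate slab transmission; weights replace absorption when implicit_capture=True."""
    scores = np.zeros(n_hist)
    for i in range(n_hist):
        x, mu, w = 0.0, 1.0, 1.0
        while True:
            x += mu * rng.exponential(1.0 / sigma_t)
            if x >= thickness:
                scores[i] = w                      # leakage through the far face: tally the weight
                break
            if x < 0.0:
                break                              # leakage through the near face
            if implicit_capture:
                w *= 1.0 - sigma_a / sigma_t       # survival biasing: reduce weight, keep history
            elif rng.random() < sigma_a / sigma_t:
                break                              # analog absorption: kill the history
            mu = rng.uniform(-1.0, 1.0)            # isotropic scattering in slab geometry
    return scores.mean(), scores.var(ddof=1) / n_hist

for label, flag in [("analog", False), ("implicit capture", True)]:
    mean, var_of_mean = transmission(flag)
    print(f"{label:16s} transmission {mean:.4f}   variance of the mean {var_of_mean:.2e}")
```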

  12. Automatic Bayes Factors for Testing Equality- and Inequality-Constrained Hypotheses on Variances.

    Science.gov (United States)

    Böing-Messing, Florian; Mulder, Joris

    2018-05-03

    In comparing characteristics of independent populations, researchers frequently expect a certain structure of the population variances. These expectations can be formulated as hypotheses with equality and/or inequality constraints on the variances. In this article, we consider the Bayes factor for testing such (in)equality-constrained hypotheses on variances. Application of Bayes factors requires specification of a prior under every hypothesis to be tested. However, specifying subjective priors for variances based on prior information is a difficult task. We therefore consider so-called automatic or default Bayes factors. These methods avoid the need for the user to specify priors by using information from the sample data. We present three automatic Bayes factors for testing variances. The first is a Bayes factor with equal priors on all variances, where the priors are specified automatically using a small share of the information in the sample data. The second is the fractional Bayes factor, where a fraction of the likelihood is used for automatic prior specification. The third is an adjustment of the fractional Bayes factor such that the parsimony of inequality-constrained hypotheses is properly taken into account. The Bayes factors are evaluated by investigating different properties such as information consistency and large sample consistency. Based on this evaluation, it is concluded that the adjusted fractional Bayes factor is generally recommendable for testing equality- and inequality-constrained hypotheses on variances.

  13. A novel coefficient for detecting and quantifying asymmetry of California electricity market based on asymmetric detrended cross-correlation analysis.

    Science.gov (United States)

    Wang, Fang

    2016-06-01

    In order to detect and quantify asymmetry of two time series, a novel cross-correlation coefficient is proposed based on recent asymmetric detrended cross-correlation analysis (A-DXA), which we called A-DXA coefficient. The A-DXA coefficient, as an important extension of DXA coefficient ρDXA, contains two directional asymmetric cross-correlated indexes, describing upwards and downwards asymmetric cross-correlations, respectively. By using the information of directional covariance function of two time series and directional variance function of each series itself instead of power-law between the covariance function and time scale, the proposed A-DXA coefficient can well detect asymmetry between the two series no matter whether the cross-correlation is significant or not. By means of the proposed A-DXA coefficient conducted over the asymmetry for California electricity market, we found that the asymmetry between the prices and loads is not significant for daily average data in 1999 yr market (before electricity crisis) but extremely significant for those in 2000 yr market (during the crisis). To further uncover the difference of asymmetry between the years 1999 and 2000, a modified H statistic (MH) and ΔMH statistic are proposed. One of the present contributions is that the high MH values calculated for hourly data exist in majority months in 2000 market. Another important conclusion is that the cross-correlation with downwards dominates over the whole 1999 yr in contrast to the cross-correlation with upwards dominates over the 2000 yr.

  14. A novel coefficient for detecting and quantifying asymmetry of California electricity market based on asymmetric detrended cross-correlation analysis

    Science.gov (United States)

    Wang, Fang

    2016-06-01

    In order to detect and quantify asymmetry of two time series, a novel cross-correlation coefficient is proposed based on recent asymmetric detrended cross-correlation analysis (A-DXA), which we called A-DXA coefficient. The A-DXA coefficient, as an important extension of DXA coefficient ρDXA, contains two directional asymmetric cross-correlated indexes, describing upwards and downwards asymmetric cross-correlations, respectively. By using the information of directional covariance function of two time series and directional variance function of each series itself instead of power-law between the covariance function and time scale, the proposed A-DXA coefficient can well detect asymmetry between the two series no matter whether the cross-correlation is significant or not. By means of the proposed A-DXA coefficient conducted over the asymmetry for California electricity market, we found that the asymmetry between the prices and loads is not significant for daily average data in 1999 yr market (before electricity crisis) but extremely significant for those in 2000 yr market (during the crisis). To further uncover the difference of asymmetry between the years 1999 and 2000, a modified H statistic (MH) and ΔMH statistic are proposed. One of the present contributions is that the high MH values calculated for hourly data exist in majority months in 2000 market. Another important conclusion is that the cross-correlation with downwards dominates over the whole 1999 yr in contrast to the cross-correlation with upwards dominates over the 2000 yr.

  15. UV spectral fingerprinting and analysis of variance-principal component analysis: a useful tool for characterizing sources of variance in plant materials.

    Science.gov (United States)

    Luthria, Devanand L; Mukhopadhyay, Sudarsan; Robbins, Rebecca J; Finley, John W; Banuelos, Gary S; Harnly, James M

    2008-07-23

    UV spectral fingerprints, in combination with analysis of variance-principal components analysis (ANOVA-PCA), can differentiate between cultivars and growing conditions (or treatments) and can be used to identify sources of variance. Broccoli samples, composed of two cultivars, were grown under seven different conditions or treatments (four levels of Se-enriched irrigation waters, organic farming, and conventional farming with 100 and 80% irrigation based on crop evaporation and transpiration rate). Freeze-dried powdered samples were extracted with methanol-water (60:40, v/v) and analyzed with no prior separation. Spectral fingerprints were acquired for the UV region (220-380 nm) using a 50-fold dilution of the extract. ANOVA-PCA was used to construct subset matrices that permitted easy verification of the hypothesis that cultivar and treatment contributed to a difference in the chemical expression of the broccoli. The sums of the squares of the same matrices were used to show that cultivar, treatment, and analytical repeatability contributed 30.5, 68.3, and 1.2% of the variance, respectively.

  16. Levine's guide to SPSS for analysis of variance

    CERN Document Server

    Braver, Sanford L; Page, Melanie

    2003-01-01

    A greatly expanded and heavily revised second edition, this popular guide provides instructions and clear examples for running analyses of variance (ANOVA) and several other related statistical tests of significance with SPSS. No other guide offers the program statements required for the more advanced tests in analysis of variance. All of the programs in the book can be run using any version of SPSS, including versions 11 and 11.5. A table at the end of the preface indicates where each type of analysis (e.g., simple comparisons) can be found for each type of design (e.g., mixed two-factor desi

  17. Correlation of spatial climate/weather maps and the advantages of using the Mahalanobis metric in predictions

    OpenAIRE

    Stephenson, D. B.

    2011-01-01

    The skill in predicting spatially varying weather/climate maps depends on the definition of the measure of similarity between the maps. Under the justifiable approximation that the anomaly maps are distributed multinormally, it is shown analytically that the choice of weighting metric, used in defining the anomaly correlation between spatial maps, can change the resulting probability distribution of the correlation coefficient. The estimate of the numbers of degrees of freedom based on the var...
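
    A sketch of a metric-weighted anomaly correlation between two spatial maps, with the Mahalanobis case obtained by taking the weighting matrix to be the inverse of a sample covariance estimate; all fields below are synthetic and the dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)

def weighted_anomaly_correlation(x, y, W):
    """Pattern correlation <x, y>_W / sqrt(<x, x>_W <y, y>_W) under metric W."""
    return (x @ W @ y) / np.sqrt((x @ W @ x) * (y @ W @ y))

# Hypothetical anomaly maps flattened to vectors of length p, plus a climatological ensemble
p = 40
climatology = rng.standard_normal((500, p)) @ rng.standard_normal((p, p)) * 0.1
cov = np.cov(climatology, rowvar=False) + 1e-6 * np.eye(p)

forecast, verification = climatology[0], climatology[1]

identity_r    = weighted_anomaly_correlation(forecast, verification, np.eye(p))
mahalanobis_r = weighted_anomaly_correlation(forecast, verification, np.linalg.inv(cov))
print("Euclidean metric:", identity_r, " Mahalanobis metric:", mahalanobis_r)
```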

  18. An efficient sampling approach for variance-based sensitivity analysis based on the law of total variance in the successive intervals without overlapping

    Science.gov (United States)

    Yun, Wanying; Lu, Zhenzhou; Jiang, Xian

    2018-06-01

    To efficiently execute variance-based global sensitivity analysis, the law of total variance over successive non-overlapping intervals is first proved, and an efficient space-partition sampling-based approach is subsequently proposed on this basis. By partitioning the sample points of the output into different subsets according to the different inputs, the proposed approach can efficiently evaluate all the main effects concurrently from one group of sample points. In addition, there is no need to optimize the partition scheme in the proposed approach. The maximum length of the subintervals is decreased by increasing the number of sample points of the model input variables, which ensures that the convergence condition of the space-partition approach is satisfied. Furthermore, a new interpretation of the partition idea is given from the perspective of the variance ratio function. Finally, three test examples and one engineering application are employed to demonstrate the accuracy, efficiency and robustness of the proposed approach.

  19. Overcoming multicollinearity in multiple regression using correlation coefficient

    Science.gov (United States)

    Zainodin, H. J.; Yap, S. J.

    2013-09-01

    Multicollinearity happens when there are high correlations among the independent variables. In this case, it is difficult to distinguish the contributions of these independent variables to the dependent variable, as they may compete to explain much of the same variance. The problem of multicollinearity also violates an assumption of multiple regression: that there is no collinearity among the candidate independent variables. Thus, an alternative approach is introduced to overcome the multicollinearity problem and eventually achieve a well-represented model. This approach is accomplished by removing the multicollinearity-source variables on the basis of the correlation coefficient values from the full correlation matrix. Using the full correlation matrix facilitates the implementation of an Excel function for removing the multicollinearity-source variables. This procedure is found to be easier and time-saving, especially when dealing with a greater number of independent variables in a model and a large number of all possible models. Hence, in this paper a detailed insight into the procedure is shown, compared and implemented.
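
    A minimal sketch of the removal step using a full correlation matrix, here with pandas rather than an Excel function; the 0.95 threshold and the data are hypothetical.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(8)

# Hypothetical predictors: x3 is almost a copy of x1, creating multicollinearity
n = 200
x1 = rng.standard_normal(n)
x2 = rng.standard_normal(n)
x3 = x1 + 0.05 * rng.standard_normal(n)
X = pd.DataFrame({"x1": x1, "x2": x2, "x3": x3})

corr = X.corr().abs()                                          # full correlation matrix
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
to_drop = [col for col in upper.columns if (upper[col] > 0.95).any()]

print("multicollinearity-source variables dropped:", to_drop)
X_reduced = X.drop(columns=to_drop)
```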

  20. Simultaneous epicardial and noncontact endocardial mapping of the canine right atrium: simulation and experiment.

    Science.gov (United States)

    Sabouri, Sepideh; Matene, Elhacene; Vinet, Alain; Richer, Louis-Philippe; Cardinal, René; Armour, J Andrew; Pagé, Pierre; Kus, Teresa; Jacquemet, Vincent

    2014-01-01

    Epicardial high-density electrical mapping is a well-established experimental instrument to monitor in vivo the activity of the atria in response to modulations of the autonomic nervous system in sinus rhythm. In regions that are not accessible by epicardial mapping, noncontact endocardial mapping performed through a balloon catheter may provide a more comprehensive description of atrial activity. We developed a computer model of the canine right atrium to compare epicardial and noncontact endocardial mapping. The model was derived from an experiment in which electroanatomical reconstruction, epicardial mapping (103 electrodes), noncontact endocardial mapping (2048 virtual electrodes computed from a 64-channel balloon catheter), and direct-contact endocardial catheter recordings were simultaneously performed in a dog. The recording system was simulated in the computer model. For simulations and experiments (after atrio-ventricular node suppression), activation maps were computed during sinus rhythm. Repolarization was assessed by measuring the area under the atrial T wave (ATa), a marker of repolarization gradients. Results showed epicardial-endocardial correlation coefficients of 0.80 and 0.63 (two dog experiments) and 0.96 (simulation) between activation times, and correlation coefficients of 0.57 and 0.46 (two dog experiments) and 0.92 (simulation) between ATa values. Despite distance (balloon-atrial wall) and dimension reduction (64 electrodes), some information about atrial repolarization remained present in noncontact signals.

  1. Simultaneous epicardial and noncontact endocardial mapping of the canine right atrium: simulation and experiment.

    Directory of Open Access Journals (Sweden)

    Sepideh Sabouri

    Full Text Available Epicardial high-density electrical mapping is a well-established experimental instrument to monitor in vivo the activity of the atria in response to modulations of the autonomic nervous system in sinus rhythm. In regions that are not accessible by epicardial mapping, noncontact endocardial mapping performed through a balloon catheter may provide a more comprehensive description of atrial activity. We developed a computer model of the canine right atrium to compare epicardial and noncontact endocardial mapping. The model was derived from an experiment in which electroanatomical reconstruction, epicardial mapping (103 electrodes), noncontact endocardial mapping (2048 virtual electrodes computed from a 64-channel balloon catheter), and direct-contact endocardial catheter recordings were simultaneously performed in a dog. The recording system was simulated in the computer model. For simulations and experiments (after atrio-ventricular node suppression), activation maps were computed during sinus rhythm. Repolarization was assessed by measuring the area under the atrial T wave (ATa), a marker of repolarization gradients. Results showed epicardial-endocardial correlation coefficients of 0.80 and 0.63 (two dog experiments) and 0.96 (simulation) between activation times, and correlation coefficients of 0.57 and 0.46 (two dog experiments) and 0.92 (simulation) between ATa values. Despite distance (balloon-atrial wall) and dimension reduction (64 electrodes), some information about atrial repolarization remained present in noncontact signals.

  2. Simultaneous Epicardial and Noncontact Endocardial Mapping of the Canine Right Atrium: Simulation and Experiment

    Science.gov (United States)

    Sabouri, Sepideh; Matene, Elhacene; Vinet, Alain; Richer, Louis-Philippe; Cardinal, René; Armour, J. Andrew; Pagé, Pierre; Kus, Teresa; Jacquemet, Vincent

    2014-01-01

    Epicardial high-density electrical mapping is a well-established experimental instrument to monitor in vivo the activity of the atria in response to modulations of the autonomic nervous system in sinus rhythm. In regions that are not accessible by epicardial mapping, noncontact endocardial mapping performed through a balloon catheter may provide a more comprehensive description of atrial activity. We developed a computer model of the canine right atrium to compare epicardial and noncontact endocardial mapping. The model was derived from an experiment in which electroanatomical reconstruction, epicardial mapping (103 electrodes), noncontact endocardial mapping (2048 virtual electrodes computed from a 64-channel balloon catheter), and direct-contact endocardial catheter recordings were simultaneously performed in a dog. The recording system was simulated in the computer model. For simulations and experiments (after atrio-ventricular node suppression), activation maps were computed during sinus rhythm. Repolarization was assessed by measuring the area under the atrial T wave (ATa), a marker of repolarization gradients. Results showed epicardial-endocardial correlation coefficients of 0.80 and 0.63 (two dog experiments) and 0.96 (simulation) between activation times, and correlation coefficients of 0.57 and 0.46 (two dog experiments) and 0.92 (simulation) between ATa values. Despite distance (balloon-atrial wall) and dimension reduction (64 electrodes), some information about atrial repolarization remained present in noncontact signals. PMID:24598778

  3. A load factor based mean-variance analysis for fuel diversification

    Energy Technology Data Exchange (ETDEWEB)

    Gotham, Douglas; Preckel, Paul; Ruangpattana, Suriya [State Utility Forecasting Group, Purdue University, West Lafayette, IN (United States); Muthuraman, Kumar [McCombs School of Business, University of Texas, Austin, TX (United States); Rardin, Ronald [Department of Industrial Engineering, University of Arkansas, Fayetteville, AR (United States)

    2009-03-15

    Fuel diversification implies the selection of a mix of generation technologies for long-term electricity generation. The goal is to strike a good balance between reduced costs and reduced risk. The method of analysis that has been advocated and adopted for such studies is the mean-variance portfolio analysis pioneered by Markowitz (Markowitz, H., 1952. Portfolio selection. Journal of Finance 7(1) 77-91). However, the standard mean-variance methodology does not account for the ability of various fuels/technologies to adapt to varying loads. Such analysis often provides results that are easily dismissed by regulators and practitioners as unacceptable, since load cycles play critical roles in fuel selection. To account for such issues and still retain the convenience and elegance of the mean-variance approach, we propose a variant of the mean-variance analysis using the decomposition of the load into various types and utilizing the load factors of each load type. We also illustrate the approach using data for the state of Indiana and demonstrate the ability of the model to provide useful insights. (author)

  4. Analysis of Gene Expression Variance in Schizophrenia Using Structural Equation Modeling

    Directory of Open Access Journals (Sweden)

    Anna A. Igolkina

    2018-06-01

    Full Text Available Schizophrenia (SCZ) is a psychiatric disorder of unknown etiology. There is evidence suggesting that aberrations in neurodevelopment are a significant attribute of schizophrenia pathogenesis and progression. To identify biologically relevant molecular abnormalities affecting neurodevelopment in SCZ we used cultured neural progenitor cells derived from olfactory neuroepithelium (CNON) cells. Here, we tested the hypothesis that variance in gene expression differs between individuals from SCZ and control groups. In CNON cells, variance in gene expression was significantly higher in SCZ samples in comparison with control samples. Variance in gene expression was enriched in five molecular pathways: serine biosynthesis, PI3K-Akt, MAPK, neurotrophin and focal adhesion. More than 14% of variance in disease status was explained within the logistic regression model (C-value = 0.70) by predictors accounting for gene expression in 69 genes from these five pathways. Structural equation modeling (SEM) was applied to explore how the structure of these five pathways was altered between SCZ patients and controls. Four out of five pathways showed differences in the estimated relationships among genes: between KRAS and NF1, and KRAS and SOS1 in the MAPK pathway; between PSPH and SHMT2 in serine biosynthesis; between AKT3 and TSC2 in the PI3K-Akt signaling pathway; and between CRK and RAPGEF1 in the focal adhesion pathway. Our analysis provides evidence that variance in gene expression is an important characteristic of SCZ, and SEM is a promising method for uncovering altered relationships between specific genes, thus suggesting affected gene regulation associated with the disease. We identified altered gene-gene interactions in pathways enriched for genes with increased variance in expression in SCZ. These pathways and loci were previously implicated in SCZ, providing further support for the hypothesis that gene expression variance plays an important role in the etiology

  5. Mixed emotions: Sensitivity to facial variance in a crowd of faces.

    Science.gov (United States)

    Haberman, Jason; Lee, Pegan; Whitney, David

    2015-01-01

    The visual system automatically represents summary information from crowds of faces, such as the average expression. This is a useful heuristic insofar as it provides critical information about the state of the world, not simply information about the state of one individual. However, the average alone is not sufficient for making decisions about how to respond to a crowd. The variance or heterogeneity of the crowd--the mixture of emotions--conveys information about the reliability of the average, essential for determining whether the average can be trusted. Despite its importance, the representation of variance within a crowd of faces has yet to be examined. This is addressed here in three experiments. In the first experiment, observers viewed a sample set of faces that varied in emotion, and then adjusted a subsequent set to match the variance of the sample set. To isolate variance as the summary statistic of interest, the average emotion of both sets was random. Results suggested that observers had information regarding crowd variance. The second experiment verified that this was indeed a uniquely high-level phenomenon, as observers were unable to derive the variance of an inverted set of faces as precisely as an upright set of faces. The third experiment replicated and extended the first two experiments using method-of-constant-stimuli. Together, these results show that the visual system is sensitive to emergent information about the emotional heterogeneity, or ambivalence, in crowds of faces.

  6. On Stabilizing the Variance of Dynamic Functional Brain Connectivity Time Series.

    Science.gov (United States)

    Thompson, William Hedley; Fransson, Peter

    2016-12-01

    Assessment of dynamic functional brain connectivity based on functional magnetic resonance imaging (fMRI) data is an increasingly popular strategy to investigate temporal dynamics of the brain's large-scale network architecture. Current practice when deriving connectivity estimates over time is to use the Fisher transformation, which aims to stabilize the variance of correlation values that fluctuate around varying true correlation values. It is, however, unclear how well the stabilization of signal variance performed by the Fisher transformation works for each connectivity time series, when the true correlation is assumed to be fluctuating. This is of importance because many subsequent analyses either assume or perform better when the time series have stable variance or adhere to an approximately Gaussian distribution. In this article, using simulations and analysis of resting-state fMRI data, we analyze the effect of applying different variance stabilization strategies on connectivity time series. We focus our investigation on the Fisher transformation, the Box-Cox (BC) transformation and an approach that combines both transformations. Our results show that, if the intention of stabilizing the variance is to use metrics on the time series, where stable variance or a Gaussian distribution is desired (e.g., clustering), the Fisher transformation is not optimal and may even skew connectivity time series away from being Gaussian. Furthermore, we show that the suboptimal performance of the Fisher transformation can be substantially improved by including an additional BC transformation after the dynamic functional connectivity time series has been Fisher transformed.
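
    A small sketch of applying the Fisher transformation followed by a Box-Cox transformation to a synthetic sliding-window correlation series; the Box-Cox step needs strictly positive input, so a shift is applied. This illustrates the combined strategy in general, not the authors' pipeline.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)

# Synthetic sliding-window correlation time series in (-1, 1)
r = np.tanh(0.4 + 0.3 * np.sin(np.linspace(0, 6 * np.pi, 300))
            + 0.2 * rng.standard_normal(300))

# Fisher (arctanh) transformation
z = np.arctanh(r)

# Box-Cox on the Fisher-transformed series (shifted to be strictly positive)
shift = 1e-6 - z.min()
z_bc, lam = stats.boxcox(z + shift)

for name, series in [("raw r", r), ("Fisher z", z), ("Fisher + Box-Cox", z_bc)]:
    print(f"{name:18s} skewness {stats.skew(series):+.3f}   variance {series.var():.3f}")
```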

  7. Origin and consequences of the relationship between protein mean and variance.

    Science.gov (United States)

    Vallania, Francesco Luigi Massimo; Sherman, Marc; Goodwin, Zane; Mogno, Ilaria; Cohen, Barak Alon; Mitra, Robi David

    2014-01-01

    Cell-to-cell variance in protein levels (noise) is a ubiquitous phenomenon that can increase fitness by generating phenotypic differences within clonal populations of cells. An important challenge is to identify the specific molecular events that control noise. This task is complicated by the strong dependence of a protein's cell-to-cell variance on its mean expression level through a power-law like relationship (σ² ∝ μ^1.69). Here, we dissect the nature of this relationship using a stochastic model parameterized with experimentally measured values. This framework naturally recapitulates the power-law like relationship (σ² ∝ μ^1.6) and accurately predicts protein variance across the yeast proteome (r² = 0.935). Using this model we identified two distinct mechanisms by which protein variance can be increased. Variables that affect promoter activation, such as nucleosome positioning, increase protein variance by changing the exponent of the power-law relationship. In contrast, variables that affect processes downstream of promoter activation, such as mRNA and protein synthesis, increase protein variance in a mean-dependent manner following the power-law. We verified our findings experimentally using an inducible gene expression system in yeast. We conclude that the power-law-like relationship between noise and protein mean is due to the kinetics of promoter activation. Our results provide a framework for understanding how molecular processes shape stochastic variation across the genome.

  8. Partial volume effect correction in PET using regularized iterative deconvolution with variance control based on local topology

    International Nuclear Information System (INIS)

    Kirov, A S; Schmidtlein, C R; Piao, J Z

    2008-01-01

    Correcting positron emission tomography (PET) images for the partial volume effect (PVE) due to the limited resolution of PET has been a long-standing challenge. Various approaches including incorporation of the system response function in the reconstruction have been previously tested. We present a post-reconstruction PVE correction based on iterative deconvolution using a 3D maximum likelihood expectation-maximization (MLEM) algorithm. To achieve convergence we used a one step late (OSL) regularization procedure based on the assumption of local monotonic behavior of the PET signal following Alenius et al. This technique was further modified to selectively control variance depending on the local topology of the PET image. No prior 'anatomic' information is needed in this approach. An estimate of the noise properties of the image is used instead. The procedure was tested for symmetric and isotropic deconvolution functions with Gaussian shape and full width at half-maximum (FWHM) ranging from 6.31 mm to infinity. The method was applied to simulated and experimental scans of the NEMA NU 2 image quality phantom with the GE Discovery LS PET/CT scanner. The phantom contained uniform activity spheres with diameters ranging from 1 cm to 3.7 cm within a uniform background. The optimal sphere activity to variance ratio was obtained when the deconvolution function was replaced by a step function a few voxels wide. In this case, the deconvolution method converged in ∼3-5 iterations for most points on both the simulated and experimental images. For the 1 cm diameter sphere, the contrast recovery improved from 12% to 36% in the simulated and from 21% to 55% in the experimental data. Recovery coefficients between 80% and 120% were obtained for all larger spheres, except for the 13 mm diameter sphere in the simulated scan (68%). No increase in variance was observed except for a few voxels neighboring strong activity gradients and inside the largest spheres. Testing the method for
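
    The OSL-regularized 3D MLEM deconvolution with topology-dependent variance control is not reproduced here; a plain, unregularized Richardson-Lucy (MLEM-type) update in one dimension shows the core iteration on synthetic data.

```python
import numpy as np

rng = np.random.default_rng(10)

def richardson_lucy_1d(observed, psf, n_iter=30):
    """Plain (unregularized) Richardson-Lucy / MLEM deconvolution in 1D."""
    psf_mirror = psf[::-1]
    estimate = np.full_like(observed, observed.mean())
    for _ in range(n_iter):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        estimate = estimate * np.convolve(ratio, psf_mirror, mode="same")
    return estimate

# Synthetic "hot sphere" profile in a warm background, blurred by a Gaussian PSF plus noise
x = np.arange(200)
truth = np.where(np.abs(x - 100) < 8, 10.0, 1.0)
psf = np.exp(-0.5 * (np.arange(-15, 16) / 4.0) ** 2)
psf /= psf.sum()
observed = np.convolve(truth, psf, mode="same") + 0.05 * rng.standard_normal(x.size)

recovered = richardson_lucy_1d(observed, psf)
print("peak value blurred vs deconvolved:", observed.max(), recovered.max())
```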

  9. Estimating variability in placido-based topographic systems.

    Science.gov (United States)

    Kounis, George A; Tsilimbaris, Miltiadis K; Kymionis, George D; Ginis, Harilaos S; Pallikaris, Ioannis G

    2007-10-01

    To describe a new software tool for the detailed presentation of corneal topography measurements variability by means of color-coded maps. Software was developed in Visual Basic to analyze and process a series of 10 consecutive measurements obtained by a topographic system on calibration spheres, and individuals with emmetropic, low, high, and irregular astigmatic corneas. Corneal surface was segmented into 1200 segments and the coefficient of variance of each segment's keratometric dioptric power was used as the measure of variability. The results were presented graphically in color-coded maps (Variability Maps). Two topographic systems, the TechnoMed C-Scan and the TOMEY Topographic Modeling System (TMS-2N), were examined to demonstrate our method. Graphic representation of coefficient of variance offered a detailed representation of examination variability both in calibration surfaces and human corneas. It was easy to recognize an increase in variability, as the irregularity of examination surfaces increased. In individuals with high and irregular astigmatism, a variability pattern correlated with the pattern of corneal topography: steeper corneal areas possessed higher variability values compared with flatter areas of the same cornea. Numerical data permitted direct comparisons and statistical analysis. We propose a method that permits a detailed evaluation of the variability of corneal topography measurements. The representation of the results both graphically and quantitatively improves interpretability and facilitates a spatial correlation of variability maps with original topography maps. Given the popularity of topography based custom refractive ablations of the cornea, it is possible that variability maps may assist clinicians in the evaluation of corneal topography maps of patients with very irregular corneas, before custom ablation procedures.
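
    A generic sketch of such a variability map: the coefficient of variation of repeated measurements is computed per grid location and displayed as a color-coded map alongside the mean map. All maps below are synthetic; the grid size and noise levels are hypothetical, not the paper's segmentation.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(11)

# Ten synthetic repeated topography maps (keratometric power, dioptres) on a 40x30 grid
base = 43.0 + 1.5 * rng.standard_normal((40, 30)).cumsum(axis=0) * 0.02
measurements = base + 0.15 * rng.standard_normal((10, 40, 30))    # repeat-to-repeat noise

mean_map = measurements.mean(axis=0)
cv_map = 100.0 * measurements.std(axis=0, ddof=1) / mean_map      # coefficient of variation (%)

fig, axes = plt.subplots(1, 2, figsize=(8, 3))
for ax, data, title in [(axes[0], mean_map, "mean power (D)"),
                        (axes[1], cv_map, "variability map (CV, %)")]:
    im = ax.imshow(data, cmap="viridis")
    ax.set_title(title)
    fig.colorbar(im, ax=ax)
plt.show()
```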

  10. Coefficient estimates of negative powers and inverse coefficients for ...

    Indian Academy of Sciences (India)

    and the inequality is sharp for the inverse of the Koebe function k(z) = z/(1 − z)². An alternative approach to the inverse coefficient problem for functions in the class S has been investigated by Schaeffer and Spencer [27] and FitzGerald [6]. Although the inverse coefficient problem for the class S has been completely solved ...

  11. Variance Swap Replication: Discrete or Continuous?

    Directory of Open Access Journals (Sweden)

    Fabien Le Floc’h

    2018-02-01

    The popular replication formula to price variance swaps assumes continuity of traded option strikes. In practice, however, there is only a discrete set of option strikes traded on the market. We present here different discrete replication strategies and explain why the continuous replication price is more relevant.
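
    For reference, the continuous replication alluded to prices the variance swap strike from a strip of out-of-the-money options; the simplest discretization over traded strikes is shown alongside (notation ours, convexity corrections ignored):

        K_{\mathrm{var}} = \frac{2e^{rT}}{T}\left(\int_0^{F_0}\frac{P(K)}{K^2}\,dK + \int_{F_0}^{\infty}\frac{C(K)}{K^2}\,dK\right)
        \qquad\Longrightarrow\qquad
        K_{\mathrm{var}} \approx \frac{2e^{rT}}{T}\sum_i \frac{\Delta K_i}{K_i^2}\,Q(K_i),

    where P and C are put and call prices, F_0 the forward, and Q(K_i) the out-of-the-money option price at traded strike K_i.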

  12. Impact of Damping Uncertainty on SEA Model Response Variance

    Science.gov (United States)

    Schiller, Noah; Cabell, Randolph; Grosveld, Ferdinand

    2010-01-01

    Statistical Energy Analysis (SEA) is commonly used to predict high-frequency vibroacoustic levels. This statistical approach provides the mean response over an ensemble of random subsystems that share the same gross system properties such as density, size, and damping. Recently, techniques have been developed to predict the ensemble variance as well as the mean response. However, these techniques do not account for uncertainties in the system properties. In the present paper, uncertainty in the damping loss factor is propagated through SEA to obtain more realistic prediction bounds that account for both ensemble and damping variance. The analysis is performed on a floor-equipped cylindrical test article that resembles an aircraft fuselage. Realistic bounds on the damping loss factor are determined from measurements acquired on the sidewall of the test article. The analysis demonstrates that uncertainties in damping have the potential to significantly impact the mean and variance of the predicted response.

  13. The Impact of Jump Distributions on the Implied Volatility of Variance

    DEFF Research Database (Denmark)

    Nicolato, Elisa; Pisani, Camilla; Pedersen, David Sloth

    2017-01-01

    We consider a tractable affine stochastic volatility model that generalizes the seminal Heston (1993) model by augmenting it with jumps in the instantaneous variance process. In this framework, we consider both realized variance options and VIX options, and we examine the impact of the distribution of jumps on the associated implied volatility smile. We provide sufficient conditions for the asymptotic behavior of the implied volatility of variance for small and large strikes. In particular, by selecting alternative jump distributions, we show that one can obtain fundamentally different shapes...

  14. CONAN—The cruncher of local exchange coefficients for strongly interacting confined systems in one dimension

    DEFF Research Database (Denmark)

    Loft, Niels Jakob Søe; Kristensen, Lasse Bjørn; Thomsen, Anders

    2016-01-01

    We consider a one-dimensional system of particles with strong zero-range interactions. This system can be mapped onto a spin chain of the Heisenberg type with exchange coefficients that depend on the external trap. In this paper, we present an algorithm that can be used to compute these exchange coefficients. We introduce an open source code CONAN (Coefficients of One-dimensional N-Atom Networks) which is based on this algorithm. CONAN works with arbitrary external potentials and we have tested its reliability for system sizes up to around 35 particles. As illustrative examples, we consider a harmonic trap and a box trap with a superimposed asymmetric tilted potential. For these examples, the computation time typically scales with the number of particles as O(N^(3.5±0.4)). Computation times are around 10 s for N=10 particles and less than 10 min for N=20 particles.

  15. Replication Variance Estimation under Two-phase Sampling in the Presence of Non-response

    Directory of Open Access Journals (Sweden)

    Muqaddas Javed

    2014-09-01

    Kim and Yu (2011) discussed a replication variance estimator for two-phase stratified sampling. In this paper, estimators for the mean are proposed in two-phase stratified sampling for different situations of non-response at the first phase and the second phase. The expressions for the variances of these estimators are derived. Furthermore, replication-based jackknife variance estimators of these variances are also derived. A simulation study is conducted to investigate the performance of the suggested estimators.

  16. Thermospheric mass density model error variance as a function of time scale

    Science.gov (United States)

    Emmert, J. T.; Sutton, E. K.

    2017-12-01

    In the increasingly crowded low-Earth orbit environment, accurate estimation of orbit prediction uncertainties is essential for collision avoidance. Poor characterization of such uncertainty can result in unnecessary and costly avoidance maneuvers (false positives) or disregard of a collision risk (false negatives). Atmospheric drag is a major source of orbit prediction uncertainty, and is particularly challenging to account for because it exerts a cumulative influence on orbital trajectories and is therefore not amenable to representation by a single uncertainty parameter. To address this challenge, we examine the variance of measured accelerometer-derived and orbit-derived mass densities with respect to predictions by thermospheric empirical models, using the data-minus-model variance as a proxy for model uncertainty. Our analysis focuses mainly on the power spectrum of the residuals, and we construct an empirical model of the variance as a function of time scale (from 1 hour to 10 years), altitude, and solar activity. We find that the power spectral density approximately follows a power-law process but with an enhancement near the 27-day solar rotation period. The residual variance increases monotonically with altitude between 250 and 550 km. There are two components to the variance dependence on solar activity: one component is 180 degrees out of phase (largest variance at solar minimum), and the other component lags 2 years behind solar maximum (largest variance in the descending phase of the solar cycle).
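
    One way to read "variance as a function of time scale" in terms of the residual power spectrum (notation ours) is the band-limited integral

        \sigma^2(\tau) \approx \int_{1/\tau}^{f_{\mathrm{Ny}}} S(f)\,df, \qquad S(f) \propto f^{-\alpha},

    so an approximately power-law spectral density translates directly into power-law growth of the residual variance with time scale \tau, modulated here by the 27-day enhancement noted above.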

  17. The Gini coefficient: a methodological pilot study to assess fetal brain development employing postmortem diffusion MRI

    International Nuclear Information System (INIS)

    Viehweger, Adrian; Sorge, Ina; Hirsch, Wolfgang; Riffert, Till; Dhital, Bibek; Knoesche, Thomas R.; Anwander, Alfred; Stepan, Holger

    2014-01-01

    Diffusion-weighted imaging (DWI) is important in the assessment of fetal brain development. However, it is clinically challenging and time-consuming to prepare neuromorphological examinations to assess real brain age and to detect abnormalities. To demonstrate that the Gini coefficient can be a simple, intuitive parameter for modelling fetal brain development. Postmortem fetal specimens (n = 28) were evaluated by diffusion-weighted imaging (DWI) on a 3-T MRI scanner using 60 directions, 0.7-mm isotropic voxels and b-values of 0, 150, 1,600 s/mm². Constrained spherical deconvolution (CSD) was used as the local diffusion model. Fractional anisotropy (FA), apparent diffusion coefficient (ADC) and complexity (CX) maps were generated. CX was defined as a novel diffusion metric. On the basis of those three parameters, the Gini coefficient was calculated. Study of fetal brain development in postmortem specimens was feasible using DWI. The Gini coefficient could be calculated for the combination of the three diffusion parameters. This multidimensional Gini coefficient correlated well with age (Adjusted R² = 0.59) between the ages of 17 and 26 gestational weeks. We propose a new method that uses an economics concept, the Gini coefficient, to describe the whole brain with one simple and intuitive measure, which can be used to assess the brain's developmental state. (orig.)
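
    The record does not spell out how the three diffusion parameters are combined, but the scalar Gini coefficient itself has a simple closed form over sorted non-negative values, sketched here:

        import numpy as np

        def gini(x):
            # Gini coefficient: mean absolute difference / (2 * mean),
            # computed via the closed form for sorted values.
            x = np.sort(np.asarray(x, dtype=float).ravel())
            n = x.size
            i = np.arange(1, n + 1)
            return 2.0 * np.sum(i * x) / (n * x.sum()) - (n + 1.0) / n

    Applied to the values of an FA, ADC or CX map, this yields one inequality measure per brain; how the three measures are merged into the multidimensional coefficient is defined in the paper itself.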

  18. How the Weak Variance of Momentum Can Turn Out to be Negative

    Science.gov (United States)

    Feyereisen, M. R.

    2015-05-01

    Weak values are average quantities, therefore investigating their associated variance is crucial in understanding their place in quantum mechanics. We develop the concept of a position-postselected weak variance of momentum as cohesively as possible, building primarily on material from Moyal (Mathematical Proceedings of the Cambridge Philosophical Society, Cambridge University Press, Cambridge, 1949) and Sonego (Found Phys 21(10):1135, 1991). The weak variance is defined in terms of the Wigner function, using a standard construction from probability theory. We show this corresponds to a measurable quantity, which is not itself a weak value. It also leads naturally to a connection between the imaginary part of the weak value of momentum and the quantum potential. We study how the negativity of the Wigner function causes negative weak variances, and the implications this has on a class of 'subquantum' theories. We also discuss the role of weak variances in studying determinism, deriving the classical limit from a variational principle.
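
    In the Wigner-function construction the record describes, the position-conditioned momentum moments take the standard form (notation ours):

        \bar{p}(x) = \frac{\int p\,W(x,p)\,dp}{\int W(x,p)\,dp},
        \qquad
        \mathrm{Var}_w(p \mid x) = \frac{\int p^2\,W(x,p)\,dp}{\int W(x,p)\,dp} - \bar{p}(x)^2,

    and because W(x,p) may be negative, the ratios defining these moments are not true expectations, which is how the conditional (weak) variance can turn out negative.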

  19. Variance gradients and uncertainty budgets for nonlinear measurement functions with independent inputs

    International Nuclear Information System (INIS)

    Campanelli, Mark; Kacker, Raghu; Kessel, Rüdiger

    2013-01-01

    A novel variance-based measure for global sensitivity analysis, termed a variance gradient (VG), is presented for constructing uncertainty budgets under the Guide to the Expression of Uncertainty in Measurement (GUM) framework for nonlinear measurement functions with independent inputs. The motivation behind VGs is the desire of metrologists to understand which inputs' variance reductions would most effectively reduce the variance of the measurand. VGs are particularly useful when the application of the first supplement to the GUM is indicated because of the inadequacy of measurement function linearization. However, VGs reduce to a commonly understood variance decomposition in the case of a linear(ized) measurement function with independent inputs for which the original GUM readily applies. The usefulness of VGs is illustrated by application to an example from the first supplement to the GUM, as well as to the benchmark Ishigami function. A comparison of VGs to other available sensitivity measures is made. (paper)

  20. Weed Mapping with Co-Kriging Using Soil Properties

    DEFF Research Database (Denmark)

    Heisel, Torben; Ersbøll, Annette Kjær; Andreasen, Christian

    1999-01-01

    Our aim is to build reliable weed maps to control weeds in patches. Weed sampling is time consuming but there are some shortcuts. If an intensively sampled variable (e.g. a soil property) can be used to improve estimation of a sparsely sampled variable (e.g. weed distribution), one can reduce weed sampling. The geostatistical estimation method co-kriging uses two or more sampled variables, which are correlated, to improve the estimation of one of the variables at locations where it was not sampled. We did an experiment on a 2.1 ha winter wheat field to compare co-kriging using soil properties with kriging based on one variable only. The results showed that co-kriging Lamium spp. from 96 0.25-m² sample plots ha⁻¹ with silt content improved the prediction variance by 11% compared to kriging. With 51 or 18 sample plots ha⁻¹ the prediction variance was improved by 21% and 15%, respectively.
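
    For reference, the ordinary co-kriging estimator underlying the study predicts the sparsely sampled weed variable Z_1 at an unsampled location s_0 from both variables (notation ours):

        \hat{Z}_1(s_0) = \sum_{i=1}^{n_1} \lambda_i\,Z_1(s_i) + \sum_{j=1}^{n_2} \mu_j\,Z_2(s_j),
        \qquad \sum_{i} \lambda_i = 1, \quad \sum_{j} \mu_j = 0,

    with the weights chosen to minimize the prediction variance subject to these unbiasedness constraints, using the variograms and cross-variogram of the two variables.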

  1. A geometric approach to multiperiod mean variance optimization of assets and liabilities

    OpenAIRE

    Leippold, Markus; Trojani, Fabio; Vanini, Paolo

    2005-01-01

    We present a geometric approach to discrete time multiperiod mean variance portfolio optimization that largely simplifies the mathematical analysis and the economic interpretation of such model settings. We show that multiperiod mean variance optimal policies can be decomposed into an orthogonal set of basis strategies, each having a clear economic interpretation. This implies that the corresponding multiperiod mean variance frontiers are spanned by an orthogonal basis of dynamic returns. Spec...

  2. Generalized double-humped logistic map-based medical image encryption

    Directory of Open Access Journals (Sweden)

    Samar M. Ismail

    2018-03-01

    This paper presents the design of the generalized Double Humped (DH) logistic map, used for pseudo-random number key generation (PRNG). The generalization parameter added to the map provides more control over the map's chaotic range. A new special map with a zooming effect on the bifurcation diagram is obtained by manipulating the generalization parameter value. The dynamic behavior of the generalized map is analyzed, including the study of the fixed points and stability ranges, the Lyapunov exponent, and the complete bifurcation diagram. The option of designing any specific map is made possible through changing the general parameter, increasing the randomness and controllability of the map. An image encryption algorithm is introduced based on pseudo-random sequence generation using the proposed generalized DH map, offering secure communication transfer of medical MRI and X-ray images. Security analyses are carried out to consolidate system efficiency, including key sensitivity and key-space analyses, histogram analysis, correlation coefficients, MAE, NPCR and UACI calculations. System robustness against noise attacks has been proved, along with NIST tests ensuring the system efficiency. A comparison between the proposed system and previous works is presented.
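
    A minimal sketch of the stream-cipher structure follows. The quartic double-humped map form, its parameter values and the byte-extraction rule are illustrative assumptions only; the paper defines its own generalized DH map, whose usable chaotic ranges follow from its bifurcation analysis.

        import numpy as np

        def dh_keystream(x0, r, n, burn=1000):
            # Iterate an assumed double-humped quartic map x -> r*x^2*(1-x)^2
            # and quantize the trajectory into key bytes.
            x = x0
            for _ in range(burn):                    # discard the transient
                x = r * x * x * (1.0 - x) ** 2
            key = np.empty(n, dtype=np.uint8)
            for i in range(n):
                x = r * x * x * (1.0 - x) ** 2
                key[i] = int(x * 1e6) % 256          # chaotic state -> key byte
            return key

        def xor_encrypt(img_bytes, key):
            return np.bitwise_xor(img_bytes, key)    # the same call decrypts

    Usage would be, e.g., cipher = xor_encrypt(img.ravel(), dh_keystream(0.3, 15.9, img.size)), with the seed and map parameter (x0, r) playing the role of the secret key.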

  3. Cigarette smoke chemistry market maps under Massachusetts Department of Public Health smoking conditions.

    Science.gov (United States)

    Morton, Michael J; Laffoon, Susan W

    2008-06-01

    This study extends the market mapping concept introduced by Counts et al. (Counts, M.E., Hsu, F.S., Tewes, F.J., 2006. Development of a commercial cigarette "market map" comparison methodology for evaluating new or non-conventional cigarettes. Regul. Toxicol. Pharmacol. 46, 225-242) to include both temporal cigarette and testing variation and also machine smoking with more intense puffing parameters, as defined by the Massachusetts Department of Public Health (MDPH). The study was conducted over a two-year period and involved a total of 23 different commercial cigarette brands from the U.S. marketplace. Market mapping prediction intervals were developed for 40 mainstream cigarette smoke constituents and the potential utility of the market map as a comparison tool for new brands was demonstrated. The over-time character of the data allowed the variance structure of the smoke constituents to be more completely characterized than is possible with one-time sample data. The variance was partitioned among brand-to-brand differences, temporal differences, and the remaining residual variation using a mixed random and fixed effects model. It was shown that a conventional weighted least squares model typically gave similar prediction intervals to those of the more complicated mixed model. For most constituents there was less difference in the prediction intervals calculated from over-time samples and those calculated from one-time samples than had been anticipated. One-time sample maps may be adequate for many purposes if the user is aware of their limitations. Cigarette tobacco fillers were analyzed for nitrate, nicotine, tobacco-specific nitrosamines, ammonia, chlorogenic acid, and reducing sugars. The filler information was used to improve the predictive relationships for several of the smoke constituents, and it was concluded that the effects of filler chemistry on smoke chemistry were a partial explanation of the observed brand-to-brand variation.

  4. Converting Sabine absorption coefficients to random incidence absorption coefficients

    DEFF Research Database (Denmark)

    Jeong, Cheol-Ho

    2013-01-01

    Conversion methods from Sabine absorption coefficients to random incidence absorption coefficients are proposed. The overestimations of the Sabine absorption coefficient are investigated theoretically based on Miki's model for porous absorbers backed by a rigid wall or an air cavity, resulting in conversion factors. Additionally, three optimizations are suggested: an optimization method for the surface impedances for locally reacting absorbers, the flow resistivity for extendedly reacting absorbers, and the flow resistance for fabrics. With four porous type absorbers, the conversion methods are validated. For absorbers backed by a rigid wall, the surface impedance optimization produces the best results, while the flow resistivity optimization also yields reasonable results. The flow resistivity and flow resistance optimization for extendedly reacting absorbers are also found to be successful. However, the theoretical conversion factors based on Miki's model...

  5. Mean-variance portfolio selection and efficient frontier for defined contribution pension schemes

    DEFF Research Database (Denmark)

    Højgaard, Bjarne; Vigna, Elena

    We solve a mean-variance portfolio selection problem in the accumulation phase of a defined contribution pension scheme. The efficient frontier, which is found for the 2 asset case as well as the n + 1 asset case, gives the member the possibility to decide his own risk/reward profile. The mean...... as a mean-variance optimization problem. It is shown that the corresponding mean and variance of the final fund belong to the efficient frontier and also the opposite, that each point on the efficient frontier corresponds to a target-based optimization problem. Furthermore, numerical results indicate...... that the largely adopted lifestyle strategy seems to be very far from being efficient in the mean-variance setting....

  6. ASYMMETRY OF MARKET RETURNS AND THE MEAN VARIANCE FRONTIER

    OpenAIRE

    SENGUPTA, Jati K.; PARK, Hyung S.

    1994-01-01

    The hypothesis that skewness and asymmetry have no significant impact on the mean variance frontier is found to be strongly violated by monthly U.S. data over the period January 1965 through December 1974. This result raises serious doubts whether the common market portfolios such as the S&P 500, value-weighted and equal-weighted returns can serve as suitable proxies for mean-variance efficient portfolios in the CAPM framework. A new test for assessing the impact of skewness on the variance fr...

  7. Gene set analysis using variance component tests.

    Science.gov (United States)

    Huang, Yen-Tsung; Lin, Xihong

    2013-06-28

    Gene set analyses have become increasingly important in genomic research, as many complex diseases arise from the joint alteration of numerous genes. Genes often coordinate together as a functional repertoire, e.g., a biological pathway/network, and are highly correlated. However, most of the existing gene set analysis methods do not fully account for the correlation among the genes. Here we propose to tackle this important feature of a gene set to improve statistical power in gene set analyses. We propose to model the effects of an independent variable, e.g., exposure/biological status (yes/no), on multiple gene expression values in a gene set using a multivariate linear regression model, where the correlation among the genes is explicitly modeled using a working covariance matrix. We develop TEGS (Test for the Effect of a Gene Set), a variance component test for the gene set effects by assuming a common distribution for regression coefficients in multivariate linear regression models, and calculate the p-values using permutation and a scaled chi-square approximation. We show using simulations that type I error is protected under different choices of working covariance matrices and power is improved as the working covariance approaches the true covariance. The global test is a special case of TEGS when correlation among genes in a gene set is ignored. Using both simulation data and a published diabetes dataset, we show that our test outperforms the commonly used approaches, the global test and gene set enrichment analysis (GSEA). We develop a gene set analysis method (TEGS) under the multivariate regression framework, which directly models the interdependence of the expression values in a gene set using a working covariance. TEGS outperforms two widely used methods, GSEA and the global test, in both simulations and a diabetes microarray dataset.

  8. Global Gravity Wave Variances from Aura MLS: Characteristics and Interpretation

    Science.gov (United States)

    2008-12-01

    slight longitudinal variations, with secondary high-latitude peaks occurring over Greenland and Europe. As the QBO changes to the westerly phase, the ... equatorial GW temperature variances from suborbital data (e.g., Eckermann et al. 1995). The extratropical wave variances are generally larger in the ... emanating from tropopause altitudes, presumably radiated from tropospheric jet stream instabilities associated with baroclinic storm systems that

  10. Use of genomic models to study genetic control of environmental variance

    DEFF Research Database (Denmark)

    Yang, Ye; Christensen, Ole Fredslund; Sorensen, Daniel

    2011-01-01

    The genomic model commonly found in the literature, with marker effects affecting the mean only, is extended to investigate putative effects at the level of the environmental variance. Two classes of models are proposed and their behaviour, studied using simulated data, indicates that they are capable of detecting genetic variation at the level of mean and variance. Implementation is via Markov chain Monte Carlo (McMC) algorithms. The models are compared in terms of a measure of global fit, in their ability to detect QTL effects and in terms of their predictive power. The models are subsequently fitted to back fat thickness data in pigs. The analysis of back fat thickness shows that the data support genomic models with effects on the mean but not on the variance. The relative sizes of experiment necessary to detect effects on mean and variance are discussed and an extension of the McMC algorithm...

  11. A New Approach for Predicting the Variance of Random Decrement Functions

    DEFF Research Database (Denmark)

    Asmussen, J. C.; Brincker, Rune

    1998-01-01

    For zero-mean Gaussian distributed processes the RD functions are proportional to the correlation functions of the processes. If a linear structure is loaded by Gaussian white noise the modal parameters can be extracted from the correlation functions of the response only. One of the weaknesses of the RD technique is that no consistent approach to estimate the variance of the RD functions is known. Only approximate relations are available, which can only be used under special conditions. The variance of the RD functions contains valuable information about the accuracy of the estimates. Furthermore, the variance can be used as a basis for a decision about how many time lags from the RD functions should be used in the modal parameter extraction procedure. This paper suggests a new method for estimating the variance of the RD functions. The method is consistent in the sense that the accuracy of the approach...

  12. Empirical forecast of quiet time ionospheric Total Electron Content maps over Europe

    Science.gov (United States)

    Badeke, Ronny; Borries, Claudia; Hoque, Mainul M.; Minkwitz, David

    2018-06-01

    An accurate forecast of the atmospheric Total Electron Content (TEC) is helpful to investigate space weather influences on the ionosphere and on technical applications like satellite-receiver radio links. The purpose of this work is to compare four empirical methods for a 24-h forecast of vertical TEC maps over Europe under geomagnetically quiet conditions. TEC map data are obtained from the Space Weather Application Center Ionosphere (SWACI) and the Universitat Politècnica de Catalunya (UPC). The time-series methods Standard Persistence Model (SPM), a 27-day median model (MediMod) and a Fourier Series Expansion are compared to maps for the entire year of 2015. As a representative of the climatological coefficient models, the forecast performance of the Global Neustrelitz TEC model (NTCM-GL) is also investigated. Time periods of magnetic storms, which are identified with the Dst index, are excluded from the validation. Because they build on the most recent maps, the time-series methods perform slightly better than the coefficient model NTCM-GL. The benefit of NTCM-GL is its independence from observational TEC data. Amongst the time-series methods mentioned, MediMod delivers the best overall performance regarding accuracy and data gap handling. Quiet-time SWACI maps can be forecasted accurately and in real time by the MediMod time-series approach.
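
    As an illustration of the median time-series idea, a 24-h forecast in the spirit of MediMod can be as simple as the sketch below; the operational model's details (e.g. its exact gap handling) may differ, and the array layout is an assumption.

        import numpy as np

        def medimod_forecast(tec_maps):
            # tec_maps: (27, 24, nlat, nlon) hourly VTEC maps of the last 27 days.
            # Forecast for each hour of the next day = 27-day median at that hour;
            # nanmedian tolerates missing maps inside the history window.
            return np.nanmedian(tec_maps, axis=0)    # -> (24, nlat, nlon)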

  13. The correlation of the results of capacitance mapping and of sheet resistance mapping in semi-insulating 6H-SiC

    International Nuclear Information System (INIS)

    Lin Shenghuang; Chen Zhiming; Liang Peng; Jiang Dong; Xie Huajie; Yang Ying

    2010-01-01

    A combination of complex surface capacitance mapping and sheet resistance mapping is applied to establish the origin of resistance variations on semi-insulating (SI) 6H-SiC substrates. A direct correlation between the capacitance quadrature and the sheet resistance is found in vanadium-doped SI samples: regions with low capacitance quadrature show high sheet resistance. Together with the nonhomogeneity of sheet resistance across the substrate, this indicates that the crystallization quality is not good enough, which also leads to resistivity nonhomogeneity when compared with different types of deep defects. According to the capacitance mapping, regions with poor crystallization quality have a high radio absorption coefficient. Another correlation is established between the capacitance in-phase and the sheet resistance for the vanadium-doped sample. In this sample, the capacitance in-phase map shows not only the surface topography, but also the same distribution trend as the sheet resistance, namely, regions of high capacitance in-phase reveal high sheet resistance.

  14. Some novel inequalities for fuzzy variables on the variance and its rational upper bound

    Directory of Open Access Journals (Sweden)

    Xiajie Yi

    2016-02-01

    Variance is of great significance in measuring the degree of deviation, which has gained extensive usage in many fields in practical scenarios. The definition of the variance on the basis of the credibility measure was first put forward in 2002. Following this idea, the calculation of the accurate value of the variance for some special fuzzy variables, like the symmetric and asymmetric triangular fuzzy numbers and the Gaussian fuzzy numbers, is presented in this paper, which turns out to be far more complicated. Thus, in order to better implement variance in real-life projects like risk control and quality management, we suggest a rational upper bound of the variance based on an inequality, together with its calculation formula, which can largely simplify the calculation process within a reasonable range. Meanwhile, some discussions between the variance and its rational upper bound are presented to show the rationality of the latter. Furthermore, two inequalities regarding the rational upper bound of variance and standard deviation of the sum of two fuzzy variables and their individual variances and standard deviations are proved. Subsequently, some numerical examples are illustrated to show the effectiveness and the feasibility of the proposed inequalities.
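
    For reference, the 2002 credibility-based definitions that the record builds on are (with e = E[\xi]):

        E[\xi] = \int_0^{+\infty} \mathrm{Cr}\{\xi \ge r\}\,dr - \int_{-\infty}^{0} \mathrm{Cr}\{\xi \le r\}\,dr,
        \qquad
        V[\xi] = \int_0^{+\infty} \mathrm{Cr}\{(\xi - e)^2 \ge r\}\,dr,

    and it is the outer integral over the credibility of the squared deviation that makes exact variances laborious even for triangular and Gaussian fuzzy numbers, motivating the closed-form upper bound proposed above.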

  15. Coefficient Alpha: A Reliability Coefficient for the 21st Century?

    Science.gov (United States)

    Yang, Yanyun; Green, Samuel B.

    2011-01-01

    Coefficient alpha is almost universally applied to assess reliability of scales in psychology. We argue that researchers should consider alternatives to coefficient alpha. Our preference is for structural equation modeling (SEM) estimates of reliability because they are informative and allow for an empirical evaluation of the assumptions…
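
    For reference, coefficient alpha for a scale of k items with item variances \sigma_i^2 and total-score variance \sigma_X^2 is

        \alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_i^2}{\sigma_X^2}\right),

    which equals the true reliability only under assumptions (notably essential tau-equivalence with uncorrelated errors) that the SEM-based estimates advocated above relax.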

  16. A class of multi-period semi-variance portfolio for petroleum exploration and development

    Science.gov (United States)

    Guo, Qiulin; Li, Jianzhong; Zou, Caineng; Guo, Yujuan; Yan, Wei

    2012-10-01

    Variance is substituted by semi-variance in Markowitz's portfolio selection model. For dynamic valuation of exploration and development projects, one-period portfolio selection is extended to multi-period. In this article, a class of multi-period semi-variance exploration and development portfolio models is formulated for the first time. Besides, a hybrid genetic algorithm, which makes use of the position displacement strategy of the particle swarm optimiser as a mutation operation, is applied to solve the multi-period semi-variance model. For this class of portfolio model, numerical results show that the model is effective and feasible.
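
    For reference, the semi-variance substituted for variance in this model penalizes only below-mean returns (notation ours):

        \mathrm{SV}[R] = \mathbb{E}\left[\left(\min\{R - \mathbb{E}[R],\,0\}\right)^2\right],

    which makes the multi-period objective non-quadratic and motivates a heuristic solver such as the hybrid genetic/particle-swarm algorithm described above.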

  17. Bayesian evaluation of constrained hypotheses on variances of multiple independent groups

    NARCIS (Netherlands)

    Böing-Messing, F.; van Assen, M.A.L.M.; Hofman, A.D.; Hoijtink, H.; Mulder, J.

    2017-01-01

    Research has shown that independent groups often differ not only in their means, but also in their variances. Comparing and testing variances is therefore of crucial importance to understand the effect of a grouping variable on an outcome variable. Researchers may have specific expectations

  18. Analysis of conditional genetic effects and variance components in developmental genetics.

    Science.gov (United States)

    Zhu, J

    1995-12-01

    A genetic model with additive-dominance effects and genotype x environment interactions is presented for quantitative traits with time-dependent measures. The genetic model for phenotypic means at time t conditional on phenotypic means measured at previous time (t-1) is defined. Statistical methods are proposed for analyzing conditional genetic effects and conditional genetic variance components. Conditional variances can be estimated by minimum norm quadratic unbiased estimation (MINQUE) method. An adjusted unbiased prediction (AUP) procedure is suggested for predicting conditional genetic effects. A worked example from cotton fruiting data is given for comparison of unconditional and conditional genetic variances and additive effects.

  19. Development of a treatability variance guidance document for US DOE mixed-waste streams

    International Nuclear Information System (INIS)

    Scheuer, N.; Spikula, R.; Harms, T.

    1990-03-01

    In response to the US Department of Energy's (DOE's) anticipated need for variances from the Resource Conservation and Recovery Act (RCRA) Land Disposal Restrictions (LDRs), a treatability variance guidance document was prepared. The guidance manual is for use by DOE facilities and operations offices. The manual was prepared as a part of an ongoing effort by DOE-EH to provide guidance for the operations offices and facilities to comply with the RCRA (LDRs). A treatability variance is an alternative treatment standard granted by EPA for a restricted waste. Such a variance is not an exemption from the requirements of the LDRs, but rather is an alternative treatment standard that must be met before land disposal. The manual, Guidance For Obtaining Variance From the Treatment Standards of the RCRA Land Disposal Restrictions (1), leads the reader through the process of evaluating whether a variance from the treatment standard is a viable approach and through the data-gathering and data-evaluation processes required to develop a petition requesting a variance. The DOE review and coordination process is also described and model language for use in petitions for DOE radioactive mixed waste (RMW) is provided. The guidance manual focuses on RMW streams, however the manual also is applicable to nonmixed, hazardous waste streams. 4 refs

  20. On the noise variance of a digital mammography system

    International Nuclear Information System (INIS)

    Burgess, Arthur

    2004-01-01

    A recent paper by Cooper et al. [Med. Phys. 30, 2614-2621 (2003)] contains some apparently anomalous results concerning the relationship between pixel variance and x-ray exposure for a digital mammography system. They found an unexpected peak in a display domain pixel variance plot as a function of 1/mAs (their Fig. 5) with a decrease in the range corresponding to high display data values, corresponding to low x-ray exposures. As they pointed out, if the detector response is linear in exposure and the transformation from raw to display data scales is logarithmic, then pixel variance should be a monotonically increasing function in the figure. They concluded that the total system transfer curve, between input exposure and display image data values, is not logarithmic over the full exposure range. They separated data analysis into two regions and plotted the logarithm of display image pixel variance as a function of the logarithm of the mAs used to produce the phantom images. They found a slope of minus one for high mAs values and concluded that the transfer function is logarithmic in this region. They found a slope of 0.6 for the low mAs region and concluded that the transfer curve was neither linear nor logarithmic for low exposure values. It is known that the digital mammography system investigated by Cooper et al. has a linear relationship between exposure and raw data values [Vedantham et al., Med. Phys. 27, 558-567 (2000)]. The purpose of this paper is to show that the variance effect found by Cooper et al. (their Fig. 5) arises because the transformation from the raw data scale (14 bits) to the display scale (12 bits), for the digital mammography system they investigated, is not logarithmic for raw data values less than about 300 (display data values greater than about 3300). At low raw data values the transformation is linear and prevents over-ranging of the display data scale. Parametric models for the two transformations will be presented. Results of pixel
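
    A simplified error-propagation sketch, assuming quantum-limited noise, reproduces the regimes described: with raw value q linear in exposure, \mathrm{Var}(q) \propto q \propto \mathrm{mAs}, a logarithmic display transform d = a\,\ln q + b gives

        \mathrm{Var}(d) \approx \left(\frac{a}{q}\right)^2 \mathrm{Var}(q) \propto \frac{1}{\mathrm{mAs}},

    i.e. slope -1 on the log-log plot, whereas a linear low-signal mapping d = c\,q + d_0 gives \mathrm{Var}(d) \propto \mathrm{mAs}; mixtures of the two regimes can then produce intermediate slopes such as the 0.6 reported.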

  1. Variance of a product with application to uranium estimation

    International Nuclear Information System (INIS)

    Lowe, V.W.; Waterman, M.S.

    1976-01-01

    The U in a container can either be determined directly by NDA or by estimating the weight of material in the container and the concentration of U in this material. It is important to examine the statistical properties of estimating the amount of U by multiplying the estimates of weight and concentration. The variance of the product determines the accuracy of the estimate of the amount of uranium. This paper examines the properties of estimates of the variance of the product of two random variables
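
    For independent estimates of weight W and concentration C, the exact classical result relevant here is

        \mathrm{Var}(WC) = \sigma_W^2\,\sigma_C^2 + \mu_C^2\,\sigma_W^2 + \mu_W^2\,\sigma_C^2,

    with additional covariance terms entering when the two estimates are dependent; the relative sizes of the three terms indicate which measurement dominates the uncertainty of the U estimate.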

  2. Ulnar variance: its relationship to ulnar foveal morphology and forearm kinematics.

    Science.gov (United States)

    Kataoka, Toshiyuki; Moritomo, Hisao; Omokawa, Shohei; Iida, Akio; Murase, Tsuyoshi; Sugamoto, Kazuomi

    2012-04-01

    It is unclear how individual differences in the anatomy of the distal ulna affect kinematics and pathology of the distal radioulnar joint. This study evaluated how ulnar variance relates to ulnar foveal morphology and the pronosupination axis of the forearm. We performed 3-dimensional computed tomography studies in vivo on 28 forearms in maximum supination and pronation to determine the anatomical center of the ulnar distal pole and the forearm pronosupination axis. We calculated the forearm pronosupination axis using a markerless bone registration technique, which determined the pronosupination center as the point where the axis emerges on the distal ulnar surface. We measured the depth of the anatomical center and classified it into 2 types: concave, with a depth of 0.8 mm or more, and flat, with a depth less than 0.8 mm. We examined whether ulnar variance correlated with foveal type and the distance between anatomical and pronosupination centers. A total of 18 cases had a concave-type fovea surrounded by the C-shaped articular facet of the distal pole, and 10 had a flat-type fovea with a flat surface without evident central depression. Ulnar variance of the flat type was 3.5 ± 1.2 mm, which was significantly greater than the 1.2 ± 1.1 mm of the concave type. Ulnar variance positively correlated with distance between the anatomical and pronosupination centers. Flat-type ulnar heads have a significantly greater ulnar variance than concave types. The pronosupination axis passes through the ulnar head more medially and farther from the anatomical center with increasing ulnar variance. This study suggests that ulnar variance is related in part to foveal morphology and pronosupination axis. This information provides a starting point for future studies investigating how foveal morphology relates to distal ulnar problems. Copyright © 2012 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.

  3. Comprehensive QTL mapping survey dissects the complex fruit texture physiology in apple (Malus x domestica Borkh.).

    Science.gov (United States)

    Longhi, Sara; Moretto, Marco; Viola, Roberto; Velasco, Riccardo; Costa, Fabrizio

    2012-02-01

    Fruit ripening is a complex physiological process in plants whereby cell wall programmed changes occur mainly to promote seed dispersal. Cell wall modification also directly regulates the textural properties, a fundamental aspect of fruit quality. In this study, two full-sib populations of apple, with 'Fuji' as the common maternal parent, crossed with 'Delearly' and 'Pink Lady', were used to understand the control of fruit texture by QTL mapping and in silico gene mining. Texture was dissected with a novel high resolution phenomics strategy, simultaneously profiling both mechanical and acoustic fruit texture components. In 'Fuji × Delearly' nine linkage groups were associated with QTLs accounting from 15.6% to 49% of the total variance, and a highly significant QTL cluster for both textural components was mapped on chromosome 10 and co-located with Md-PG1, a polygalacturonase gene that, in apple, is known to be involved in cell wall metabolism processes. In addition, other candidate genes related to Md-NOR and Md-RIN transcription factors, Md-Pel (pectate lyase), and Md-ACS1 were mapped within statistical intervals. In 'Fuji × Pink Lady', a smaller set of linkage groups associated with the QTLs identified for fruit texture (15.9-34.6% variance) was observed. The analysis of the phenotypic variance over a two-dimensional PCA plot highlighted a transgressive segregation for this progeny, revealing two QTL sets distinctively related to both mechanical and acoustic texture components. The mining of the apple genome allowed the discovery of the gene inventory underlying each QTL, and functional profile assessment unravelled specific gene expression patterns of these candidate genes.

  4. The efficiency of the crude oil markets: Evidence from variance ratio tests

    Energy Technology Data Exchange (ETDEWEB)

    Charles, Amelie, E-mail: acharles@audencia.co [Audencia Nantes, School of Management, 8 route de la Joneliere, 44312 Nantes (France); Darne, Olivier, E-mail: olivier.darne@univ-nantes.f [LEMNA, University of Nantes, IEMN-IAE, Chemin de la Censive du Tertre, 44322 Nantes (France)

    2009-11-15

    This study examines the random walk hypothesis for the crude oil markets, using daily data over the period 1982-2008. The weak-form efficient market hypothesis for two crude oil markets (UK Brent and US West Texas Intermediate) is tested with non-parametric variance ratio tests developed by [Wright J.H., 2000. Alternative variance-ratio tests using ranks and signs. Journal of Business and Economic Statistics, 18, 1-9] and [Belaire-Franch J. and Contreras D., 2004. Ranks and signs-based multiple variance ratio tests. Working paper, Department of Economic Analysis, University of Valencia] as well as the wild-bootstrap variance ratio tests suggested by [Kim, J.H., 2006. Wild bootstrapping variance ratio tests. Economics Letters, 92, 38-43]. We find that the Brent crude oil market is weak-form efficient while the WTI crude oil market seems to be inefficient over the 1994-2008 sub-period, suggesting that deregulation has not improved the efficiency of the WTI crude oil market in the sense of making returns less predictable.

  5. The efficiency of the crude oil markets. Evidence from variance ratio tests

    International Nuclear Information System (INIS)

    Charles, Amelie; Darne, Olivier

    2009-01-01

    This study examines the random walk hypothesis for the crude oil markets, using daily data over the period 1982-2008. The weak-form efficient market hypothesis for two crude oil markets (UK Brent and US West Texas Intermediate) is tested with non-parametric variance ratio tests developed by [Wright J.H., 2000. Alternative variance-ratio tests using ranks and signs. Journal of Business and Economic Statistics, 18, 1-9] and [Belaire-Franch J. and Contreras D., 2004. Ranks and signs-based multiple variance ratio tests. Working paper, Department of Economic Analysis, University of Valencia] as well as the wild-bootstrap variance ratio tests suggested by [Kim, J.H., 2006. Wild bootstrapping variance ratio tests. Economics Letters, 92, 38-43]. We find that the Brent crude oil market is weak-form efficient while the WTI crude oil market seems to be inefficient over the 1994-2008 sub-period, suggesting that deregulation has not improved the efficiency of the WTI crude oil market in the sense of making returns less predictable. (author)
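
    The statistic behind all of these tests is the variance ratio; a bare-bones version, without the rank/sign or wild-bootstrap refinements cited above and without robust standard errors, looks like this:

        import numpy as np

        def variance_ratio(returns, k):
            # VR(k) = Var(k-period return) / (k * Var(1-period return));
            # VR(k) ~ 1 for all k under the random walk hypothesis.
            r = np.asarray(returns, dtype=float) - np.mean(returns)
            rk = np.convolve(r, np.ones(k), mode="valid")  # overlapping k-sums
            return rk.var(ddof=1) / (k * r.var(ddof=1))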

  7. Hydrograph variances over different timescales in hydropower production networks

    Science.gov (United States)

    Zmijewski, Nicholas; Wörman, Anders

    2016-08-01

    The operation of water reservoirs involves a spectrum of timescales based on the distribution of stream flow travel times between reservoirs, as well as the technical, environmental, and social constraints imposed on the operation. In this research, a hydrodynamically based description of the flow between hydropower stations was implemented to study the relative importance of wave diffusion on the spectrum of hydrograph variance in a regulated watershed. Using spectral decomposition of the effluence hydrograph of a watershed, an exact expression of the variance in the outflow response was derived, as a function of the trends of hydraulic and geomorphologic dispersion and management of production and reservoirs. We show that the power spectra of involved time-series follow nearly fractal patterns, which facilitates examination of the relative importance of wave diffusion and possible changes in production demand on the outflow spectrum. The exact spectral solution can also identify statistical bounds of future demand patterns due to limitations in storage capacity. The impact of the hydraulic description of the stream flow on the reservoir discharge was examined for a given power demand in River Dalälven, Sweden, as a function of a stream flow Peclet number. The regulation of hydropower production on the River Dalälven generally increased the short-term variance in the effluence hydrograph, whereas wave diffusion decreased the short-term variance over short periods (white noise) as a result of current production objectives.

  8. Recurrence relations between transformation coefficients of hyperspherical harmonics and their application to Moshinsky coefficients

    International Nuclear Information System (INIS)

    Raynal, J.

    1976-01-01

    Closed formulae and recurrence relations for the transformation of a two-body harmonic oscillator wave function to the hyperspherical formalism are given. With them, Moshinsky or Smirnov coefficients are obtained from the transformation coefficients of hyperspherical harmonics. For these coefficients, the diagonalization method of Talman and Lande reduces to simple recurrence relations which can be used directly to compute them. New closed formulae for these coefficients are also derived: they are needed to compute the two simplest coefficients which determine the sign for the recurrence relation. (Auth.)

  9. Variance reduction methods applied to deep-penetration problems

    International Nuclear Information System (INIS)

    Cramer, S.N.

    1984-01-01

    All deep-penetration Monte Carlo calculations require variance reduction methods. Before beginning with a detailed approach to these methods, several general comments concerning deep-penetration calculations by Monte Carlo, the associated variance reduction, and the similarities and differences of these with regard to non-deep-penetration problems will be addressed. The experienced practitioner of Monte Carlo methods will easily find exceptions to any of these generalities, but it is felt that these comments will aid the novice in understanding some of the basic ideas and nomenclature. Also, from a practical point of view, the discussions and developments presented are oriented toward use of the computer codes which are presented in segments of this Monte Carlo course

  10. Effects of attenuation map accuracy on attenuation-corrected micro-SPECT images

    NARCIS (Netherlands)

    Wu, C.; Gratama van Andel, H.A.; Laverman, P.; Boerman, O.C.; Beekman, F.J.

    2013-01-01

    Background: In single-photon emission computed tomography (SPECT), attenuation of photon flux in tissue affects quantitative accuracy of reconstructed images. Attenuation maps derived from X-ray computed tomography (CT) can be employed for attenuation correction. The attenuation coefficients as well

  11. Cumulative prospect theory and mean variance analysis. A rigorous comparison

    OpenAIRE

    Hens, Thorsten; Mayer, Janos

    2012-01-01

    We compare asset allocations derived for cumulative prospect theory (CPT) based on two different methods: maximizing CPT along the mean-variance efficient frontier and maximizing it without that restriction. We find that with normally distributed returns the difference is negligible. However, using standard asset allocation data of pension funds the difference is considerable. Moreover, with derivatives like call options the restriction to the mean-variance efficient frontier results in a siza...

  12. Mapping biomass for a northern forest ecosystem using multi-frequency SAR data

    Science.gov (United States)

    Ranson, K. J.; Sun, Guoqing

    1992-01-01

    Image processing methods for mapping standing biomass for a forest in Maine, using NASA/JPL airborne synthetic aperture radar (AIRSAR) polarimeter data, are presented. By examining the dependence of backscattering on standing biomass, it is determined that the ratio of HV backscattering from a longer wavelength (P- or L-band) to a shorter wavelength (C) is a good combination for mapping total biomass. This ratio enhances the correlation of the image signature to the standing biomass and compensates for a major part of the variations in backscattering attributed to radar incidence angle. The image processing methods used include image calibration, ratioing, filtering, and segmentation. The image segmentation algorithm uses both means and variances of the image, and it is combined with the image filtering process. Preliminary assessment of the resultant biomass maps suggests that this is a promising method.

  13. Dipole-magnet field models based on a conformal map

    Directory of Open Access Journals (Sweden)

    P. L. Walstrom

    2012-10-01

    In general, generation of charged-particle transfer maps for conventional iron-pole-piece dipole magnets to third and higher order requires a model for the midplane field profile and its transverse derivatives (a soft-edge model) to high order, and numerical integration of map coefficients. An exact treatment of the problem for a particular magnet requires use of measured magnetic data. However, in initial design of beam transport systems, users of charged-particle optics codes generally rely on magnet models built into the codes. Indeed, if maps to third order are adequate for the problem, an approximate analytic field model together with numerical map coefficient integration can capture the important features of the transfer map. The model described in this paper is based on the fact that, except at very large distances from the magnet, the magnetic field for parallel pole-face magnets with constant pole gap height and wide pole faces is basically two dimensional (2D). The field for all space outside of the pole pieces is given by a single (complex) analytic expression and includes a parameter that controls the rate of falloff of the fringe field. Since the field function is analytic in the complex plane outside of the pole pieces, it satisfies two basic requirements of a field model for higher-order map codes: it is infinitely differentiable at the midplane and also a solution of the Laplace equation. It is apparently the only simple model available that combines an exponential approach to the central field with an inverse cubic falloff of field at large distances from the magnet in a single expression. The model is not intended for detailed fitting of magnetic field data, but for use in numerical map-generating codes for studying the effect of extended fringe fields on higher-order transfer maps. It is based on conformally mapping the area between the pole pieces to the upper half plane, and placing current filaments on the pole faces. An

  14. Variance in exposed perturbations impairs retention of visuomotor adaptation.

    Science.gov (United States)

    Canaveral, Cesar Augusto; Danion, Frédéric; Berrigan, Félix; Bernier, Pierre-Michel

    2017-11-01

    Sensorimotor control requires an accurate estimate of the state of the body. The brain optimizes state estimation by combining sensory signals with predictions of the sensory consequences of motor commands using a forward model. Given that both sensory signals and predictions are uncertain (i.e., noisy), the brain optimally weights the relative reliance on each source of information during adaptation. In support, it is known that uncertainty in the sensory predictions influences the rate and generalization of visuomotor adaptation. We investigated whether uncertainty in the sensory predictions affects the retention of a new visuomotor relationship. This was done by exposing three separate groups to a visuomotor rotation whose mean was common at 15° counterclockwise but whose variance around the mean differed (i.e., SD of 0°, 3.2°, or 4.5°). Retention was assessed by measuring the persistence of the adapted behavior in a no-vision phase. Results revealed that mean reach direction late in adaptation was similar across groups, suggesting it depended mainly on the mean of exposed rotations and was robust to differences in variance. However, retention differed across groups, with higher levels of variance being associated with a more rapid reversion toward nonadapted behavior. A control experiment ruled out the possibility that differences in retention were accounted for by differences in success rates. Exposure to variable rotations may have increased the uncertainty in sensory predictions, making the adapted forward model more labile and susceptible to change or decay. NEW & NOTEWORTHY The brain predicts the sensory consequences of motor commands through a forward model. These predictions are subject to uncertainty. We use visuomotor adaptation and modulate uncertainty in the sensory predictions by manipulating the variance in exposed rotations. Results reveal that variance does not influence the final extent of adaptation but selectively impairs the retention of

  15. Variance Reduction Techniques in Monte Carlo Methods

    NARCIS (Netherlands)

    Kleijnen, Jack P.C.; Ridder, A.A.N.; Rubinstein, R.Y.

    2010-01-01

    Monte Carlo methods are simulation algorithms to estimate a numerical quantity in a statistical model of a real system. These algorithms are executed by computer programs. Variance reduction techniques (VRT) are needed, even though computer speed has been increasing dramatically, ever since the

  17. Decomposition of variance for spatial Cox processes

    DEFF Research Database (Denmark)

    Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus

    2013-01-01

    Spatial Cox point processes is a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models...

  18. A parameterization scheme for the x-ray linear attenuation coefficient and energy absorption coefficient.

    Science.gov (United States)

    Midgley, S M

    2004-01-21

    A novel parameterization of x-ray interaction cross-sections is developed, and employed to describe the x-ray linear attenuation coefficient and mass energy absorption coefficient for both elements and mixtures. The new parameterization scheme addresses the Z-dependence of elemental cross-sections (per electron) using a simple function of atomic number, Z. This obviates the need for a complicated mathematical formalism. Energy dependent coefficients describe the Z-direction curvature of the cross-sections. The composition dependent quantities are the electron density and statistical moments describing the elemental distribution. We show that it is possible to describe elemental cross-sections for the entire periodic table and at energies above the K-edge (from 6 keV to 125 MeV), with an accuracy of better than 2% using a parameterization containing not more than five coefficients. For the biologically important elements (1 ≤ Z ≤ 20), fewer coefficients are required. At higher energies, the parameterization uses fewer coefficients, with only two coefficients needed at megavoltage energies.

  19. The Gini coefficient: a methodological pilot study to assess fetal brain development employing postmortem diffusion MRI

    Energy Technology Data Exchange (ETDEWEB)

    Viehweger, Adrian; Sorge, Ina; Hirsch, Wolfgang [University Hospital Leipzig, Department of Pediatric Radiology, Leipzig (Germany); Riffert, Till; Dhital, Bibek; Knoesche, Thomas R.; Anwander, Alfred [Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig (Germany); Stepan, Holger [University Leipzig, Department of Obstetrics, Leipzig (Germany)

    2014-10-15

    Diffusion-weighted imaging (DWI) is important in the assessment of fetal brain development. However, it is clinically challenging and time-consuming to prepare neuromorphological examinations to assess real brain age and to detect abnormalities. To demonstrate that the Gini coefficient can be a simple, intuitive parameter for modelling fetal brain development. Postmortem fetal specimens (n = 28) were evaluated by diffusion-weighted imaging (DWI) on a 3-T MRI scanner using 60 directions, 0.7-mm isotropic voxels and b-values of 0, 150, 1,600 s/mm². Constrained spherical deconvolution (CSD) was used as the local diffusion model. Fractional anisotropy (FA), apparent diffusion coefficient (ADC) and complexity (CX) maps were generated. CX was defined as a novel diffusion metric. On the basis of those three parameters, the Gini coefficient was calculated. Study of fetal brain development in postmortem specimens was feasible using DWI. The Gini coefficient could be calculated for the combination of the three diffusion parameters. This multidimensional Gini coefficient correlated well with age (Adjusted R² = 0.59) between the ages of 17 and 26 gestational weeks. We propose a new method that uses an economics concept, the Gini coefficient, to describe the whole brain with one simple and intuitive measure, which can be used to assess the brain's developmental state. (orig.)
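
    The record does not spell out how the Gini coefficient is computed from the diffusion maps, so the following is a minimal Python sketch under common assumptions: a standard rank-based Gini formula applied to the voxel values of each metric (FA, ADC, CX), with a naive mean of the three per-metric values standing in for the paper's multidimensional combination, which is not described here.

        import numpy as np

        def gini(values):
            """Gini coefficient of non-negative values: 0 = perfectly equal,
            values near 1 = a few voxels hold most of the total."""
            x = np.sort(np.asarray(values, dtype=float))
            n = x.size
            ranks = np.arange(1, n + 1)
            return 2.0 * np.sum(ranks * x) / (n * x.sum()) - (n + 1.0) / n

        # Synthetic voxel values for the three diffusion metrics (not real data).
        rng = np.random.default_rng(0)
        fa_map, adc_map, cx_map = (rng.random(1000) for _ in range(3))
        per_metric = [gini(m) for m in (fa_map, adc_map, cx_map)]
        combined = float(np.mean(per_metric))   # naive aggregate, an assumption
        print(per_metric, combined)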

  20. Variance risk premia in CO_2 markets: A political perspective

    International Nuclear Information System (INIS)

    Reckling, Dennis

    2016-01-01

    The European Commission discusses the change of free allocation plans to guarantee a stable market equilibrium. Selling over-allocated contracts effectively depreciates prices and negates the effect intended by the regulator to establish a stable price mechanism for CO_2 assets. Our paper investigates mispricing and allocation issues by quantitatively analyzing variance risk premia of CO_2 markets over the course of changing regimes (Phase I-III) for three different assets (European Union Allowances, Certified Emissions Reductions and European Reduction Units). The research paper gives recommendations to regulatory bodies in order to most effectively cap the overall carbon dioxide emissions. The analysis of an enriched dataset, comprising not only of additional CO_2 assets, but also containing data from the European Energy Exchange, shows that variance risk premia are equal to a sample average of 0.69 for European Union Allowances (EUA), 0.17 for Certified Emissions Reductions (CER) and 0.81 for European Reduction Units (ERU). We identify the existence of a common risk factor across different assets that justifies the presence of risk premia. Various policy implications with regards to gaining investors’ confidence in the market are being reviewed. Consequently, we recommend the implementation of a price collar approach to support stable prices for emission allowances. - Highlights: •Enriched dataset covering all three political phases of the CO_2 markets. •Clear policy implications for regulators to most effectively cap the overall CO_2 emissions pool. •Applying a cross-asset benchmark index for variance beta estimation. •CER contracts have been analyzed with respect to variance risk premia for the first time. •Increased forecasting accuracy for CO_2 asset returns by using variance risk premia.

  1. Gravity interpretation of dipping faults using the variance analysis method

    International Nuclear Information System (INIS)

    Essa, Khalid S

    2013-01-01

    A new algorithm is developed to estimate simultaneously the depth and the dip angle of a buried fault from the normalized gravity gradient data. This algorithm utilizes numerical first horizontal derivatives computed from the observed gravity anomaly, using filters of successive window lengths to estimate the depth and the dip angle of a buried dipping fault structure. For a fixed window length, the depth is estimated using a least-squares sense for each dip angle. The method is based on computing the variance of the depths determined from all horizontal gradient anomaly profiles using the least-squares method for each dip angle. The minimum variance is used as a criterion for determining the correct dip angle and depth of the buried structure. When the correct dip angle is used, the variance of the depths is always less than the variances computed using wrong dip angles. The technique can be applied not only to the true residuals, but also to the measured Bouguer gravity data. The method is applied to synthetic data with and without random errors and two field examples from Egypt and Scotland. In all cases examined, the estimated depths and other model parameters are found to be in good agreement with the actual values. (paper)
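
    A minimal sketch of the minimum-variance selection logic described above, in Python. The depth estimator here is a stand-in placeholder (the paper's least-squares depth formula for a dipping fault is not reproduced); only the grid search over dip angles and the choice of the angle giving the smallest spread of window-wise depths follow the abstract.

        import numpy as np

        def estimate_depth(profile, window, dip_deg):
            # Placeholder for the paper's least-squares depth estimate from one
            # horizontal-gradient profile computed with a given window length.
            return window * np.max(np.abs(profile)) / (1.0 + np.cos(np.radians(dip_deg)))

        def best_dip(profiles, windows, dip_candidates):
            """Return (minimum variance, dip angle, mean depth) over candidate dips."""
            best = None
            for dip in dip_candidates:
                depths = [estimate_depth(p, w, dip) for p, w in zip(profiles, windows)]
                var = float(np.var(depths))
                if best is None or var < best[0]:
                    best = (var, dip, float(np.mean(depths)))
            return best

        rng = np.random.default_rng(1)
        profiles = [rng.normal(size=50) for _ in range(6)]
        windows = [3, 5, 7, 9, 11, 13]
        print(best_dip(profiles, windows, dip_candidates=range(10, 91, 10)))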

  2. Perspective projection for variance pose face recognition from camera calibration

    Science.gov (United States)

    Fakhir, M. M.; Woo, W. L.; Chambers, J. A.; Dlay, S. S.

    2016-04-01

    Variance pose is an important research topic in face recognition. The alteration of distance parameters across variance pose face features is a challenging problem. We provide a solution for this problem using perspective projection for variance pose face recognition. Our method infers the intrinsic camera parameters of the image, which enable the projection of the image plane into 3D. After this, face box tracking and centre-of-eyes detection can be performed using our novel technique to verify the virtual face feature measurements. The coordinate system of the perspective projection for face tracking allows the holistic dimensions of the face to be fixed in different orientations. Training on frontal images and the remaining poses in the FERET database determines the distance from the centre of the eyes to the corner of the face box. The recognition system compares the gallery of images against different poses. The system initially utilises information on the position of both eyes and then focuses principally on the closest eye in order to gather data with greater reliability. Differentiation between the distances and positions of the right and left eyes is a unique feature of our work, with our algorithm outperforming other state-of-the-art algorithms and enabling stable measurement under pose variance for each individual.

  3. Artificial Intelligence Techniques for Predicting and Mapping Daily Pan Evaporation

    Science.gov (United States)

    Arunkumar, R.; Jothiprakash, V.; Sharma, Kirty

    2017-09-01

    In this study, Artificial Intelligence techniques such as Artificial Neural Network (ANN), Model Tree (MT) and Genetic Programming (GP) are used to develop daily pan evaporation time-series (TS) prediction and cause-effect (CE) mapping models. Ten years of observed daily meteorological data such as maximum temperature, minimum temperature, relative humidity, sunshine hours, dew point temperature and pan evaporation are used for developing the models. For each technique, several models are developed by changing the number of inputs and other model parameters. The performance of each model is evaluated using standard statistical measures such as Mean Square Error, Mean Absolute Error, Normalized Mean Square Error and correlation coefficient (R). The results showed that daily TS-GP (4) model predicted better with a correlation coefficient of 0.959 than other TS models. Among various CE models, CE-ANN (6-10-1) resulted better than MT and GP models with a correlation coefficient of 0.881. Because of the complex non-linear inter-relationship among various meteorological variables, CE mapping models could not achieve the performance of TS models. From this study, it was found that GP performs better for recognizing single pattern (time series modelling), whereas ANN is better for modelling multiple patterns (cause-effect modelling) in the data.
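
    As a point of reference, the evaluation statistics named above can be computed as in the short Python sketch below; the normalisation used for the NMSE (dividing the MSE by the variance of the observations) is one common convention and an assumption here, not necessarily the one used in the study.

        import numpy as np

        def evaluate(obs, pred):
            """MSE, MAE, NMSE and correlation coefficient R for one model."""
            obs, pred = np.asarray(obs, float), np.asarray(pred, float)
            err = pred - obs
            return {
                "MSE": np.mean(err ** 2),
                "MAE": np.mean(np.abs(err)),
                "NMSE": np.mean(err ** 2) / np.var(obs),   # assumed normalisation
                "R": np.corrcoef(obs, pred)[0, 1],
            }

        observed  = np.array([3.1, 4.2, 5.0, 6.3, 5.8])   # pan evaporation (mm/day)
        predicted = np.array([3.0, 4.5, 4.8, 6.0, 6.1])
        print(evaluate(observed, predicted))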

  4. Comparative physical mapping between wheat chromosome arm 2BL and rice chromosome 4.

    Science.gov (United States)

    Lee, Tong Geon; Lee, Yong Jin; Kim, Dae Yeon; Seo, Yong Weon

    2010-12-01

    Physical maps of chromosomes provide a framework for organizing and integrating diverse genetic information. DNA microarrays are a valuable technique for physical mapping and can also be used to facilitate the discovery of single feature polymorphisms (SFPs). Wheat chromosome arm 2BL was physically mapped using a Wheat Genome Array onto near-isogenic lines (NILs) with the aid of wheat-rice synteny and mapped wheat EST information. Using high variance probe set (HVP) analysis, 314 HVPs constituting genes present on 2BL were identified. The 314 HVPs were grouped into 3 categories: HVPs that match only rice chromosome 4 (298 HVPs), those that match only wheat ESTs mapped on 2BL (1), and those that match both rice chromosome 4 and wheat ESTs mapped on 2BL (15). All HVPs were converted into gene sets, which represented either unique rice gene models or mapped wheat ESTs that matched identified HVPs. Comparative physical maps were constructed for 16 wheat gene sets and 271 rice gene sets. Of the 271 rice gene sets, 257 were mapped to the 18-35 Mb regions on rice chromosome 4. Based on HVP analysis and sequence similarity between the gene models in the rice chromosomes and mapped wheat ESTs, the outermost rice gene model that limits the translocation breakpoint to orthologous regions was identified.

  5. Variance-to-mean method generalized by linear difference filter technique

    International Nuclear Information System (INIS)

    Hashimoto, Kengo; Ohsaki, Hiroshi; Horiguchi, Tetsuo; Yamane, Yoshihiro; Shiroya, Seiji

    1998-01-01

    The conventional variance-to-mean method (Feynman-α method) seriously suffers the divergency of the variance under such a transient condition as a reactor power drift. Strictly speaking, then, the use of the Feynman-α is restricted to a steady state. To apply the method to more practical uses, it is desirable to overcome this kind of difficulty. For this purpose, we propose an usage of higher-order difference filter technique to reduce the effect of the reactor power drift, and derive several new formulae taking account of the filtering. The capability of the formulae proposed was demonstrated through experiments in the Kyoto University Critical Assembly. The experimental results indicate that the divergency of the variance can be effectively suppressed by the filtering technique, and that the higher-order filter becomes necessary with increasing variation rate in power
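
    The idea of suppressing a power drift with a difference filter before forming the variance-to-mean statistic can be sketched as follows in Python. The renormalisation by the binomial factor C(2n, n) is the factor by which an n-th order difference inflates the variance of uncorrelated data; the exact corrected formulae derived in the paper are not reproduced here.

        from math import comb
        import numpy as np

        def feynman_y(counts, filter_order=0):
            """Variance-to-mean ratio minus one of gated counts, optionally after
            applying an n-th order difference filter to remove a slow drift."""
            c = np.asarray(counts, dtype=float)
            n = filter_order
            d = np.diff(c, n=n) if n > 0 else c
            return d.var(ddof=1) / (comb(2 * n, n) * c.mean()) - 1.0

        rng = np.random.default_rng(2)
        rate = 100 + np.linspace(0, 50, 5000)      # simulated linear power drift
        counts = rng.poisson(rate)
        # Without filtering, the drift inflates Y; first differencing removes most
        # of it (for pure Poisson counts Y should be close to zero).
        print(feynman_y(counts, 0), feynman_y(counts, 1))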

  6. Estimation of (co)variances for genomic regions of flexible sizes

    DEFF Research Database (Denmark)

    Sørensen, Lars P; Janss, Luc; Madsen, Per

    2012-01-01

    BACKGROUND: Multi-trait genomic models in a Bayesian context can be used to estimate genomic (co)variances, either for a complete genome or for genomic regions (e.g. per chromosome) for the purpose of multi-trait genomic selection or to gain further insight into the genomic architecture of related...... with a common prior distribution for the marker allele substitution effects and estimation of the hyperparameters in this prior distribution from the progeny means data. From the Markov chain Monte Carlo samples of the allele substitution effects, genomic (co)variances were calculated on a whole-genome level...... was used. There was a clear difference in the region-wise patterns of genomic correlation among combinations of traits, with distinctive peaks indicating the presence of pleiotropic QTL. CONCLUSIONS: The results show that it is possible to estimate, genome-wide and region-wise genomic (co)variances......

  7. ADC histogram analysis for adrenal tumor histogram analysis of apparent diffusion coefficient in differentiating adrenal adenoma from pheochromocytoma.

    Science.gov (United States)

    Umanodan, Tomokazu; Fukukura, Yoshihiko; Kumagae, Yuichi; Shindo, Toshikazu; Nakajo, Masatoyo; Takumi, Koji; Nakajo, Masanori; Hakamada, Hiroto; Umanodan, Aya; Yoshiura, Takashi

    2017-04-01

    To determine the diagnostic performance of apparent diffusion coefficient (ADC) histogram analysis in diffusion-weighted (DW) magnetic resonance imaging (MRI) for differentiating adrenal adenoma from pheochromocytoma. We retrospectively evaluated 52 adrenal tumors (39 adenomas and 13 pheochromocytomas) in 47 patients (21 men, 26 women; mean age, 59.3 years; range, 16-86 years) who underwent DW 3.0T MRI. Histogram parameters of ADC (b-values of 0 and 200 [ADC200], 0 and 400 [ADC400], and 0 and 800 s/mm² [ADC800]), namely mean, variance, coefficient of variation (CV), kurtosis, skewness, and entropy, were compared between adrenal adenomas and pheochromocytomas, using the Mann-Whitney U-test. Receiver operating characteristic (ROC) curves for the histogram parameters were generated to differentiate adrenal adenomas from pheochromocytomas. Sensitivity and specificity were calculated by using a threshold criterion that would maximize the average of sensitivity and specificity. Variance and CV of ADC800 were significantly higher in pheochromocytomas than in adrenal adenomas. Areas under the ROC curve of the best-performing histogram parameter for diagnosing adrenal adenomas were 0.82 with ADC200, 0.87 with ADC400, and 0.92 with ADC800, with sensitivity of 84.6% and specificity of 84.6% (cutoff, ≤2.82) with ADC200; sensitivity of 89.7% and specificity of 84.6% (cutoff, ≤2.77) with ADC400; and sensitivity of 94.9% and specificity of 92.3% (cutoff, ≤2.67) with ADC800. ADC histogram analysis of DW MRI can help differentiate adrenal adenoma from pheochromocytoma. Level of Evidence: 3. J. Magn. Reson. Imaging 2017;45:1195-1203. © 2016 International Society for Magnetic Resonance in Medicine.
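
    The histogram descriptors listed above are standard and can be reproduced from an ROI's ADC values with a few lines of Python; the bin count used for the entropy (32) and the synthetic ROI values are assumptions for illustration only.

        import numpy as np
        from scipy import stats

        def adc_histogram_features(adc_values, bins=32):
            """Mean, variance, CV, skewness, kurtosis and entropy of ROI ADC values."""
            x = np.asarray(adc_values, dtype=float)
            hist, _ = np.histogram(x, bins=bins)
            p = hist[hist > 0] / hist.sum()
            return {
                "mean": x.mean(),
                "variance": x.var(ddof=1),
                "cv": x.std(ddof=1) / x.mean(),
                "skewness": stats.skew(x),
                "kurtosis": stats.kurtosis(x),
                "entropy": stats.entropy(p, base=2),
            }

        rng = np.random.default_rng(3)
        roi = rng.normal(1.1, 0.25, size=500)   # synthetic ADC values (1e-3 mm^2/s)
        print(adc_histogram_features(roi))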

  8. Is residual memory variance a valid method for quantifying cognitive reserve? A longitudinal application

    Science.gov (United States)

    Zahodne, Laura B.; Manly, Jennifer J.; Brickman, Adam M.; Narkhede, Atul; Griffith, Erica Y.; Guzman, Vanessa A.; Schupf, Nicole; Stern, Yaakov

    2016-01-01

    Cognitive reserve describes the mismatch between brain integrity and cognitive performance. Older adults with high cognitive reserve are more resilient to age-related brain pathology. Traditionally, cognitive reserve is indexed indirectly via static proxy variables (e.g., years of education). More recently, cross-sectional studies have suggested that reserve can be expressed as residual variance in episodic memory performance that remains after accounting for demographic factors and brain pathology (whole brain, hippocampal, and white matter hyperintensity volumes). The present study extends these methods to a longitudinal framework in a community-based cohort of 244 older adults who underwent two comprehensive neuropsychological and structural magnetic resonance imaging sessions over 4.6 years. On average, residual memory variance decreased over time, consistent with the idea that cognitive reserve is depleted over time. Individual differences in change in residual memory variance predicted incident dementia, independent of baseline residual memory variance. Multiple-group latent difference score models revealed tighter coupling between brain and language changes among individuals with decreasing residual memory variance. These results suggest that changes in residual memory variance may capture a dynamic aspect of cognitive reserve and could be a useful way to summarize individual cognitive responses to brain changes. Change in residual memory variance among initially non-demented older adults was a better predictor of incident dementia than residual memory variance measured at one time-point. PMID:26348002
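
    The residual approach described above amounts to regressing memory scores on demographics and brain measures and keeping the residual as the reserve estimate. Below is a minimal Python sketch with entirely synthetic variables (age, education, hippocampal volume, white matter hyperintensities); the variable names and the simple linear model are assumptions, not the study's exact specification.

        import numpy as np
        from sklearn.linear_model import LinearRegression

        def residual_memory(memory, covariates):
            """Residual of memory after regressing out demographic and brain measures.
            Larger positive residuals are read as higher cognitive reserve."""
            model = LinearRegression().fit(covariates, memory)
            return memory - model.predict(covariates)

        rng = np.random.default_rng(9)
        n = 244
        age = rng.uniform(65, 90, n)
        education = rng.integers(6, 21, n).astype(float)
        hippocampus = rng.normal(3.0, 0.4, n)          # synthetic volumes
        wmh = rng.gamma(2.0, 1.5, n)                   # white matter hyperintensities
        memory = (0.4 * education - 0.05 * age + 2.0 * hippocampus - 1.0 * wmh
                  + rng.normal(0, 1.0, n))
        covariates = np.column_stack([age, education, hippocampus, wmh])
        print(residual_memory(memory, covariates).std())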

  9. A study of heterogeneity of environmental variance for slaughter weight in pigs

    DEFF Research Database (Denmark)

    Ibánez-Escriche, N; Varona, L; Sorensen, D

    2008-01-01

    This work presents an analysis of heterogeneity of environmental variance for slaughter weight (175 days) in pigs. This heterogeneity is associated with systematic and additive genetic effects. The model also postulates the presence of additive genetic effects affecting the mean and environmental...... variance. The study reveals the presence of genetic variation at the level of the mean and the variance, but an absence of correlation, or a small negative correlation, between both types of additive genetic effects. In addition, we show that both the additive genetic effects on the mean and those...... on environmental variance have an important influence upon the future economic performance of selected individuals...

  10. Reproducibility of somatosensory spatial perceptual maps.

    Science.gov (United States)

    Steenbergen, Peter; Buitenweg, Jan R; Trojan, Jörg; Veltink, Peter H

    2013-02-01

    Various studies have shown subjects to mislocalize cutaneous stimuli in an idiosyncratic manner. Spatial properties of individual localization behavior can be represented in the form of perceptual maps. Individual differences in these maps may reflect properties of internal body representations, and perceptual maps may therefore be a useful method for studying these representations. For this to be the case, individual perceptual maps need to be reproducible, which has not yet been demonstrated. We assessed the reproducibility of localizations measured twice on subsequent days. Ten subjects participated in the experiments. Non-painful electrocutaneous stimuli were applied at seven sites on the lower arm. Subjects localized the stimuli on a photograph of their own arm, which was presented on a tablet screen overlaying the real arm. Reproducibility was assessed by calculating intraclass correlation coefficients (ICC) for the mean localizations of each electrode site and the slope and offset of regression models of the localizations, which represent scaling and displacement of perceptual maps relative to the stimulated sites. The ICCs of the mean localizations ranged from 0.68 to 0.93; the ICCs of the regression parameters were 0.88 for the intercept and 0.92 for the slope. These results indicate a high degree of reproducibility. We conclude that localization patterns of non-painful electrocutaneous stimuli on the arm are reproducible on subsequent days. Reproducibility is a necessary property of perceptual maps for these to reflect properties of a subject's internal body representations. Perceptual maps are therefore a promising method for studying body representations.
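
    For reference, an intraclass correlation for a subjects-by-sessions table can be computed as below; the ICC(2,1) form (two-way random effects, absolute agreement, single measurement) is a common choice for test-retest data, but the abstract does not state which form was used, so treat this as an illustrative assumption.

        import numpy as np

        def icc_2_1(y):
            """ICC(2,1) for an (n subjects x k sessions) array."""
            y = np.asarray(y, dtype=float)
            n, k = y.shape
            grand = y.mean()
            row_means, col_means = y.mean(axis=1), y.mean(axis=0)
            msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # subjects
            msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # sessions
            resid = y - row_means[:, None] - col_means[None, :] + grand
            mse = np.sum(resid ** 2) / ((n - 1) * (k - 1))
            return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

        # Ten subjects localising the same site on two days (synthetic values, mm).
        rng = np.random.default_rng(4)
        day1 = rng.normal(50, 5, size=10)
        day2 = day1 + rng.normal(0, 1.5, size=10)
        print(icc_2_1(np.column_stack([day1, day2])))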

  11. Biological Variance in Agricultural Products. Theoretical Considerations

    NARCIS (Netherlands)

    Tijskens, L.M.M.; Konopacki, P.

    2003-01-01

    The food that we eat is uniform neither in shape or appearance nor in internal composition or content. Since technology became increasingly important, the presence of biological variance in our food became more and more of a nuisance. Techniques and procedures (statistical, technical) were

  12. Decomposition of variance for spatial Cox processes

    DEFF Research Database (Denmark)

    Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus

    Spatial Cox point processes are a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models with additive...

  13. Regime shifts in mean-variance efficient frontiers: some international evidence

    OpenAIRE

    Massimo Guidolin; Federica Ria

    2010-01-01

    Regime switching models have been assuming a central role in financial applications because of their well-known ability to capture the presence of rich non-linear patterns in the joint distribution of asset returns. This paper examines how the presence of regimes in means, variances, and correlations of asset returns translates into explicit dynamics of the Markowitz mean-variance frontier. In particular, the paper shows both theoretically and through an application to international equity po...

  14. Self-organizing map classifier for stressed speech recognition

    Science.gov (United States)

    Partila, Pavol; Tovarek, Jaromir; Voznak, Miroslav

    2016-05-01

    This paper presents a method for detecting speech under stress using Self-Organizing Maps. Most people who are exposed to stressful situations cannot respond adequately to stimuli. The army, police, and fire departments account for the largest share of occupations that face an increased number of stressful situations. Personnel in action are directed by a control center, and control commands should be adapted to the psychological state of the person in action. It is known that psychological changes in the human body are also reflected physiologically, which consequently means that stress affects speech. A system for recognizing stress in speech is therefore needed by the security forces. One possible classifier, popular for its flexibility, is the self-organizing map, a type of artificial neural network. Flexibility here means that the classifier is independent of the character of the input data, a property well suited to speech processing. Human stress can be seen as a kind of emotional state. Mel-frequency cepstral coefficients, LPC coefficients, and prosody features were selected as input data because of their sensitivity to emotional changes. The parameters were calculated on speech recordings divided into two classes, namely stress-state recordings and normal-state recordings. The contribution of the experiment is a method using a SOM classifier for stressed speech detection. Results demonstrated the advantage of this method, namely its flexibility with respect to the input data.

  15. The pricing of long and short run variance and correlation risk in stock returns

    NARCIS (Netherlands)

    Cosemans, M.

    2011-01-01

    This paper studies the pricing of long and short run variance and correlation risk. The predictive power of the market variance risk premium for returns is driven by the correlation risk premium and the systematic part of individual variance premia. Furthermore, I find that aggregate volatility risk

  16. Analytical study of friction coefficients of pomegranate seed as essential parameters in design of post-harvest equipment

    Directory of Open Access Journals (Sweden)

    S.M. Shafaei

    2016-09-01

    Friction coefficients (static friction coefficient (SFC) and dynamic friction coefficient (DFC)) of pomegranate seed on different structural surfaces (glass, aluminum, plywood, galvanized steel and rubber) as affected by moisture content (4–21.9% d.b.) and sliding velocity (1.4–16 cm/s) were investigated. Analysis of variance (ANOVA) was performed to determine the effect of main treatments and their interactions on SFC and DFC. Significance of single or multiple effects of the main treatments with five levels was assessed using Duncan's multiple range test (DMRT). To predict SFC and DFC, the multiple linear regression (MLR) modeling technique was applied for each type of structural surface. The goodness of fit of each MLR model was evaluated using statistical parameters: coefficient of determination, root mean square error and mean relative deviation modulus. Results showed that the minimum and maximum SFC or DFC occurred at the minimum and maximum moisture content on the glass and rubber surfaces, respectively. The ANOVA table indicated a significant effect of the main treatments and their interactions on SFC and DFC at the 1% significance level (P < 0.01). According to the DMRT results, SFC increased linearly as moisture content increased, and DFC also increased linearly with individual or simultaneous increases in moisture content and sliding velocity, for all experimental conditions. According to the obtained statistical parameters, both SFC and DFC were properly predicted by the MLR modeling technique.
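
    A minimal Python sketch of the modelling and goodness-of-fit step: an ordinary least-squares MLR and the three reported statistics. The definition used for the mean relative deviation modulus (mean absolute relative error in percent) is the usual one but is an assumption here, and the synthetic moisture/SFC data are purely illustrative.

        import numpy as np

        def fit_mlr(X, y):
            """Ordinary least squares for y = b0 + X b, returning (coefficients, fit)."""
            X1 = np.column_stack([np.ones(len(X)), X])
            beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
            return beta, X1 @ beta

        def goodness_of_fit(y, y_hat):
            ss_res = np.sum((y - y_hat) ** 2)
            ss_tot = np.sum((y - y.mean()) ** 2)
            return {
                "R2": 1.0 - ss_res / ss_tot,
                "RMSE": np.sqrt(np.mean((y - y_hat) ** 2)),
                "P%": 100.0 * np.mean(np.abs((y - y_hat) / y)),   # assumed definition
            }

        rng = np.random.default_rng(5)
        moisture = rng.uniform(4, 22, size=40)                    # % d.b.
        sfc = 0.25 + 0.012 * moisture + rng.normal(0, 0.01, 40)   # synthetic SFC
        beta, pred = fit_mlr(moisture.reshape(-1, 1), sfc)
        print(beta, goodness_of_fit(sfc, pred))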

  17. A Bias and Variance Analysis for Multistep-Ahead Time Series Forecasting.

    Science.gov (United States)

    Ben Taieb, Souhaib; Atiya, Amir F

    2016-01-01

    Multistep-ahead forecasts can either be produced recursively by iterating a one-step-ahead time series model or directly by estimating a separate model for each forecast horizon. In addition, there are other strategies; some of them combine aspects of both aforementioned concepts. In this paper, we present a comprehensive investigation into the bias and variance behavior of multistep-ahead forecasting strategies. We provide a detailed review of the different multistep-ahead strategies. Subsequently, we perform a theoretical study that derives the bias and variance for a number of forecasting strategies. Finally, we conduct a Monte Carlo experimental study that compares and evaluates the bias and variance performance of the different strategies. From the theoretical and the simulation studies, we analyze the effect of different factors, such as the forecast horizon and the time series length, on the bias and variance components, and on the different multistep-ahead strategies. Several lessons are learned, and recommendations are given concerning the advantages, disadvantages, and best conditions of use of each strategy.
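
    The recursive and direct strategies contrasted in the abstract can be sketched with a plain autoregressive linear model, as below; the lag order, the use of scikit-learn's LinearRegression and the synthetic series are all illustrative assumptions rather than the paper's setup.

        import numpy as np
        from sklearn.linear_model import LinearRegression

        def make_lags(series, n_lags):
            """X[t] = (y[t-1], ..., y[t-n_lags]) with target y[t]."""
            X = np.column_stack([series[n_lags - i - 1:len(series) - i - 1]
                                 for i in range(n_lags)])
            return X, series[n_lags:]

        def recursive_forecast(series, n_lags, horizon):
            """One one-step model, iterated; its own outputs are fed back as inputs."""
            X, y = make_lags(series, n_lags)
            model = LinearRegression().fit(X, y)
            window = list(series[-n_lags:])            # oldest to newest
            preds = []
            for _ in range(horizon):
                nxt = model.predict(np.array(window[::-1]).reshape(1, -1))[0]
                preds.append(nxt)
                window = window[1:] + [nxt]
            return preds

        def direct_forecast(series, n_lags, horizon):
            """One separate model per horizon h, each trained on h-step-ahead targets."""
            X, y = make_lags(series, n_lags)
            last = series[-n_lags:][::-1].reshape(1, -1)
            preds = []
            for h in range(1, horizon + 1):
                Xh, yh = X[:len(X) - (h - 1)], y[h - 1:]
                preds.append(LinearRegression().fit(Xh, yh).predict(last)[0])
            return preds

        rng = np.random.default_rng(6)
        y = np.sin(0.2 * np.arange(200)) + 0.1 * rng.normal(size=200)
        print(recursive_forecast(y, n_lags=5, horizon=4))
        print(direct_forecast(y, n_lags=5, horizon=4))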

  18. A mapping closure for turbulent scalar mixing using a time-evolving reference field

    Science.gov (United States)

    Girimaji, Sharath S.

    1992-01-01

    A general mapping-closure approach for modeling scalar mixing in homogeneous turbulence is developed. This approach is different from the previous methods in that the reference field also evolves according to the same equations as the physical scalar field. The use of a time-evolving Gaussian reference field results in a model that is similar to the mapping closure model of Pope (1991), which is based on the methodology of Chen et al. (1989). Both models yield identical relationships between the scalar variance and higher-order moments, which are in good agreement with heat conduction simulation data and can be consistent with any type of epsilon(phi) evolution. The present methodology can be extended to any reference field whose behavior is known. The possibility of a beta-pdf reference field is explored. The shortcomings of the mapping closure methods are discussed, and the limit at which the mapping becomes invalid is identified.

  19. How to assess intra- and inter-observer agreement with quantitative PET using variance component analysis: a proposal for standardisation

    International Nuclear Information System (INIS)

    Gerke, Oke; Vilstrup, Mie Holm; Segtnan, Eivind Antonsen; Halekoh, Ulrich; Høilund-Carlsen, Poul Flemming

    2016-01-01

    Quantitative measurement procedures need to be accurate and precise to justify their clinical use. Precision reflects deviation of groups of measurements from one another, often expressed as proportions of agreement, standard errors of measurement, coefficients of variation, or the Bland-Altman plot. We suggest variance component analysis (VCA) to estimate the influence of errors due to single elements of a PET scan (scanner, time point, observer, etc.) to express the composite uncertainty of repeated measurements and obtain relevant repeatability coefficients (RCs) which have a unique relation to Bland-Altman plots. Here, we present this approach for assessment of intra- and inter-observer variation with PET/CT exemplified with data from two clinical studies. In study 1, 30 patients were scanned pre-operatively for the assessment of ovarian cancer, and their scans were assessed twice by the same observer to study intra-observer agreement. In study 2, 14 patients with glioma were scanned up to five times. The resulting 49 scans were assessed by three observers to examine inter-observer agreement. Outcome variables were SUVmax in study 1 and cerebral total hemispheric glycolysis (THG) in study 2. In study 1, we found an RC of 2.46 equalling half the width of the Bland-Altman limits of agreement. In study 2, the RC for identical conditions (same scanner, patient, time point, and observer) was 2392; allowing for different scanners increased the RC to 2543. Inter-observer differences were negligible compared to differences owing to other factors; between observers 1 and 2: −10 (95 % CI: −352 to 332) and between observers 1 and 3: 28 (95 % CI: −313 to 370). VCA is an appealing approach for weighing different sources of variation against each other, summarised as RCs. The involved linear mixed effects models require carefully considered sample sizes to account for the challenge of sufficiently accurately estimating variance components. The online version of this article (doi:10
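
    In the simplest special case (a single source of error, two repeated readings per subject under identical conditions), the repeatability coefficient reduces to 1.96 times the square root of twice the within-subject variance, which is half the width of the Bland-Altman limits of agreement when the mean difference is near zero. The Python sketch below shows that special case only; the study itself uses linear mixed-effects models to separate scanner, time-point and observer components, which is not reproduced here.

        import numpy as np

        def repeatability_coefficient(readings):
            """RC from an (n subjects x 2 repeated readings) array, assuming no bias."""
            m = np.asarray(readings, dtype=float)
            diffs = m[:, 1] - m[:, 0]
            within_var = np.mean(diffs ** 2) / 2.0      # within-subject variance
            return 1.96 * np.sqrt(2.0 * within_var)

        # Synthetic SUVmax read twice by the same observer (not the study's data).
        rng = np.random.default_rng(7)
        truth = rng.uniform(4, 12, size=30)
        reads = np.column_stack([truth + rng.normal(0, 0.9, 30),
                                 truth + rng.normal(0, 0.9, 30)])
        print(repeatability_coefficient(reads))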

  20. Variance inflation in high dimensional Support Vector Machines

    DEFF Research Database (Denmark)

    Abrahamsen, Trine Julie; Hansen, Lars Kai

    2013-01-01

    Many important machine learning models, supervised and unsupervised, are based on simple Euclidean distance or orthogonal projection in a high dimensional feature space. When estimating such models from small training sets we face the problem that the span of the training data set input vectors...... follow a different probability law with less variance. While the problem and basic means to reconstruct and deflate are well understood in unsupervised learning, the case of supervised learning is less well understood. We here investigate the effect of variance inflation in supervised learning including...... the case of Support Vector Machines (SVMs) and we propose a non-parametric scheme to restore proper generalizability. We illustrate the algorithm and its ability to restore performance on a wide range of benchmark data sets.

  1. Flood mapping with multitemporal MODIS data

    Science.gov (United States)

    Son, Nguyen-Thanh; Chen, Chi-Farn; Chen, Cheng-Ru

    2014-05-01

    Flood is one of the most devastating and frequent disasters, resulting in loss of human life and severe damage to infrastructure and agricultural production. Flooding occurs annually in the Mekong River Delta (MRD), Vietnam, typically lasting from July to November. Information on spatiotemporal flood dynamics is thus important for planners to devise successful strategies for flood monitoring and mitigation of its negative effects. The main objective of this study is to develop an approach for weekly mapping of flood dynamics with Moderate Resolution Imaging Spectroradiometer (MODIS) data in the MRD using the water fraction model (WFM). The data processing for 2009 comprised three main steps: (1) data pre-processing to construct a smooth time series of the difference in the values (DVLE) between the land surface water index (LSWI) and the enhanced vegetation index (EVI) using empirical mode decomposition (EMD), (2) flood derivation using the WFM, and (3) accuracy assessment. The mapping results were compared with ground reference data constructed from Envisat Advanced Synthetic Aperture Radar (ASAR) data. Although several error sources, including mixed-pixel problems and low-resolution bias between the mapping results and the ground reference data, could lower the classification accuracy, the comparisons indicated satisfactory results, with an overall accuracy of 80.5% and a Kappa coefficient of 0.61. These results were reaffirmed by a close correlation between the MODIS-derived flood area and that of the ground reference map at the provincial level, with a correlation coefficient (R2) of 0.93. Considering the importance of remote sensing for monitoring floods and mitigating the damage caused by floods to crops and infrastructure, this study demonstrates the value of using time-series MODIS DVLE data for weekly flood monitoring in the MRD with the aid of EMD and the WFM. Such an approach that could provide quantitative information on
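
    The DVLE signal itself is simple to compute from MODIS surface reflectance; a minimal Python sketch is given below using the usual LSWI and MODIS EVI definitions. The reflectance values are invented for illustration, and the EMD smoothing and WFM flood-fraction steps of the paper are not included.

        import numpy as np

        def lswi(nir, swir):
            """Land Surface Water Index."""
            return (nir - swir) / (nir + swir)

        def evi(nir, red, blue):
            """MODIS Enhanced Vegetation Index."""
            return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)

        def dvle(nir, swir, red, blue):
            """Difference in the values between LSWI and EVI (rises as a pixel floods)."""
            return lswi(nir, swir) - evi(nir, red, blue)

        # One pixel through the flood season (synthetic reflectances).
        nir  = np.array([0.35, 0.30, 0.20, 0.15])
        swir = np.array([0.20, 0.18, 0.10, 0.06])
        red  = np.array([0.08, 0.09, 0.10, 0.11])
        blue = np.array([0.04, 0.04, 0.05, 0.05])
        print(dvle(nir, swir, red, blue))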

  2. Studying Variance in the Galactic Ultra-compact Binary Population

    Science.gov (United States)

    Larson, Shane; Breivik, Katelyn

    2017-01-01

    In the years preceding LISA, Milky Way compact binary population simulations can be used to inform the science capabilities of the mission. Galactic population simulation efforts generally focus on high fidelity models that require extensive computational power to produce a single simulated population for each model. Each simulated population represents an incomplete sample of the functions governing compact binary evolution, thus introducing variance from one simulation to another. We present a rapid Monte Carlo population simulation technique that can simulate thousands of populations on week-long timescales, thus allowing a full exploration of the variance associated with a binary stellar evolution model.

  3. Variance estimates for transport in stochastic media by means of the master equation

    International Nuclear Information System (INIS)

    Pautz, S. D.; Franke, B. C.; Prinja, A. K.

    2013-01-01

    The master equation has been used to examine properties of transport in stochastic media. It has been shown previously that not only may the Levermore-Pomraning (LP) model be derived from the master equation for a description of ensemble-averaged transport quantities, but also that equations describing higher-order statistical moments may be obtained. We examine in greater detail the equations governing the second moments of the distribution of the angular fluxes, from which variances may be computed. We introduce a simple closure for these equations, as well as several models for estimating the variances of derived transport quantities. We revisit previous benchmarks for transport in stochastic media in order to examine the error of these new variance models. We find, not surprisingly, that the errors in these variance estimates are at least as large as the corresponding estimates of the average, and sometimes much larger. We also identify patterns in these variance estimates that may help guide the construction of more accurate models. (authors)

  4. Apparent diffusion coefficients of breast tumors. Clinical application

    International Nuclear Information System (INIS)

    Hatakenaka, Masamitsu; Soeda, Hiroyasu; Yabuuchi, Hidetake; Matsuo, Yoshio; Kamitani, Takeshi; Oda, Yoshinao; Tsuneyoshi, Masazumi; Honda, Hiroshi

    2008-01-01

    The purpose of this study was to evaluate the usefulness of apparent diffusion coefficient (ADC) for the differential diagnosis of breast tumors and to determine the relation between ADC and tumor cellularity. One hundred and thirty-six female patients (age range, 17-83 years; average age, 51.7 years) with 140 histologically proven breast tumors underwent diffusion-weighted magnetic resonance (MR) imaging (DWI) using the spin-echo echo-planar technique, and the ADCs of the tumors were calculated using 3 different b values, 0, 500, and 1000 s/mm². The diagnoses consisted of fibroadenoma (FA, n=16), invasive ductal carcinoma, not otherwise specified (IDC, n=117), medullary carcinoma (ME, n=3) and mucinous carcinoma (MU, n=4). Tumor cellularity was calculated from surgical specimens. The ADCs of breast tumors and cellularity were compared between different histological types by analysis of variance and Scheffe's post hoc test. The correlation between tumor cellularity and ADC was analyzed by Pearson correlation test. Significant differences were observed in ADCs between FA and all types of cancers, and tumor cellularity correlated inversely with ADC (R² = 0.451). The ADC may potentially help in differentiating benign and malignant breast tumors. Tumor ADC correlates inversely with tumor cellularity. (author)
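
    For orientation, the ADC itself comes from a mono-exponential signal model, S(b) = S0*exp(-b*ADC), so with several b-values it can be obtained from a log-linear least-squares fit; the signal intensities below are invented for illustration.

        import numpy as np

        def adc_two_point(s0, sb, b):
            """ADC from two b-values: ln(S0/Sb)/b, in mm^2/s when b is in s/mm^2."""
            return np.log(s0 / sb) / b

        def adc_fit(signals, bvals):
            """Log-linear least-squares fit over several b-values (e.g. 0, 500, 1000)."""
            slope, _ = np.polyfit(np.asarray(bvals, float), np.log(signals), 1)
            return -slope

        print(adc_two_point(820.0, 310.0, b=1000))        # ~9.7e-4 mm^2/s
        print(adc_fit([820.0, 470.0, 310.0], [0, 500, 1000]))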

  5. Heterogeneity of variance components for preweaning growth in Romane sheep due to the number of lambs reared

    Directory of Open Access Journals (Sweden)

    Poivey Jean-Paul

    2011-09-01

    Abstract Background The pre-weaning growth rate of lambs, an important component of meat market production, is affected by maternal and direct genetic effects. The French genetic evaluation model takes into account the number of lambs suckled by applying a multiplicative factor (1 for a lamb reared as a single, 0.7 for twin-reared lambs) to the maternal genetic effect, in addition to including the birth*rearing type combination as a fixed effect, which acts on the mean. However, little evidence has been provided to justify the use of this multiplicative model. The two main objectives of the present study were to determine, by comparing models of analysis, (1) whether pre-weaning growth is the same trait in single- and twin-reared lambs and (2) whether the multiplicative coefficient represents a good approach for taking this possible difference into account. Methods Data on the pre-weaning growth rate, defined as the average daily gain from birth to 45 days of age, on 29,612 Romane lambs born between 1987 and 2009 at the experimental farm of La Sapinière (INRA, France) were used to compare eight models that account for the number of lambs per dam reared in various ways. Models were compared using the Akaike information criterion. Results The model that best fitted the data assumed that (1) direct (maternal) effects correspond to the same trait regardless of the number of lambs reared, (2) the permanent environmental effects and variances associated with the dam depend on the number of lambs reared and (3) the residual variance depends on the number of lambs reared. Even though this model fitted the data better than a model that included a multiplicative coefficient, little difference was found between EBV from the different models (the correlation between EBV varied from 0.979 to 0.999). Conclusions Based on experimental data, the current genetic evaluation model can be improved to better take into account the number of lambs reared. Thus, it would be of

  6. An optimal strategy for functional mapping of dynamic trait loci.

    Science.gov (United States)

    Jin, Tianbo; Li, Jiahan; Guo, Ying; Zhou, Xiaojing; Yang, Runqing; Wu, Rongling

    2010-02-01

    As an emerging powerful approach for mapping quantitative trait loci (QTLs) responsible for dynamic traits, functional mapping models the time-dependent mean vector with biologically meaningful equations and is likely to generate biologically relevant and interpretable results. Given the autocorrelated nature of a dynamic trait, functional mapping requires models for the structure of the covariance matrix. In this article, we have provided a comprehensive set of approaches for modelling the covariance structure and incorporated each of these approaches into the framework of functional mapping. The Bayesian information criterion (BIC) values are used as a model selection criterion to choose the optimal combination of the submodels for the mean vector and covariance structure. In an example for leaf age growth from a rice molecular genetic project, the best submodel combination was found to be the Gaussian model for the correlation structure, the power equation of order 1 for the variance and the power curve for the mean vector. Under this combination, several significant QTLs for leaf age growth trajectories were detected on different chromosomes. Our model can be well used to study the genetic architecture of dynamic traits of agricultural value.

  7. Temporal variance reverses the impact of high mean intensity of stress in climate change experiments.

    Science.gov (United States)

    Benedetti-Cecchi, Lisandro; Bertocci, Iacopo; Vaselli, Stefano; Maggi, Elena

    2006-10-01

    Extreme climate events produce simultaneous changes to the mean and to the variance of climatic variables over ecological time scales. While several studies have investigated how ecological systems respond to changes in mean values of climate variables, the combined effects of mean and variance are poorly understood. We examined the response of low-shore assemblages of algae and invertebrates of rocky seashores in the northwest Mediterranean to factorial manipulations of mean intensity and temporal variance of aerial exposure, a type of disturbance whose intensity and temporal patterning of occurrence are predicted to change with changing climate conditions. Effects of variance were often in the opposite direction of those elicited by changes in the mean. Increasing aerial exposure at regular intervals had negative effects both on diversity of assemblages and on percent cover of filamentous and coarsely branched algae, but greater temporal variance drastically reduced these effects. The opposite was observed for the abundance of barnacles and encrusting coralline algae, where high temporal variance of aerial exposure either reversed a positive effect of mean intensity (barnacles) or caused a negative effect that did not occur under low temporal variance (encrusting algae). These results provide the first experimental evidence that changes in mean intensity and temporal variance of climatic variables affect natural assemblages of species interactively, suggesting that high temporal variance may mitigate the ecological impacts of ongoing and predicted climate changes.

  8. Genetic and environmental variance in content dimensions of the MMPI.

    Science.gov (United States)

    Rose, R J

    1988-08-01

    To evaluate genetic and environmental variance in the Minnesota Multiphasic Personality Inventory (MMPI), I studied nine factor scales identified in the first item factor analysis of normal adult MMPIs in a sample of 820 adolescent and young adult co-twins. Conventional twin comparisons documented heritable variance in six of the nine MMPI factors (Neuroticism, Psychoticism, Extraversion, Somatic Complaints, Inadequacy, and Cynicism), whereas significant influence from shared environmental experience was found for four factors (Masculinity versus Femininity, Extraversion, Religious Orthodoxy, and Intellectual Interests). Genetic variance in the nine factors was more evident in results from twin sisters than those of twin brothers, and a developmental-genetic analysis, using hierarchical multiple regressions of double-entry matrixes of the twins' raw data, revealed that in four MMPI factor scales, genetic effects were significantly modulated by age or gender or their interaction during the developmental period from early adolescence to early adulthood.

  9. Is residual memory variance a valid method for quantifying cognitive reserve? A longitudinal application.

    Science.gov (United States)

    Zahodne, Laura B; Manly, Jennifer J; Brickman, Adam M; Narkhede, Atul; Griffith, Erica Y; Guzman, Vanessa A; Schupf, Nicole; Stern, Yaakov

    2015-10-01

    Cognitive reserve describes the mismatch between brain integrity and cognitive performance. Older adults with high cognitive reserve are more resilient to age-related brain pathology. Traditionally, cognitive reserve is indexed indirectly via static proxy variables (e.g., years of education). More recently, cross-sectional studies have suggested that reserve can be expressed as residual variance in episodic memory performance that remains after accounting for demographic factors and brain pathology (whole brain, hippocampal, and white matter hyperintensity volumes). The present study extends these methods to a longitudinal framework in a community-based cohort of 244 older adults who underwent two comprehensive neuropsychological and structural magnetic resonance imaging sessions over 4.6 years. On average, residual memory variance decreased over time, consistent with the idea that cognitive reserve is depleted over time. Individual differences in change in residual memory variance predicted incident dementia, independent of baseline residual memory variance. Multiple-group latent difference score models revealed tighter coupling between brain and language changes among individuals with decreasing residual memory variance. These results suggest that changes in residual memory variance may capture a dynamic aspect of cognitive reserve and could be a useful way to summarize individual cognitive responses to brain changes. Change in residual memory variance among initially non-demented older adults was a better predictor of incident dementia than residual memory variance measured at one time-point. Copyright © 2015. Published by Elsevier Ltd.

  10. Heritability, variance components and genetic advance of some ...

    African Journals Online (AJOL)

    Heritability, variance components and genetic advance of some yield and yield related traits in Ethiopian ... African Journal of Biotechnology ... randomized complete block design at Adet Agricultural Research Station in 2008 cropping season.

  11. Weyl q-coefficients for uq(3) and Racah q -coefficients for suq(2)

    International Nuclear Information System (INIS)

    Asherova, R.M.; Smirnov, Yu.F.; Tolstoy, V.N.

    1996-01-01

    With the aid of the projection-operator technique, the general analytic expression for the elements of the matrix that relates the U and T bases of an arbitrary finite-dimensional irreducible representation of the uq(3) quantum algebra (Weyl q-coefficients) is obtained for the case where the deformation parameter q is not equal to a square root of unity. The procedure for resummation of q-factorial expressions is used to prove that, modulo phase factors, these Weyl q-coefficients coincide with Racah q-coefficients for the suq(2) quantum algebra. It is also shown that, on the basis of one general formula, the q-analogs of all known general analytic expressions for the 6j symbols (and Racah coefficients) of the Lie algebras of the angular momentum can be obtained by using this resummation procedure. The symmetry properties of these q coefficients are discussed. The result is formulated in the following way: the general formulas for the q-6j symbols (Racah q-coefficients) of the suq(2) quantum algebra are obtained from the general formulas for the conventional 6j symbols (Racah coefficients) of the su(2) Lie algebra by replacing directly all factorials with q-factorials, the symmetry properties of the q-6j symbols being completely coincident with the symmetry properties of the conventional 6j symbols

  12. Use of regression‐based models to map sensitivity of aquatic resources to atmospheric deposition in Yosemite National Park, USA

    Science.gov (United States)

    Clow, David W.; Nanus, Leora; Huggett, Brian

    2010-01-01

    An abundance of exposed bedrock, sparse soil and vegetation, and fast hydrologic flushing rates make aquatic ecosystems in Yosemite National Park susceptible to nutrient enrichment and episodic acidification due to atmospheric deposition of nitrogen (N) and sulfur (S). In this study, multiple linear regression (MLR) models were created to estimate fall‐season nitrate and acid neutralizing capacity (ANC) in surface water in Yosemite wilderness. Input data included estimated winter N deposition, fall‐season surface‐water chemistry measurements at 52 sites, and basin characteristics derived from geographic information system layers of topography, geology, and vegetation. The MLR models accounted for 84% and 70% of the variance in surface‐water nitrate and ANC, respectively. Explanatory variables (and the sign of their coefficients) for nitrate included elevation (positive) and the abundance of neoglacial and talus deposits (positive), unvegetated terrain (positive), alluvium (negative), and riparian (negative) areas in the basins. Explanatory variables for ANC included basin area (positive) and the abundance of metamorphic rocks (positive), unvegetated terrain (negative), water (negative), and winter N deposition (negative) in the basins. The MLR equations were applied to 1407 stream reaches delineated in the National Hydrography Data Set for Yosemite, and maps of predicted surface‐water nitrate and ANC concentrations were created. Predicted surface‐water nitrate concentrations were highest in small, high‐elevation cirques, and concentrations declined downstream. Predicted ANC concentrations showed the opposite pattern, except in high‐elevation areas underlain by metamorphic rocks along the Sierran Crest, which had relatively high predicted ANC (>200 μeq L−1). Maps were created to show where basin characteristics predispose aquatic resources to nutrient enrichment and acidification effects from N and S deposition. The maps can be used to help guide

  13. Use of regression-based models to map sensitivity of aquatic resources to atmospheric deposition in Yosemite National Park, USA

    Science.gov (United States)

    Clow, D. W.; Nanus, L.; Huggett, B. W.

    2010-12-01

    An abundance of exposed bedrock, sparse soil and vegetation, and fast hydrologic flushing rates make aquatic ecosystems in Yosemite National Park susceptible to nutrient enrichment and episodic acidification due to atmospheric deposition of nitrogen (N) and sulfur (S). In this study, multiple-linear regression (MLR) models were created to estimate fall-season nitrate and acid neutralizing capacity (ANC) in surface water in Yosemite wilderness. Input data included estimated winter N deposition, fall-season surface-water chemistry measurements at 52 sites, and basin characteristics derived from geographic information system layers of topography, geology, and vegetation. The MLR models accounted for 84% and 70% of the variance in surface-water nitrate and ANC, respectively. Explanatory variables (and the sign of their coefficients) for nitrate included elevation (positive) and the abundance of neoglacial and talus deposits (positive), unvegetated terrain (positive), alluvium (negative), and riparian (negative) areas in the basins. Explanatory variables for ANC included basin area (positive) and the abundance of metamorphic rocks (positive), unvegetated terrain (negative), water (negative), and winter N deposition (negative) in the basins. The MLR equations were applied to 1407 stream reaches delineated in the National Hydrography Dataset for Yosemite, and maps of predicted surface-water nitrate and ANC concentrations were created. Predicted surface-water nitrate concentrations were highest in small, high-elevation cirques, and concentrations declined downstream. Predicted ANC concentrations showed the opposite pattern, except in high-elevation areas underlain by metamorphic rocks along the Sierran Crest, which had relatively high predicted ANC (>200 µeq L-1). Maps were created to show where basin characteristics predispose aquatic resources to nutrient enrichment and acidification effects from N and S deposition. The maps can be used to help guide development of

  14. Modality-Driven Classification and Visualization of Ensemble Variance

    Energy Technology Data Exchange (ETDEWEB)

    Bensema, Kevin; Gosink, Luke; Obermaier, Harald; Joy, Kenneth I.

    2016-10-01

    Advances in computational power now enable domain scientists to address conceptual and parametric uncertainty by running simulations multiple times in order to sufficiently sample the uncertain input space. While this approach helps address conceptual and parametric uncertainties, the ensemble datasets produced by this technique present a special challenge to visualization researchers as the ensemble dataset records a distribution of possible values for each location in the domain. Contemporary visualization approaches that rely solely on summary statistics (e.g., mean and variance) cannot convey the detailed information encoded in ensemble distributions that are paramount to ensemble analysis; summary statistics provide no information about modality classification and modality persistence. To address this problem, we propose a novel technique that classifies high-variance locations based on the modality of the distribution of ensemble predictions. Additionally, we develop a set of confidence metrics to inform the end-user of the quality of fit between the distribution at a given location and its assigned class. We apply a similar method to time-varying ensembles to illustrate the relationship between peak variance and bimodal or multimodal behavior. These classification schemes enable a deeper understanding of the behavior of the ensemble members by distinguishing between distributions that can be described by a single tendency and distributions which reflect divergent trends in the ensemble.

  15. Diagnosis of the Ill-condition of the RFM Based on Condition Index and Variance Decomposition Proportion (CIVDP)

    International Nuclear Information System (INIS)

    Qing, Zhou; Weili, Jiao; Tengfei, Long

    2014-01-01

    The Rational Function Model (RFM) is a new generalized sensor model. It does not need the physical parameters of sensors to achieve an accuracy comparable to that of rigorous sensor models. At present, the main method for solving the RPCs is least squares estimation. But when the number of coefficients is large or the distribution of the control points is uneven, the classical least squares method loses its superiority due to the ill-conditioning of the design matrix. Condition Index and Variance Decomposition Proportion (CIVDP) is a reliable method for diagnosing multicollinearity in the design matrix. It can not only detect the multicollinearity, but can also locate the affected parameters and show the corresponding columns in the design matrix. In this paper, the CIVDP method is used to diagnose the ill-conditioning of the RFM and to find the multicollinearity in the normal matrix
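
    The condition indices and variance-decomposition proportions can be obtained from a singular value decomposition of the column-scaled design matrix, in the spirit of Belsley's collinearity diagnostics; the Python sketch below and its nearly collinear toy design matrix are illustrative and not tied to the RFM normal equations of the paper.

        import numpy as np

        def civdp(X):
            """Condition indices and variance-decomposition proportions.
            A large condition index together with two or more proportions above
            ~0.5 in the same column flags a collinear relation."""
            X = np.asarray(X, dtype=float)
            Xs = X / np.linalg.norm(X, axis=0)          # unit-length columns
            _, s, Vt = np.linalg.svd(Xs, full_matrices=False)
            cond_idx = s.max() / s
            phi = (Vt.T ** 2) / s ** 2                  # share of var(coef j) per component
            return cond_idx, phi / phi.sum(axis=1, keepdims=True)

        rng = np.random.default_rng(8)
        a, b = rng.normal(size=100), rng.normal(size=100)
        X = np.column_stack([a, b, a + b + 1e-3 * rng.normal(size=100)])
        ci, vdp = civdp(X)
        print(np.round(ci, 1))
        print(np.round(vdp, 2))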

  16. Diagnosis of the Ill-condition of the RFM Based on Condition Index and Variance Decomposition Proportion (CIVDP)

    Science.gov (United States)

    Qing, Zhou; Weili, Jiao; Tengfei, Long

    2014-03-01

    The Rational Function Model (RFM) is a new generalized sensor model. It does not need the physical parameters of sensors to achieve an accuracy comparable to that of rigorous sensor models. At present, the main method for solving the RPCs is least squares estimation. But when the number of coefficients is large or the distribution of the control points is uneven, the classical least squares method loses its superiority due to the ill-conditioning of the design matrix. Condition Index and Variance Decomposition Proportion (CIVDP) is a reliable method for diagnosing multicollinearity in the design matrix. It can not only detect the multicollinearity, but can also locate the affected parameters and show the corresponding columns in the design matrix. In this paper, the CIVDP method is used to diagnose the ill-conditioning of the RFM and to find the multicollinearity in the normal matrix.

  17. Nonlinear unbiased minimum-variance filter for Mars entry autonomous navigation under large uncertainties and unknown measurement bias.

    Science.gov (United States)

    Xiao, Mengli; Zhang, Yongbo; Fu, Huimin; Wang, Zhihua

    2018-05-01

    A high-precision navigation algorithm is essential for future Mars pinpoint landing missions. The unknown inputs caused by large uncertainties in atmospheric density and aerodynamic coefficients, as well as unknown measurement biases, may cause large estimation errors in conventional Kalman filters. This paper proposes a derivative-free version of the nonlinear unbiased minimum-variance filter for Mars entry navigation. The filter is designed to solve this problem by estimating the state and the unknown measurement biases simultaneously in a derivative-free manner, leading to a high-precision algorithm for Mars entry navigation. IMU/radio-beacon integrated navigation is introduced in the simulation, and the results show that, with or without radio blackout, the proposed filter achieves accurate state estimation, much better than the conventional unscented Kalman filter, demonstrating its suitability as a high-precision Mars entry navigation algorithm. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  18. The variance of the locally measured Hubble parameter explained with different estimators

    DEFF Research Database (Denmark)

    Odderskov, Io Sandberg Hess; Hannestad, Steen; Brandbyge, Jacob

    2017-01-01

    We study the expected variance of measurements of the Hubble constant, H0, as calculated in either linear perturbation theory or using non-linear velocity power spectra derived from N-body simulations. We compare the variance with that obtained by carrying out mock observations in the N......-body simulations, and show that the estimator typically used for the local Hubble constant in studies based on perturbation theory is different from the one used in studies based on N-body simulations. The latter gives larger weight to distant sources, which explains why studies based on N-body simulations tend...... to obtain a smaller variance than that found from studies based on the power spectrum. Although both approaches result in a variance too small to explain the discrepancy between the value of H0 from CMB measurements and the value measured in the local universe, these considerations are important in light...

  19. Variance Risk Premia on Stocks and Bonds

    DEFF Research Database (Denmark)

    Mueller, Philippe; Sabtchevsky, Petar; Vedolin, Andrea

    Investors in fixed income markets are willing to pay a very large premium to be hedged against shocks in expected volatility and the size of this premium can be studied through variance swaps. Using thirty years of option and high-frequency data, we document the following novel stylized facts...

  20. The Truth About Ballistic Coefficients

    OpenAIRE

    Courtney, Michael; Courtney, Amy

    2007-01-01

    The ballistic coefficient of a bullet describes how it slows in flight due to air resistance. This article presents experimental determinations of ballistic coefficients showing that the majority of bullets tested have their previously published ballistic coefficients exaggerated by 5–25% by the bullet manufacturers. These exaggerated ballistic coefficients lead to inaccurate predictions of long-range bullet drop, retained energy and wind drift.
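
    As a hedged illustration of why an exaggerated ballistic coefficient distorts drop predictions, the sketch below integrates a point-mass trajectory under the simplifying assumption of a constant drag coefficient, with the ballistic coefficient taken as m/(Cd·A); real bullets follow standard drag functions such as G1 or G7, so the numbers are only indicative.

        """Point-mass sketch (not from the article): with BC defined as m/(Cd*A),
        drag deceleration is rho*v^2/(2*BC).  Comparing a true BC with one inflated
        by 15% shows how an exaggerated BC underpredicts drop at long range.
        A constant drag coefficient is assumed, so the numbers are only illustrative."""

        RHO = 1.225   # air density, kg/m^3
        G = 9.81      # m/s^2

        def drop_at_range(bc_kg_m2, v0=850.0, x_max=800.0, dt=1e-4):
            """Integrate a flat-fire trajectory and return bullet drop (m) at x_max."""
            x, y, vx, vy = 0.0, 0.0, v0, 0.0
            while x < x_max:
                v = (vx**2 + vy**2) ** 0.5
                a_drag = RHO * v**2 / (2.0 * bc_kg_m2)   # deceleration magnitude
                vx += (-a_drag * vx / v) * dt
                vy += (-a_drag * vy / v - G) * dt
                x += vx * dt
                y += vy * dt
            return -y

        bc_true = 700.0          # kg/m^2, hypothetical value for a rifle bullet
        print("drop with true BC :", round(drop_at_range(bc_true), 2), "m")
        print("drop with BC +15% :", round(drop_at_range(1.15 * bc_true), 2), "m")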

  1. Mapping Spaces, Centralizers, and p-Local Finite Groups of Lie Type

    DEFF Research Database (Denmark)

    Laude, Isabelle

    We study the space of maps from the classifying space of a finite p-group to the Borel construction of a finite group of Lie type G in characteristic p acting on its building. The first main result is a description of the homology with Fp-coefficients, showing that the mapping space, up to p-completion, is a disjoint union indexed over the group homomorphisms up to conjugation of classifying spaces of centralizers of p-subgroups in the underlying group G. We complement this description by determining the actual homotopy groups of the mapping space. These results translate to descriptions of the space of maps between a finite p-group and the uncompleted classifying space of the p-local finite group coming from a finite group of Lie type in characteristic p, providing some of the first results in this uncompleted setting.

  2. A Monte Carlo experiment to analyze the curse of dimensionality in estimating random coefficients models with a full variance–covariance matrix

    DEFF Research Database (Denmark)

    Cherchi, Elisabetta; Guevara, Cristian Angelo

    2012-01-01

    The deterioration in the performance of simulation-based estimators as the number of parameters increases is usually known as the “curse of dimensionality” in simulation methods. We investigate this problem for the random coefficients Logit model. We compare the traditional Maximum Simulated Likelihood (MSL) method with two alternative estimation methods that do not require simulation: the Expectation–Maximization (EM) and the Laplace Approximation (HH) methods. We use Monte Carlo experimentation to investigate systematically the performance of the methods under different circumstances, including different numbers of variables, sample sizes and structures of the variance–covariance matrix.
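
    As a minimal, hypothetical sketch of the object whose estimation becomes costly as the number of random parameters grows, the code below evaluates the simulated log-likelihood of a random coefficients Logit model by averaging logit probabilities over draws of the coefficients; the data, dimensions and diagonal covariance are invented, and a full variance–covariance matrix would require correlated draws.

        """Minimal sketch of the simulated log-likelihood of a random-coefficients
        (mixed) Logit model; data and dimensions are hypothetical."""
        import numpy as np

        rng = np.random.default_rng(0)
        N, J, K, R = 500, 3, 2, 200          # individuals, alternatives, attributes, draws
        X = rng.normal(size=(N, J, K))       # attributes of each alternative
        beta_true = np.array([1.0, -0.5])
        y = np.array([rng.choice(J, p=np.exp(Xi @ beta_true) / np.exp(Xi @ beta_true).sum())
                      for Xi in X])          # simulated choices

        def simulated_loglik(mu, sigma):
            """Average logit probabilities over R draws beta_r ~ N(mu, diag(sigma^2))."""
            draws = mu + sigma * rng.standard_normal(size=(R, K))
            ll = 0.0
            for i in range(N):
                v = X[i] @ draws.T                     # (J, R) utilities
                p = np.exp(v) / np.exp(v).sum(axis=0)  # logit probabilities per draw
                ll += np.log(p[y[i]].mean())           # simulated choice probability
            return ll

        print(simulated_loglik(np.array([1.0, -0.5]), np.array([0.1, 0.1])))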

  3. Mapping carcass and meat quality QTL on Sus Scrofa chromosome 2 in commercial finishing pigs

    Directory of Open Access Journals (Sweden)

    van Kampen Tony A

    2009-01-01

    Full Text Available Abstract Quantitative trait loci (QTL) affecting carcass and meat quality located on SSC2 were identified using variance component methods. A large number of traits involved in meat and carcass quality was recorded in a commercial crossbred population: 1855 pigs sired by 17 boars from a synthetic line, which were homozygous (A/A) for IGF2. Using combined linkage and linkage disequilibrium mapping (LDLA), several QTL significantly affecting loin muscle mass, ham weight and ham muscles (outer ham and knuckle ham), and meat quality traits, such as Minolta-L* and -b*, ultimate pH and Japanese colour score, were detected. These results agree well with previous QTL studies involving SSC2. Since our study was carried out on crossbreds, different QTL may be segregating in the parental lines. To address this question, we compared models with a single QTL variance component with models allowing for separate sire and dam QTL variance components. The same QTL were identified with the single QTL variance component model as with the model allowing for separate variances, with minor differences with respect to QTL location. However, the variance component method made it possible to detect QTL segregating in the paternal line (e.g. HAMB), the maternal lines (e.g. Ham) or in both (e.g. pHu). Combining association and linkage information among haplotypes slightly improved the significance of the QTL compared to an analysis using linkage information only.

  4. Tensor models, Kronecker coefficients and permutation centralizer algebras

    Science.gov (United States)

    Geloun, Joseph Ben; Ramgoolam, Sanjaye

    2017-11-01

    We show that the counting of observables and correlators for a 3-index tensor model is organized by the structure of a family of permutation centralizer algebras. These algebras are shown to be semi-simple and their Wedderburn-Artin decompositions into matrix blocks are given in terms of Clebsch-Gordan coefficients of symmetric groups. The matrix basis for the algebras also gives an orthogonal basis for the tensor observables which diagonalizes the Gaussian two-point functions. The centres of the algebras are associated with correlators which are expressible in terms of Kronecker coefficients (Clebsch-Gordan multiplicities of symmetric groups). The color-exchange symmetry present in the Gaussian model, as well as a large class of interacting models, is used to refine the description of the permutation centralizer algebras. This discussion is extended to a general number of colors d: it is used to prove the integrality of an infinite family of number sequences related to color-symmetrizations of colored graphs, and expressible in terms of symmetric group representation theory data. Generalizing a connection between matrix models and Belyi maps, correlators in Gaussian tensor models are interpreted in terms of covers of singular 2-complexes. There is an intriguing difference, between matrix and higher rank tensor models, in the computational complexity of superficially comparable correlators of observables parametrized by Young diagrams.

  5. A model for implementing soundscape maps in smart cities

    Directory of Open Access Journals (Sweden)

    Kang Jian

    2018-04-01

    Full Text Available Smart cities are required to engage with local communities by promoting a user-centred approach to deal with urban life issues and ultimately enhance people’s quality of life. Soundscape promotes a similar approach, based on individuals’ perception of acoustic environments. This paper aims to establish a model to implement soundscape maps for the monitoring and management of the acoustic environment and to demonstrate its feasibility. The final objective of the model is to generate visual maps related to perceptual attributes (e.g. ‘calm’, ‘pleasant’), starting from audio recordings of everyday acoustic environments. The proposed model relies on three main stages: (1) sound source recognition and profiling, (2) prediction of the soundscape’s perceptual attributes and (3) implementation of soundscape maps. This research particularly explores the two latter phases, for which a set of sub-processes and methods is proposed and discussed. An accuracy analysis was performed with satisfactory results: the prediction models of the second stage explained up to 57.5% of the attributes’ variance; the cross-validation errors of the model were close to zero. These findings show that the proposed model is likely to produce representative maps of an individual’s sonic perception in a given environment.
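
    The sketch below is a hypothetical illustration of stages (2) and (3): a regression model predicts a perceptual attribute from acoustic features, and the predictions at measurement positions are interpolated onto a grid to render a map. The features, model and data are invented and do not reproduce the paper's predictors.

        """Hypothetical sketch of stages (2) and (3): predict a perceptual attribute
        from acoustic features, then interpolate predictions over a spatial grid to
        render a soundscape map.  Features, model choice and data are invented."""
        import numpy as np
        from sklearn.linear_model import LinearRegression
        from scipy.interpolate import griddata

        rng = np.random.default_rng(0)

        # Stage 2: regression from acoustic features to a rated attribute (0-100 scale).
        features = rng.normal(size=(80, 3))     # e.g. level, sharpness, traffic-source share
        pleasant = 50 + features @ np.array([-8.0, -3.0, -10.0]) + rng.normal(0, 5, 80)
        model = LinearRegression().fit(features, pleasant)

        # Stage 3: predictions at monitoring positions, interpolated onto a grid.
        positions = rng.uniform(0, 100, size=(80, 2))          # x, y in metres
        predicted = model.predict(features)
        gx, gy = np.mgrid[0:100:50j, 0:100:50j]
        soundscape_map = griddata(positions, predicted, (gx, gy), method="linear")
        print(soundscape_map.shape)   # (50, 50) map of predicted 'pleasant' scores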

  6. On Mean-Variance Hedging of Bond Options with Stochastic Risk Premium Factor

    NARCIS (Netherlands)

    Aihara, ShinIchi; Bagchi, Arunabha; Kumar, Suresh K.

    2014-01-01

    We consider the mean-variance hedging problem for pricing bond options using the yield curve as the observation. The model considered contains infinite-dimensional noise sources with a stochastically varying risk premium. Hence our model is incomplete. We consider mean-variance hedging under the

  7. Problems of variance reduction in the simulation of random variables

    International Nuclear Information System (INIS)

    Lessi, O.

    1987-01-01

    The definition of the uniform linear generator is given, and some of the most widely used tests for evaluating the uniformity and independence of the generated values are listed. The problem of estimating, through simulation, a moment W of a function of a random variable is then considered. The Monte Carlo method enables the moment W to be estimated and the variance of the estimator to be obtained. Some techniques for constructing alternative estimators of W with reduced variance are introduced.
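
    As one concrete example of such a reduced-variance estimator (the record does not list which techniques are covered), the sketch below compares plain Monte Carlo with antithetic variates for W = E[exp(U)], with U uniform on (0, 1), where pairing U with 1 - U reduces the variance because the integrand is monotone.

        """Sketch of one classical variance-reduction technique, antithetic variates,
        for W = E[f(U)] with U ~ Uniform(0, 1) and f monotone (here f = exp)."""
        import numpy as np

        rng = np.random.default_rng(0)
        f = np.exp
        n = 100_000

        # Plain Monte Carlo estimator of W = E[exp(U)] = e - 1.
        u = rng.random(n)
        plain = f(u)

        # Antithetic estimator: average f over the pair (U, 1 - U); same total budget.
        u2 = rng.random(n // 2)
        anti = 0.5 * (f(u2) + f(1.0 - u2))

        print("true W              :", np.e - 1)
        print("plain MC    mean/var:", plain.mean(), plain.var() / n)
        print("antithetic  mean/var:", anti.mean(), anti.var() / (n // 2))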

  8. Mean-variance portfolio allocation with a value at risk constraint

    OpenAIRE

    Enrique Sentana

    2001-01-01

    In this paper, I first provide a simple unifying approach to static Mean-Variance analysis and Value at Risk, which highlights their similarities and differences. Then I use it to explain how fund managers can take investment decisions that satisfy the VaR restrictions imposed on them by regulators, within the well-known Mean-Variance allocation framework. I do so by introducing a new type of line to the usual mean-standard deviation diagram, called IsoVaR, which represents all the portfolios ...
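
    Under a Gaussian returns assumption (an assumption made here for simplicity), the VaR of a portfolio with mean mu and standard deviation sigma is z_alpha*sigma - mu, so portfolios sharing the same VaR lie on a straight line in the mean-standard deviation diagram; the following snippet illustrates such an IsoVaR line with hypothetical numbers.

        """Sketch of the IsoVaR idea under a Gaussian returns assumption (numbers are
        hypothetical): equal-VaR portfolios lie on a straight line of slope z_alpha
        in the (sigma, mu) diagram."""
        import numpy as np
        from scipy.stats import norm

        alpha = 0.01                       # 1% VaR level
        z = norm.ppf(1 - alpha)            # ~2.326

        def var_gaussian(mu, sigma):
            """Value at Risk (loss quantile) of a N(mu, sigma^2) return."""
            return z * sigma - mu

        # Points on the IsoVaR line with VaR fixed at 5%: mu = z*sigma - VaR.
        var_limit = 0.05
        sigmas = np.linspace(0.01, 0.10, 5)
        mus = z * sigmas - var_limit
        for s, m in zip(sigmas, mus):
            print(f"sigma={s:.3f}  mu={m:+.3f}  VaR={var_gaussian(m, s):.3f}")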

  9. Development of database on the distribution coefficient. 1. Collection of the distribution coefficient data

    Energy Technology Data Exchange (ETDEWEB)

    Takebe, Shinichi; Abe, Masayoshi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2001-03-01

    The distribution coefficient is very important parameter for environmental impact assessment on the disposal of radioactive waste arising from research institutes. The literature survey in the country was mainly carried out for the purpose of selecting the reasonable distribution coefficient value on the utilization of this value in the safety evaluation. This report was arranged much informations on the distribution coefficient for inputting to the database for each literature, and was summarized as a literature information data on the distribution coefficient. (author)

  10. Variance-based sensitivity analysis for wastewater treatment plant modelling.

    Science.gov (United States)

    Cosenza, Alida; Mannina, Giorgio; Vanrolleghem, Peter A; Neumann, Marc B

    2014-02-01

    Global sensitivity analysis (GSA) is a valuable tool to support the use of mathematical models that characterise technical or natural systems. In the field of wastewater modelling, most of the recent applications of GSA use either regression-based methods, which require close to linear relationships between the model outputs and model factors, or screening methods, which only yield qualitative results. However, due to the characteristics of membrane bioreactors (MBR) (non-linear kinetics, complexity, etc.) there is an interest in adequately quantifying the effects of non-linearity and interactions. This can be achieved with variance-based sensitivity analysis methods. In this paper, the Extended Fourier Amplitude Sensitivity Testing (Extended-FAST) method is applied to an integrated activated sludge model (ASM2d) for an MBR system including microbial product formation and physical separation processes. Twenty-one model outputs located throughout the different sections of the bioreactor and 79 model factors are considered. Significant interactions among the model factors are found. Contrary to previous GSA studies for ASM models, we find the relationship between variables and factors to be non-linear and non-additive. By analysing the pattern of the variance decomposition along the plant, the model factors with the highest variance contributions were identified. This study demonstrates the usefulness of variance-based methods in membrane bioreactor modelling where, due to the presence of membranes and operating conditions different from those typically found in conventional activated sludge systems, several highly non-linear effects are present. Further, the results highlight the relevant role played by a modelling approach for MBRs that accounts simultaneously for biological and physical processes.
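
    Extended-FAST itself is not reproduced here; as a hedged illustration of what a variance-based method estimates, the sketch below computes first-order Sobol indices with a Saltelli-type Monte Carlo estimator on the standard Ishigami test function rather than on the ASM2d/MBR model of the paper.

        """Illustration of variance-based sensitivity indices (first-order Sobol
        indices via a Saltelli-type Monte Carlo estimator) on the Ishigami test
        function.  This is a stand-in for Extended-FAST, not the paper's model."""
        import numpy as np

        def ishigami(x, a=7.0, b=0.1):
            return np.sin(x[:, 0]) + a * np.sin(x[:, 1])**2 + b * x[:, 2]**4 * np.sin(x[:, 0])

        rng = np.random.default_rng(0)
        N, d = 100_000, 3
        A = rng.uniform(-np.pi, np.pi, size=(N, d))
        B = rng.uniform(-np.pi, np.pi, size=(N, d))
        fA, fB = ishigami(A), ishigami(B)
        var_total = np.var(np.concatenate([fA, fB]))

        for i in range(d):
            ABi = A.copy()
            ABi[:, i] = B[:, i]                          # A with column i taken from B
            fABi = ishigami(ABi)
            Si = np.mean(fB * (fABi - fA)) / var_total   # first-order index (Saltelli-type)
            print(f"S{i+1} ~ {Si:.2f}")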

  11. Fundamentals of exploratory analysis of variance

    CERN Document Server

    Hoaglin, David C; Tukey, John W

    2009-01-01

    The analysis of variance is presented as an exploratory component of data analysis, while retaining the customary least squares fitting methods. Balanced data layouts are used to reveal key ideas and techniques for exploration. The approach emphasizes both the individual observations and the separate parts that the analysis produces. Most chapters include exercises, and the appendices give selected percentage points of the Gaussian, t, F, chi-squared and studentized range distributions.

  12. On the Likely Utility of Hybrid Weights Optimized for Variances in Hybrid Error Covariance Models

    Science.gov (United States)

    Satterfield, E.; Hodyss, D.; Kuhl, D.; Bishop, C. H.

    2017-12-01

    Because of imperfections in ensemble data assimilation schemes, one cannot assume that the ensemble covariance is equal to the true error covariance of a forecast. Previous work demonstrated how information about the distribution of true error variances given an ensemble sample variance can be revealed from an archive of (observation-minus-forecast, ensemble-variance) data pairs. Here, we derive a simple and intuitively compelling formula to obtain the mean of this distribution of true error variances given an ensemble sample variance from (observation-minus-forecast, ensemble-variance) data pairs produced by a single run of a data assimilation system. This formula takes the form of a Hybrid weighted average of the climatological forecast error variance and the ensemble sample variance. Here, we test the extent to which these readily obtainable weights can be used to rapidly optimize the covariance weights used in Hybrid data assimilation systems that employ weighted averages of static covariance models and flow-dependent ensemble based covariance models. Univariate data assimilation and multi-variate cycling ensemble data assimilation are considered. In both cases, it is found that our computationally efficient formula gives Hybrid weights that closely approximate the optimal weights found through the simple but computationally expensive process of testing every plausible combination of weights.
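
    The abstract does not give the authors' formula, so the sketch below is only a loose illustration of the general idea: from synthetic (observation-minus-forecast, ensemble-variance) pairs, a single hybrid weight is fitted by least squares so that a weighted average of the ensemble variance and the climatological variance matches the squared innovations; observation error is ignored and all data are simulated.

        """Loose illustration only (the abstract does not give the authors' formula).
        Idea: from (observation-minus-forecast, ensemble-variance) pairs, fit a
        weight w so that w*s2 + (1-w)*sigma_clim2 best matches the squared
        innovations, giving the weight for a hybrid error-variance model."""
        import numpy as np

        rng = np.random.default_rng(0)
        n = 20_000
        sigma_clim2 = 1.0                                        # climatological error variance
        true_var = rng.gamma(shape=2.0, scale=0.5, size=n)       # true forecast error variances
        s2 = true_var * rng.gamma(shape=5.0, scale=0.2, size=n)  # noisy ensemble variances
        d2 = rng.normal(0.0, np.sqrt(true_var))**2               # squared innovations

        # Least-squares fit of  d2 ~ w*s2 + (1-w)*sigma_clim2  in the single unknown w.
        x = s2 - sigma_clim2
        y = d2 - sigma_clim2
        w = np.sum(x * y) / np.sum(x * x)
        print("fitted hybrid weight on the ensemble variance:", round(w, 2))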

  13. Wavelet analysis of polarization maps of polycrystalline biological fluids networks

    Science.gov (United States)

    Ushenko, Y. A.

    2011-12-01

    An optical model of human joint synovial fluid is proposed. The statistical (statistical moments), correlation (autocorrelation function) and self-similar (log–log dependencies of the power spectrum) structure of two-dimensional polarization distributions (polarization maps) of synovial fluid has been analyzed. It is shown that differentiating the polarization maps of joint synovial fluid samples in different physiological states calls for a scale-discriminative analysis. To single out the small-scale domain structure of the synovial fluid polarization maps, wavelet analysis has been used. A set of parameters characterizing the statistical, correlation and self-similar structure of the wavelet coefficient distributions at different scales of the polarization domains has been determined for the diagnosis and differentiation of polycrystalline network transformations associated with pathological processes.
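
    A hedged sketch of the scale-by-scale analysis is given below: a hypothetical two-dimensional polarization map is decomposed with a discrete wavelet transform (PyWavelets) and the statistical moments of the detail-coefficient distributions are computed at each scale; the wavelet, number of levels and input data are arbitrary choices, not those of the paper.

        """Hypothetical sketch: multi-scale wavelet decomposition of a 2-D
        polarization map (here a random array) and statistical moments of the
        detail coefficients at each scale."""
        import numpy as np
        import pywt
        from scipy.stats import skew, kurtosis

        polarization_map = np.random.default_rng(0).normal(size=(256, 256))

        # 2-D discrete wavelet decomposition: [cA_n, (cH_n, cV_n, cD_n), ...].
        coeffs = pywt.wavedec2(polarization_map, wavelet="db2", level=4)

        for level, (cH, cV, cD) in enumerate(coeffs[1:], start=1):
            detail = np.concatenate([cH.ravel(), cV.ravel(), cD.ravel()])
            print(f"scale {level}: mean={detail.mean():+.3f}  var={detail.var():.3f}  "
                  f"skew={skew(detail):+.3f}  kurt={kurtosis(detail):+.3f}")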

  14. Bound-preserving Legendre-WENO finite volume schemes using nonlinear mapping

    Science.gov (United States)

    Smith, Timothy; Pantano, Carlos

    2017-11-01

    We present a new method to enforce field bounds in high-order Legendre-WENO finite volume schemes. The strategy consists of reconstructing each field through an intermediate mapping, which by design satisfies realizability constraints. Determination of the coefficients of the polynomial reconstruction involves nonlinear equations that are solved using Newton's method. The selection between the original or mapped reconstruction is implemented dynamically to minimize computational cost. The method has also been generalized to fields that exhibit interdependencies, requiring multi-dimensional mappings. Further, the method does not depend on the existence of a numerical flux function. We will discuss details of the proposed scheme and show results for systems in conservation and non-conservation form. This work was funded by the NSF under Grant DMS 1318161.
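
    The scheme itself is not reproduced here; the following simplified one-cell sketch illustrates the idea of reconstructing through an intermediate mapping that enforces bounds by construction, with Newton's method solving the nonlinear equation that matches the cell average. The logistic mapping, quadrature and slope value are assumptions made for illustration.

        """Simplified one-cell illustration (not the paper's scheme): reconstruct a
        field constrained to [0, 1] through a logistic mapping u(x) = S(a + b*x),
        which satisfies the bounds by construction, and solve for the offset a with
        Newton's method so the cell average of the mapped reconstruction matches
        the given cell mean.  The slope b stands in for neighbour information."""
        import numpy as np

        def sigmoid(t):
            return 1.0 / (1.0 + np.exp(-t))

        # 5-point Gauss-Legendre quadrature on the reference cell [-1/2, 1/2].
        nodes, weights = np.polynomial.legendre.leggauss(5)
        nodes, weights = 0.5 * nodes, 0.5 * weights

        def cell_average(a, b):
            return np.sum(weights * sigmoid(a + b * nodes))

        def solve_offset(u_mean, b, tol=1e-12):
            """Newton iteration for a such that the mapped reconstruction has mean u_mean."""
            a = 0.0
            for _ in range(50):
                g = cell_average(a, b) - u_mean
                # Derivative of the cell average with respect to a: integral of S'(a + b*x).
                s = sigmoid(a + b * nodes)
                dg = np.sum(weights * s * (1.0 - s))
                a -= g / dg
                if abs(g) < tol:
                    break
            return a

        a = solve_offset(u_mean=0.05, b=4.0)        # cell mean near the lower bound
        xs = np.linspace(-0.5, 0.5, 5)
        print("reconstruction stays in (0,1):", sigmoid(a + 4.0 * xs))
        print("cell average matches:", cell_average(a, 4.0))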

  15. A new variance stabilizing transformation for gene expression data analysis.

    Science.gov (United States)

    Kelmansky, Diana M; Martínez, Elena J; Leiva, Víctor

    2013-12-01

    In this paper, we introduce a new family of power transformations, which has the generalized logarithm as one of its members, in the same manner as the usual logarithm belongs to the family of Box-Cox power transformations. Although the new family has been developed for analyzing gene expression data, it allows a wider scope of mean-variance related data to be reached. We study the analytical properties of the new family of transformations, as well as the mean-variance relationships that are stabilized by using its members. We propose a methodology based on this new family, which includes a simple strategy for selecting the family member adequate for a data set. We evaluate the finite sample behavior of different classical and robust estimators based on this strategy by Monte Carlo simulations. We analyze real genomic data by using the proposed transformation to empirically show how the new methodology allows the variance of these data to be stabilized.
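
    The new family of transformations is not reproduced here; as a sketch of the behaviour described, the code below applies the generalized logarithm (named in the abstract as one member of the family) to simulated intensities whose variance grows quadratically with the mean, a common two-component model for gene expression data, and shows that the transformed variances become comparable across intensity levels.

        """Sketch of the generalized logarithm, one member of the family mentioned in
        the abstract (the full family is not reproduced).  Simulated intensities
        follow a quadratic mean-variance relationship; the glog stabilizes it."""
        import numpy as np

        def glog(x, c):
            """Generalized logarithm: behaves like log(x) for large x, stays finite at 0."""
            return np.log((x + np.sqrt(x**2 + c**2)) / 2.0)

        rng = np.random.default_rng(0)
        means = np.array([50.0, 200.0, 1000.0, 5000.0])
        sigma0, tau = 20.0, 0.15          # additive and multiplicative noise levels

        for m in means:
            # Two-component noise model: variance = sigma0^2 + (tau*m)^2.
            x = m + rng.normal(0.0, np.sqrt(sigma0**2 + (tau * m)**2), size=5000)
            print(f"mean {m:6.0f}: raw var = {x.var():10.1f},  "
                  f"glog var = {glog(x, c=sigma0 / tau).var():.4f}")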

  16. Pricing perpetual American options under multiscale stochastic elasticity of variance

    International Nuclear Information System (INIS)

    Yoon, Ji-Hun

    2015-01-01

    Highlights: • We study the effects of the stochastic elasticity of variance on perpetual American options. • Our SEV model consists of a fast mean-reverting factor and a slow mean-reverting factor. • The slow-scale factor has a very significant impact on the option price. • We analyze option price structures through the market prices of elasticity risk. - Abstract: This paper studies the pricing of perpetual American options under a constant elasticity of variance type of underlying asset price model, where the constant elasticity is replaced by a fast mean-reverting Ornstein–Uhlenbeck process and a slowly varying diffusion process. By using a multiscale asymptotic analysis, we find the impact of the stochastic elasticity of variance on the option prices and the optimal exercise prices with respect to model parameters. Our results enhance the existing option price structures in view of flexibility and applicability through the market prices of elasticity risk.

  17. Continuous-Time Mean-Variance Portfolio Selection with Random Horizon

    International Nuclear Information System (INIS)

    Yu, Zhiyong

    2013-01-01

    This paper examines the continuous-time mean-variance optimal portfolio selection problem with random market parameters and random time horizon. Treating this problem as a linearly constrained stochastic linear-quadratic optimal control problem, I explicitly derive the efficient portfolios and efficient frontier in closed forms based on the solutions of two backward stochastic differential equations. Some related issues such as a minimum variance portfolio and a mutual fund theorem are also addressed. All the results are markedly different from those in the problem with deterministic exit time. A key part of my analysis involves proving the global solvability of a stochastic Riccati equation, which is interesting in its own right

  18. Continuous-Time Mean-Variance Portfolio Selection with Random Horizon

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Zhiyong, E-mail: yuzhiyong@sdu.edu.cn [Shandong University, School of Mathematics (China)

    2013-12-15

    This paper examines the continuous-time mean-variance optimal portfolio selection problem with random market parameters and random time horizon. Treating this problem as a linearly constrained stochastic linear-quadratic optimal control problem, I explicitly derive the efficient portfolios and efficient frontier in closed forms based on the solutions of two backward stochastic differential equations. Some related issues such as a minimum variance portfolio and a mutual fund theorem are also addressed. All the results are markedly different from those in the problem with deterministic exit time. A key part of my analysis involves proving the global solvability of a stochastic Riccati equation, which is interesting in its own right.

  19. Fast empirical Bayesian LASSO for multiple quantitative trait locus mapping

    Directory of Open Access Journals (Sweden)

    Xu Shizhong

    2011-05-01

    Full Text Available Abstract Background The Bayesian shrinkage technique has been applied to multiple quantitative trait locus (QTL) mapping to estimate the genetic effects of QTLs on quantitative traits from a very large set of possible effects, including the main and epistatic effects of QTLs. Although the recently developed empirical Bayes (EB) method significantly reduced computation compared with the fully Bayesian approach, its speed and accuracy are limited by the fact that numerical optimization is required to estimate the variance components in the QTL model. Results We developed a fast empirical Bayesian LASSO (EBLASSO) method for multiple QTL mapping. The fact that the EBLASSO can estimate the variance components in a closed form, along with other algorithmic techniques, renders the EBLASSO method more efficient and accurate. Compared with the EB method, our simulation study demonstrated that the EBLASSO method could substantially improve the computational speed and detect more QTL effects without increasing the false positive rate. In particular, the EBLASSO algorithm running on a personal computer could easily handle a linear QTL model with more than 100,000 variables in our simulation study. Real data analysis also demonstrated that the EBLASSO method detected more reasonable effects than the EB method. Compared with the LASSO, our simulation showed that the current version of the EBLASSO implemented in Matlab had similar speed to the LASSO implemented in Fortran, and that the EBLASSO detected the same number of true effects as the LASSO but a much smaller number of false positive effects. Conclusions The EBLASSO method can handle a large number of effects, possibly including both the main and epistatic QTL effects, environmental effects and the effects of gene-environment interactions. It will be a very useful tool for multiple QTL mapping.
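
    The EBLASSO algorithm and its closed-form variance-component updates are not reproduced here; the sketch below only illustrates the sparse-regression setting it addresses, fitting an ordinary LASSO (scikit-learn) to simulated marker data in which just a few markers carry true effects.

        """Sketch of the sparse-regression setting addressed by EBLASSO (this is an
        ordinary LASSO via scikit-learn, not the EBLASSO algorithm itself): many
        candidate marker effects, few of them nonzero."""
        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(0)
        n, p = 300, 2000                                        # individuals, candidate markers
        X = rng.binomial(2, 0.5, size=(n, p)).astype(float)     # biallelic marker codes 0/1/2
        beta = np.zeros(p)
        beta[[10, 500, 1500]] = [1.0, -0.8, 0.6]                # three true QTL effects
        y = X @ beta + rng.normal(0.0, 1.0, size=n)

        fit = Lasso(alpha=0.05).fit(X, y)
        detected = np.flatnonzero(fit.coef_)
        print("markers with nonzero estimated effects:", detected[:10], "...")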

  20. Portfolios Dominating Indices: Optimization with Second-Order Stochastic Dominance Constraints vs. Minimum and Mean Variance Portfolios

    Directory of Open Access Journals (Sweden)

    Neslihan Fidan Keçeci

    2016-10-01

    Full Text Available The paper compares portfolio optimization with Second-Order Stochastic Dominance (SSD) constraints with mean-variance and minimum-variance portfolio optimization. As a distribution-free decision rule, stochastic dominance takes into account the entire distribution of return rather than some specific characteristic, such as variance. The paper is focused on practical applications of portfolio optimization and uses the Portfolio Safeguard (PSG) package, which has precoded modules for optimization with SSD constraints, mean-variance and minimum-variance portfolio optimization. We have done in-sample and out-of-sample simulations for portfolios of stocks from the Dow Jones, S&P 100 and DAX indices. The considered portfolios SSD-dominate the Dow Jones, S&P 100 and DAX indices. The simulations demonstrated a superior performance of portfolios with SSD constraints versus mean-variance and minimum-variance portfolios.
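
    The SSD-constrained optimization, run in PSG in the paper, is not reproduced here; the sketch below computes the two benchmark portfolios used for comparison, the closed-form minimum-variance portfolio and a simple mean-variance portfolio for a given risk-aversion parameter, from a sample covariance matrix of simulated returns.

        """Sketch of the two benchmark portfolios used for comparison (the SSD
        optimization itself is not reproduced): the closed-form minimum-variance
        portfolio and a mean-variance portfolio with short positions allowed.
        Return data are simulated."""
        import numpy as np

        rng = np.random.default_rng(0)
        returns = rng.normal(0.0005, 0.01, size=(1000, 10))    # hypothetical daily returns
        mu = returns.mean(axis=0)
        Sigma = np.cov(returns, rowvar=False)
        ones = np.ones(len(mu))
        Sigma_inv = np.linalg.inv(Sigma)

        # Minimum-variance portfolio: w = Sigma^-1 1 / (1' Sigma^-1 1).
        w_minvar = Sigma_inv @ ones / (ones @ Sigma_inv @ ones)

        # Mean-variance portfolio for risk aversion gamma, rescaled to sum to 1.
        gamma = 5.0
        w_mv = Sigma_inv @ mu / gamma
        w_mv = w_mv / w_mv.sum()

        print("min-variance weights sum:", w_minvar.sum())
        print("portfolio variances:", w_minvar @ Sigma @ w_minvar, w_mv @ Sigma @ w_mv)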