WorldWideScience

Sample records for minimum variance bulk

  1. Minimum Variance Portfolios in the Brazilian Equity Market

    Directory of Open Access Journals (Sweden)

    Alexandre Rubesam

    2013-03-01

    Full Text Available We investigate minimum variance portfolios in the Brazilian equity market using different methods to estimate the covariance matrix, from the simple model of using the sample covariance to multivariate GARCH models. We compare the performance of the minimum variance portfolios to those of the following benchmarks: (i) the IBOVESPA equity index, (ii) an equally-weighted portfolio, (iii) the maximum Sharpe ratio portfolio and (iv) the maximum growth portfolio. Our results show that the minimum variance portfolio has higher returns with lower risk compared to the benchmarks. We also consider long-short 130/30 minimum variance portfolios and obtain similar results. The minimum variance portfolio invests in relatively few stocks with low βs measured with respect to the IBOVESPA index, being easily replicable by individual and institutional investors alike.
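
    As a concrete illustration of the core computation behind studies like this one (a hedged sketch, not the authors' code), the fully invested minimum variance portfolio follows from the sample covariance in closed form; the data and function names below are placeholders:

    ```python
    import numpy as np

    def min_variance_weights(returns):
        """Fully invested minimum variance weights w = S^{-1}1 / (1'S^{-1}1),
        where S is the sample covariance of asset returns (rows = periods)."""
        S = np.cov(returns, rowvar=False)
        ones = np.ones(S.shape[0])
        w = np.linalg.solve(S, ones)   # solve S w = 1 rather than inverting S
        return w / w.sum()             # normalize weights to sum to one

    # Toy usage on synthetic returns for 5 assets over 250 periods
    rng = np.random.default_rng(42)
    print(min_variance_weights(rng.normal(0.0005, 0.02, size=(250, 5))))
    ```

    Note that this closed form allows short positions; long-only or 130/30 variants like those in the paper require a constrained solver.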

  2. The Distribution of the Sample Minimum-Variance Frontier

    OpenAIRE

    Raymond Kan; Daniel R. Smith

    2008-01-01

    In this paper, we present a finite sample analysis of the sample minimum-variance frontier under the assumption that the returns are independent and multivariate normally distributed. We show that the sample minimum-variance frontier is a highly biased estimator of the population frontier, and we propose an improved estimator of the population frontier. In addition, we provide the exact distribution of the out-of-sample mean and variance of sample minimum-variance portfolios. This allows us t...

  3. Towards a mathematical foundation of minimum-variance theory

    Energy Technology Data Exchange (ETDEWEB)

    Feng Jianfeng [COGS, Sussex University, Brighton (United Kingdom)]; Zhang Kewei [SMS, Sussex University, Brighton (United Kingdom)]; Wei Gang [Mathematical Department, Baptist University, Hong Kong (China)]

    2002-08-30

    The minimum-variance theory which accounts for arm and eye movements with noise signal inputs was proposed by Harris and Wolpert (1998 Nature 394 780-4). Here we present a detailed theoretical analysis of the theory and obtain analytical solutions. Furthermore, we propose a new version of the minimum-variance theory, which is more realistic for a biological system. For the new version we show numerically that the variance is considerably reduced. (author)

  4. A note on minimum-variance theory and beyond

    Energy Technology Data Exchange (ETDEWEB)

    Feng Jianfeng [Department of Informatics, Sussex University, Brighton, BN1 9QH (United Kingdom)]; Tartaglia, Giangaetano [Physics Department, Rome University 'La Sapienza', Rome 00185 (Italy)]; Tirozzi, Brunello [Physics Department, Rome University 'La Sapienza', Rome 00185 (Italy)]

    2004-04-30

    We revisit the minimum-variance theory proposed by Harris and Wolpert (1998 Nature 394 780-4), discuss the implications of the theory for modelling the firing patterns of single neurons and analytically find the optimal control signals, trajectories and velocities. Under the rate coding assumption, input control signals employed in the minimum-variance theory should be Fitts processes rather than Poisson processes. Only if information is coded by interspike intervals are Poisson processes in agreement with the inputs employed in the minimum-variance theory. For the integrate-and-fire model with Fitts process inputs, interspike intervals of efferent spike trains are very irregular. We introduce diffusion approximations to approximate neural models with renewal process inputs and present theoretical results on calculating moments of interspike intervals of the integrate-and-fire model. Results in Feng et al (2002 J. Phys. A: Math. Gen. 35 7287-304) are generalized. In conclusion, we present a complete picture of the minimum-variance theory, ranging from input control signals to model outputs and to its implications for modelling the firing patterns of single neurons.
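
    For orientation, a hedged sketch (generic notation, not the authors') of the optimization behind the Harris-Wolpert minimum-variance account: the motor command is corrupted by signal-dependent noise, so the injected noise variance grows as the square of the control signal, and the movement minimizes the positional variance accumulated over a post-movement hold period:

    ```latex
    % Realized command: u(t)\,(1 + k\,\xi(t)), \xi white noise, so the
    % injected noise variance scales as k^{2} u(t)^{2}.
    \min_{u(\cdot)} \int_{T}^{T+H} \operatorname{Var}\!\left[x(t)\right] dt
    \quad \text{subject to} \quad \mathbb{E}\left[x(T)\right] = x_{\mathrm{target}}
    ```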

  5. A note on minimum-variance theory and beyond

    International Nuclear Information System (INIS)

    Feng Jianfeng; Tartaglia, Giangaetano; Tirozzi, Brunello

    2004-01-01

    We revisit the minimum-variance theory proposed by Harris and Wolpert (1998 Nature 394 780-4), discuss the implications of the theory for modelling the firing patterns of single neurons and analytically find the optimal control signals, trajectories and velocities. Under the rate coding assumption, input control signals employed in the minimum-variance theory should be Fitts processes rather than Poisson processes. Only if information is coded by interspike intervals are Poisson processes in agreement with the inputs employed in the minimum-variance theory. For the integrate-and-fire model with Fitts process inputs, interspike intervals of efferent spike trains are very irregular. We introduce diffusion approximations to approximate neural models with renewal process inputs and present theoretical results on calculating moments of interspike intervals of the integrate-and-fire model. Results in Feng et al (2002 J. Phys. A: Math. Gen. 35 7287-304) are generalized. In conclusion, we present a complete picture of the minimum-variance theory, ranging from input control signals to model outputs and to its implications for modelling the firing patterns of single neurons.

  6. PORTFOLIO COMPOSITION WITH MINIMUM VARIANCE: COMPARISON WITH MARKET BENCHMARKS

    Directory of Open Access Journals (Sweden)

    Daniel Menezes Cavalcante

    2016-07-01

    Full Text Available Portfolio optimization strategies are advocated as being able to allow the composition of stock portfolios that provide returns above market benchmarks. This study aims to determine whether, in fact, portfolios based on the minimum variance strategy, optimized by Modern Portfolio Theory, are able to achieve earnings above market benchmarks in Brazil. Time series of 36 securities traded on the BM&FBOVESPA have been analyzed over a long period of time (1999-2012), with sample windows of 12, 36, 60 and 120 monthly observations. The results indicated that the minimum variance portfolio performance is superior to the market benchmarks (CDI and IBOVESPA) in terms of return and risk-adjusted return, especially in medium and long-term investment horizons.

  7. Deviation of the Variances of Classical Estimators and Negative Integer Moment Estimator from Minimum Variance Bound with Reference to Maxwell Distribution

    Directory of Open Access Journals (Sweden)

    G. R. Pasha

    2006-07-01

    Full Text Available In this paper, we present how much the variances of the classical estimators, namely the maximum likelihood estimator and the moment estimator, deviate from the minimum variance bound when estimating the parameter of the Maxwell distribution. We also sketch this difference for the negative integer moment estimator. We note the poor performance of the negative integer moment estimator in this respect, while the maximum likelihood estimator attains the minimum variance bound and becomes an attractive choice.
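
    For context (a standard textbook derivation, not reproduced from the paper), the minimum variance bound for the Maxwell scale parameter a follows from the Fisher information:

    ```latex
    f(x;a) = \sqrt{\tfrac{2}{\pi}}\,\frac{x^{2}}{a^{3}}\,e^{-x^{2}/(2a^{2})},
    \qquad
    I(a) = -\mathbb{E}\!\left[\frac{\partial^{2}\ln f}{\partial a^{2}}\right]
         = \frac{6}{a^{2}},
    \qquad
    \operatorname{Var}(\hat{a}) \ \ge\ \frac{a^{2}}{6n}
    ```

    using E[X^2] = 3a^2; an estimator whose variance approaches a^2/(6n), as the maximum likelihood estimator does, attains the bound.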

  8. Minimum variance and variance of outgoing quality limit MDS-1(c1, c2) plans

    Science.gov (United States)

    Raju, C.; Vidya, R.

    2016-06-01

    In this article, the outgoing quality (OQ) and total inspection (TI) of multiple deferred state sampling plans MDS-1(c1,c2) are studied. It is assumed that the inspection is rejection rectification. Procedures for designing MDS-1(c1,c2) sampling plans with minimum variance of OQ and TI are developed. A procedure for obtaining a plan for a designated upper limit for the variance of the OQ (VOQL) is outlined.

  9. Portfolios Dominating Indices: Optimization with Second-Order Stochastic Dominance Constraints vs. Minimum and Mean Variance Portfolios

    Directory of Open Access Journals (Sweden)

    Neslihan Fidan Keçeci

    2016-10-01

    Full Text Available The paper compares portfolio optimization with Second-Order Stochastic Dominance (SSD) constraints with mean-variance and minimum variance portfolio optimization. As a distribution-free decision rule, stochastic dominance takes into account the entire distribution of return rather than some specific characteristic, such as variance. The paper is focused on practical applications of portfolio optimization and uses the Portfolio Safeguard (PSG) package, which has precoded modules for optimization with SSD constraints, mean-variance and minimum variance portfolio optimization. We have done in-sample and out-of-sample simulations for portfolios of stocks from the Dow Jones, S&P 100 and DAX indices. The considered portfolios SSD-dominate the Dow Jones, S&P 100 and DAX indices. Simulation demonstrated a superior performance of portfolios with SSD constraints versus mean-variance and minimum variance portfolios.

  10. Minimum variance Monte Carlo importance sampling with parametric dependence

    International Nuclear Information System (INIS)

    Ragheb, M.M.H.; Halton, J.; Maynard, C.W.

    1981-01-01

    An approach for Monte Carlo importance sampling with parametric dependence is proposed. It depends upon obtaining, by proper weighting over a single stage, the overall functional dependence of the variance on the importance function parameter over a broad range of its values. Results corresponding to minimum variance are adopted and other results rejected. Numerical calculations for the estimation of integrals are compared to crude Monte Carlo. The results explain the occurrence of effective biases (even though the theoretical bias is zero) and infinite variances which arise in calculations involving severe biasing and a moderate number of histories. Extension to particle transport applications is briefly discussed. The approach constitutes an extension of a theory on the application of Monte Carlo for the calculation of functional dependences, introduced by Frolov and Chentsov, to biasing, or importance sampling calculations; and is a generalization which avoids nonconvergence to the optimal values in some cases of a multistage method for variance reduction introduced by Spanier. (orig.) [de]
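
    A minimal self-contained sketch of the idea (scanning a one-parameter family of importance densities and keeping the parameter with minimum estimated variance), on a toy integral rather than a transport problem; the density family and all names are illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def is_estimate(alpha, n=100_000):
        """Estimate I = integral of e^x over [0, 1] by sampling from
        p(x) proportional to e^{alpha x}; returns (estimate, variance)."""
        u = rng.random(n)
        if abs(alpha) < 1e-12:
            x, w = u, np.exp(u)                 # p is uniform, weight = f/p
        else:
            z = np.expm1(alpha)                 # e^alpha - 1
            x = np.log1p(u * z) / alpha         # inverse-CDF sampling
            p = alpha * np.exp(alpha * x) / z   # density of x
            w = np.exp(x) / p                   # importance weight f/p
        return w.mean(), w.var(ddof=1) / n

    for alpha in (0.0, 0.5, 1.0, 1.5):          # scan the biasing parameter
        est, var = is_estimate(alpha)
        print(f"alpha={alpha:3.1f}  estimate={est:.5f}  variance={var:.2e}")
    ```

    The variance collapses near alpha = 1, where the biasing density is proportional to the integrand, mirroring the minimum variance parameter selection described above.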

  11. Investigating the minimum achievable variance in a Monte Carlo criticality calculation

    Energy Technology Data Exchange (ETDEWEB)

    Christoforou, Stavros; Eduard Hoogenboom, J. [Delft University of Technology, Mekelweg 15, 2629 JB Delft (Netherlands)]

    2008-07-01

    The sources of variance in a Monte Carlo criticality calculation are identified and their contributions analyzed. A zero-variance configuration is initially simulated using analytically calculated adjoint functions for biasing. From there, the various sources are analyzed. It is shown that the minimum threshold comes from the fact that the fission source is approximated. In addition, the merits of a simple variance reduction method, such as implicit capture, are shown when compared to an analog simulation. Finally, it is shown that when non-exact adjoint functions are used for biasing, the variance reduction is rather insensitive to the quality of the adjoints, suggesting that the generation of the adjoints should have as low a CPU cost as possible, in order to offset the CPU cost of implementing the biasing in a simulation. (authors)

  12. Linear-Array Photoacoustic Imaging Using Minimum Variance-Based Delay Multiply and Sum Adaptive Beamforming Algorithm

    OpenAIRE

    Mozaffarzadeh, Moein; Mahloojifar, Ali; Orooji, Mahdi; Kratkiewicz, Karl; Adabi, Saba; Nasiriavanaki, Mohammadreza

    2017-01-01

    In photoacoustic imaging (PA), the Delay-and-Sum (DAS) beamformer is a common beamforming algorithm with a simple implementation. However, it results in a poor resolution and high sidelobes. To address these challenges, a new algorithm, namely Delay-Multiply-and-Sum (DMAS), was introduced, having lower sidelobes compared to DAS. To improve the resolution of DMAS, a novel beamformer is introduced using Minimum Variance (MV) adaptive beamforming combined with DMAS, so-called Minimum Variance-Based D...

  13. Double Minimum Variance Beamforming Method to Enhance Photoacoustic Imaging

    OpenAIRE

    Paridar, Roya; Mozaffarzadeh, Moein; Nasiriavanaki, Mohammadreza; Orooji, Mahdi

    2018-01-01

    One of the common algorithms used to reconstruct photoacoustic (PA) images is the non-adaptive Delay-and-Sum (DAS) beamformer. However, the quality of the reconstructed PA images obtained by DAS is not satisfactory due to its high level of sidelobes and wide mainlobe. In contrast, adaptive beamformers, such as minimum variance (MV), result in an improved image compared to DAS. In this paper, a novel beamforming method, called Double MV (D-MV), is proposed to enhance the image quality compared to...

  14. A Minimum Variance Algorithm for Overdetermined TOA Equations with an Altitude Constraint.

    Energy Technology Data Exchange (ETDEWEB)

    Romero, Louis A; Mason, John J.

    2018-04-01

    We present a direct (non-iterative) method for solving for the location of a radio frequency (RF) emitter, or an RF navigation receiver, using four or more time of arrival (TOA) measurements and an assumed altitude above an ellipsoidal earth. Both the emitter tracking problem and the navigation application are governed by the same equations, but with slightly different interpretations of several variables. We treat the assumed altitude as a soft constraint, with a specified noise level, just as the TOA measurements are handled, with their respective noise levels. With 4 or more TOA measurements and the assumed altitude, the problem is overdetermined and is solved in the weighted least squares sense for the 4 unknowns, the 3-dimensional position and time. We call the new technique the TAQMV (TOA Altitude Quartic Minimum Variance) algorithm, and it achieves the minimum possible error variance for given levels of TOA and altitude estimate noise. The method algebraically produces four solutions: the least-squares solution, and potentially three other low residual solutions, if they exist. In the lightly overdetermined cases where multiple local minima in the residual error surface are more likely to occur, this algebraic approach can produce all of the minima even when an iterative approach fails to converge. Algorithm performance in terms of solution error variance and divergence rate for baseline (iterative) and proposed approaches are given in tables.
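
    The weighted least squares step named above has the standard Gauss-Markov closed form; a hedged sketch in generic notation (A is the linearized model matrix, b the stacked TOA and altitude residuals, and W the inverse of their noise covariance):

    ```latex
    \hat{x} = \left(A^{\mathsf{T}} W A\right)^{-1} A^{\mathsf{T}} W b,
    \qquad
    \operatorname{Cov}(\hat{x}) = \left(A^{\mathsf{T}} W A\right)^{-1}
    ```

    Choosing W as the inverse measurement covariance is what makes the estimate minimum variance among linear unbiased estimators.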

  15. Portfolios dominating indices: Optimization with second-order stochastic dominance constraints vs. minimum and mean variance portfolios

    OpenAIRE

    Keçeci, Neslihan Fidan; Kuzmenko, Viktor; Uryasev, Stan

    2016-01-01

    The paper compares portfolio optimization with the Second-Order Stochastic Dominance (SSD) constraints with mean-variance and minimum variance portfolio optimization. As a distribution-free decision rule, stochastic dominance takes into account the entire distribution of return rather than some specific characteristic, such as variance. The paper is focused on practical applications of the portfolio optimization and uses the Portfolio Safeguard (PSG) package, which has precoded modules for op...

  16. Portfolios Dominating Indices: Optimization with Second-Order Stochastic Dominance Constraints vs. Minimum and Mean Variance Portfolios

    OpenAIRE

    Neslihan Fidan Keçeci; Viktor Kuzmenko; Stan Uryasev

    2016-01-01

    The paper compares portfolio optimization with the Second-Order Stochastic Dominance (SSD) constraints with mean-variance and minimum variance portfolio optimization. As a distribution-free decision rule, stochastic dominance takes into account the entire distribution of return rather than some specific characteristic, such as variance. The paper is focused on practical applications of the portfolio optimization and uses the Portfolio Safeguard (PSG) package, which has precoded modules for op...

  17. An improved minimum variance beamforming applied to plane-wave imaging in medical ultrasound

    DEFF Research Database (Denmark)

    Deylami, Ali Mohades; Asl, Babak Mohammadzadeh; Jensen, Jørgen Arendt

    2016-01-01

    The minimum variance beamformer (MVB) is an adaptive beamformer which provides images with higher resolution and contrast in comparison with non-adaptive beamformers like delay and sum (DAS). It finds the weight vector of the beamformer by minimizing the output power while keeping the desired signal unchanged. We...

  18. Minimum variance linear unbiased estimators of loss and inventory

    International Nuclear Information System (INIS)

    Stewart, K.B.

    1977-01-01

    The article illustrates a number of approaches for estimating the material balance inventory and a constant loss amount from the accountability data from a sequence of accountability periods. The approaches all lead to linear estimates that have minimum variance. Techniques are shown whereby ordinary least squares, weighted least squares and generalized least squares computer programs can be used. Two approaches are recursive in nature and lend themselves to small specialized computer programs. Another approach is developed that is easy to program; could be used with a desk calculator and can be used in a recursive way from accountability period to accountability period. Some previous results are also reviewed that are very similar in approach to the present ones and vary only in the way net throughput measurements are statistically modeled. 5 refs

  19. Interdependence of NAFTA capital markets: A minimum variance portfolio approach

    Directory of Open Access Journals (Sweden)

    López-Herrera Francisco

    2014-01-01

    Full Text Available We estimate the long-run relationships among NAFTA capital market returns and then calculate the weights of a “time-varying minimum variance portfolio” that includes the Canadian, Mexican, and USA capital markets between March 2007 and March 2009, a period of intense turbulence in international markets. Our results suggest that the behavior of NAFTA market investors is not consistent with that of a theoretical “risk-averse” agent during periods of high uncertainty and may be either considered as irrational or attributed to a possible “home country bias”. This finding represents valuable information for portfolio managers and contributes to a better understanding of the nature of the markets in which they invest. It also has practical implications in the design of international portfolio investment policies.

  20. The Achilles Heel of Normal Determinations via Minimum Variance Techniques: Worldline Dependencies

    Science.gov (United States)

    Ma, Z.; Scudder, J. D.; Omidi, N.

    2002-12-01

    Time series of data collected across current layers are usually organized by divining coordinate transformations (as from minimum variance) that permit a geometrical interpretation of the data collected. Almost without exception the current layer geometry is inferred by supposing that the current carrying layer is locally planar. Only after this geometry is ``determined'' can the various quantities predicted by theory be calculated, the precision of ``measured'' reconnection rates established, and the quantitative support for or against component reconnection evaluated. This paper defines worldline traversals across fully resolved Hall two-fluid models of reconnecting current sheets (with varying sizes of guide fields) and across a 2-D hybrid solution of a supercritical shock layer. Along each worldline various variance techniques are used to infer current sheet normals based on the data observed along this worldline alone. We then contrast these inferred normals with those known from the overview of the fully resolved spatial pictures of the layer. Absolute errors of 20 degrees in the normal are quite commonplace, but errors of 40-90 degrees are also implied, especially for worldlines that make more and more oblique angles to the true current sheet normal. These mistaken ``inferences'' are traceable to the degree that the data collected sample 2-D variations within these layers. While it is not surprising that these variance techniques give incorrect normals in the presence of layers that possess 2-D variations, it is illuminating that such large errors need not be signalled by the traditional error formulae for the error cones on normals that have previously been used to estimate the errors of normal choices. Frequently the absolute errors, which depend on the worldline path, can be 10 times the random error that formulae would predict based on eigenvalues of the covariance matrix. A given time series cannot be associated in any a priori way with a specific worldline.
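
    For readers unfamiliar with the technique being stress-tested, a minimal generic sketch of minimum variance analysis (not the authors' code): the candidate normal is the eigenvector of the field covariance matrix with the smallest eigenvalue.

    ```python
    import numpy as np

    def mva_normal(B):
        """Minimum variance analysis of an (N, 3) field time series.
        Returns the candidate current-sheet normal (eigenvector with the
        smallest eigenvalue) and the full eigenvalue spectrum."""
        M = np.cov(B, rowvar=False)   # 3x3 covariance of the field components
        w, V = np.linalg.eigh(M)      # eigenvalues in ascending order
        return V[:, 0], w
    ```

    The paper's warning is precisely that error cones computed from these eigenvalues can badly understate the true, worldline-dependent error.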

  1. Unbiased minimum variance estimator of a matrix exponential function. Application to Boltzmann/Bateman coupled equations solving

    International Nuclear Information System (INIS)

    Dumonteil, E.; Diop, C. M.

    2009-01-01

    This paper derives an unbiased minimum variance estimator (UMVE) of a matrix exponential function of a normal mean. The result is then used to propose a reference scheme to solve Boltzmann/Bateman coupled equations with Monte Carlo transport codes. The last section presents numerical results on a simple example. (authors)

  2. Experimental performance assessment of the sub-band minimum variance beamformer for ultrasound imaging

    DEFF Research Database (Denmark)

    Diamantis, Konstantinos; Greenaway, Alan H.; Anderson, Tom

    2017-01-01

    Recent progress in adaptive beamforming techniques for medical ultrasound has shown that current resolution limits can be surpassed. One method of obtaining improved lateral resolution is the Minimum Variance (MV) beamformer. The frequency domain implementation of this method effectively divides ... the broadband ultrasound signals into sub-bands (MVS) to conform with the narrow-band assumption of the original MV theory. This approach is investigated here using experimental Synthetic Aperture (SA) data from wire and cyst phantoms. A 7 MHz linear array transducer is used with the SARUS experimental

  3. Eigenspace-Based Minimum Variance Adaptive Beamformer Combined with Delay Multiply and Sum: Experimental Study

    OpenAIRE

    Mozaffarzadeh, Moein; Mahloojifar, Ali; Nasiriavanaki, Mohammadreza; Orooji, Mahdi

    2017-01-01

    Delay and sum (DAS) is the most common beamforming algorithm in linear-array photoacoustic imaging (PAI) as a result of its simple implementation. However, it leads to a low resolution and high sidelobes. Delay multiply and sum (DMAS) was used to address the incapabilities of DAS, providing a higher image quality. However, the resolution improvement is not sufficient compared to eigenspace-based minimum variance (EIBMV). In this paper, the EIBMV beamformer has been combined with DMAS algebra...

  4. Linear-array photoacoustic imaging using minimum variance-based delay multiply and sum adaptive beamforming algorithm

    Science.gov (United States)

    Mozaffarzadeh, Moein; Mahloojifar, Ali; Orooji, Mahdi; Kratkiewicz, Karl; Adabi, Saba; Nasiriavanaki, Mohammadreza

    2018-02-01

    In photoacoustic imaging, the delay-and-sum (DAS) beamformer is a common beamforming algorithm with a simple implementation. However, it results in a poor resolution and high sidelobes. To address these challenges, a new algorithm, namely delay-multiply-and-sum (DMAS), was introduced, having lower sidelobes compared to DAS. To improve the resolution of DMAS, a beamformer is introduced using minimum variance (MV) adaptive beamforming combined with DMAS, so-called minimum variance-based DMAS (MVB-DMAS). It is shown that expanding the DMAS equation results in multiple terms representing a DAS algebra. It is proposed to use the MV adaptive beamformer instead of the existing DAS. MVB-DMAS is evaluated numerically and experimentally. In particular, at the depth of 45 mm MVB-DMAS results in about 31, 18, and 8 dB sidelobe reduction compared to DAS, MV, and DMAS, respectively. The quantitative results of the simulations show that MVB-DMAS leads to improvement in full-width-half-maximum of about 96%, 94%, and 45% and in signal-to-noise ratio of about 89%, 15%, and 35% compared to DAS, DMAS, and MV, respectively. In particular, at the depth of 33 mm of the experimental images, MVB-DMAS results in about 20 dB sidelobe reduction in comparison with the other beamformers.
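
    The MV (Capon) weights used as the building block in MVB-DMAS have a standard closed form; a hedged generic sketch (sample covariance R from array snapshots, steering vector a; the diagonal loading is a common robustness choice, not taken from the paper):

    ```python
    import numpy as np

    def mv_weights(R, a, loading=1e-2):
        """Minimum variance weights w = R^{-1}a / (a^H R^{-1} a)."""
        n = len(a)
        R = R + loading * (np.trace(R).real / n) * np.eye(n)  # diagonal loading
        Ri_a = np.linalg.solve(R, a)
        return Ri_a / (a.conj() @ Ri_a)

    # Toy usage: 16-element array, covariance from 64 complex snapshots
    rng = np.random.default_rng(0)
    X = rng.normal(size=(64, 16)) + 1j * rng.normal(size=(64, 16))
    R = X.conj().T @ X / X.shape[0]
    a = np.ones(16, dtype=complex)            # steering vector, broadside focus
    print(abs(mv_weights(R, a).conj() @ a))   # distortionless response: 1.0
    ```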

  5. Linear-array photoacoustic imaging using minimum variance-based delay multiply and sum adaptive beamforming algorithm.

    Science.gov (United States)

    Mozaffarzadeh, Moein; Mahloojifar, Ali; Orooji, Mahdi; Kratkiewicz, Karl; Adabi, Saba; Nasiriavanaki, Mohammadreza

    2018-02-01

    In photoacoustic imaging, the delay-and-sum (DAS) beamformer is a common beamforming algorithm with a simple implementation. However, it results in a poor resolution and high sidelobes. To address these challenges, a new algorithm, namely delay-multiply-and-sum (DMAS), was introduced, having lower sidelobes compared to DAS. To improve the resolution of DMAS, a beamformer is introduced using minimum variance (MV) adaptive beamforming combined with DMAS, so-called minimum variance-based DMAS (MVB-DMAS). It is shown that expanding the DMAS equation results in multiple terms representing a DAS algebra. It is proposed to use the MV adaptive beamformer instead of the existing DAS. MVB-DMAS is evaluated numerically and experimentally. In particular, at the depth of 45 mm MVB-DMAS results in about 31, 18, and 8 dB sidelobe reduction compared to DAS, MV, and DMAS, respectively. The quantitative results of the simulations show that MVB-DMAS leads to improvement in full-width-half-maximum of about 96%, 94%, and 45% and in signal-to-noise ratio of about 89%, 15%, and 35% compared to DAS, DMAS, and MV, respectively. In particular, at the depth of 33 mm of the experimental images, MVB-DMAS results in about 20 dB sidelobe reduction in comparison with the other beamformers. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).

  6. A phantom study on temporal and subband Minimum Variance adaptive beamforming

    DEFF Research Database (Denmark)

    Diamantis, Konstantinos; Voxen, Iben Holfort; Greenaway, Alan H.

    2014-01-01

    This paper compares experimentally temporal and subband implementations of the Minimum Variance (MV) adaptive beamformer for medical ultrasound imaging. The performance of the two approaches is tested by comparing wire phantom measurements, obtained by the research ultrasound scanner SARUS. A 7 MHz ... BK8804 linear transducer was used to scan a wire phantom in which wires are separated by 10 mm. Performance is then evaluated by the lateral Full-Width-Half-Maximum (FWHM), the Peak Sidelobe Level (PSL), and the computational load. Beamformed single emission responses are also compared with those ... from the conventional Delay-and-Sum (DAS) beamformer. FWHM measured at the depth of 46.6 mm is 0.02 mm (0.09λ) for both adaptive methods, while the corresponding values for Hanning and Boxcar weights are 0.64 and 0.44 mm respectively. Between the MV beamformers a -2 dB difference in PSL is noticed in favor

  7. Multi-period fuzzy mean-semi variance portfolio selection problem with transaction cost and minimum transaction lots using genetic algorithm

    Directory of Open Access Journals (Sweden)

    Mohammad Ali Barati

    2016-04-01

    Full Text Available Multi-period models of portfolio selection have been developed in the literature with respect to certain assumptions. In this study, for the first time, the portfolio selection problem has been modeled based on mean-semivariance with transaction costs and minimum transaction lots, considering functional constraints and fuzzy parameters. Functional constraints such as transaction costs and minimum transaction lots were included. In addition, the returns on assets parameters were considered as trapezoidal fuzzy numbers. An efficient genetic algorithm (GA) was designed, results were analyzed using numerical instances, and sensitivity analyses were executed. In the numerical study, the problem was solved based on the presence or absence of each mode of constraints, including transaction costs and minimum transaction lots. In addition, with the use of sensitivity analysis, the results of the model were presented with variations of the minimum expected rate of return over the programming periods.

  8. Effects of Important Parameters Variations on Computing Eigenspace-Based Minimum Variance Weights for Ultrasound Tissue Harmonic Imaging

    OpenAIRE

    Heidari, Mehdi Haji; Mozaffarzadeh, Moein; Manwar, Rayyan; Nasiriavanaki, Mohammadreza

    2018-01-01

    In recent years, the minimum variance (MV) beamforming has been widely studied due to its high resolution and contrast in B-mode ultrasound imaging (USI). However, the performance of the MV beamformer is degraded in the presence of noise, as a result of the inaccurate covariance matrix estimation which leads to a low quality image. Second harmonic imaging (SHI) provides many advantages over the conventional pulse-echo USI, such as enhanced axial and lateral resolutions. However, the low signa...

  9. Portfolio optimization with mean-variance model

    Science.gov (United States)

    Hoe, Lam Weng; Siew, Lam Weng

    2016-06-01

    Investors wish to achieve the target rate of return at the minimum level of risk in their investment. Portfolio optimization is an investment strategy that can be used to minimize the portfolio risk and achieve the target rate of return. The mean-variance model has been proposed in portfolio optimization. The mean-variance model is an optimization model that aims to minimize the portfolio risk, which is the portfolio variance. The objective of this study is to construct the optimal portfolio using the mean-variance model. The data of this study consist of weekly returns of 20 component stocks of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI). The results of this study show that the portfolio composition of the stocks is different. Moreover, investors can get the return at the minimum level of risk with the constructed optimal mean-variance portfolio.
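
    A minimal sketch of the mean-variance program described above (minimize the portfolio variance subject to a target return and full investment), using scipy's general-purpose solver; the means, covariance and target below are placeholders, not the FBMKLCI data of the study:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def mean_variance_portfolio(mu, S, target):
        """Minimize w'Sw subject to w'mu = target, sum(w) = 1, w >= 0."""
        n = len(mu)
        cons = ({"type": "eq", "fun": lambda w: w @ mu - target},
                {"type": "eq", "fun": lambda w: w.sum() - 1.0})
        res = minimize(lambda w: w @ S @ w, np.full(n, 1.0 / n),
                       bounds=[(0.0, 1.0)] * n, constraints=cons)
        return res.x

    # Toy usage with made-up weekly means and covariance for 4 stocks
    mu = np.array([0.002, 0.003, 0.004, 0.005])
    S = np.diag([0.010, 0.020, 0.030, 0.040])
    print(mean_variance_portfolio(mu, S, target=0.0035))
    ```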

  10. Experimental performance assessment of the sub-band minimum variance beamformer for ultrasound imaging

    DEFF Research Database (Denmark)

    Diamantis, Konstantinos; Greenaway, Alan H.; Anderson, Tom

    2017-01-01

    Recent progress in adaptive beamforming techniques for medical ultrasound has shown that current resolution limits can be surpassed. One method of obtaining improved lateral resolution is the Minimum Variance (MV) beamformer. The frequency domain implementation of this method effectively divides ... the broadband ultrasound signals into sub-bands (MVS) to conform with the narrow-band assumption of the original MV theory. This approach is investigated here using experimental Synthetic Aperture (SA) data from wire and cyst phantoms. A 7 MHz linear array transducer is used with the SARUS experimental ... ultrasound scanner for the data acquisition. The lateral resolution and the contrast obtained are evaluated and compared with those from the conventional Delay-and-Sum (DAS) beamformer and the MV temporal implementation (MVT). From the wire phantom the Full-Width-at-Half-Maximum (FWHM) measured at a depth

  11. Iterative Minimum Variance Beamformer with Low Complexity for Medical Ultrasound Imaging.

    Science.gov (United States)

    Deylami, Ali Mohades; Asl, Babak Mohammadzadeh

    2018-06-04

    Minimum variance beamformer (MVB) improves the resolution and contrast of medical ultrasound images compared with the delay and sum (DAS) beamformer. The weight vector of this beamformer should be calculated for each imaging point independently, at the cost of increased computational complexity. The large number of necessary calculations limits the application of this beamformer in real-time systems. A beamformer is proposed based on the MVB with lower computational complexity while preserving its advantages. This beamformer avoids matrix inversion, which is the most complex part of the MVB, by solving the optimization problem iteratively. The received signals from two imaging points close together do not vary much in medical ultrasound imaging. Therefore, using the previously optimized weight vector for one point as the initial weight vector for the new neighboring point can improve the convergence speed and decrease the computational complexity. The proposed method was applied to several data sets, and it has been shown that the method can regenerate the results obtained by the MVB while the order of complexity is decreased from O(L^3) to O(L^2). Copyright © 2018 World Federation for Ultrasound in Medicine and Biology. Published by Elsevier Inc. All rights reserved.

  12. A MAD Explanation for the Correlation between Bulk Lorentz Factor and Minimum Variability Timescale

    Science.gov (United States)

    Lloyd-Ronning, Nicole; Lei, Wei-hua; Xie, Wei

    2018-04-01

    We offer an explanation for the anti-correlation between the minimum variability timescale (MTS) in the prompt emission light curve of gamma-ray bursts (GRBs) and the estimated bulk Lorentz factor of these GRBs, in the context of a magnetically arrested disk (MAD) model. In particular, we show that previously derived limits on the maximum available energy per baryon in a Blandford-Znajek jet lead to a relationship between the characteristic MAD timescale in GRBs and the maximum bulk Lorentz factor: t_MAD ∝ Γ^(-6), somewhat steeper than (although within the error bars of) the fitted relationship found in the GRB data. Similarly, the MAD model also naturally accounts for the observed anti-correlation between MTS and gamma-ray luminosity L in the GRB data, and we estimate the accretion rates of the GRB disk (given these luminosities) in the context of this model. Both of these correlations (MTS-Γ and MTS-L) are also observed in the AGN data, and we discuss the implications of our results in the context of both GRB and blazar systems.

  13. Eigenspace-based minimum variance adaptive beamformer combined with delay multiply and sum: experimental study

    Science.gov (United States)

    Mozaffarzadeh, Moein; Mahloojifar, Ali; Nasiriavanaki, Mohammadreza; Orooji, Mahdi

    2018-02-01

    Delay and sum (DAS) is the most common beamforming algorithm in linear-array photoacoustic imaging (PAI) as a result of its simple implementation. However, it leads to a low resolution and high sidelobes. Delay multiply and sum (DMAS) was used to address the incapabilities of DAS, providing a higher image quality. However, the resolution improvement is not sufficient compared to eigenspace-based minimum variance (EIBMV). In this paper, the EIBMV beamformer has been combined with DMAS algebra, called EIBMV-DMAS, using the expansion of the DMAS algorithm. The proposed method is used as the reconstruction algorithm in linear-array PAI. EIBMV-DMAS is experimentally evaluated, where the quantitative and qualitative results show that it outperforms DAS, DMAS and EIBMV. The proposed method reduces the sidelobes by about 365%, 221% and 40% compared to DAS, DMAS and EIBMV, respectively. Moreover, EIBMV-DMAS improves the SNR by about 158%, 63% and 20%, respectively.

  14. Multidimensional adaptive testing with a minimum error-variance criterion

    NARCIS (Netherlands)

    van der Linden, Willem J.

    1997-01-01

    The case of adaptive testing under a multidimensional logistic response model is addressed. An adaptive algorithm is proposed that minimizes the (asymptotic) variance of the maximum-likelihood (ML) estimator of a linear combination of abilities of interest. The item selection criterion is a simple

  15. Output Power Control of Wind Turbine Generator by Pitch Angle Control using Minimum Variance Control

    Science.gov (United States)

    Senjyu, Tomonobu; Sakamoto, Ryosei; Urasaki, Naomitsu; Higa, Hiroki; Uezato, Katsumi; Funabashi, Toshihisa

    In recent years, there have been problems such as the exhaustion of fossil fuels, e.g., coal and oil, and environmental pollution resulting from their consumption. Effective utilization of renewable energies such as wind energy is expected instead of fossil fuels. Wind energy is not constant and windmill output is proportional to the cube of wind speed, which causes the generated power of wind turbine generators (WTGs) to fluctuate. In order to reduce fluctuating components, there is a method to control the pitch angle of the blades of the windmill. In this paper, output power leveling of a wind turbine generator by pitch angle control using adaptive control is proposed. A self-tuning regulator is used in the adaptive control. The control input is determined by minimum variance control. It is possible to compensate the control input to alleviate generated power fluctuation using the proposed controller. The simulation results, using an actual detailed model for the wind power system, show the effectiveness of the proposed controller.
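
    A toy sketch of the self-tuning regulator idea in this abstract (recursive least squares identification feeding a one-step minimum variance control law), on a hypothetical first-order plant; the plant, noise level and all names are illustrative, not the authors' wind turbine model:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Plant: y(t+1) = a*y(t) + b*u(t) + e(t+1); minimum variance control
    # sets the one-step-ahead prediction equal to the reference r.
    a_true, b_true, r = 0.9, 0.5, 1.0
    theta = np.array([0.0, 1.0])       # estimates [a_hat, b_hat]
    P = 100.0 * np.eye(2)              # RLS covariance matrix
    y, u = 0.0, 0.0
    for t in range(200):
        phi = np.array([y, u])                         # regressor
        y_next = a_true * y + b_true * u + 0.05 * rng.standard_normal()
        k = P @ phi / (1.0 + phi @ P @ phi)            # RLS gain
        theta = theta + k * (y_next - phi @ theta)     # parameter update
        P = P - np.outer(k, phi @ P)                   # covariance update
        y = y_next
        u = (r - theta[0] * y) / max(theta[1], 1e-3)   # MV control law
    print("estimated (a, b):", theta)                  # should approach (0.9, 0.5)
    ```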

  16. Nonlinear unbiased minimum-variance filter for Mars entry autonomous navigation under large uncertainties and unknown measurement bias.

    Science.gov (United States)

    Xiao, Mengli; Zhang, Yongbo; Fu, Huimin; Wang, Zhihua

    2018-05-01

    A high-precision navigation algorithm is essential for the future Mars pinpoint landing mission. The unknown inputs caused by large uncertainties of atmospheric density and aerodynamic coefficients, as well as unknown measurement biases, may cause large estimation errors in conventional Kalman filters. This paper proposes a derivative-free version of the nonlinear unbiased minimum variance filter for Mars entry navigation. This filter has been designed to solve this problem by estimating the state and unknown measurement biases simultaneously, in a derivative-free manner, leading to a high-precision algorithm for Mars entry navigation. IMU/radio beacons integrated navigation is introduced in the simulation, and the result shows that with or without radio blackout, our proposed filter achieves an accurate state estimate, much better than the conventional unscented Kalman filter, demonstrating its ability as a high-precision Mars entry navigation algorithm. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  17. A comparison between temporal and subband minimum variance adaptive beamforming

    Science.gov (United States)

    Diamantis, Konstantinos; Voxen, Iben H.; Greenaway, Alan H.; Anderson, Tom; Jensen, Jørgen A.; Sboros, Vassilis

    2014-03-01

    This paper compares the performance between temporal and subband Minimum Variance (MV) beamformers for medical ultrasound imaging. Both adaptive methods provide an optimized set of apodization weights but are implemented in the time and frequency domains respectively. Their performance is evaluated with simulated synthetic aperture data obtained from Field II and is quantified by the Full-Width-Half-Maximum (FWHM), the Peak-Side-Lobe level (PSL) and the contrast level. From a point phantom, a full sequence of 128 emissions with one transducer element transmitting and all 128 elements receiving each time, provides a FWHM of 0.03 mm (0.14λ) for both implementations at a depth of 40 mm. This value is more than 20 times lower than the one achieved by conventional beamforming. The corresponding values of PSL are -58 dB and -63 dB for time and frequency domain MV beamformers, while a value no lower than -50 dB can be obtained from either Boxcar or Hanning weights. Interestingly, a single emission with central element #64 as the transmitting aperture provides results comparable to the full sequence. The values of FWHM are 0.04 mm and 0.03 mm and those of PSL are -42 dB and -46 dB for temporal and subband approaches. From a cyst phantom and for 128 emissions, the contrast level is calculated at -54 dB and -63 dB respectively at the same depth, with the initial shape of the cyst being preserved in contrast to conventional beamforming. The difference between the two adaptive beamformers is less significant in the case of a single emission, with the contrast level being estimated at -42 dB for the time domain and -43 dB for the frequency domain implementation. For the estimation of a single MV weight of a low resolution image formed by a single emission, 0.44 × 10^9 calculations per second are required for the temporal approach. The same numbers for the subband approach are 0.62 × 10^9 for the point and 1.33 × 10^9 for the cyst phantom. The comparison demonstrates similar

  18. Minimum variance optimal rate allocation for multiplexed H.264/AVC bitstreams.

    Science.gov (United States)

    Tagliasacchi, Marco; Valenzise, Giuseppe; Tubaro, Stefano

    2008-07-01

    Consider the problem of transmitting multiple video streams to fulfill a constant bandwidth constraint. The available bit budget needs to be distributed across the sequences in order to meet some optimality criteria. For example, one might want to minimize the average distortion or, alternatively, minimize the distortion variance, in order to keep almost constant quality among the encoded sequences. By working in the rho-domain, we propose a low-delay rate allocation scheme that, at each time instant, provides a closed form solution for either of the aforementioned problems. We show that minimizing the distortion variance instead of the average distortion leads, for each of the multiplexed sequences, to a coding penalty of less than 0.5 dB in terms of average PSNR. In addition, our analysis provides an explicit relationship between model parameters and this loss. In order to smooth the distortion also along time, we accommodate a shared encoder buffer to compensate for rate fluctuations. Although the proposed scheme is general, and it can be adopted for any video and image coding standard, we provide experimental evidence by transcoding bitstreams encoded using the state-of-the-art H.264/AVC standard. The results of our simulations reveal that it is possible to achieve distortion smoothing both in time and across the sequences, without sacrificing coding efficiency.

  19. Analysis of conditional genetic effects and variance components in developmental genetics.

    Science.gov (United States)

    Zhu, J

    1995-12-01

    A genetic model with additive-dominance effects and genotype x environment interactions is presented for quantitative traits with time-dependent measures. The genetic model for phenotypic means at time t conditional on phenotypic means measured at previous time (t-1) is defined. Statistical methods are proposed for analyzing conditional genetic effects and conditional genetic variance components. Conditional variances can be estimated by minimum norm quadratic unbiased estimation (MINQUE) method. An adjusted unbiased prediction (AUP) procedure is suggested for predicting conditional genetic effects. A worked example from cotton fruiting data is given for comparison of unconditional and conditional genetic variances and additive effects.

  20. Computing the Expected Value and Variance of Geometric Measures

    DEFF Research Database (Denmark)

    Staals, Frank; Tsirogiannis, Constantinos

    2017-01-01

    distance (MPD), the squared Euclidean distance from the centroid, and the diameter of the minimum enclosing disk. We also describe an efficient (1-ε)-approximation algorithm for computing the mean and variance of the mean pairwise distance. We implemented three of our algorithms and we show that our...

  1. The influence of SO4 and NO3 to the acidity (pH) of rainwater using minimum variance quadratic unbiased estimation (MIVQUE) and maximum likelihood methods

    Science.gov (United States)

    Dilla, Shintia Ulfa; Andriyana, Yudhie; Sudartianto

    2017-03-01

    Acid rain causes many bad effects in life. It is formed by two strong acids, sulfuric acid (H2SO4) and nitric acid (HNO3), where sulfuric acid is derived from SO2 and nitric acid from NOx (x = 1, 2). The purpose of the research is to find out the influence of SO4 and NO3 levels contained in the rain on the acidity (pH) of rainwater. The data are incomplete panel data with a two-way error component model. Panel data are a collection of observations on the same units over time; the panel is said to be incomplete if individuals have different numbers of observations. The model used in this research is a random effects model (REM). Minimum variance quadratic unbiased estimation (MIVQUE) is used to estimate the variance error components, while maximum likelihood estimation is used to estimate the parameters. As a result, we obtain the following model: Ŷ* = 0.41276446 - 0.00107302X1 + 0.00215470X2.

  2. Hedging with stock index futures: downside risk versus the variance

    NARCIS (Netherlands)

    Brouwer, F.; van der Nat, M.

    1995-01-01

    In this paper we investigate hedging a stock portfolio with stock index futures. Instead of defining the hedge ratio as the minimum variance hedge ratio, we consider several measures of downside risk: the semivariance according to Markowitz [1959] and the various lower partial moments according to

  3. A Robust Statistics Approach to Minimum Variance Portfolio Optimization

    Science.gov (United States)

    Yang, Liusha; Couillet, Romain; McKay, Matthew R.

    2015-12-01

    We study the design of portfolios under a minimum risk criterion. The performance of the optimized portfolio relies on the accuracy of the estimated covariance matrix of the portfolio asset returns. For large portfolios, the number of available market returns is often of similar order to the number of assets, so that the sample covariance matrix performs poorly as a covariance estimator. Additionally, financial market data often contain outliers which, if not correctly handled, may further corrupt the covariance estimation. We address these shortcomings by studying the performance of a hybrid covariance matrix estimator based on Tyler's robust M-estimator and on Ledoit-Wolf's shrinkage estimator while assuming samples with heavy-tailed distribution. Employing recent results from random matrix theory, we develop a consistent estimator of (a scaled version of) the realized portfolio risk, which is minimized by optimizing online the shrinkage intensity. Our portfolio optimization method is shown via simulations to outperform existing methods both for synthetic and real market data.
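
    The paper's hybrid Tyler-plus-Ledoit-Wolf estimator is not available off the shelf; as a hedged stand-in, this sketch plugs scikit-learn's plain Ledoit-Wolf shrinkage covariance into the same minimum variance weighting step (function name and data are placeholders, and this is not the authors' estimator):

    ```python
    import numpy as np
    from sklearn.covariance import LedoitWolf

    def shrunk_min_variance_weights(returns):
        """Minimum variance weights built on a Ledoit-Wolf shrinkage
        covariance estimate instead of the raw sample covariance."""
        S = LedoitWolf().fit(returns).covariance_   # rows = return observations
        ones = np.ones(S.shape[0])
        w = np.linalg.solve(S, ones)
        return w / w.sum()
    ```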

  4. Enhancement of high-energy distribution tail in Monte Carlo semiconductor simulations using a Variance Reduction Scheme

    Directory of Open Access Journals (Sweden)

    Vincenza Di Stefano

    2009-11-01

    Full Text Available The Multicomb variance reduction technique has been introduced in the Direct Monte Carlo Simulation for submicrometric semiconductor devices. The method has been implemented in bulk silicon. The simulations show that the statistical variance of hot electrons is reduced with some computational cost. The method is efficient and easy to implement in existing device simulators.

  5. Continuous-Time Mean-Variance Portfolio Selection with Random Horizon

    International Nuclear Information System (INIS)

    Yu, Zhiyong

    2013-01-01

    This paper examines the continuous-time mean-variance optimal portfolio selection problem with random market parameters and random time horizon. Treating this problem as a linearly constrained stochastic linear-quadratic optimal control problem, I explicitly derive the efficient portfolios and efficient frontier in closed forms based on the solutions of two backward stochastic differential equations. Some related issues such as a minimum variance portfolio and a mutual fund theorem are also addressed. All the results are markedly different from those in the problem with deterministic exit time. A key part of my analysis involves proving the global solvability of a stochastic Riccati equation, which is interesting in its own right.

  6. Continuous-Time Mean-Variance Portfolio Selection with Random Horizon

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Zhiyong, E-mail: yuzhiyong@sdu.edu.cn [Shandong University, School of Mathematics (China)]

    2013-12-15

    This paper examines the continuous-time mean-variance optimal portfolio selection problem with random market parameters and random time horizon. Treating this problem as a linearly constrained stochastic linear-quadratic optimal control problem, I explicitly derive the efficient portfolios and efficient frontier in closed forms based on the solutions of two backward stochastic differential equations. Some related issues such as a minimum variance portfolio and a mutual fund theorem are also addressed. All the results are markedly different from those in the problem with deterministic exit time. A key part of my analysis involves proving the global solvability of a stochastic Riccati equation, which is interesting in its own right.

  7. DFT-based channel estimation and noise variance estimation techniques for single-carrier FDMA

    OpenAIRE

    Huang, G; Nix, AR; Armour, SMD

    2010-01-01

    Practical frequency domain equalization (FDE) systems generally require knowledge of the channel and the noise variance to equalize the received signal in a frequency-selective fading channel. Accurate channel and noise variance estimates are thus desirable to improve receiver performance. In this paper we investigate the performance of the denoise channel estimator and the approximate linear minimum mean square error (A-LMMSE) channel estimator with channel power delay profile (PDP) ...

  8. Variance and covariance calculations for nuclear materials accounting using ''MAVARIC''

    International Nuclear Information System (INIS)

    Nasseri, K.K.

    1987-07-01

    Determination of the detection sensitivity of a materials accounting system to the loss of special nuclear material (SNM) requires (1) obtaining a relation for the variance of the materials balance by propagation of the instrument errors for the measured quantities that appear in the materials balance equation and (2) substituting measured values and their error standard deviations into this relation and calculating the variance of the materials balance. MAVARIC (Materials Accounting VARIance Calculations) is a custom spreadsheet, designed using the second release of Lotus 1-2-3, that significantly reduces the effort required to make the necessary variance (and covariance) calculations needed to determine the detection sensitivity of a materials accounting system. Predefined macros within the spreadsheet allow the user to carry out long, tedious procedures with only a few keystrokes. MAVARIC requires that the user enter the following data into one of four data tables, depending on the type of the term in the materials balance equation: the SNM concentration, the bulk mass (or solution volume), the measurement error standard deviations, and the number of measurements made during an accounting period. The user can also specify if there are correlations between transfer terms. Based on these data entries, MAVARIC can calculate the variance of the materials balance and the square root of this variance, from which the detection sensitivity of the accounting system can be determined.
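
    To illustrate the propagation step described above (a sketch, not MAVARIC itself): each balance term is a bulk mass times a concentration, its variance follows from first-order propagation of the relative error standard deviations, and independent terms add in variance. All numbers are hypothetical.

    ```python
    import numpy as np

    def term_variance(bulk, conc, rel_sd_bulk, rel_sd_conc):
        """Variance of one balance term (bulk mass x SNM concentration),
        from first-order propagation of relative error standard deviations."""
        value = bulk * conc
        return value**2 * (rel_sd_bulk**2 + rel_sd_conc**2)

    # MB = beginning + additions - removals - ending; independent terms
    # contribute additively to Var(MB), whatever their sign in MB.
    variances = [term_variance(100.0, 0.05, 0.01, 0.02),   # beginning inventory
                 term_variance(400.0, 0.05, 0.01, 0.02),   # additions
                 term_variance(380.0, 0.05, 0.01, 0.02),   # removals
                 term_variance(110.0, 0.05, 0.01, 0.02)]   # ending inventory
    print(f"sigma(MB) = {sum(variances) ** 0.5:.3f} (units of the terms)")
    ```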

  9. Variance and covariance calculations for nuclear materials accounting using 'MAVARIC'

    International Nuclear Information System (INIS)

    Nasseri, K.K.

    1987-01-01

    Determination of the detection sensitivity of a materials accounting system to the loss of special nuclear material (SNM) requires (1) obtaining a relation for the variance of the materials balance by propagation of the instrument errors for the measured quantities that appear in the materials balance equation and (2) substituting measured values and their error standard deviations into this relation and calculating the variance of the materials balance. MAVARIC (Materials Accounting VARIance Calculations) is a custom spreadsheet, designed using the second release of Lotus 1-2-3, that significantly reduces the effort required to make the necessary variance (and covariance) calculations needed to determine the detection sensitivity of a materials accounting system. Predefined macros within the spreadsheet allow the user to carry out long, tedious procedures with only a few keystrokes. MAVARIC requires that the user enter the following data into one of four data tables, depending on the type of the term in the materials balance equation: the SNM concentration, the bulk mass (or solution volume), the measurement error standard deviations, and the number of measurements made during an accounting period. The user can also specify if there are correlations between transfer terms. Based on these data entries, MAVARIC can calculate the variance of the materials balance and the square root of this variance, from which the detection sensitivity of the accounting system can be determined.

  10. A method for minimum risk portfolio optimization under hybrid uncertainty

    Science.gov (United States)

    Egorova, Yu E.; Yazenin, A. V.

    2018-03-01

    In this paper, we investigate a minimum risk portfolio model under hybrid uncertainty when the profitability of financial assets is described by fuzzy random variables. According to Feng, the variance of a portfolio is defined as a crisp value. To aggregate fuzzy information the weakest (drastic) t-norm is used. We construct an equivalent stochastic problem of the minimum risk portfolio model and specify the stochastic penalty method for solving it.

  11. The Effect of Bulk Tachyon Field on the Dynamics of Geometrical Tachyon

    International Nuclear Information System (INIS)

    Papantonopoulos, Eleftherios; Pappa, Ioanna; Zamarias, Vassilios

    2007-01-01

    We study the dynamics of the geometrical tachyon field on an unstable D3-brane in the background of a bulk tachyon field of a D3-brane solution of Type-0 string theory. We find that the geometrical tachyon potential is modified by a function of the bulk tachyon and inflation occurs at weak string coupling, where the bulk tachyon condenses, near the top of the geometrical tachyon potential. We also find a late accelerating phase when the bulk tachyon asymptotes to zero and the geometrical tachyon field reaches the minimum of the potential

  12. Controlled levitation of Y-Ba-Cu-O bulk superconductors and energy minimum analysis; Y-Ba-Cu-O baruku chodendotai no fujo to enerugi kaiseki

    Energy Technology Data Exchange (ETDEWEB)

    Nagashima, K. [Railway Technical Research Institute, Tokyo (Japan)]; Iwasa, Y. [Francis Bitter Magnet Laboratory, Cambridge (United States)]; Sawa, K. [Keio University, Tokyo (Japan)]; Murakami, M. [Superconductivity Research Laboratory, Tokyo (Japan)]

    1999-11-25

    The levitation of bulk Y-Ba-Cu-O superconductors can be controlled using a Bi-Sr-Ca-Cu-O (Bi2223) superconducting electromagnet. It was found that stable levitation without tilting could be obtained only when the sample trapped a certain amount of field, the minimum of which depended on the external field and the sample dimensions. We employed a novel analysis method for levitation based on the total energy balance, which is much simpler than the force method and can be applied to understanding general levitation behavior. The numerical analyses thus developed showed that stable levitation of superconductors with large dimensions can only be achieved when the induced currents can flow with three-dimensional freedom. (author)

  13. Bulk Superconductors in Mobile Application

    Science.gov (United States)

    Werfel, F. N.; Floegel-Delor, U.; Rothfeld, R.; Riedel, T.; Wippich, D.; Goebel, B.; Schirrmeister, P.

    We investigate and review concepts of multi-seeded REBCO bulk superconductors in mobile application. ATZ's compact HTS bulk magnets can routinely trap 1 T at 77 K. Besides magnetization, flux creep and hysteresis, industrial-like properties such as compactness, power density, and robustness are of major device interest if mobility and light-weight construction are in focus. For mobile application in levitated trains or demonstrator magnets we examine the performance of on-board cryogenics by either LN2 or cryo-cooler application. The mechanical, electric and thermodynamic requirements of compact vacuum cryostats for Maglev train operation were studied systematically. More than 30 units have been manufactured and tested. The attractive load-to-weight ratio is more than 10 and favours group module device constructions up to 5 t load on a permanent magnet (PM) track. A transportable and compact YBCO bulk magnet cooled with an in-situ 4 W Stirling cryo-cooler for 50-80 K operation is investigated. Low cooling power and effective HTS cold mass drive the system construction towards minimum thermal loss and light-weight design.

  14. Comparison of wet-only and bulk deposition at Chiang Mai (Thailand) based on rainwater chemical composition

    Science.gov (United States)

    Chantara, Somporn; Chunsuk, Nawarut

    The chemical composition of 122 rainwater samples collected daily from bulk and wet-only collectors in a sub-urban area of Chiang Mai (Thailand) during August 2005-July 2006 has been analyzed and compared to assess the usability of a cheaper and less complex bulk collector relative to a sophisticated wet-only collector. Statistical analysis was performed on log-transformed daily rain amounts and depositions of major ions for each collector type. The analysis of variance (ANOVA) test revealed that the amounts of rainfall collected from a rain gauge, a bulk collector and a wet-only collector showed no significant difference (α = 0.05). The volume-weighted mean electrical conductivity (EC) values of bulk and wet-only samples were 0.69 and 0.65 mS/m, respectively. The average pH of the samples from both types of collectors was 5.5. Scatter plots between log-transformed depositions of specific ions obtained from bulk and wet-only samples showed high correlation (r > 0.91). Means of log-transformed bulk deposition were 14% (Na⁺ and K⁺), 13% (Mg²⁺), 7% (Ca²⁺), 4% (NO₃⁻), 3% (SO₄²⁻ and Cl⁻) and 2% (NH₄⁺) higher than those of wet-only deposition. However, multivariate analysis of variance (MANOVA) revealed that ion depositions obtained from bulk and wet-only collectors were not significantly different (α = 0.05). Therefore, it was concluded that a bulk collector can be used instead of a wet-only collector in a sub-urban area.
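
    A sketch of the kind of test reported here, on synthetic stand-in data: one-way ANOVA on log-transformed daily rain amounts from the three collector types (scipy's f_oneway; the record's MANOVA across ions would use, e.g., statsmodels):

        import numpy as np
        from scipy.stats import f_oneway

        rng = np.random.default_rng(0)
        # Synthetic daily rain amounts (mm) for three collector types.
        rain_gauge = rng.lognormal(mean=1.0, sigma=0.6, size=122)
        bulk       = rng.lognormal(mean=1.0, sigma=0.6, size=122)
        wet_only   = rng.lognormal(mean=1.0, sigma=0.6, size=122)

        # ANOVA on log-transformed amounts, as in the study.
        F, p = f_oneway(np.log(rain_gauge), np.log(bulk), np.log(wet_only))
        print(f"F = {F:.3f}, p = {p:.3f}")  # p > 0.05 -> no significant difference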

  15. STUDY LINKS SOLVING THE MAXIMUM TASK OF LINEAR CONVOLUTION «EXPECTED RETURNS-VARIANCE» AND THE MINIMUM VARIANCE WITH RESTRICTIONS ON RETURNS

    Directory of Open Access Journals (Sweden)

    Maria S. Prokhorova

    2014-01-01

    Full Text Available The article deals with the problem of finding an optimal portfolio of securities using convolutions of the expected portfolio return and the portfolio variance. The value of the risk coefficient at which the problem of maximizing return under a variance constraint is equivalent to maximizing the linear convolution of the criteria «expected returns-variance» is obtained. An automated method for finding the optimal portfolio is proposed, on the basis of which the results of the study are demonstrated.
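
    The claimed equivalence is the standard Markowitz duality and can be checked numerically. In the sketch below, mu, Sigma and the risk coefficient lam are assumed illustrative values; both formulations are solved with scipy and the weights compared:

        import numpy as np
        from scipy.optimize import minimize

        mu = np.array([0.08, 0.12, 0.10])
        Sigma = np.array([[0.04, 0.01, 0.00],
                          [0.01, 0.09, 0.02],
                          [0.00, 0.02, 0.06]])
        lam = 3.0  # assumed risk coefficient

        budget = {'type': 'eq', 'fun': lambda w: w.sum() - 1}

        # (A) maximize the linear convolution  mu'w - lam * w'Sigma w
        conv = minimize(lambda w: -(mu @ w - lam * w @ Sigma @ w),
                        x0=np.ones(3) / 3, constraints=[budget])
        wA = conv.x

        # (B) minimize variance subject to achieving the same expected return
        target = {'type': 'eq', 'fun': lambda w: mu @ w - mu @ wA}
        mv = minimize(lambda w: w @ Sigma @ w,
                      x0=np.ones(3) / 3, constraints=[budget, target])

        print("convolution solution: ", wA.round(4))
        print("min-variance solution:", mv.x.round(4))  # agrees up to solver tolerance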

  16. Gravity interpretation of dipping faults using the variance analysis method

    International Nuclear Information System (INIS)

    Essa, Khalid S

    2013-01-01

    A new algorithm is developed to estimate simultaneously the depth and the dip angle of a buried fault from normalized gravity gradient data. This algorithm utilizes numerical first horizontal derivatives computed from the observed gravity anomaly, using filters of successive window lengths to estimate the depth and the dip angle of a buried dipping fault structure. For a fixed window length, the depth is estimated in a least-squares sense for each dip angle. The method is based on computing the variance of the depths determined from all horizontal gradient anomaly profiles using the least-squares method for each dip angle. The minimum variance is used as a criterion for determining the correct dip angle and depth of the buried structure. When the correct dip angle is used, the variance of the depths is always less than the variances computed using wrong dip angles. The technique can be applied not only to the true residuals, but also to the measured Bouguer gravity data. The method is applied to synthetic data with and without random errors and to two field examples from Egypt and Scotland. In all cases examined, the estimated depths and other model parameters are found to be in good agreement with the actual values. (paper)
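
    The selection rule itself is simple to state: pick the trial dip angle whose window-by-window depth estimates scatter least. A schematic sketch, not the author's code, in which depth_estimate is a toy stand-in that is unbiased only at the true dip:

        import numpy as np

        rng = np.random.default_rng(1)
        true_dip, true_depth = 40.0, 10.0
        windows = np.arange(2, 12)           # successive window lengths (km)
        trial_dips = np.arange(10, 81, 5)    # candidate dip angles (degrees)

        # Toy stand-in for the least-squares depth estimator: depths vary
        # with window length only when the trial dip is wrong.
        def depth_estimate(window, dip):
            bias = 0.15 * abs(dip - true_dip)
            return true_depth + bias * np.log(window) + rng.normal(0, 0.05)

        variances = []
        for dip in trial_dips:
            depths = [depth_estimate(s, dip) for s in windows]
            variances.append(np.var(depths))

        best = trial_dips[int(np.argmin(variances))]
        print("dip angle with minimum depth variance:", best)  # ~40 degrees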

  17. Thermospheric mass density model error variance as a function of time scale

    Science.gov (United States)

    Emmert, J. T.; Sutton, E. K.

    2017-12-01

    In the increasingly crowded low-Earth orbit environment, accurate estimation of orbit prediction uncertainties is essential for collision avoidance. Poor characterization of such uncertainty can result in unnecessary and costly avoidance maneuvers (false positives) or disregard of a collision risk (false negatives). Atmospheric drag is a major source of orbit prediction uncertainty, and is particularly challenging to account for because it exerts a cumulative influence on orbital trajectories and is therefore not amenable to representation by a single uncertainty parameter. To address this challenge, we examine the variance of measured accelerometer-derived and orbit-derived mass densities with respect to predictions by thermospheric empirical models, using the data-minus-model variance as a proxy for model uncertainty. Our analysis focuses mainly on the power spectrum of the residuals, and we construct an empirical model of the variance as a function of time scale (from 1 hour to 10 years), altitude, and solar activity. We find that the power spectral density approximately follows a power-law process but with an enhancement near the 27-day solar rotation period. The residual variance increases monotonically with altitude between 250 and 550 km. There are two components to the variance dependence on solar activity: one component is 180 degrees out of phase (largest variance at solar minimum), and the other component lags 2 years behind solar maximum (largest variance in the descending phase of the solar cycle).
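
    The core diagnostic, the power spectrum of data-minus-model density residuals, can be sketched with scipy; here synthetic red noise stands in for the accelerometer-derived residual series:

        import numpy as np
        from scipy.signal import welch

        rng = np.random.default_rng(2)
        n = 24 * 365 * 2                 # two years of hourly residuals
        # Synthetic AR(1) residuals standing in for log-density model errors.
        r = np.zeros(n)
        for k in range(1, n):
            r[k] = 0.98 * r[k - 1] + rng.normal(0, 0.02)

        freqs, psd = welch(r, fs=1.0, nperseg=4096)   # fs in cycles/hour
        # Fit a power law S(f) ~ f^alpha over the resolved band.
        mask = freqs > 0
        alpha = np.polyfit(np.log(freqs[mask]), np.log(psd[mask]), 1)[0]
        print(f"spectral slope alpha = {alpha:.2f}")  # roughly -2 for AR(1) noise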

  18. The solar and interplanetary causes of the recent minimum in geomagnetic activity (MGA23): a combination of midlatitude small coronal holes, low IMF Bz variances, low solar wind speeds and low solar magnetic fields

    Directory of Open Access Journals (Sweden)

    B. T. Tsurutani

    2011-05-01

    Full Text Available Minima in geomagnetic activity (MGA) at Earth at the ends of SC23 and SC22 have been identified. The two MGAs (called MGA23 and MGA22, respectively) were present in 2009 and 1997, delayed from the sunspot number minima in 2008 and 1996 by ~1/2-1 years. Part of the solar and interplanetary causes of the MGAs were exceptionally low solar (and thus low interplanetary) magnetic fields. Another important factor in MGA23 was the disappearance of equatorial and low-latitude coronal holes and the appearance of midlatitude coronal holes. The location of the holes relative to the ecliptic plane led to low solar wind speeds and low IMF Bz variances (σBz²) and normalized variances (σBz²/B0²) at Earth, with concomitant reduced solar wind-magnetosphere energy coupling. One result was the lowest ap indices in the history of ap recording. The results presented here are used to comment on the possible solar and interplanetary causes of the low geomagnetic activity that occurred during the Maunder Minimum.

  19. Solving portfolio selection problems with minimum transaction lots based on conditional-value-at-risk

    Science.gov (United States)

    Setiawan, E. P.; Rosadi, D.

    2017-01-01

    Portfolio selection problems conventionally mean 'minimizing the risk, given a certain level of returns' from some financial assets. This problem is frequently solved with quadratic or linear programming methods, depending on the risk measure used in the objective function. However, the solutions obtained by these methods are real numbers, which may pose a problem in real applications because each asset usually has its minimum transaction lot. Classical approaches considering minimum transaction lots were developed based on linear mean absolute deviation (MAD), variance (as in Markowitz's model), and semi-variance as risk measures. In this paper we investigate portfolio selection with minimum transaction lots using conditional value at risk (CVaR) as the risk measure. The mean-CVaR methodology only involves the part of the tail of the distribution that contributes to high losses, which is preferable when working with non-symmetric return distributions. Solutions of this model can be found with genetic algorithm (GA) methods. We provide real examples using stocks from the Indonesian stock market.
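
    As a reference point, the empirical CVaR used as the risk measure is just the mean of the worst-tail losses; a short sketch on synthetic returns (the lot-constrained GA optimization itself is not reproduced):

        import numpy as np

        def cvar(returns, alpha=0.95):
            """Empirical conditional value-at-risk: the mean loss in the
            worst (1 - alpha) tail of the loss distribution."""
            losses = -np.asarray(returns)
            var = np.quantile(losses, alpha)       # value-at-risk cutoff
            return losses[losses >= var].mean()

        rng = np.random.default_rng(3)
        portfolio_returns = rng.normal(0.001, 0.02, size=10_000)  # synthetic daily returns
        print(f"95% CVaR: {cvar(portfolio_returns):.4f}")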

  20. The Effect of Bulk Depth and Irradiation Time on the Surface Hardness and Degree of Cure of Bulk-Fill Composites

    Directory of Open Access Journals (Sweden)

    Farahat F

    2016-09-01

    Full Text Available Statement of Problem: For many years, application of composite restorations with a thickness of less than 2 mm, to achieve minimum polymerization contraction and stress, has been accepted as a principle. Through recent developments in dental materials, however, a group of resin-based composites (RBCs) called bulk fill has been introduced, whose producers claim the possibility of achieving a good restoration in bulks with depths of 4 or even 5 mm. Objectives: To evaluate the effect of irradiation time and bulk depth on the degree of cure (DC) of a bulk-fill composite and compare it with the universal type. Materials and Methods: This study was conducted on two groups of dental RBCs, Tetric N Ceram Bulk Fill and Tetric N Ceram Universal. The composite samples were prepared in Teflon moulds with a diameter of 5 mm and heights of 2, 4 and 6 mm. Half of the samples at each depth were cured from the upper side of the mould for 20 s by an LED light-curing unit; the irradiation time for the other specimens was 40 s. After 24 hours of storage in distilled water, the microhardness of the top and bottom of the samples was measured using a Future-Tech (Japan, Model FM 700) Vickers hardness testing machine. Data were analyzed statistically using one-way and multi-way ANOVA and Tukey's test (p = 0.05). Results: The DC of Tetric N Ceram Bulk Fill at a given irradiation time and bulk depth was significantly higher than that of the universal type (p < 0.001). Also, the DC of both composites was significantly (p < 0.001) reduced by increasing bulk depth. Increasing the curing time from 20 to 40 seconds had a marginally significant effect (p ≤ 0.040) on the DC of both the bulk-fill and universal RBC samples. Conclusions: The DC of the investigated bulk-fill composite was better than that of the universal type at all irradiation times and bulk depths. The studied universal and bulk-fill RBCs had an appropriate DC at the 2 and 4 mm bulk depths, respectively, and …

  1. Downside Variance Risk Premium

    OpenAIRE

    Feunou, Bruno; Jahan-Parvar, Mohammad; Okou, Cedric

    2015-01-01

    We propose a new decomposition of the variance risk premium in terms of upside and downside variance risk premia. The difference between upside and downside variance risk premia is a measure of skewness risk premium. We establish that the downside variance risk premium is the main component of the variance risk premium, and that the skewness risk premium is a priced factor with significant prediction power for aggregate excess returns. Our empirical investigation highlights the positive and s...
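
    The building blocks of this decomposition are realized upside and downside semivariances, obtained by splitting squared returns on the sign of the return; the premia in the record are differences between such realized measures and their option-implied counterparts. A sketch on synthetic intraday data:

        import numpy as np

        rng = np.random.default_rng(4)
        r = rng.standard_t(df=5, size=78) * 1e-3   # synthetic 5-min log returns, one day

        rv      = np.sum(r ** 2)                   # realized variance
        rv_down = np.sum(r[r < 0] ** 2)            # downside semivariance
        rv_up   = np.sum(r[r > 0] ** 2)            # upside semivariance
        print(rv, rv_up + rv_down)                 # the two semivariances sum to RV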

  2. R package MVR for Joint Adaptive Mean-Variance Regularization and Variance Stabilization.

    Science.gov (United States)

    Dazard, Jean-Eudes; Xu, Hua; Rao, J Sunil

    2011-01-01

    We present an implementation in the R language for statistical computing of our recent non-parametric joint adaptive mean-variance regularization and variance stabilization procedure. The method is specifically suited for handling difficult problems posed by high-dimensional multivariate datasets (the p ≫ n paradigm), such as in 'omics'-type data, among which are that the variance is often a function of the mean, variable-specific estimators of variances are not reliable, and test statistics have low power due to a lack of degrees of freedom. The implementation offers a complete set of features including: (i) normalization and/or variance stabilization functions, (ii) computation of mean-variance-regularized t and F statistics, (iii) generation of diverse diagnostic plots, (iv) synthetic and real 'omics' test datasets, (v) a computationally efficient implementation, using C interfacing, and an option for parallel computing, (vi) a manual and documentation on how to set up a cluster. To make each feature as user-friendly as possible, only one subroutine per functionality is to be handled by the end-user. It is available as an R package, called MVR ('Mean-Variance Regularization'), downloadable from the CRAN.

  3. Estimation of measurement variances

    International Nuclear Information System (INIS)

    Anon.

    1981-01-01

    In the previous two sessions, it was assumed that the measurement error variances were known quantities when the variances of the safeguards indices were calculated. These known quantities are actually estimates based on historical data and on data generated by the measurement program. Session 34 discusses how measurement error parameters are estimated for different situations. The various error types are considered. The purpose of the session is to enable participants to: (1) estimate systematic error variances from standard data; (2) estimate random error variances from data such as replicate measurement data; (3) perform a simple analysis of variance to characterize the measurement error structure when biases vary over time
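
    A minimal sketch of point (2), on synthetic balanced data: the random-error variance is the pooled within-item variance of replicate measurements, and the between-item mean square in excess of it reflects true item-to-item spread:

        import numpy as np

        rng = np.random.default_rng(5)
        items, reps = 20, 4
        true_random_sd = 0.10
        truth = rng.normal(100.0, 5.0, size=items)              # true item values
        data = truth[:, None] + rng.normal(0, true_random_sd, (items, reps))

        # Random-error variance: pooled within-item variance of replicates.
        s2_within = data.var(axis=1, ddof=1).mean()

        # Between-item variance after removing the replicate-noise share.
        s2_between = data.mean(axis=1).var(ddof=1) - s2_within / reps

        print(f"random error variance ~ {s2_within:.4f} (true {true_random_sd**2:.4f})")
        print(f"between-item variance ~ {s2_between:.2f}")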

  4. Simulation study on heterogeneous variance adjustment for observations with different measurement error variance

    DEFF Research Database (Denmark)

    Pitkänen, Timo; Mäntysaari, Esa A; Nielsen, Ulrik Sander

    2013-01-01

    The Nordic Holstein yield evaluation model describes all available milk, protein and fat test-day yields from Denmark, Finland and Sweden. In its current form, all variance components are estimated from observations recorded under conventional milking systems; the model for heterogeneity of variance correction is developed for the same observations. As automated milking systems are becoming more popular, the current evaluation model needs to be enhanced to account for the different measurement error variances of observations from automated milking systems. In this simulation study, different models and different approaches to account for heterogeneous variance when observations have different measurement error variances were investigated. Based on the results, we propose to upgrade the currently applied models and to calibrate the heterogeneous variance adjustment method to yield the same genetic …

  5. Minimum variance rooting of phylogenetic trees and implications for species tree reconstruction.

    Science.gov (United States)

    Mai, Uyen; Sayyari, Erfan; Mirarab, Siavash

    2017-01-01

    Phylogenetic trees inferred using commonly-used models of sequence evolution are unrooted, but the root position matters both for interpretation and downstream applications. This issue has been long recognized; however, whether the potential for discordance between the species tree and gene trees impacts methods of rooting a phylogenetic tree has not been extensively studied. In this paper, we introduce a new method of rooting a tree based on its branch length distribution; our method, which minimizes the variance of root to tip distances, is inspired by the traditional midpoint rerooting and is justified when deviations from the strict molecular clock are random. Like midpoint rerooting, the method can be implemented in a linear time algorithm. In extensive simulations that consider discordance between gene trees and the species tree, we show that the new method is more accurate than midpoint rerooting, but its relative accuracy compared to using outgroups to root gene trees depends on the size of the dataset and levels of deviations from the strict clock. We show high levels of error for all methods of rooting estimated gene trees due to factors that include effects of gene tree discordance, deviations from the clock, and gene tree estimation error. Our simulations, however, did not reveal significant differences between two equivalent methods for species tree estimation that use rooted and unrooted input, namely, STAR and NJst. Nevertheless, our results point to limitations of existing scalable rooting methods.
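
    The criterion is easy to state: among candidate root positions, choose the one minimizing the variance of root-to-tip distances. A brute-force sketch on a toy weighted tree, evaluating only existing nodes as candidate roots (the published method also searches along edges and runs in linear time):

        import numpy as np
        from collections import defaultdict

        # Toy unrooted tree as a weighted edge list: (node, node, branch length).
        edges = [("A", "u", 1.0), ("B", "u", 1.2), ("u", "v", 0.5),
                 ("C", "v", 2.0), ("D", "v", 1.8)]
        adj = defaultdict(list)
        for a, b, w in edges:
            adj[a].append((b, w)); adj[b].append((a, w))
        tips = ["A", "B", "C", "D"]

        def tip_distances(root):
            """Root-to-tip path lengths by depth-first traversal."""
            dist, stack = {root: 0.0}, [root]
            while stack:
                node = stack.pop()
                for nb, w in adj[node]:
                    if nb not in dist:
                        dist[nb] = dist[node] + w
                        stack.append(nb)
            return [dist[t] for t in tips]

        best = min(adj, key=lambda n: np.var(tip_distances(n)))
        print("minimum-variance root:", best)   # 'v' for this toy tree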

  6. A COSMIC VARIANCE COOKBOOK

    International Nuclear Information System (INIS)

    Moster, Benjamin P.; Rix, Hans-Walter; Somerville, Rachel S.; Newman, Jeffrey A.

    2011-01-01

    Deep pencil beam surveys (<1 deg²) are of fundamental importance for studying the high-redshift universe. However, inferences about galaxy population properties (e.g., the abundance of objects) are in practice limited by 'cosmic variance'. This is the uncertainty in observational estimates of the number density of galaxies arising from the underlying large-scale density fluctuations. This source of uncertainty can be significant, especially for surveys which cover only small areas and for massive high-redshift galaxies. Cosmic variance for a given galaxy population can be determined using predictions from cold dark matter theory and the galaxy bias. In this paper, we provide tools for experiment design and interpretation. For a given survey geometry, we present the cosmic variance of dark matter as a function of mean redshift z̄ and redshift bin size Δz. Using a halo occupation model to predict galaxy clustering, we derive the galaxy bias as a function of mean redshift for galaxy samples of a given stellar mass range. In the linear regime, the cosmic variance of these galaxy samples is the product of the galaxy bias and the dark matter cosmic variance. We present a simple recipe using a fitting function to compute cosmic variance as a function of the angular dimensions of the field, z̄, Δz, and stellar mass m*. We also provide tabulated values and a software tool. The accuracy of the resulting cosmic variance estimates (δσv/σv) is shown to be better than 20%. We find that for GOODS at z̄ = 2 and with Δz = 0.5, the relative cosmic variance of galaxies with m* > 10^11 M_sun is ~38%, while it is ~27% for GEMS and ~12% for COSMOS. For galaxies of m* ~ 10^10 M_sun, the relative cosmic variance is ~19% for GOODS, ~13% for GEMS, and ~6% for COSMOS. This implies that cosmic variance is a significant source of uncertainty at z̄ = 2 for small fields and massive galaxies, while for larger fields and intermediate-mass galaxies, cosmic variance is less of a concern.

  7. MCNP variance reduction overview

    International Nuclear Information System (INIS)

    Hendricks, J.S.; Booth, T.E.

    1985-01-01

    The MCNP code is rich in variance reduction features. Standard variance reduction methods found in most Monte Carlo codes are available as well as a number of methods unique to MCNP. We discuss the variance reduction features presently in MCNP as well as new ones under study for possible inclusion in future versions of the code

  8. Comparison of bulk Micromegas with different amplification gaps

    Energy Technology Data Exchange (ETDEWEB)

    Bhattacharya, Purba, E-mail: purba.bhattacharya@saha.ac.in [Applied Nuclear Physics Division, Saha Institute of Nuclear Physics, Kolkata 700064 (India); Bhattacharya, Sudeb [Emeritus Scientist (CSIR), Applied Nuclear Physics Division, Saha Institute of Nuclear Physics, Kolkata 700064 (India); Majumdar, Nayana; Mukhopadhyay, Supratik; Sarkar, Sandip [Applied Nuclear Physics Division, Saha Institute of Nuclear Physics, Kolkata 700064 (India); Colas, Paul; Attie, David [DSM/IRFU, CEA/Saclay, F-91191 Gif-sur-Yvette CEDEX (France)

    2013-12-21

    The bulk Micromegas detector is considered to be a promising candidate for building TPCs for several future experiments including the projected linear collider. The standard bulk with a spacing of 128 μm has already established itself as a good choice for its performance in terms of gas gain uniformity, energy and space-point resolution, and its capability to efficiently pave large readout surfaces with minimum dead zone. The present work involves the comparison of this standard bulk with a relatively less used bulk Micromegas detector having a larger amplification gap of 192 μm. Detector gain, energy resolution and electron transparency of these Micromegas have been measured under different conditions in various Argon-based gas mixtures to evaluate their performance. These measured characteristics have also been compared in detail to numerical simulations using the Garfield framework that combines packages such as neBEM, Magboltz and Heed. Further, we have carried out another numerical study to determine the effect of dielectric spacers on different detector features. A comprehensive comparison of the two detectors has been presented and analyzed in this work. -- Highlights: •We present a comparative study between bulk Micromegas having different amplification gaps. •Various detector characteristics such as gain, electron transparency, energy resolution have been measured experimentally. •Successful comparisons of these measured data with the simulation results indicate that the device physics is quite well understood. •A numerical study to determine the effect of dielectric spacers on different detector features has been carried out.

  9. Spectral Ambiguity of Allan Variance

    Science.gov (United States)

    Greenhall, C. A.

    1996-01-01

    We study the extent to which knowledge of Allan variance and other finite-difference variances determines the spectrum of a random process. The variance of first differences is known to determine the spectrum. We show that, in general, the Allan variance does not. A complete description of the ambiguity is given.
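
    For reference, the non-overlapped Allan variance at averaging time τ = m·τ0 is half the mean squared difference of successive m-sample averages, σ_y²(τ) = ½⟨(ȳ_{k+1} − ȳ_k)²⟩. A numpy sketch on synthetic white frequency noise:

        import numpy as np

        def allan_variance(y, m):
            """Non-overlapped Allan variance of fractional-frequency data y
            at averaging factor m (tau = m * tau0)."""
            n = len(y) // m
            ybar = y[: n * m].reshape(n, m).mean(axis=1)  # successive m-averages
            return 0.5 * np.mean(np.diff(ybar) ** 2)

        rng = np.random.default_rng(6)
        y = rng.normal(0, 1e-11, size=100_000)            # white frequency noise
        for m in (1, 10, 100, 1000):
            print(m, allan_variance(y, m))                # falls off ~ 1/m for white FM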

  10. Bulk and shear viscosities of the gluon plasma in a quasiparticle description

    CERN Document Server

    Bluhm, M; Redlich, K

    2011-01-01

    Bulk and shear viscosities of deconfined gluonic matter are investigated within an effective kinetic theory by describing the strongly interacting medium phenomenologically in terms of quasiparticle excitations with medium-dependent self-energies. In this approach, local conservation of energy and momentum follows from a Boltzmann-Vlasov type kinetic equation and guarantees thermodynamic self-consistency. We show that the resulting transport coefficients reproduce the parametric dependencies on temperature and coupling obtained in perturbative QCD at large temperatures and small running coupling. The extrapolation into the non-perturbative regime results in a decreasing specific shear viscosity with decreasing temperature, exhibiting a minimum in the vicinity of the deconfinement transition temperature, while the specific bulk viscosity is sizeable in this region, falling off rapidly with increasing temperature. The temperature dependence of specific bulk and shear viscosities found within this quasiparticle description ...

  11. Cuspal Flexure and Extent of Cure of a Bulk-fill Flowable Base Composite.

    Science.gov (United States)

    Francis, A V; Braxton, A D; Ahmad, W; Tantbirojn, D; Simon, J F; Versluis, A

    2015-01-01

    To investigate a bulk-fill flowable base composite (Surefil SDR Flow) in terms of cuspal flexure and cure when used in incremental or bulk techniques. Mesio-occluso-distal cavities (4 mm deep, 4 mm wide) were prepared in 24 extracted molars. The slot-shaped cavities were etched, bonded, and restored in 1) two 2-mm increments Esthet-X HD (control), 2) two 2-mm increments Surefil SDR Flow, or 3) 4-mm bulk Surefil SDR Flow (N=8). The teeth were digitized after preparation (baseline) and restoration and were precisely aligned to calculate cuspal flexure. Restored teeth were placed in fuchsin dye for 16 hours to determine occlusal bond integrity from dye penetration. Extent of cure was assessed by hardness at 0.5-mm increments through the restoration depth. Results were analyzed with analysis of variance and Student-Newman-Keuls post hoc tests (α=0.05). Surefil SDR Flow, either incrementally or bulk filled, demonstrated significantly less cuspal flexure than Esthet-X HD. Dye penetration was less than 3% of cavity wall height and was not statistically different among groups. The hardness of Surefil SDR Flow did not change throughout the depth for both incrementally and bulk filled restorations; the hardness of Esthet-X HD was statistically significantly lower at the bottom of each increment than at the top. Filling in bulk or increments made no significant difference in marginal bond quality or cuspal flexure for the bulk-fill composite. However, the bulk-fill composite caused less cuspal flexure than the incrementally placed conventional composite. The bulk-fill composite cured all the way through (4 mm), whereas the conventional composite had lower cure at the bottom of each increment.

  12. Bounds on Minimum Energy per Bit for Optical Wireless Relay Channels

    Directory of Open Access Journals (Sweden)

    A. D. Raza

    2014-09-01

    Full Text Available An optical wireless relay channel (OWRC) is the classical three-node network consisting of source, relay and destination nodes with optical wireless connectivity. The channel law is assumed Gaussian. This paper studies the bounds on the minimum energy per bit required for reliable communication over an OWRC. It is shown that the capacity of an OWRC is concave and energy per bit is monotonically increasing in the square of the peak optical signal power, and consequently the minimum energy per bit is inversely proportional to the square root of the asymptotic capacity at low signal to noise ratio. This has been used to develop upper and lower bounds on energy per bit as a function of peak signal power, mean-to-peak power ratio, and variance of channel noise. The upper and lower bounds on minimum energy per bit derived in this paper correspond respectively to the decode-and-forward lower bound and the min-max cut upper bound on OWRC capacity

  13. Solution of the problem of the identified minimum for the tri-variate ...

    Indian Academy of Sciences (India)

    The tri-variate normal distribution for which the problem of the identified minimum is considered has zero means, distinct variances, and a non-singular covariance matrix Σ, where Σij = ρij σi σj for i ≠ j. …

  14. Reexamining financial and economic predictability with new estimators of realized variance and variance risk premium

    DEFF Research Database (Denmark)

    Casas, Isabel; Mao, Xiuping; Veiga, Helena

    This study explores the predictive power of new estimators of the equity variance risk premium and conditional variance for future excess stock market returns, economic activity, and financial instability, both during and after the last global financial crisis. These estimators are obtained from … Time-varying coefficient models are the ones showing considerably higher predictive power for stock market returns and financial instability during the financial crisis, suggesting that an extreme volatility period requires models that can adapt quickly to turmoil. … Moreover, a comparison of the overall results reveals that the conditional variance gains predictive power during the global financial crisis period. Furthermore, both the variance risk premium and conditional variance are determined to be predictors of future financial instability, whereas conditional …

  15. An empirical analysis of freight rate and vessel price volatility transmission in global dry bulk shipping market

    Directory of Open Access Journals (Sweden)

    Lei Dai

    2015-10-01

    Full Text Available Global dry bulk shipping market is an important element of global economy and trade. Since newbuilding and secondhand vessels are often traded as assets and the freight rate is the key determinant of vessel price, it is important for shipping market participants to understand the market dynamics and price transmission mechanism over time to make suitable strategic decisions. To address this issue, a multi-variate GARCH model was applied in this paper to explore the volatility spillover effects across the vessel markets (including newbuilding and secondhand vessel markets and freight market. Specifically, the BEKK parameterization of the multi-variate GARCH model (BEKK GARCH was proposed to capture the volatility transmission effect from the freight market, newbuilding and secondhand vessel markets in the global dry bulk shipping industry. Empirical results reveal that significant volatility transmission effects exist in each market sector, i.e. capesize, panamax, handymax and handysize. Besides, the market volatility transmission mechanism varies among different vessel types. Moreover, some bilateral effects are found in the dry bulk shipping market, showing that lagged variances could affect the current variance in a counterpart market, regardless of the volatility transmission. A simple ratio is proposed to guide investors optimizing their portfolio allocations. The findings in this paper could provide unique insights for investors to understand the market and hedge their portfolios well.

  16. A zero-variance-based scheme for variance reduction in Monte Carlo criticality

    Energy Technology Data Exchange (ETDEWEB)

    Christoforou, S.; Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands)

    2006-07-01

    A zero-variance scheme is derived and proven theoretically for criticality cases, and a simplified transport model is used for numerical demonstration. It is shown in practice that by appropriate biasing of the transition and collision kernels, a significant reduction in variance can be achieved. This is done using the adjoint forms of the emission and collision densities, obtained from a deterministic calculation, according to the zero-variance scheme. By using an appropriate algorithm, the figure of merit of the simulation increases by up to a factor of 50, with the possibility of an even larger improvement. In addition, it is shown that the biasing speeds up the convergence of the initial source distribution. (authors)
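
    The idea behind such biasing can be seen in the simplest setting: importance sampling a rare-event expectation so that a well-chosen biased kernel sharply reduces estimator variance (an ideal zero-variance scheme would make every history score the same). A self-contained sketch, unrelated to the authors' transport code:

        import numpy as np

        rng = np.random.default_rng(7)
        n = 100_000
        # Estimate P(X > 4) for X ~ N(0, 1): a rare event (~3.17e-5).
        analog = (rng.normal(0, 1, n) > 4).mean()

        # Importance sampling: draw from N(4, 1), weight by the density ratio
        # phi(x) / phi(x - 4) = exp(8 - 4x).
        x = rng.normal(4, 1, n)
        weights = np.exp(8.0 - 4.0 * x)
        biased = ((x > 4) * weights).mean()

        print(analog, biased)  # biased estimate is far more precise at equal n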

  17. A zero-variance-based scheme for variance reduction in Monte Carlo criticality

    International Nuclear Information System (INIS)

    Christoforou, S.; Hoogenboom, J. E.

    2006-01-01

    A zero-variance scheme is derived and proven theoretically for criticality cases, and a simplified transport model is used for numerical demonstration. It is shown in practice that by appropriate biasing of the transition and collision kernels, a significant reduction in variance can be achieved. This is done using the adjoint forms of the emission and collision densities, obtained from a deterministic calculation, according to the zero-variance scheme. By using an appropriate algorithm, the figure of merit of the simulation increases by up to a factor of 50, with the possibility of an even larger improvement. In addition, it is shown that the biasing speeds up the convergence of the initial source distribution. (authors)

  18. Correlations Between Magnetic Flux and Levitation Force of HTS Bulk Above a Permanent Magnet Guideway

    Science.gov (United States)

    Huang, Huan; Zheng, Jun; Zheng, Botian; Qian, Nan; Li, Haitao; Li, Jipeng; Deng, Zigang

    2017-10-01

    In order to clarify the correlations between magnetic flux and levitation force of a high-temperature superconducting (HTS) bulk, we measured the magnetic flux density on the bottom and top surfaces of a bulk superconductor while vertically moving it above a permanent magnet guideway (PMG). The levitation force of the bulk superconductor was measured simultaneously. In this study, the HTS bulk was moved down and up three times between the field-cooling position and the working position above the PMG, followed by a relaxation measurement of 300 s at the minimum height position. During the whole process, the magnetic flux density and levitation force of the bulk superconductor were recorded by a multipoint magnetic field measurement platform and a self-developed maglev measurement system, respectively. The magnetic flux density on the bottom surface reflected the induced field in the superconductor bulk, while that on the top revealed the penetrated magnetic flux. The results show that the magnetic flux density and levitation force of the bulk superconductor are directly correlated from the viewpoint of the inner supercurrent. In general, this work is instructive for understanding the connection between the magnetic flux density, the inner current density and the levitation behavior of HTS bulks employed in a maglev system. Meanwhile, this magnetic flux density measurement method enriches present experimental evaluation methods for maglev systems.

  19. Room 305/2 of the unit 4 of the Chernobyl ChNPP: its condition, evaluation of the fuel bulk

    International Nuclear Information System (INIS)

    Borovoj, A.A.; Pazukhin, Eh.M.; Lagunenko, A.S.

    1998-01-01

    The amount of spent nuclear fuel in room 305/2 of Unit 4 is considered. On the basis of direct observations, television and photographic surveys, chemical analyses of samples and measurements of the maximum exposure dose rate during drilling, a detailed model of the relative positions of the main elements of the former core has been developed. The minimum fuel bulk in room 305/2 has been evaluated.

  20. Joint Adaptive Mean-Variance Regularization and Variance Stabilization of High Dimensional Data.

    Science.gov (United States)

    Dazard, Jean-Eudes; Rao, J Sunil

    2012-07-01

    The paper addresses a common problem in the analysis of high-dimensional high-throughput "omics" data, which is parameter estimation across multiple variables in a set of data where the number of variables is much larger than the sample size. Among the problems posed by this type of data are that variable-specific estimators of variances are not reliable and variable-wise test statistics have low power, both due to a lack of degrees of freedom. In addition, it has been observed in this type of data that the variance increases as a function of the mean. We introduce a non-parametric adaptive regularization procedure that is innovative in that: (i) it employs a novel "similarity statistic"-based clustering technique to generate local-pooled or regularized shrinkage estimators of population parameters, (ii) the regularization is done jointly on population moments, benefiting from C. Stein's result on inadmissibility, which implies that the usual sample variance estimator is improved by a shrinkage estimator using information contained in the sample mean. From these joint regularized shrinkage estimators, we derived regularized t-like statistics and show in simulation studies that they offer more statistical power in hypothesis testing than their standard sample counterparts, or regular common value-shrinkage estimators, or when the information contained in the sample mean is simply ignored. Finally, we show that these estimators feature interesting properties of variance stabilization and normalization that can be used for preprocessing high-dimensional multivariate data. The method is available as an R package, called 'MVR' ('Mean-Variance Regularization'), downloadable from the CRAN website.
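
    The flavor of such regularization can be illustrated by shrinking each variable's sample variance toward a pooled value before forming a t-like statistic. This is a generic moderated statistic with a fixed shrinkage weight, not the MVR algorithm itself:

        import numpy as np

        rng = np.random.default_rng(8)
        p, n = 5_000, 6                      # many variables, tiny sample (p >> n)
        x = rng.normal(0, 1, size=(p, n))

        means = x.mean(axis=1)
        s2 = x.var(axis=1, ddof=1)
        s2_pooled = s2.mean()

        lam = 0.5                            # shrinkage weight (data-driven in practice)
        s2_shrunk = lam * s2_pooled + (1 - lam) * s2

        t_raw = means / np.sqrt(s2 / n)
        t_mod = means / np.sqrt(s2_shrunk / n)
        # Moderated statistics are less dispersed: fewer extreme values caused
        # by unstable per-variable variance estimates.
        print("sd of raw t:", t_raw.std(), " sd of moderated t:", t_mod.std())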

  1. Hybrid biasing approaches for global variance reduction

    International Nuclear Information System (INIS)

    Wu, Zeyun; Abdel-Khalik, Hany S.

    2013-01-01

    A new variant of Monte Carlo-deterministic (DT) hybrid variance reduction approach based on Gaussian process theory is presented for accelerating convergence of Monte Carlo simulation and compared with the Forward-Weighted Consistent Adjoint Driven Importance Sampling (FW-CADIS) approach implemented in the SCALE package from Oak Ridge National Laboratory. The new approach, denoted the Gaussian process approach, treats the responses of interest as normally distributed random processes. The Gaussian process approach improves the selection of the weight windows of simulated particles by identifying a subspace that captures the dominant sources of statistical response variations. Like the FW-CADIS approach, the Gaussian process approach utilizes particle importance maps obtained from deterministic adjoint models to derive weight window biasing. In contrast to the FW-CADIS approach, the Gaussian process approach identifies the response correlations (via a covariance matrix) and employs them to reduce the computational overhead required for global variance reduction (GVR) purposes. The effective rank of the covariance matrix identifies the minimum number of uncorrelated pseudo responses, which are employed to bias simulated particles. Numerical experiments, serving as a proof of principle, are presented to compare the Gaussian process and FW-CADIS approaches in terms of the global reduction in standard deviation of the estimated responses. - Highlights: ► Hybrid Monte Carlo-deterministic method based on a Gaussian process model is introduced. ► Method employs deterministic model to calculate response correlations. ► Method employs correlations to bias Monte Carlo transport. ► Method compared to FW-CADIS methodology in SCALE code. ► An order of magnitude speed-up is achieved for a PWR core model.

  2. Portfolio optimization using median-variance approach

    Science.gov (United States)

    Wan Mohd, Wan Rosanisah; Mohamad, Daud; Mohamed, Zulkifli

    2013-04-01

    Optimization models have been applied in many decision-making problems, particularly in portfolio selection. Since the introduction of Markowitz's theory of portfolio selection, various approaches based on mathematical programming have been introduced, such as mean-variance, mean-absolute deviation, mean-variance-skewness and conditional value-at-risk (CVaR), mainly to maximize return and minimize risk. However, most of these approaches assume that the distribution of data is normal, which is not generally true. As an alternative, in this paper, we employ the median-variance approach to improve portfolio optimization. This approach successfully caters for both normal and non-normal distributions of data. With this representation, we analyze and compare the rate of return and risk between the mean-variance and the median-variance based portfolios, consisting of 30 stocks from Bursa Malaysia. The results in this study show that the median-variance approach is capable of producing a lower risk for each return earned as compared to the mean-variance approach.

  3. Efficient Cardinality/Mean-Variance Portfolios

    OpenAIRE

    Brito, R. Pedro; Vicente, Luís Nunes

    2014-01-01

    International audience; We propose a novel approach to handle cardinality in portfolio selection, by means of a biobjective cardinality/mean-variance problem, allowing the investor to analyze the efficient tradeoff between return-risk and number of active positions. Recent progress in multiobjective optimization without derivatives allow us to robustly compute (in-sample) the whole cardinality/mean-variance efficient frontier, for a variety of data sets and mean-variance models. Our results s...

  4. Real-time speckle variance swept-source optical coherence tomography using a graphics processing unit.

    Science.gov (United States)

    Lee, Kenneth K C; Mariampillai, Adrian; Yu, Joe X Z; Cadotte, David W; Wilson, Brian C; Standish, Beau A; Yang, Victor X D

    2012-07-01

    Advances in swept source laser technology continue to increase the imaging speed of swept-source optical coherence tomography (SS-OCT) systems. These fast imaging speeds are ideal for microvascular detection schemes, such as speckle variance (SV), where interframe motion can cause severe imaging artifacts and loss of vascular contrast. However, full utilization of the laser scan speed has been hindered by the computationally intensive signal processing required by SS-OCT and SV calculations. Using a commercial graphics processing unit that has been optimized for parallel data processing, we report a complete high-speed SS-OCT platform capable of real-time data acquisition, processing, display, and saving at 108,000 lines per second. Subpixel image registration of structural images was performed in real-time prior to SV calculations in order to reduce decorrelation from stationary structures induced by the bulk tissue motion. The viability of the system was successfully demonstrated in a high bulk tissue motion scenario of human fingernail root imaging where SV images (512 × 512 pixels, n = 4) were displayed at 54 frames per second.
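
    The speckle-variance contrast itself is a per-pixel interframe variance over N structural frames; a minimal sketch on random stand-in data (acquisition and the real-time registration step are omitted):

        import numpy as np

        rng = np.random.default_rng(9)
        N, H, W = 4, 512, 512
        frames = rng.random((N, H, W)).astype(np.float32)  # stand-in for N B-scans

        # Per-pixel variance across the N frames: flowing blood decorrelates
        # between frames and shows high variance, static tissue low variance.
        sv = frames.var(axis=0)
        print(sv.shape, float(sv.mean()))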

  5. Approximation errors during variance propagation

    International Nuclear Information System (INIS)

    Dinsmore, Stephen

    1986-01-01

    Risk and reliability analyses are often performed by constructing and quantifying large fault trees. The inputs to these models are component failure events whose probabilities of occurrence are best represented as random variables. This paper examines the errors inherent in two approximation techniques used to calculate the top event's variance from the inputs' variances. Two sample fault trees are evaluated and several three-dimensional plots illustrating the magnitude of the error over a wide range of input means and variances are given
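
    The kind of approximation whose error such a study examines is first-order (delta-method) variance propagation. A sketch for a two-event OR gate, with a Monte Carlo reference; the means and variances are illustrative:

        import numpy as np

        # Top event: OR of two independent basic events, f = 1 - (1-p1)(1-p2).
        mu = np.array([0.10, 0.05])           # input means
        var = np.array([0.02, 0.01]) ** 2     # input variances

        # First-order propagation: Var(f) ~ sum (df/dp_i)^2 Var(p_i),
        # with df/dp1 = 1 - p2 and df/dp2 = 1 - p1 evaluated at the means.
        grad = np.array([1 - mu[1], 1 - mu[0]])
        var_first_order = np.sum(grad**2 * var)

        # Monte Carlo reference.
        rng = np.random.default_rng(10)
        p = rng.normal(mu, np.sqrt(var), size=(1_000_000, 2))
        top = 1 - (1 - p[:, 0]) * (1 - p[:, 1])
        print(var_first_order, top.var())     # close, since higher-order terms are small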

  6. The phenotypic variance gradient - a novel concept.

    Science.gov (United States)

    Pertoldi, Cino; Bundgaard, Jørgen; Loeschcke, Volker; Barker, James Stuart Flinton

    2014-11-01

    Evolutionary ecologists commonly use reaction norms, which show the range of phenotypes produced by a set of genotypes exposed to different environments, to quantify the degree of phenotypic variance and the magnitude of plasticity of morphometric and life-history traits. Significant differences among the values of the slopes of the reaction norms are interpreted as significant differences in phenotypic plasticity, whereas significant differences among phenotypic variances (variance or coefficient of variation) are interpreted as differences in the degree of developmental instability or canalization. We highlight some potential problems with this approach to quantifying phenotypic variance and suggest a novel and more informative way to plot reaction norms: namely "a plot of log (variance) on the y-axis versus log (mean) on the x-axis, with a reference line added". This approach gives an immediate impression of how the degree of phenotypic variance varies across an environmental gradient, taking into account the consequences of the scaling effect of the variance with the mean. The evolutionary implications of the variation in the degree of phenotypic variance, which we call a "phenotypic variance gradient", are discussed together with its potential interactions with variation in the degree of phenotypic plasticity and canalization.

  7. Evolution of Genetic Variance during Adaptive Radiation.

    Science.gov (United States)

    Walter, Greg M; Aguirre, J David; Blows, Mark W; Ortiz-Barrientos, Daniel

    2018-04-01

    Genetic correlations between traits can concentrate genetic variance into fewer phenotypic dimensions that can bias evolutionary trajectories along the axis of greatest genetic variance and away from optimal phenotypes, constraining the rate of evolution. If genetic correlations limit adaptation, rapid adaptive divergence between multiple contrasting environments may be difficult. However, if natural selection increases the frequency of rare alleles after colonization of new environments, an increase in genetic variance in the direction of selection can accelerate adaptive divergence. Here, we explored adaptive divergence of an Australian native wildflower by examining the alignment between divergence in phenotype mean and divergence in genetic variance among four contrasting ecotypes. We found divergence in mean multivariate phenotype along two major axes represented by different combinations of plant architecture and leaf traits. Ecotypes also showed divergence in the level of genetic variance in individual traits and the multivariate distribution of genetic variance among traits. Divergence in multivariate phenotypic mean aligned with divergence in genetic variance, with much of the divergence in phenotype among ecotypes associated with changes in trait combinations containing substantial levels of genetic variance. Overall, our results suggest that natural selection can alter the distribution of genetic variance underlying phenotypic traits, increasing the amount of genetic variance in the direction of natural selection and potentially facilitating rapid adaptive divergence during an adaptive radiation.

  8. Confidence Interval Approximation For Treatment Variance In ...

    African Journals Online (AJOL)

    In a random effects model with a single factor, variation is partitioned into two as residual error variance and treatment variance. While a confidence interval can be imposed on the residual error variance, it is not possible to construct an exact confidence interval for the treatment variance. This is because the treatment ...

  9. Validation of consistency of Mendelian sampling variance.

    Science.gov (United States)

    Tyrisevä, A-M; Fikse, W F; Mäntysaari, E A; Jakobsen, J; Aamand, G P; Dürr, J; Lidauer, M H

    2018-03-01

    Experiences from international sire evaluation indicate that the multiple-trait across-country evaluation method is sensitive to changes in genetic variance over time. Top bulls from birth year classes with inflated genetic variance will benefit, hampering reliable ranking of bulls. However, none of the methods available today enable countries to validate their national evaluation models for heterogeneity of genetic variance. We describe a new validation method to fill this gap comprising the following steps: estimating within-year genetic variances using Mendelian sampling and its prediction error variance, fitting a weighted linear regression between the estimates and the years under study, identifying possible outliers, and defining a 95% empirical confidence interval for a possible trend in the estimates. We tested the specificity and sensitivity of the proposed validation method with simulated data using a real data structure. Moderate (M) and small (S) size populations were simulated under 3 scenarios: a control with homogeneous variance and 2 scenarios with yearly increases in phenotypic variance of 2 and 10%, respectively. Results showed that the new method was able to estimate genetic variance accurately enough to detect bias in genetic variance. Under the control scenario, the trend in genetic variance was practically zero in setting M. Testing cows with an average birth year class size of more than 43,000 in setting M showed that tolerance values are needed for both the trend and the outlier tests to detect only cases with a practical effect in larger data sets. Regardless of the magnitude (yearly increases in phenotypic variance of 2 or 10%) of the generated trend, it deviated statistically significantly from zero in all data replicates for both cows and bulls in setting M. In setting S with a mean of 27 bulls in a year class, the sampling error and thus the probability of a false-positive result clearly increased. Still, overall estimated genetic

  10. Least-squares variance component estimation

    NARCIS (Netherlands)

    Teunissen, P.J.G.; Amiri-Simkooei, A.R.

    2007-01-01

    Least-squares variance component estimation (LS-VCE) is a simple, flexible and attractive method for the estimation of unknown variance and covariance components. LS-VCE is simple because it is based on the well-known principle of LS; it is flexible because it works with a user-defined weight matrix …

  11. Robust Sequential Covariance Intersection Fusion Kalman Filtering over Multi-agent Sensor Networks with Measurement Delays and Uncertain Noise Variances

    Institute of Scientific and Technical Information of China (English)

    QI Wen-Juan; ZHANG Peng; DENG Zi-Li

    2014-01-01

    This paper deals with the problem of designing a robust sequential covariance intersection (SCI) fusion Kalman filter for clustering multi-agent sensor network systems with measurement delays and uncertain noise variances. The sensor network is partitioned into clusters by the nearest-neighbor rule. Using the minimax robust estimation principle, based on the worst-case conservative sensor network system with conservative upper bounds of noise variances, and applying the unbiased linear minimum variance (ULMV) optimal estimation rule, we present a two-layer SCI fusion robust steady-state Kalman filter which can reduce communication and computation burdens, save energy, and guarantee that the actual filtering error variances have a less-conservative upper bound. A Lyapunov equation method for robustness analysis is proposed, by which the robustness of the local and fused Kalman filters is proved. The concept of robust accuracy is presented and the robust accuracy relations of the local and fused robust Kalman filters are proved. It is proved that the robust accuracy of the global SCI fuser is higher than those of the local SCI fusers and that the robust accuracies of all SCI fusers are higher than that of each local robust Kalman filter. A simulation example for a tracking system verifies the robustness and the robust accuracy relations.
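
    The covariance intersection rule at the heart of such fusers combines two estimates with unknown cross-correlation through a convex combination of information matrices, P⁻¹ = ω P1⁻¹ + (1-ω) P2⁻¹, with ω chosen to minimize, e.g., trace(P). A two-sensor numpy sketch of one fusion step (a sequential SCI fuser applies this pairwise); the grid search over ω is a simple stand-in for a proper 1-D optimizer:

        import numpy as np

        def ci_fuse(x1, P1, x2, P2, n_grid=101):
            """Covariance intersection of two estimates with unknown correlation."""
            best = None
            for w in np.linspace(0.0, 1.0, n_grid):
                info = w * np.linalg.inv(P1) + (1 - w) * np.linalg.inv(P2)
                P = np.linalg.inv(info)
                if best is None or np.trace(P) < np.trace(best[1]):
                    x = P @ (w * np.linalg.solve(P1, x1)
                             + (1 - w) * np.linalg.solve(P2, x2))
                    best = (x, P)
            return best

        x1, P1 = np.array([1.0, 0.0]), np.diag([1.0, 4.0])
        x2, P2 = np.array([1.2, 0.1]), np.diag([4.0, 1.0])
        x, P = ci_fuse(x1, P1, x2, P2)
        print(x, np.trace(P))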

  12. Genetic variants influencing phenotypic variance heterogeneity.

    Science.gov (United States)

    Ek, Weronica E; Rask-Andersen, Mathias; Karlsson, Torgny; Enroth, Stefan; Gyllensten, Ulf; Johansson, Åsa

    2018-03-01

    Most genetic studies identify genetic variants associated with disease risk or with the mean value of a quantitative trait. More rarely, genetic variants associated with variance heterogeneity are considered. In this study, we have identified such variance single-nucleotide polymorphisms (vSNPs) and examined if these represent biological gene × gene or gene × environment interactions or statistical artifacts caused by multiple linked genetic variants influencing the same phenotype. We have performed a genome-wide study, to identify vSNPs associated with variance heterogeneity in DNA methylation levels. Genotype data from over 10 million single-nucleotide polymorphisms (SNPs), and DNA methylation levels at over 430 000 CpG sites, were analyzed in 729 individuals. We identified vSNPs for 7195 CpG sites (P mean DNA methylation levels. We further showed that variance heterogeneity between genotypes mainly represents additional, often rare, SNPs in linkage disequilibrium (LD) with the respective vSNP and for some vSNPs, multiple low frequency variants co-segregating with one of the vSNP alleles. Therefore, our results suggest that variance heterogeneity of DNA methylation mainly represents phenotypic effects by multiple SNPs, rather than biological interactions. Such effects may also be important for interpreting variance heterogeneity of more complex clinical phenotypes.

  13. Characterization of phosphorus species in sediments from the Arabian Sea oxygen minimum zone: Combining sequential extractions and X-ray spectroscopy

    NARCIS (Netherlands)

    Kraal, Peter; Bostick, Benjamin C.; Behrends, Thilo; Reichart, Gert-Jan; Slomp, Caroline P.

    2015-01-01

    The bulk phosphorus (P) distribution in sediment samples from the oxygen minimum zone of the northern Arabian Sea was determined using two methods: sequential chemical extraction (the ‘SEDEX’ procedure) and X-ray absorption near-edge structure (XANES) spectroscopy of the phosphorus K-edge. Our

  14. MMSE-based algorithm for joint signal detection, channel and noise variance estimation for OFDM systems

    CERN Document Server

    Savaux, Vincent

    2014-01-01

    This book presents an algorithm for the detection of an orthogonal frequency division multiplexing (OFDM) signal in a cognitive radio context by means of a joint and iterative channel and noise estimation technique. Based on the minimum mean square error criterion, it performs an accurate detection of a user in a frequency band, by achieving a quasi-optimal channel and noise variance estimation if the signal is present, and by estimating the noise level in the band if the signal is absent. The book is organized into three chapters; the first provides the background against which the system model is presented …

  15. Raman spectroscopic assessment of degree of conversion of bulk-fill resin composites--changes at 24 hours post cure.

    Science.gov (United States)

    Par, M; Gamulin, O; Marovic, D; Klaric, E; Tarle, Z

    2015-01-01

    The aim of this study was to determine degree of conversion (DC) of solid and flowable bulk-fill composites immediately and after 24 hours and investigate the variations of DC at surface and depths up to 4 mm. Eight bulk-fill composites (Tetric EvoCeram Bulk Fill [shades IVA and IVB], Quixfil, X-tra fil, Venus Bulk Fill, X-tra Base, SDR, Filtek Bulk Fill) were investigated, and two conventional composites (GrandioSO, X-Flow) were used as controls. The samples (n = 5) were cured for 20 seconds with irradiance of 1090 mW/cm(2). Raman spectroscopic measurements were made immediately after curing on sample surfaces and after 24 hours of dark storage at surface and at incremental depths up to 4 mm. Mean DC values were compared using repeated measures analysis of variance (ANOVA) and t-test for dependent samples. Surface DC values immediately after curing ranged from 59.1%-71.8%, while the 24-hour postcure values ranged from 71.3%-86.1%. A significant increase of DC was observed 24 hours post cure for all bulk-fill composites, which amounted from 11.3% to 16.9%. Decrease of DC through depths up to 4 mm varied widely among bulk-fill composites and ranged from 2.9% to 19.7%. All bulk-fill composites presented a considerable 24-hour postcure DC increase and clinically acceptable DC at depths up to 4 mm. Conventional control composites were sufficiently cured only up to 2 mm, despite significant postcure polymerization.

  16. Composting of cow dung and crop residues using termite mounds as bulking agent.

    Science.gov (United States)

    Karak, Tanmoy; Sonar, Indira; Paul, Ranjit K; Das, Sampa; Boruah, R K; Dutta, Amrit K; Das, Dilip K

    2014-10-01

    The present study reports the suitability of termite mounds as a bulking agent for composting with crop residues and cow dung in pit method. Use of 50 kg termite mound with the crop residues (stover of ground nut: 361.65 kg; soybean: 354.59 kg; potato: 357.67 kg and mustard: 373.19 kg) and cow dung (84.90 kg) formed a good quality compost within 70 days of composting having nitrogen, phosphorus and potassium as 20.19, 3.78 and 32.77 g kg(-1) respectively with a bulk density of 0.85 g cm(-3). Other physico-chemical and germination parameters of the compost were within Indian standard, which had been confirmed by the application of multivariate analysis of variance and multivariate contrast analysis. Principal component analysis was applied in order to gain insight into the characteristic variables. Four composting treatments formed two different groups when hierarchical cluster analysis was applied. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. Efficiency of polymerization of bulk-fill composite resins: a systematic review.

    Science.gov (United States)

    Reis, André Figueiredo; Vestphal, Mariana; Amaral, Roberto Cesar do; Rodrigues, José Augusto; Roulet, Jean-François; Roscoe, Marina Guimarães

    2017-08-28

    This systematic review assessed the literature to evaluate the efficiency of polymerization of bulk-fill composite resins at 4 mm restoration depth. PubMed, Cochrane, Scopus and Web of Science databases were searched with no restrictions on year, publication status, or article language. Selection criteria included studies that evaluated bulk-fill composite resin inserted in a minimum thickness of 4 mm, followed by curing according to the manufacturers' instructions; presented sound statistical data; and included a comparison with a control group and/or a reference measurement of quality of polymerization. The evidence level was evaluated by a qualitative scoring system and classified as high, moderate or low. A total of 534 articles were retrieved in the initial search. After the review process, only 10 full-text articles met the inclusion criteria. Most articles included (80%) were classified as high evidence level. Among several techniques, microhardness was the method most frequently performed by the studies included in this systematic review. Irrespective of the in vitro method performed, bulk-fill RBCs were generally likely to fulfill the important requirement of curing properly at 4 mm cavity depth, as measured by depth of cure and/or degree of conversion. In general, low-viscosity bulk-fill composites performed better in polymerization efficiency than high-viscosity ones.

  18. Nanoindentation creep versus bulk compressive creep of dental resin-composites.

    Science.gov (United States)

    El-Safty, S; Silikas, N; Akhtar, R; Watts, D C

    2012-11-01

    To evaluate nanoindentation as an experimental tool for characterizing the viscoelastic time-dependent creep of resin-composites and to compare the resulting parameters with those obtained by bulk compressive creep. Ten dental resin-composites (five conventional, three bulk-fill and two flowable) were investigated using both nanoindentation creep and bulk compressive creep methods. For nano creep, disc specimens (15 mm × 2 mm) were prepared from each material by first injecting the resin-composite paste into metallic molds. Specimens were irradiated from the top and bottom surfaces in multiple overlapping points to ensure optimal polymerization, using a visible light curing unit with an output irradiance of 650 mW/cm². Specimens were then mounted in 3 cm diameter phenolic ring forms and embedded in a self-curing polystyrene resin. Following grinding and polishing, specimens were stored in distilled water at 37°C for 24 h. Using an Agilent Technologies XP nanoindenter equipped with a Berkovich diamond tip (100 nm radius), the nano creep was measured at a maximum load of 10 mN and the creep recovery was determined when each specimen was unloaded to 1 mN. For bulk compressive creep, stainless steel split molds (4 mm × 6 mm) were used to prepare cylindrical specimens, which were thoroughly irradiated at 650 mW/cm² from multiple directions and stored in distilled water at 37°C for 24 h. Specimens were loaded (20 MPa) for 2 h and unloaded for 2 h. One-way ANOVA, Levene's test for homogeneity of variance and the Bonferroni post hoc test (all at p≤0.05), plus regression plots, were used for statistical analysis. Depending on the type of resin-composite material and the loading/unloading parameters, nanoindentation creep ranged from 29.58 nm to 90.99 nm and permanent set ranged from 8.96 nm to 30.65 nm. Bulk compressive creep ranged from 0.47% to 1.24% and permanent set ranged from 0.09% to 0.38%. There was a significant (p=0.001) strong positive non-linear correlation (r²=0.97) between bulk

  19. Speed Variance and Its Influence on Accidents.

    Science.gov (United States)

    Garber, Nicholas J.; Gadirau, Ravi

    A study was conducted to investigate the traffic engineering factors that influence speed variance and to determine to what extent speed variance affects accident rates. Detailed analyses were carried out to relate speed variance with posted speed limit, design speeds, and other traffic variables. The major factor identified was the difference…

  20. Bulk oil clauses

    International Nuclear Information System (INIS)

    Gough, N.

    1993-01-01

    The Institute Bulk Oil Clauses produced by the London market and the American SP-13c Clauses are examined in detail in this article. The duration and perils covered are discussed, and exclusions, adjustment clause 15 of the Institute Bulk Oil Clauses, Institute War Clauses (Cargo), and Institute Strikes Clauses (Bulk Oil) are outlined. (UK)

  1. Electrochemical corrosion behavior of carbon steel with bulk coating holidays

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    With epoxy coal tar as the coating material, the electrochemical corrosion behavior of Q235 carbon steel with different kinds of bulk coating holidays was investigated by EIS (Electrochemical Impedance Spectroscopy) in a 3.5 vol% NaCl aqueous solution. The area ratio of bulk coating holiday to total coating area of the steel was 4.91%. The experimental results showed that, at the free corrosion potential, corrosion of carbon steel with a disbonded coating holiday grows more severe with time than with a broken holiday or a disbonded-and-broken holiday. Moreover, at a CP potential of -850 mV (vs CSE), cathodic protection (CP) of carbon steel with a broken holiday is more effective than with a disbonded holiday or a disbonded-and-broken holiday. Further analysis indicated two main causes of corrosion: electrolyte solution slowly penetrating the coating, and crevice corrosion at the steel/coating interface near holidays. The ratio of the impedance amplitude (Z) at a given frequency to that at the minimum frequency is defined as the K value; the rate of change of K with frequency is related to the type of coating holiday.
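    A minimal sketch of the K value defined above, assuming the impedance spectrum is available as arrays of frequencies and complex impedances (the names and data are hypothetical):

    ```python
    import numpy as np

    def k_value(freqs, Z):
        """K value: impedance magnitude |Z| at each frequency, normalized by
        |Z| at the minimum measured frequency."""
        Zmag = np.abs(np.asarray(Z))
        return Zmag / Zmag[np.argmin(freqs)]
    ```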

  2. MINIMUM VARIANCE BETA ESTIMATION WITH DYNAMIC CONSTRAINTS,

    Science.gov (United States)

    developed (at AFETR) and is being used to isolate the primary error sources in the beta estimation task. This computer program is additionally used to ... determine what success in beta estimation can be achieved with foreseeable instrumentation accuracies. Results are included that illustrate the effects on

  3. Volatility and variance swaps : A comparison of quantitative models to calculate the fair volatility and variance strike

    OpenAIRE

    Röring, Johan

    2017-01-01

    Volatility is a common risk measure in the field of finance that describes the magnitude of an asset’s up and down movement. From only being a risk measure, volatility has become an asset class of its own and volatility derivatives enable traders to get an isolated exposure to an asset’s volatility. Two kinds of volatility derivatives are volatility swaps and variance swaps. The problem with volatility swaps and variance swaps is that they require estimations of the future variance and volati...

  4. Bulk-Fill Resin Composites

    DEFF Research Database (Denmark)

    Benetti, Ana Raquel; Havndrup-Pedersen, Cæcilie; Honoré, Daniel

    2015-01-01

    the restorative procedure. The aim of this study, therefore, was to compare the depth of cure, polymerization contraction, and gap formation in bulk-fill resin composites with those of a conventional resin composite. To achieve this, the depth of cure was assessed in accordance with the International Organization...... for Standardization 4049 standard, and the polymerization contraction was determined using the bonded-disc method. The gap formation was measured at the dentin margin of Class II cavities. Five bulk-fill resin composites were investigated: two high-viscosity (Tetric EvoCeram Bulk Fill, SonicFill) and three low......-viscosity (x-tra base, Venus Bulk Fill, SDR) materials. Compared with the conventional resin composite, the high-viscosity bulk-fill materials exhibited only a small increase (but significant for Tetric EvoCeram Bulk Fill) in depth of cure and polymerization contraction, whereas the low-viscosity bulk...

  5. Local variances in biomonitoring

    International Nuclear Information System (INIS)

    Wolterbeek, H.T.

    1999-01-01

    The present study deals with the (larger-scaled) biomonitoring survey and specifically focuses on the sampling site. In most surveys, the sampling site is simply selected or defined as a spot of (geographical) dimensions which is small relative to the dimensions of the total survey area. Implicitly it is assumed that the sampling site is essentially homogeneous with respect to the investigated variation in survey parameters. As such, the sampling site is mostly regarded as 'the basic unit' of the survey. As a logical consequence, the local (sampling site) variance should also be seen as a basic and important characteristic of the survey. During the study, work is carried out to gain more knowledge of the local variance. Multiple sampling is carried out at a specific site (tree bark, mosses, soils), multi-elemental analyses are carried out by NAA, and local variances are investigated by conventional statistics, factor analytical techniques, and bootstrapping. Consequences of the outcomes are discussed in the context of sampling, sample handling and survey quality. (author)

  6. Main Parameters Characterization of Bulk CMOS Cross-Like Hall Structures

    Directory of Open Access Journals (Sweden)

    Maria-Alexandra Paun

    2016-01-01

    Full Text Available A detailed analysis of the cross-like Hall cells integrated in regular bulk CMOS technological process is performed. To this purpose their main parameters have been evaluated. A three-dimensional physical model was employed in order to evaluate the structures. On this occasion, numerical information on the input resistance, Hall voltage, conduction current, and electrical potential distribution has been obtained. Experimental results for the absolute sensitivity, offset, and offset temperature drift have also been provided. A quadratic behavior of the residual offset with the temperature was obtained and the temperature points leading to the minimum offset for the three Hall cells were identified.

  7. Dynamic Mean-Variance Asset Allocation

    OpenAIRE

    Basak, Suleyman; Chabakauri, Georgy

    2009-01-01

    Mean-variance criteria remain prevalent in multi-period problems, and yet not much is known about their dynamically optimal policies. We provide a fully analytical characterization of the optimal dynamic mean-variance portfolios within a general incomplete-market economy, and recover a simple structure that also inherits several conventional properties of static models. We also identify a probability measure that incorporates intertemporal hedging demands and facilitates much tractability in ...

  8. The Variance Composition of Firm Growth Rates

    Directory of Open Access Journals (Sweden)

    Luiz Artur Ledur Brito

    2009-04-01

    Full Text Available Firms exhibit a wide variability in growth rates. This can be seen as another manifestation of the fact that firms are different from one another in several respects. This study investigated this variability using the variance components technique previously used to decompose the variance of financial performance. The main source of variation in growth rates, responsible for more than 40% of total variance, corresponds to individual, idiosyncratic firm aspects and not to industry, country, or macroeconomic conditions prevailing in specific years. Firm growth, similar to financial performance, is mostly unique to specific firms and not an industry or country related phenomenon. This finding also justifies using growth as an alternative outcome of superior firm resources and as a complementary dimension of competitive advantage. This also links this research with the resource-based view of strategy. Country was the second source of variation with around 10% of total variance. The analysis was done using the Compustat Global database with 80,320 observations, comprising 13,221 companies in 47 countries, covering the years of 1994 to 2002. It also compared the variance structure of growth to the variance structure of financial performance in the same sample.

  9. MSEBAG: a dynamic classifier ensemble generation based on `minimum-sufficient ensemble' and bagging

    Science.gov (United States)

    Chen, Lei; Kamel, Mohamed S.

    2016-01-01

    In this paper, we propose a dynamic classifier system, MSEBAG, which is characterised by searching for the 'minimum-sufficient ensemble' and bagging at the ensemble level. It adopts an 'over-generation and selection' strategy and aims to achieve a good bias-variance trade-off. In the training phase, MSEBAG first searches for the 'minimum-sufficient ensemble', which maximises the in-sample fitness with the minimal number of base classifiers. Then, starting from the 'minimum-sufficient ensemble', a backward stepwise algorithm is employed to generate a collection of ensembles. The objective is to create a collection of ensembles with a descending fitness on the data, as well as a descending complexity in the structure. MSEBAG dynamically selects the ensembles from the collection for the decision aggregation. The extended adaptive aggregation (EAA) approach, a bagging-style algorithm performed at the ensemble level, is employed for this task. EAA searches for the competent ensembles using a score function, which takes into consideration both the in-sample fitness and the confidence of the statistical inference, and averages the decisions of the selected ensembles to label the test pattern. The experimental results show that the proposed MSEBAG outperforms the benchmarks on average.
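    A loose sketch of the 'over-generation and selection' idea is given below: train a pool of base classifiers, then build a nested collection of ensembles by backward stepwise removal of the member whose removal costs the least in-sample fitness. It assumes scikit-learn, numpy arrays and binary labels in {0, 1}, and is a simplification, not the MSEBAG algorithm itself.

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def nested_ensembles(X, y, n_base=20):
        """Over-generate a pool of bagged trees, then shrink it one member at a
        time, keeping the removal that best preserves in-sample accuracy."""
        pool = []
        for seed in range(n_base):
            idx = np.random.default_rng(seed).integers(0, len(X), len(X))
            pool.append(DecisionTreeClassifier(random_state=seed).fit(X[idx], y[idx]))

        def fitness(members):
            votes = np.mean([m.predict(X) for m in members], axis=0) >= 0.5
            return np.mean(votes == y)

        collection, members = [list(pool)], list(pool)
        while len(members) > 1:
            scores = [fitness(members[:i] + members[i + 1:]) for i in range(len(members))]
            best = int(np.argmax(scores))
            members = members[:best] + members[best + 1:]
            collection.append(list(members))
        return collection
    ```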

  10. Estimating the encounter rate variance in distance sampling

    Science.gov (United States)

    Fewster, R.M.; Buckland, S.T.; Burnham, K.P.; Borchers, D.L.; Jupp, P.E.; Laake, J.L.; Thomas, L.

    2009-01-01

    The dominant source of variance in line transect sampling is usually the encounter rate variance. Systematic survey designs are often used to reduce the true variability among different realizations of the design, but estimating the variance is difficult and estimators typically approximate the variance by treating the design as a simple random sample of lines. We explore the properties of different encounter rate variance estimators under random and systematic designs. We show that a design-based variance estimator improves upon the model-based estimator of Buckland et al. (2001, Introduction to Distance Sampling. Oxford: Oxford University Press, p. 79) when transects are positioned at random. However, if populations exhibit strong spatial trends, both estimators can have substantial positive bias under systematic designs. We show that poststratification is effective in reducing this bias. ?? 2008, The International Biometric Society.
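    A sketch of a design-based encounter rate variance estimator of the kind discussed here, treating the transect lines as the sampling units (the usual 'R2'-type formula; variable names and data are hypothetical):

    ```python
    import numpy as np

    def encounter_rate_variance(n_k, l_k):
        """Design-based estimate of var(n/L) with lines as sampling units:
        n_k = detections on line k, l_k = length of line k."""
        n_k, l_k = np.asarray(n_k, float), np.asarray(l_k, float)
        K, L, n = len(n_k), l_k.sum(), n_k.sum()
        er = n / L  # overall encounter rate
        return K / (L**2 * (K - 1)) * np.sum(l_k**2 * (n_k / l_k - er)**2)
    ```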

  11. Towards the ultimate variance-conserving convection scheme

    International Nuclear Information System (INIS)

    Os, J.J.A.M. van; Uittenbogaard, R.E.

    2004-01-01

    In the past various arguments have been used for applying kinetic energy-conserving advection schemes in numerical simulations of incompressible fluid flows. One argument is obeying the programmed dissipation by viscous stresses or by sub-grid stresses in Direct Numerical Simulation and Large Eddy Simulation, see e.g. [Phys. Fluids A 3 (7) (1991) 1766]. Another argument is that, according to e.g. [J. Comput. Phys. 6 (1970) 392; 1 (1966) 119], energy-conserving convection schemes are more stable i.e. by prohibiting a spurious blow-up of volume-integrated energy in a closed volume without external energy sources. In the above-mentioned references it is stated that nonlinear instability is due to spatial truncation rather than to time truncation and therefore these papers are mainly concerned with the spatial integration. In this paper we demonstrate that discretized temporal integration of a spatially variance-conserving convection scheme can induce non-energy conserving solutions. In this paper the conservation of the variance of a scalar property is taken as a simple model for the conservation of kinetic energy. In addition, the derivation and testing of a variance-conserving scheme allows for a clear definition of kinetic energy-conserving advection schemes for solving the Navier-Stokes equations. Consequently, we first derive and test a strictly variance-conserving space-time discretization for the convection term in the convection-diffusion equation. Our starting point is the variance-conserving spatial discretization of the convection operator presented by Piacsek and Williams [J. Comput. Phys. 6 (1970) 392]. In terms of its conservation properties, our variance-conserving scheme is compared to other spatially variance-conserving schemes as well as with the non-variance-conserving schemes applied in our shallow-water solver, see e.g. [Direct and Large-eddy Simulation Workshop IV, ERCOFTAC Series, Kluwer Academic Publishers, 2001, pp. 409-287
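    The paper's central point, that time discretization can break the conservation a spatial scheme provides, is easy to reproduce in one dimension: a centered (skew-symmetric) difference conserves the variance sum(c²) under exact time integration, but forward Euler stepping makes it grow. The sketch below is a toy illustration under these assumptions, not the scheme derived in the paper.

    ```python
    import numpy as np

    # 1D periodic advection c_t + u c_x = 0 with a centered difference in space
    # (spatially variance-conserving) and explicit forward Euler in time.
    n, u = 128, 1.0
    dx, dt = 1.0 / n, 0.2 / n
    c = np.sin(2 * np.pi * np.arange(n) * dx)
    var0 = np.sum(c**2) * dx
    for _ in range(1000):
        dcdx = (np.roll(c, -1) - np.roll(c, 1)) / (2 * dx)
        c = c - dt * u * dcdx          # forward Euler step
    print(var0, np.sum(c**2) * dx)     # the variance has grown
    ```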

  12. Genotypic-specific variance in Caenorhabditis elegans lifetime fecundity.

    Science.gov (United States)

    Diaz, S Anaid; Viney, Mark

    2014-06-01

    Organisms live in heterogeneous environments, so strategies that maximize fitness in such environments will evolve. Variation in traits is important because it is the raw material on which natural selection acts during evolution. Phenotypic variation is usually thought to be due to genetic variation and/or environmentally induced effects. Therefore, genetically identical individuals in a constant environment should have invariant traits. Clearly, genetically identical individuals do differ phenotypically, usually thought to be due to stochastic processes. It is now becoming clear, especially from studies of unicellular species, that phenotypic variance among genetically identical individuals in a constant environment can be genetically controlled and that therefore, in principle, this can be subject to selection. However, there has been little investigation of these phenomena in multicellular species. Here, we have studied the mean lifetime fecundity (thus a trait likely to be relevant to reproductive success), and variance in lifetime fecundity, in recent wild isolates of the model nematode Caenorhabditis elegans. We found that these genotypes differed in their variance in lifetime fecundity: some had high variance in fecundity, others very low variance. We found that this variance in lifetime fecundity was negatively related to the mean lifetime fecundity of the lines, and that the variance of the lines was positively correlated between environments. We suggest that the variance in lifetime fecundity may be a bet-hedging strategy used by this species.

  13. Discrete and continuous time dynamic mean-variance analysis

    OpenAIRE

    Reiss, Ariane

    1999-01-01

    Contrary to static mean-variance analysis, very few papers have dealt with dynamic mean-variance analysis. Here, the mean-variance efficient self-financing portfolio strategy is derived for n risky assets in discrete and continuous time. In the discrete setting, the resulting portfolio is mean-variance efficient in a dynamic sense. It is shown that the optimal strategy for n risky assets may be dominated if the expected terminal wealth is constrained to exactly attain a certain goal instead o...

  14. Nonlinear Epigenetic Variance: Review and Simulations

    Science.gov (United States)

    Kan, Kees-Jan; Ploeger, Annemie; Raijmakers, Maartje E. J.; Dolan, Conor V.; van Der Maas, Han L. J.

    2010-01-01

    We present a review of empirical evidence that suggests that a substantial portion of phenotypic variance is due to nonlinear (epigenetic) processes during ontogenesis. The role of such processes as a source of phenotypic variance in human behaviour genetic studies is not fully appreciated. In addition to our review, we present simulation studies…

  15. Revision: Variance Inflation in Regression

    Directory of Open Access Journals (Sweden)

    D. R. Jensen

    2013-01-01

    the intercept; and (iv) variance deflation may occur, where ill-conditioned data yield smaller variances than their orthogonal surrogates. Conventional VIFs have all regressors linked, or none, often untenable in practice. Beyond these, our models enable the unlinking of regressors that can be unlinked, while preserving dependence among those intrinsically linked. Moreover, known collinearity indices are extended to encompass angles between subspaces of regressors. To reassess ill-conditioned data, we consider case studies ranging from elementary examples to data from the literature.
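    For reference, conventional VIFs follow directly from their definition VIF_j = 1/(1 - R_j²); the sketch below is a generic implementation of that definition, not the extended collinearity indices proposed in the article.

    ```python
    import numpy as np

    def vif(X):
        """VIF_j = 1/(1 - R_j^2), with R_j^2 from regressing column j of X on
        the remaining columns plus an intercept."""
        X = np.asarray(X, float)
        n, p = X.shape
        out = []
        for j in range(p):
            y = X[:, j]
            Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
            beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
            resid = y - Z @ beta
            r2 = 1.0 - resid @ resid / np.sum((y - y.mean())**2)
            out.append(1.0 / (1.0 - r2))
        return np.array(out)
    ```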

  16. Variance estimation for generalized Cavalieri estimators

    OpenAIRE

    Johanna Ziegel; Eva B. Vedel Jensen; Karl-Anton Dorph-Petersen

    2011-01-01

    The precision of stereological estimators based on systematic sampling is of great practical importance. This paper presents methods of data-based variance estimation for generalized Cavalieri estimators where errors in sampling positions may occur. Variance estimators are derived under perturbed systematic sampling, systematic sampling with cumulative errors and systematic sampling with random dropouts. Copyright 2011, Oxford University Press.

  17. Influence of Family Structure on Variance Decomposition

    DEFF Research Database (Denmark)

    Edwards, Stefan McKinnon; Sarup, Pernille Merete; Sørensen, Peter

    Partitioning genetic variance by sets of randomly sampled genes for complex traits in D. melanogaster and B. taurus, has revealed that population structure can affect variance decomposition. In fruit flies, we found that a high likelihood ratio is correlated with a high proportion of explained ge...... capturing pure noise. Therefore it is necessary to use both criteria, high likelihood ratio in favor of a more complex genetic model and proportion of genetic variance explained, to identify biologically important gene groups...

  18. Multiperiod Mean-Variance Portfolio Optimization via Market Cloning

    International Nuclear Information System (INIS)

    Ankirchner, Stefan; Dermoune, Azzouz

    2011-01-01

    The problem of finding the mean variance optimal portfolio in a multiperiod model can not be solved directly by means of dynamic programming. In order to find a solution we therefore first introduce independent market clones having the same distributional properties as the original market, and we replace the portfolio mean and variance by their empirical counterparts. We then use dynamic programming to derive portfolios maximizing a weighted sum of the empirical mean and variance. By letting the number of market clones converge to infinity we are able to solve the original mean variance problem.

  19. Multiperiod Mean-Variance Portfolio Optimization via Market Cloning

    Energy Technology Data Exchange (ETDEWEB)

    Ankirchner, Stefan, E-mail: ankirchner@hcm.uni-bonn.de [Rheinische Friedrich-Wilhelms-Universitaet Bonn, Institut fuer Angewandte Mathematik, Hausdorff Center for Mathematics (Germany); Dermoune, Azzouz, E-mail: Azzouz.Dermoune@math.univ-lille1.fr [Universite des Sciences et Technologies de Lille, Laboratoire Paul Painleve UMR CNRS 8524 (France)

    2011-08-15

    The problem of finding the mean variance optimal portfolio in a multiperiod model can not be solved directly by means of dynamic programming. In order to find a solution we therefore first introduce independent market clones having the same distributional properties as the original market, and we replace the portfolio mean and variance by their empirical counterparts. We then use dynamic programming to derive portfolios maximizing a weighted sum of the empirical mean and variance. By letting the number of market clones converge to infinity we are able to solve the original mean variance problem.

  20. Evidence of an Intermediate Phase in bulk alloy oxide glass system

    Science.gov (United States)

    Chakraborty, S.; Boolchand, P.

    2011-03-01

    Reversibility windows have been observed in modified oxides (alkali-silicates and -germanates) and identified with Intermediate Phases (IPs). Here we find preliminary evidence of an IP in a ternary oxide glass, (B2O3)5(TeO2)95-x(V2O5)x, which is composed of network formers. Bulk glasses are synthesized across the 18% ≤ x ≤ 35% composition range, and examined in Raman scattering, modulated DSC and molar volume experiments. Glass transition temperatures Tg(x) steadily decrease with V2O5 content x, and reveal the enthalpy of relaxation at Tg to show a global minimum in the 24% ≤ x < 27% range, the reversibility window (IP). Molar volumes reveal a minimum in this window. Raman scattering reveals a Boson mode, and at least six other vibrational bands in the 100 cm⁻¹ < ν < 1700 cm⁻¹ range. Compositional trends in vibrational mode strengths and frequency are established. These results will be presented in relation to glass structure evolution with vanadia content and the underlying elastic phases. Supported by NSF grant DMR 08-53957.

  1. Why risk is not variance: an expository note.

    Science.gov (United States)

    Cox, Louis Anthony Tony

    2008-08-01

    Variance (or standard deviation) of return is widely used as a measure of risk in financial investment risk analysis applications, where mean-variance analysis is applied to calculate efficient frontiers and undominated portfolios. Why, then, do health, safety, and environmental (HS&E) and reliability engineering risk analysts insist on defining risk more flexibly, as being determined by probabilities and consequences, rather than simply by variances? This note suggests an answer by providing a simple proof that mean-variance decision making violates the principle that a rational decisionmaker should prefer higher to lower probabilities of receiving a fixed gain, all else being equal. Indeed, simply hypothesizing a continuous increasing indifference curve for mean-variance combinations at the origin is enough to imply that a decisionmaker must find unacceptable some prospects that offer a positive probability of gain and zero probability of loss. Unlike some previous analyses of limitations of variance as a risk metric, this expository note uses only simple mathematics and does not require the additional framework of von Neumann Morgenstern utility theory.
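    A numerical illustration of the note's argument, using a hypothetical linear mean-variance utility U = mean - lam * variance: the prospect with the lower win probability is ranked higher, even though the other prospect first-order stochastically dominates it.

    ```python
    # Prospect(p): win a fixed gain G = 1 with probability p, lose nothing otherwise.
    # Mean = p, variance = p * (1 - p).
    lam = 3.0
    for p in (0.1, 0.5):
        mean, var = p, p * (1 - p)
        print(f"p={p}: mean={mean:.2f}, var={var:.2f}, U={mean - lam * var:.2f}")
    # p=0.1: U = 0.10 - 0.27 = -0.17  (preferred by mean-variance)
    # p=0.5: U = 0.50 - 0.75 = -0.25  (yet it dominates the p=0.1 prospect)
    ```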

  2. Full in-vitro analyses of new-generation bulk fill dental composites cured by halogen light.

    Science.gov (United States)

    Tekin, Tuçe Hazal; Kantürk Figen, Aysel; Yılmaz Atalı, Pınar; Coşkuner Filiz, Bilge; Pişkin, Mehmet Burçin

    2017-08-01

    The objective of this study was to investigate the full in-vitro analyses of new-generation bulk-fill dental composites cured by halogen light (HLG). Four composites of two types were studied: Surefill SDR (SDR) and Xtra Base (XB) as bulk-fill flowable materials; QuixFill (QF) and XtraFill (XF) as packable bulk-fill materials. Samples were prepared for each analysis and test by applying the same procedure, but with different diameters and thicknesses appropriate to the analysis and test requirements. Thermal properties were determined by thermogravimetric analysis (TG/DTG) and differential scanning calorimetry (DSC); the Vickers microhardness (VHN) was measured after 1, 7, 15 and 30 days of storage in water. The degree of conversion (DC, %) values for the materials were immediately measured using near-infrared spectroscopy (FT-IR). The surface morphology of the composites was investigated by scanning electron microscopy (SEM) and atomic-force microscopy (AFM). The sorption and solubility measurements were also performed after 1, 7, 15 and 30 days of storage in water. In addition to this, the data were statistically analyzed using one-way analysis of variance, and both the Newman-Keuls and Tukey multiple comparison tests. The statistical significance level was established at p<0.05 ... bulk-fill, resin-based dental composites. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Variance bias analysis for the Gelbard's batch method

    Energy Technology Data Exchange (ETDEWEB)

    Seo, Jae Uk; Shim, Hyung Jin [Seoul National Univ., Seoul (Korea, Republic of)

    2014-05-15

    In this paper, variances and the bias are derived analytically for the case where Gelbard's batch method is applied. The real variance estimated from this bias is then compared with the real variance calculated from replicas. When the batch method is applied to calculate the sample variance, covariance terms between tallies within a batch are eliminated from the bias. With the 2-by-2 fission matrix problem, we could calculate the real variance regardless of whether or not the batch method was applied. However, as the batch size grew, the standard deviation of the real variance increased. When we perform a Monte Carlo estimation, we obtain a sample variance as its statistical uncertainty. However, this value is smaller than the real variance because the sample variance is biased. To reduce this bias, Gelbard devised the method now known as Gelbard's batch method. It has been confirmed that the sample variance gets closer to the real variance when the batch method is applied; in other words, the bias is reduced. This fact is well known in the MC field. However, so far, no one has given an analytical interpretation of it.
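    The mechanism is easy to reproduce outside a transport code: for correlated tallies, the naive estimator of the variance of the mean is biased low, and grouping the series into batches recovers much of the neglected covariance. The sketch below uses a hypothetical AR(1) tally stream, not an actual Monte Carlo eigenvalue calculation.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, rho, batch = 10_000, 0.9, 100

    # Correlated "tallies": a stationary AR(1) series with unit variance.
    x = np.empty(n)
    x[0] = rng.standard_normal()
    for i in range(1, n):
        x[i] = rho * x[i - 1] + np.sqrt(1 - rho**2) * rng.standard_normal()

    naive = x.var(ddof=1) / n                   # ignores covariances: biased low
    means = x.reshape(-1, batch).mean(axis=1)   # batch means
    batched = means.var(ddof=1) / len(means)    # much closer to the real variance
    print(naive, batched)
    ```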

  4. Energy level alignment and sub-bandgap charge generation in polymer:fullerene bulk heterojunction solar cells.

    Science.gov (United States)

    Tsang, Sai-Wing; Chen, Song; So, Franky

    2013-05-07

    Using charge modulated electroabsorption spectroscopy (CMEAS), the energy level alignment of a polymer:fullerene bulk heterojunction photovoltaic cell is directly measured for the first time. The charge-transfer excitons generated by sub-bandgap optical pumping are coupled with the modulating electric field and introduce subtle changes in optical absorption in the sub-bandgap region. This minimum energy required for sub-bandgap charge generation is defined as the effective bandgap. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Integrating Variances into an Analytical Database

    Science.gov (United States)

    Sanchez, Carlos

    2010-01-01

    For this project, I enrolled in numerous SATERN courses that taught the basics of database programming. These include: Basic Access 2007 Forms, Introduction to Database Systems, Overview of Database Design, and others. My main job was to create an analytical database that can handle many stored forms and make them easy to interpret and organize. Additionally, I helped improve an existing database and populate it with information. These databases were designed to be used with data from Safety Variances and DCR forms. The research consisted of analyzing the database and comparing the data to find out which entries were repeated the most. If an entry happened to be repeated several times in the database, that would mean that the rule or requirement targeted by that variance had been bypassed many times already, and so the requirement may not really be needed; rather, it should be changed to allow the variance's conditions permanently. This project was not restricted to the design and development of the database system; it also involved exporting the data from the database to a different format (e.g. Excel or Word) so it could be analyzed in a simpler fashion. Thanks to the change in format, the data was organized in a spreadsheet that made it possible to sort the data by categories or types and helped speed up searches. Once my work with the database was done, the records of variances could be arranged so that they were displayed in numerical order, or one could search for a specific document targeted by the variances and restrict the search to only include variances that modified a specific requirement. A great part of what contributed to my learning was SATERN, NASA's resource for education. Thanks to the SATERN online courses I took over the summer, I was able to learn many new things about computers and databases and also go more in depth into topics I already knew about.

  6. Regional sensitivity analysis using revised mean and variance ratio functions

    International Nuclear Information System (INIS)

    Wei, Pengfei; Lu, Zhenzhou; Ruan, Wenbin; Song, Jingwen

    2014-01-01

    The variance ratio function, derived from the contribution to sample variance (CSV) plot, is a regional sensitivity index for studying how much the output deviates from the original mean of the model output when the distribution range of one input is reduced, and for measuring the contribution of different distribution ranges of each input to the variance of the model output. In this paper, revised mean and variance ratio functions are developed for quantifying the actual change of the model output mean and variance, respectively, when the range of one input is reduced. The connection between the revised variance ratio function and the original one is derived and discussed. It is shown that, compared with the classical variance ratio function, the revised one is more suitable for evaluating the model output variance under reduced ranges of model inputs. A Monte Carlo procedure, which needs only a single set of samples, is developed for efficiently computing the revised mean and variance ratio functions. The revised mean and variance ratio functions are compared with the classical ones using the Ishigami function. Finally, they are applied to a planar 10-bar structure.
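    A single-sample Monte Carlo sketch of the quantities involved, using the Ishigami test function mentioned above: restrict the range of one input and compare the conditional output mean and variance with the unconditional ones. This illustrates the idea, not the authors' exact estimation procedure.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    a, b = 7.0, 0.1

    def ishigami(x1, x2, x3):
        return np.sin(x1) + a * np.sin(x2)**2 + b * x3**4 * np.sin(x1)

    N = 100_000
    X = rng.uniform(-np.pi, np.pi, size=(N, 3))
    Y = ishigami(*X.T)

    keep = np.abs(X[:, 2]) < np.pi / 2       # reduced distribution range of input 3
    mean_ratio = Y[keep].mean() / Y.mean()   # revised mean ratio (sketch)
    var_ratio = Y[keep].var() / Y.var()      # revised variance ratio (sketch)
    print(mean_ratio, var_ratio)
    ```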

  7. Wavelet-based multiscale analysis of minimum toe clearance variability in the young and elderly during walking.

    Science.gov (United States)

    Khandoker, Ahsan H; Karmakar, Chandan K; Begg, Rezaul K; Palaniswami, Marimuthu

    2007-01-01

    As humans age or are influenced by pathology of the neuromuscular system, gait patterns are known to adjust, accommodating for reduced function in the balance control system. The aim of this study was to investigate the effectiveness of a wavelet-based multiscale analysis of a gait variable [minimum toe clearance (MTC)] in deriving indexes for understanding age-related declines in gait performance and for screening balance impairments in the elderly. MTC during treadmill walking was analyzed for 30 healthy young, 27 healthy elderly and 10 falls-risk elderly subjects with a history of tripping falls. The MTC signal from each subject was decomposed into eight detailed signals at different wavelet scales by using the discrete wavelet transform. The variances of the detailed signals at scales 8 to 1 were calculated. The multiscale exponent (beta) was then estimated from the slope of the variance progression at successive scales. The variance at scale 5 was significantly (p<0.05) ... pathological conditions. Early detection of gait pattern changes due to ageing and balance impairments using wavelet-based multiscale analysis might provide the opportunity to initiate preemptive measures to avoid injurious falls.
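    A sketch of the core computation, assuming the PyWavelets package, a hypothetical MTC series long enough for eight decomposition levels, and a db4 mother wavelet (an assumption for illustration):

    ```python
    import numpy as np
    import pywt  # PyWavelets

    def multiscale_variances(mtc, wavelet="db4", levels=8):
        """Variance of the DWT detail coefficients at scales 1..levels, plus the
        slope (multiscale exponent) of log2(variance) against scale."""
        coeffs = pywt.wavedec(mtc, wavelet, level=levels)       # [cA_n, cD_n, ..., cD_1]
        var = np.array([np.var(d) for d in coeffs[1:]])[::-1]   # order: scale 1..levels
        scales = np.arange(1, levels + 1)
        beta = np.polyfit(scales, np.log2(var), 1)[0]
        return var, beta
    ```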

  8. The genotype-environment interaction variance in rice-seed protein determination

    International Nuclear Information System (INIS)

    Ismachin, M.

    1976-01-01

    Many environmental factors influence the protein content of cereal seed. This fact causes difficulties in breeding for protein. Yield is another trait influenced by many environmental factors. The length of time required by the plant to reach maturity is also affected by environmental factors, even though the effect is less decisive. In this investigation, the genotypic variance and the genotype-environment interaction variance, which contribute to the total or phenotypic variance, were analysed, with the purpose of giving the breeder an idea of how selection should be made. It was found that the genotype-environment interaction variance contributes more than the genotypic variance to the total variance of seed-protein content and of yield. In the analysis of the time required to reach maturity, it was found that the genotypic variance is larger than the genotype-environment interaction variance. It is therefore clear why selection for time to maturity is much easier than selection for protein or yield. Protein selected for in one location may differ from that in other locations. (author)
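    The decomposition underlying this kind of analysis is the standard partition of phenotypic variance; with ℓ environments and r replicates, a broad-sense heritability on an entry-mean basis can be written as below (a textbook formulation, not notation taken from this report).

    ```latex
    \sigma^2_P = \sigma^2_G + \sigma^2_{GE} + \sigma^2_E,
    \qquad
    H^2 = \frac{\sigma^2_G}{\sigma^2_G + \sigma^2_{GE}/\ell + \sigma^2_E/(r\ell)}
    ```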

  9. Estimation of measurement variances

    International Nuclear Information System (INIS)

    Jaech, J.L.

    1984-01-01

    The estimation of measurement error parameters in safeguards systems is discussed. Both systematic and random errors are considered. A simple analysis of variances to characterize the measurement error structure with biases varying over time is presented

  10. 29 CFR 1905.5 - Effect of variances.

    Science.gov (United States)

    2010-07-01

    Regulations Relating to Labor (Continued), Occupational Safety and Health Administration, Department of Labor; Williams-Steiger Occupational Safety and Health Act of 1970, General, § 1905.5 Effect of variances. All variances... concerning a proposed penalty or period of abatement is pending before the Occupational Safety and Health...

  11. Realized range-based estimation of integrated variance

    DEFF Research Database (Denmark)

    Christensen, Kim; Podolskij, Mark

    2007-01-01

    We provide a set of probabilistic laws for estimating the quadratic variation of continuous semimartingales with the realized range-based variance, a statistic that replaces every squared return of the realized variance with a normalized squared range. If the entire sample path of the process is a
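    A minimal sketch of the two estimators being compared, assuming arrays of intraday highs, lows and closes (hypothetical data): each squared log-range is normalized by E[range²] = 4 ln 2 for a Brownian motion.

    ```python
    import numpy as np

    def realized_range_variance(high, low):
        """Realized range-based variance: sum of normalized squared log-ranges."""
        s = np.log(np.asarray(high) / np.asarray(low))
        return np.sum(s**2) / (4.0 * np.log(2.0))

    def realized_variance(close):
        """Standard realized variance from squared log returns, for comparison."""
        r = np.diff(np.log(np.asarray(close)))
        return np.sum(r**2)
    ```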

  12. Marginal Gap Formation in Approximal "Bulk Fill" Resin Composite Restorations After Artificial Ageing.

    Science.gov (United States)

    Peutzfeldt, A; Mühlebach, S; Lussi, A; Flury, S

    The aim of this in vitro study was to investigate the marginal gap formation of a packable "regular" resin composite (Filtek Supreme XTE [3M ESPE]) and two flowable "bulk fill" resin composites (Filtek Bulk Fill [3M ESPE] and SDR [DENTSPLY DeTrey]) along the approximal margins of Class II restorations. In each of 39 extracted human molars (n=13 per resin composite), mesial and distal Class II cavities were prepared, placing the gingival margins below the cemento-enamel junction. The cavities were restored with the adhesive system OptiBond FL (Kerr) and one of the three resin composites. After restoration, each molar was cut in half in the oro-vestibular direction between the two restorations, resulting in two specimens per molar. Polyvinylsiloxane impressions were taken and "baseline" replicas were produced. The specimens were then divided into two groups: At the beginning of each month over the course of six months' tap water storage (37°C), one specimen per molar was subjected to mechanical toothbrushing, whereas the other was subjected to thermocycling. After artificial ageing, "final" replicas were produced. Baseline and final replicas were examined under the scanning electron microscope (SEM), and the SEM micrographs were used to determine the percentage of marginal gap formation in enamel or dentin. Paramarginal gaps were registered. The percentages of marginal gap formation were statistically analyzed with a nonparametric analysis of variance followed by Wilcoxon-Mann-Whitney tests and Wilcoxon signed rank tests, and all p-values were corrected with the Bonferroni-Holm adjustment for multiple testing (significance level: α=0.05). Paramarginal gaps were analyzed descriptively. In enamel, significantly lower marginal gap formation was found for Filtek Supreme XTE compared to Filtek Bulk Fill (p=0.0052) and SDR (p=0.0289), with no significant difference between Filtek Bulk Fill and SDR (p=0.4072). In dentin, significantly lower marginal gap formation was

  13. Variance Function Partially Linear Single-Index Models.

    Science.gov (United States)

    Lian, Heng; Liang, Hua; Carroll, Raymond J

    2015-01-01

    We consider heteroscedastic regression models where the mean function is a partially linear single index model and the variance function depends upon a generalized partially linear single index model. We do not insist that the variance function depend only upon the mean function, as happens in the classical generalized partially linear single index model. We develop efficient and practical estimation methods for the variance function and for the mean function. Asymptotic theory for the parametric and nonparametric parts of the model is developed. Simulations illustrate the results. An empirical example involving ozone levels is used to further illustrate the results, and is shown to be a case where the variance function does not depend upon the mean function.

  14. Calm water resistance prediction of a bulk carrier using Reynolds averaged Navier-Stokes based solver

    Science.gov (United States)

    Rahaman, Md. Mashiur; Islam, Hafizul; Islam, Md. Tariqul; Khondoker, Md. Reaz Hasan

    2017-12-01

    Maneuverability and resistance prediction with suitable accuracy is essential for optimum ship design and propulsion power prediction. This paper aims at providing some of the maneuverability characteristics of a Japanese bulk carrier model, JBC, in calm water using a computational fluid dynamics solver named SHIP Motion and OpenFOAM. The solvers are based on the Reynolds-averaged Navier-Stokes (RANS) method and solve structured grids using the Finite Volume Method (FVM). This paper presents the numerical calm water test results for the JBC model together with available experimental results. The calm water test results include the total drag coefficient, average sinkage, and trim data. Visualization data for the pressure distribution on the hull surface and the free water surface have also been included. The paper concludes that the presented solvers predict the resistance and maneuverability characteristics of the bulk carrier with reasonable accuracy utilizing minimum computational resources.

  15. Spin diffusion in bulk GaN measured with MnAs spin injector

    KAUST Repository

    Jahangir, Shafat; Dogan, Fatih; Kum, Hyun; Manchon, Aurelien; Bhattacharya, Pallab

    2012-01-01

    Spin injection and precession in bulk wurtzite n-GaN with different doping densities are demonstrated with a ferromagnetic MnAs contact using the three-terminal Hanle measurement technique. Theoretical analysis using minimum fitting parameters indicates that the spin accumulation is primarily in the n-GaN channel rather than at the ferromagnet (FM)/semiconductor (SC) interface states. Spin relaxation in GaN is interpreted in terms of the D’yakonov-Perel mechanism, yielding a maximum spin lifetime of 44 ps and a spin diffusion length of 175 nm at room temperature. Our results indicate that epitaxial ferromagnetic MnAs is a suitable high-temperature spin injector for GaN.

  16. Spin diffusion in bulk GaN measured with MnAs spin injector

    KAUST Repository

    Jahangir, Shafat

    2012-07-16

    Spin injection and precession in bulk wurtzite n-GaN with different doping densities are demonstrated with a ferromagnetic MnAs contact using the three-terminal Hanle measurement technique. Theoretical analysis using minimum fitting parameters indicates that the spin accumulation is primarily in the n-GaN channel rather than at the ferromagnet (FM)/semiconductor (SC) interface states. Spin relaxation in GaN is interpreted in terms of the D’yakonov-Perel mechanism, yielding a maximum spin lifetime of 44 ps and a spin diffusion length of 175 nm at room temperature. Our results indicate that epitaxial ferromagnetic MnAs is a suitable high-temperature spin injector for GaN.

  17. Discrete time and continuous time dynamic mean-variance analysis

    OpenAIRE

    Reiss, Ariane

    1999-01-01

    Contrary to static mean-variance analysis, very few papers have dealt with dynamic mean-variance analysis. Here, the mean-variance efficient self-financing portfolio strategy is derived for n risky assets in discrete and continuous time. In the discrete setting, the resulting portfolio is mean-variance efficient in a dynamic sense. It is shown that the optimal strategy for n risky assets may be dominated if the expected terminal wealth is constrained to exactly attain a certain goal instead o...

  18. Dominance genetic variance for traits under directional selection in Drosophila serrata.

    Science.gov (United States)

    Sztepanacz, Jacqueline L; Blows, Mark W

    2015-05-01

    In contrast to our growing understanding of patterns of additive genetic variance in single- and multi-trait combinations, the relative contribution of nonadditive genetic variance, particularly dominance variance, to multivariate phenotypes is largely unknown. While mechanisms for the evolution of dominance genetic variance have been, and to some degree remain, subject to debate, the pervasiveness of dominance is widely recognized and may play a key role in several evolutionary processes. Theoretical and empirical evidence suggests that the contribution of dominance variance to phenotypic variance may increase with the correlation between a trait and fitness; however, direct tests of this hypothesis are few. Using a multigenerational breeding design in an unmanipulated population of Drosophila serrata, we estimated additive and dominance genetic covariance matrices for multivariate wing-shape phenotypes, together with a comprehensive measure of fitness, to determine whether there is an association between directional selection and dominance variance. Fitness, a trait unequivocally under directional selection, had no detectable additive genetic variance, but significant dominance genetic variance contributing 32% of the phenotypic variance. For single and multivariate morphological traits, however, no relationship was observed between trait-fitness correlations and dominance variance. A similar proportion of additive and dominance variance was found to contribute to phenotypic variance for single traits, and double the amount of additive compared to dominance variance was found for the multivariate trait combination under directional selection. These data suggest that for many fitness components a positive association between directional selection and dominance genetic variance may not be expected. Copyright © 2015 by the Genetics Society of America.

  19. CMB-S4 and the hemispherical variance anomaly

    Science.gov (United States)

    O'Dwyer, Márcio; Copi, Craig J.; Knox, Lloyd; Starkman, Glenn D.

    2017-09-01

    Cosmic microwave background (CMB) full-sky temperature data show a hemispherical asymmetry in power nearly aligned with the Ecliptic. In real space, this anomaly can be quantified by the temperature variance in the Northern and Southern Ecliptic hemispheres, with the Northern hemisphere displaying an anomalously low variance while the Southern hemisphere appears unremarkable [consistent with expectations from the best-fitting theory, Lambda Cold Dark Matter (ΛCDM)]. While this is a well-established result in temperature, the low signal-to-noise ratio in current polarization data prevents a similar comparison. This will change with a proposed ground-based CMB experiment, CMB-S4. With that in mind, we generate realizations of polarization maps constrained by the temperature data and predict the distribution of the hemispherical variance in polarization considering two different sky coverage scenarios possible in CMB-S4: full Ecliptic north coverage and just the portion of the North that can be observed from a ground-based telescope at the high Chilean Atacama plateau. We find that even in the set of realizations constrained by the temperature data, the low Northern hemisphere variance observed in temperature is not expected in polarization. Therefore, observing an anomalously low variance in polarization would make the hypothesis that the temperature anomaly is simply a statistical fluke more unlikely and thus increase the motivation for physical explanations. We show, within ΛCDM, how variance measurements in both sky coverage scenarios are related. We find that the variance makes for a good statistic in cases where the sky coverage is limited; however, full northern coverage is still preferable.
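    The basic statistic is simple to compute on a pixelized sky map; the sketch below assumes the healpy package and a temperature map already rotated into Ecliptic coordinates (a hypothetical input, and far short of the constrained-realization machinery used in the paper).

    ```python
    import numpy as np
    import healpy as hp

    def hemispherical_variances(tmap):
        """Pixel variance in the northern (theta < pi/2) and southern
        hemispheres of a HEALPix map in Ecliptic coordinates."""
        nside = hp.get_nside(tmap)
        theta, _ = hp.pix2ang(nside, np.arange(hp.nside2npix(nside)))
        north = theta < np.pi / 2
        return float(np.var(tmap[north])), float(np.var(tmap[~north]))
    ```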

  20. Expected Stock Returns and Variance Risk Premia

    DEFF Research Database (Denmark)

    Bollerslev, Tim; Zhou, Hao

    risk premium with the P/E ratio results in an R2 for the quarterly returns of more than twenty-five percent. The results depend crucially on the use of "model-free", as opposed to standard Black-Scholes, implied variances, and realized variances constructed from high-frequency intraday, as opposed...

  1. Allowable variance set on left ventricular function parameter

    International Nuclear Information System (INIS)

    Zhou Li'na; Qi Zhongzhi; Zeng Yu; Ou Xiaohong; Li Lin

    2010-01-01

    Purpose: To evaluate the influence of allowable variance settings on left ventricular function parameters in arrhythmia patients during gated myocardial perfusion imaging. Method: 42 patients with evident arrhythmia underwent myocardial perfusion SPECT; three different allowable variances (20%, 60%, 100%) were set before acquisition for every patient, and the acquisitions were performed simultaneously. After reconstruction with Astonish, end-diastolic volume (EDV), end-systolic volume (ESV) and left ventricular ejection fraction (LVEF) were computed with Quantitative Gated SPECT (QGS). The EDV, ESV and LVEF values were compared by analysis of variance using SPSS software. Result: There was no statistical difference between the three groups. Conclusion: For arrhythmia patients undergoing gated myocardial perfusion imaging, the allowable variance setting has no statistically significant effect on the EDV, ESV and LVEF values. (authors)

  2. Direct encoding of orientation variance in the visual system.

    Science.gov (United States)

    Norman, Liam J; Heywood, Charles A; Kentridge, Robert W

    2015-01-01

    Our perception of regional irregularity, an example of which is orientation variance, seems effortless when we view two patches of texture that differ in this attribute. Little is understood, however, of how the visual system encodes a regional statistic like orientation variance, but there is some evidence to suggest that it is directly encoded by populations of neurons tuned broadly to high or low levels. The present study shows that selective adaptation to low or high levels of variance results in a perceptual aftereffect that shifts the perceived level of variance of a subsequently viewed texture away from that of the adapting stimulus (Experiments 1 and 2). Importantly, the effect is durable across changes in mean orientation, suggesting that the encoding of orientation variance is independent of global first-moment orientation statistics (i.e., mean orientation). In Experiment 3 it was shown that the variance-specific aftereffect did not show signs of being encoded in a spatiotopic reference frame, similar to the equivalent aftereffect of adaptation to the first-moment orientation statistic (the tilt aftereffect), which is represented in the primary visual cortex and exists only in retinotopic coordinates. Experiment 4 shows that a neuropsychological patient with damage to ventral areas of the cortex, but with early areas spared intact, retains sensitivity to orientation variance. Together these results suggest that orientation variance is encoded directly by the visual system, possibly at an early cortical stage.

  3. Network Structure and Biased Variance Estimation in Respondent Driven Sampling.

    Science.gov (United States)

    Verdery, Ashton M; Mouw, Ted; Bauldry, Shawn; Mucha, Peter J

    2015-01-01

    This paper explores bias in the estimation of sampling variance in Respondent Driven Sampling (RDS). Prior methodological work on RDS has focused on its problematic assumptions and the biases and inefficiencies of its estimators of the population mean. Nonetheless, researchers have given only slight attention to the topic of estimating sampling variance in RDS, despite the importance of variance estimation for the construction of confidence intervals and hypothesis tests. In this paper, we show that the estimators of RDS sampling variance rely on a critical assumption that the network is First Order Markov (FOM) with respect to the dependent variable of interest. We demonstrate, through intuitive examples, mathematical generalizations, and computational experiments that current RDS variance estimators will always underestimate the population sampling variance of RDS in empirical networks that do not conform to the FOM assumption. Analysis of 215 observed university and school networks from Facebook and Add Health indicates that the FOM assumption is violated in every empirical network we analyze, and that these violations lead to substantially biased RDS estimators of sampling variance. We propose and test two alternative variance estimators that show some promise for reducing biases, but which also illustrate the limits of estimating sampling variance with only partial information on the underlying population social network.

  4. Diagnosis of Bearing System using Minimum Variance Cepstrum

    International Nuclear Information System (INIS)

    Lee, Jeong Han; Choi, Young Chul; Park, Jin Ho; Lee, Won Hyung; Kim, Chan Joong

    2005-01-01

    Various bearings are commonly used in rotating machines. The noise and vibration signals that can be obtained from the machines often convey information about faults and their locations. Condition monitoring for bearings has received considerable attention for many years, because the majority of problems in rotating machines are caused by faulty bearings. Thus, failure alarms for the bearing system are often based on the detection of the onset of localized faults. Many methods are available for detecting faults in the bearing system. The majority of these methods assume that faults in bearings produce impulses. Impulse events can be attributed to bearing faults in the system. McFadden and Smith used a bandpass filter to filter the noisy signal and then obtained the envelope by using an envelope detector. D. Ho and R. B. Randall also tried the envelope spectrum to detect faults in the bearing system, but it is very difficult to find the resonant frequency in noisy environments. S.-K. Lee and P. R. White used improved ANC (adaptive noise cancellation) to find faults. The basic idea of this technique is to remove the noise from the measured vibration signal, but they were not able to show the theoretical foundation of the proposed algorithms. Y.-H. Kim et al. used a moving window. This algorithm is quite powerful in the early detection of faults in a ball bearing system, but it is difficult to decide the initial time and step size of the moving window. The early fault signal caused by microscopic cracks is commonly embedded in noise. Therefore, success in detecting the fault signal is completely determined by a method's ability to distinguish signal from noise. In 1969, Capon introduced maximum likelihood (ML) spectra, which estimate a mixed spectrum consisting of a line spectrum, corresponding to a deterministic random process, plus an arbitrary unknown continuous spectrum. The unique feature of these spectra is that they can detect a sinusoidal signal in noise. Our idea essentially comes from this method. In this paper, a technique that can detect impulses embedded in noise is introduced. The theory of this technique is derived, and the improved ability to detect faults in a ball bearing system is demonstrated theoretically as well as experimentally
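    For comparison, the classic envelope analysis cited above (bandpass filtering followed by an envelope detector) can be sketched as follows; the filter order and band limits are assumptions that would normally be tuned around a structural resonance. This is the baseline method, not the minimum variance cepstrum itself.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def envelope_spectrum(x, fs, band):
        """Band-pass the vibration signal, take the Hilbert envelope, and return
        the envelope spectrum; bearing fault impulses appear as spectral lines
        at the characteristic defect frequency."""
        b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
        xf = filtfilt(b, a, x)
        env = np.abs(hilbert(xf))
        env -= env.mean()
        freqs = np.fft.rfftfreq(len(env), 1.0 / fs)
        return freqs, np.abs(np.fft.rfft(env))
    ```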

  5. Local variances in biomonitoring

    International Nuclear Information System (INIS)

    Wolterbeek, H.Th; Verburg, T.G.

    2001-01-01

    The present study was undertaken to explore possibilities to judge survey quality on basis of a limited and restricted number of a-priori observations. Here, quality is defined as the ratio between survey and local variance (signal-to-noise ratio). The results indicate that the presented surveys do not permit such judgement; the discussion also suggests that the 5-fold local sampling strategies do not merit any sound judgement. As it stands, uncertainties in local determinations may largely obscure possibilities to judge survey quality. The results further imply that surveys will benefit from procedures, controls and approaches in sampling and sample handling, to assess both average, variance and the nature of the distribution of elemental concentrations in local sites. This reasoning is compatible with the idea of the site as a basic homogeneous survey unit, which is implicitly and conceptually underlying any survey performed. (author)

  6. Some variance reduction methods for numerical stochastic homogenization.

    Science.gov (United States)

    Blanc, X; Le Bris, C; Legoll, F

    2016-04-28

    We give an overview of a series of recent studies devoted to variance reduction techniques for numerical stochastic homogenization. Numerical homogenization requires that a set of problems is solved at the microscale, the so-called corrector problems. In a random environment, these problems are stochastic and therefore need to be repeatedly solved, for several configurations of the medium considered. An empirical average over all configurations is then performed using the Monte Carlo approach, so as to approximate the effective coefficients necessary to determine the macroscopic behaviour. Variance severely affects the accuracy and the cost of such computations. Variance reduction approaches, borrowed from other contexts in the engineering sciences, can be useful. Some of these variance reduction techniques are presented, studied and tested here. © 2016 The Author(s).
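    A toy demonstration of one such borrowed technique, antithetic variables, on a scalar integral rather than a corrector problem: pairing each draw u with 1 - u produces a negatively correlated pair and, for monotone integrands, a lower-variance estimator.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    N = 100_000

    # Plain Monte Carlo estimate of E[exp(U)], U ~ Uniform(0, 1).
    u = rng.random(N)
    plain = np.exp(u)

    # Antithetic variates: average f(u) with f(1 - u).
    anti = 0.5 * (np.exp(u) + np.exp(1.0 - u))

    print(plain.mean(), plain.var(ddof=1) / N)
    print(anti.mean(), anti.var(ddof=1) / N)   # markedly smaller variance
    ```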

  7. variance components and genetic parameters for live weight

    African Journals Online (AJOL)

    admin

    Against this background the present study estimated the (co)variance .... Starting values for the (co)variance components of two-trait models were ..... Estimates of genetic parameters for weaning weight of beef accounting for direct-maternal.

  8. Restricted Variance Interaction Effects

    DEFF Research Database (Denmark)

    Cortina, Jose M.; Köhler, Tine; Keeler, Kathleen R.

    2018-01-01

    Although interaction hypotheses are increasingly common in our field, many recent articles point out that authors often have difficulty justifying them. The purpose of this article is to describe a particular type of interaction: the restricted variance (RV) interaction. The essence of the RV int...

  9. Variance Swaps in BM&F: Pricing and Viability of Hedge

    Directory of Open Access Journals (Sweden)

    Richard John Brostowicz Junior

    2010-07-01

    Full Text Available A variance swap can theoretically be priced with an infinite set of vanilla call and put options, considering that the realized variance follows a purely diffusive process with continuous monitoring. In this article we will analyze the possible differences in pricing considering discrete monitoring of realized variance. We will analyze the pricing of variance swaps with payoff in dollars, since there is an OTC market that works this way and that potentially serves as a hedge for the variance swaps traded in BM&F. Additionally, the feasibility of hedging variance swaps when there is liquidity in just a few exercise prices, as is the case of FX options traded in BM&F, will be tested. Thus, portfolios containing variance swaps and their replicating portfolios were assembled using the available exercise prices, as proposed in (DEMETERFI et al., 1999). With these portfolios, the effectiveness of the hedge was not robust in most of the tests conducted in this work.
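
    The replication recipe of (DEMETERFI et al., 1999) weights out-of-the-money options by the inverse square of their strikes. The sketch below shows the resulting discrete fair-strike approximation; it omits the boundary and discretization corrections of the full method, and the grid and prices are synthetic. With only a few liquid exercise prices, as for the BM&F FX options discussed above, the sum runs over a coarse grid and the replication error grows, which is the effect the article tests.

```python
import numpy as np

def variance_swap_strike(strikes, otm_prices, r, T):
    """Discrete fair variance-swap strike from a strip of out-of-the-money
    option prices (static replication after Demeterfi et al., 1999).
    Boundary and discretization corrections are omitted in this sketch."""
    dk = np.gradient(strikes)          # strike spacing Delta K_i
    weights = dk / strikes ** 2        # replication weights dK / K^2
    return (2.0 / T) * np.exp(r * T) * np.sum(weights * otm_prices)

strikes = np.linspace(80, 120, 9)      # coarse grid of exercise prices
otm = np.array([0.5, 1.1, 2.3, 4.2, 6.0, 4.0, 2.1, 1.0, 0.4])  # toy prices
print(np.sqrt(variance_swap_strike(strikes, otm, r=0.10, T=0.5)))  # ~0.22
```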

  10. Integrating mean and variance heterogeneities to identify differentially expressed genes.

    Science.gov (United States)

    Ouyang, Weiwei; An, Qiang; Zhao, Jinying; Qin, Huaizhen

    2016-12-06

    In functional genomics studies, tests on mean heterogeneity have been widely employed to identify differentially expressed genes with distinct mean expression levels under different experimental conditions. Variance heterogeneity (aka, the difference between condition-specific variances) of gene expression levels is simply neglected or calibrated for as an impediment. The mean heterogeneity in the expression level of a gene reflects one aspect of its distribution alteration; and variance heterogeneity induced by condition change may reflect another aspect. Change in condition may alter both mean and some higher-order characteristics of the distributions of expression levels of susceptible genes. In this report, we put forth a conception of mean-variance differentially expressed (MVDE) genes, whose expression means and variances are sensitive to the change in experimental condition. We mathematically proved the null independence of existent mean heterogeneity tests and variance heterogeneity tests. Based on the independence, we proposed an integrative mean-variance test (IMVT) to combine gene-wise mean heterogeneity and variance heterogeneity induced by condition change. The IMVT outperformed its competitors under comprehensive simulations of normality and Laplace settings. For moderate samples, the IMVT well controlled type I error rates, and so did the existent mean heterogeneity tests (i.e., the Welch t test (WT) and the moderated Welch t test (MWT)) and the procedure of separate tests on mean and variance heterogeneities (SMVT), but the likelihood ratio test (LRT) severely inflated type I error rates. In the presence of variance heterogeneity, the IMVT appeared noticeably more powerful than all the valid mean heterogeneity tests. Application to the gene profiles of peripheral circulating B raised solid evidence of informative variance heterogeneity. After adjusting for background data structure, the IMVT replicated previous discoveries and identified novel experiment
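
    The IMVT statistic itself is not reproduced in this record. As a hedged stand-in, the sketch below combines a Welch t test for mean heterogeneity with a Brown-Forsythe test for variance heterogeneity through Fisher's method, leaning on the null independence of the two tests noted above; the sample sizes and effect sizes are illustrative.

```python
import numpy as np
from scipy import stats

def mean_variance_test(x, y):
    """Toy combined test of mean and variance heterogeneity between two
    expression samples. Not the authors' IMVT: it simply fuses a Welch t
    test with a Brown-Forsythe test via Fisher's method, which is valid
    because the two component tests are independent under the null."""
    _, p_mean = stats.ttest_ind(x, y, equal_var=False)  # Welch t test
    _, p_var = stats.levene(x, y, center='median')      # Brown-Forsythe
    return stats.combine_pvalues([p_mean, p_var], method='fisher')

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 40)       # condition A
y = rng.normal(0.0, 2.0, 40)       # condition B: same mean, larger variance
print(mean_variance_test(x, y))    # small p-value, driven by the variance part
```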

  11. Simultaneous Monte Carlo zero-variance estimates of several correlated means

    International Nuclear Information System (INIS)

    Booth, T.E.

    1998-01-01

    Zero-variance biasing procedures are normally associated with estimating a single mean or tally. In particular, a zero-variance solution occurs when every sampling is made proportional to the product of the true probability multiplied by the expected score (importance) subsequent to the sampling; i.e., the zero-variance sampling is importance weighted. Because every tally has a different importance function, a zero-variance biasing for one tally cannot be a zero-variance biasing for another tally (unless the tallies are perfectly correlated). The way to optimize the situation when the required tallies have positive correlation is shown

  12. Microhardness of bulk-fill composite materials

    OpenAIRE

    Kelić, Katarina; Matić, Sanja; Marović, Danijela; Klarić, Eva; Tarle, Zrinka

    2016-01-01

    The aim of the study was to determine microhardness of high- and low-viscosity bulk-fill composite resins and compare it with conventional composite materials. Four materials of high-viscosity were tested, including three bulk-fills: QuiXfil (QF), x-tra fil (XTF) and Tetric EvoCeram Bulk Fill (TEBCF), while nanohybrid composite GrandioSO (GSO) served as control. The other four were low-viscosity composites, three bulk-fill materials: Smart Dentin Replacement (SDR), Venus Bulk Fill (VBF) and ...

  13. Comparing estimates of genetic variance across different relationship models.

    Science.gov (United States)

    Legarra, Andres

    2016-02-01

    Use of relationships between individuals to estimate genetic variances and heritabilities via mixed models is standard practice in human, plant and livestock genetics. Different models or information for relationships may give different estimates of genetic variances. However, comparing these estimates across different relationship models is not straightforward as the implied base populations differ between relationship models. In this work, I present a method to compare estimates of variance components across different relationship models. I suggest referring genetic variances obtained using different relationship models to the same reference population, usually a set of individuals in the population. Expected genetic variance of this population is the estimated variance component from the mixed model times a statistic, Dk, which is the average self-relationship minus the average (self- and across-) relationship. For most typical models of relationships, Dk is close to 1. However, this is not true for very deep pedigrees, for identity-by-state relationships, or for non-parametric kernels, which tend to overestimate the genetic variance and the heritability. Using mice data, I show that heritabilities from identity-by-state and kernel-based relationships are overestimated. Weighting these estimates by Dk scales them to a base comparable to genomic or pedigree relationships, avoiding wrong comparisons, for instance, "missing heritabilities". Copyright © 2015 Elsevier Inc. All rights reserved.
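
    Following the definition quoted above, Dk can be computed directly from a relationship matrix. The snippet below is a literal reading of that definition, with the rescaling indicated in a comment; the toy matrix is illustrative.

```python
import numpy as np

def dk_statistic(K):
    """Dk = average self-relationship (diagonal of K) minus the average
    (self- and across-) relationship over the reference individuals."""
    return np.mean(np.diag(K)) - np.mean(K)

# An estimate sigma2_hat obtained under relationship matrix K refers to
# the reference population as dk_statistic(K) * sigma2_hat, which makes
# pedigree, genomic and kernel-based estimates comparable.
K = np.array([[1.0, 0.2, 0.1],
              [0.2, 1.0, 0.3],
              [0.1, 0.3, 1.0]])
print(dk_statistic(K))  # ~0.53 here; close to 1 for large, weakly related samples
```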

  14. Variance estimation in the analysis of microarray data

    KAUST Repository

    Wang, Yuedong

    2009-04-01

    Microarrays are one of the most widely used high throughput technologies. One of the main problems in the area is that conventional estimates of the variances that are required in the t-statistic and other statistics are unreliable owing to the small number of replications. Various methods have been proposed in the literature to overcome this lack of degrees of freedom problem. In this context, it is commonly observed that the variance increases proportionally with the intensity level, which has led many researchers to assume that the variance is a function of the mean. Here we concentrate on estimation of the variance as a function of an unknown mean in two models: the constant coefficient of variation model and the quadratic variance-mean model. Because the means are unknown and estimated with few degrees of freedom, naive methods that use the sample mean in place of the true mean are generally biased because of the errors-in-variables phenomenon. We propose three methods for overcoming this bias. The first two are variations on the theme of the so-called heteroscedastic simulation-extrapolation estimator, modified to estimate the variance function consistently. The third class of estimators is entirely different, being based on semiparametric information calculations. Simulations show the power of our methods and their lack of bias compared with the naive method that ignores the measurement error. The methodology is illustrated by using microarray data from leukaemia patients.
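
    The errors-in-variables bias that motivates the paper is easy to reproduce. The sketch below fits the quadratic variance-mean model naively by regressing sample variances on sample means computed from few replicates; the data are synthetic, and the fit shown is the biased naive method that the authors' estimators are designed to correct.

```python
import numpy as np

# Quadratic variance-mean model: var = a + b*mu + c*mu^2. The data are
# generated with a = b = 0 and c = 0.01 (a constant coefficient of
# variation of 0.1), but the naive fit uses noisy sample means in place
# of the true means and is therefore biased.
rng = np.random.default_rng(7)
mu = rng.uniform(1, 10, size=5000)                 # true gene means
data = rng.normal(mu, 0.1 * mu, size=(4, 5000))    # 4 replicates per gene

m = data.mean(axis=0)                              # noisy mean estimates
v = data.var(axis=0, ddof=1)                       # 3-d.f. variance estimates
c, b, a = np.polyfit(m, v, 2)                      # highest power first
print(a, b, c)                                     # compare with (0, 0, 0.01)
```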

  15. Variance computations for functionals of absolute risk estimates.

    Science.gov (United States)

    Pfeiffer, R M; Petracci, E

    2011-07-01

    We present a simple influence function based approach to compute the variances of estimates of absolute risk and functions of absolute risk. We apply this approach to criteria that assess the impact of changes in the risk factor distribution on absolute risk for an individual and at the population level. As an illustration we use an absolute risk prediction model for breast cancer that includes modifiable risk factors in addition to standard breast cancer risk factors. Influence function based variance estimates for absolute risk and the criteria are compared to bootstrap variance estimates.

  16. 76 FR 78698 - Proposed Revocation of Permanent Variances

    Science.gov (United States)

    2011-12-19

    DEPARTMENT OF LABOR, Occupational Safety and Health Administration [Docket No. OSHA-2011-0054], Proposed Revocation of Permanent Variances. AGENCY: Occupational Safety and Health Administration (OSHA...). The Occupational Safety and Health Administration (``OSHA'' or ``the Agency'') granted permanent variances to 24 companies engaged in the...

  17. Diagnostic checking in linear processes with infinite variance

    OpenAIRE

    Krämer, Walter; Runde, Ralf

    1998-01-01

    We consider empirical autocorrelations of residuals from infinite variance autoregressive processes. Unlike the finite-variance case, it emerges that the limiting distribution, after suitable normalization, is not always more concentrated around zero when residuals rather than true innovations are employed.

  18. Bulk and surface properties of magnesium peroxide MgO2

    Science.gov (United States)

    Esch, Tobit R.; Bredow, Thomas

    2016-12-01

    Magnesium peroxide has been identified in Mg/air batteries as an intermediate in the oxygen reduction reaction (ORR) [1]. It is assumed that MgO2 is involved in the solid-electrolyte interphase on the cathode surface. Therefore, its structure and stability play a crucial role in the performance of Mg/air batteries. In this work we present a theoretical study of the bulk and low-index surface properties of MgO2. All methods give a good account of the experimental lattice parameters for MgO2 and MgO bulk. The reaction energies, enthalpies and free energies for MgO2 formation from MgO are compared among the different DFT methods and with the local MP2 method. A pronounced dependence on the applied functional is found. At variance with a previous theoretical study, but in agreement with recent experiments, we find that the MgO2 formation reaction is endothermic (HSE06-D3BJ: ΔH = 51.9 kJ/mol). The stability of the low-index surfaces MgO2 (001) (Es = 0.96 J/m2) and (011) (Es = 1.98 J/m2) is calculated and compared to the surface energy of MgO (001). The formation energy of neutral oxygen vacancies in the topmost layer of the MgO2 (001) surface is calculated and compared with defect formation energies for MgO (001).

  19. Do Minimum Wages Fight Poverty?

    OpenAIRE

    David Neumark; William Wascher

    1997-01-01

    The primary goal of a national minimum wage floor is to raise the incomes of poor or near-poor families with members in the work force. However, estimates of employment effects of minimum wages tell us little about whether minimum wages can achieve this goal; even if the disemployment effects of minimum wages are modest, minimum wage increases could result in net income losses for poor families. We present evidence on the effects of minimum wages on family incomes from matched March CPS s...

  20. RR-Interval variance of electrocardiogram for atrial fibrillation detection

    Science.gov (United States)

    Nuryani, N.; Solikhah, M.; Nugoho, A. S.; Afdala, A.; Anzihory, E.

    2016-11-01

    Atrial fibrillation is a serious heart problem originating in the upper chambers of the heart. The common indication of atrial fibrillation is irregularity of the R-peak-to-R-peak time interval, which is called the RR interval for short. The irregularity can be represented using the variance or spread of the RR intervals. This article presents a system to detect atrial fibrillation using variances. Using clinical data of patients with atrial fibrillation attacks, it is shown that the variance of electrocardiographic RR intervals is higher during atrial fibrillation than during normal rhythm. Utilizing a simple detection technique and the variances of RR intervals, we find a good performance of atrial fibrillation detection.
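
    A minimal version of such a detector thresholds the RR-interval variance in a sliding window. In the sketch below the window length, threshold and synthetic RR series are illustrative and not clinically validated.

```python
import numpy as np

def af_flags(rr, window=20, threshold=0.01):
    """Flag possible atrial fibrillation from RR intervals (in seconds)
    by thresholding the variance within a sliding window."""
    flags = np.zeros(len(rr), dtype=bool)
    for i in range(window, len(rr) + 1):
        if np.var(rr[i - window:i]) > threshold:
            flags[i - window:i] = True
    return flags

rng = np.random.default_rng(3)
normal = rng.normal(0.8, 0.02, 200)     # regular rhythm: low RR variance
af = rng.normal(0.7, 0.15, 100)         # irregular RR intervals during AF
rr = np.concatenate([normal, af])
print(af_flags(rr).nonzero()[0].min())  # first flagged beat near the AF onset
```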

  1. Continuous-Time Mean-Variance Portfolio Selection under the CEV Process

    OpenAIRE

    Ma, Hui-qiang

    2014-01-01

    We consider a continuous-time mean-variance portfolio selection model when stock price follows the constant elasticity of variance (CEV) process. The aim of this paper is to derive an optimal portfolio strategy and the efficient frontier. The mean-variance portfolio selection problem is formulated as a linearly constrained convex program problem. By employing the Lagrange multiplier method and stochastic optimal control theory, we obtain the optimal portfolio strategy and mean-variance effici...

  2. Variance based OFDM frame synchronization

    Directory of Open Access Journals (Sweden)

    Z. Fedra

    2012-04-01

    Full Text Available The paper deals with a new frame synchronization scheme for OFDM systems and calculates the complexity of this scheme. The scheme is based on computing the variance of the detection window. The variance is computed at two delayed time instants, so a modified Early-Late loop is used for the frame position detection. The proposed algorithm deals with different variants of OFDM parameters, including the guard interval and cyclic prefix, and has good properties regarding the choice of the algorithm's parameters, since the parameters may be chosen within a wide range without having a high influence on system performance. The verification of the proposed algorithm's functionality has been performed on a development environment using universal software radio peripheral (USRP) hardware.
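
    The core of the scheme can be sketched in a few lines: compare the detection-window variance at two delayed positions and take the transition point as the frame boundary. The signal model, window length and delay below are illustrative, not the paper's parameters.

```python
import numpy as np

# An OFDM-like frame (unit variance) preceded by a low-power guard region.
# Scanning the variance difference between a "late" and an "early" window
# (a modified Early-Late idea) peaks at the frame boundary.
rng = np.random.default_rng(5)
guard = 0.05 * rng.normal(size=300)    # low-variance guard interval
frame = rng.normal(size=700)           # frame samples
x = np.concatenate([guard, frame])

win, delay = 64, 32
positions = np.arange(delay, len(x) - win - delay)
metric = np.array([np.var(x[p + delay:p + delay + win])
                   - np.var(x[p - delay:p - delay + win]) for p in positions])
print(positions[metric.argmax()] + delay)  # ~300, the frame start
```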

  3. Global-scale high-resolution (~1 km) modelling of mean, maximum and minimum annual streamflow

    Science.gov (United States)

    Barbarossa, Valerio; Huijbregts, Mark; Hendriks, Jan; Beusen, Arthur; Clavreul, Julie; King, Henry; Schipper, Aafke

    2017-04-01

    Quantifying mean, maximum and minimum annual flow (AF) of rivers at ungauged sites is essential for a number of applications, including assessments of global water supply, ecosystem integrity and water footprints. AF metrics can be quantified with spatially explicit process-based models, which might be overly time-consuming and data-intensive for this purpose, or with empirical regression models that predict AF metrics based on climate and catchment characteristics. Yet, so far, regression models have mostly been developed at a regional scale and the extent to which they can be extrapolated to other regions is not known. We developed global-scale regression models that quantify mean, maximum and minimum AF as function of catchment area and catchment-averaged slope, elevation, and mean, maximum and minimum annual precipitation and air temperature. We then used these models to obtain global 30 arc-seconds (~1 km) maps of mean, maximum and minimum AF for each year from 1960 through 2015, based on a newly developed hydrologically conditioned digital elevation model. We calibrated our regression models based on observations of discharge and catchment characteristics from about 4,000 catchments worldwide, ranging from 10^0 to 10^6 km^2 in size, and validated them against independent measurements as well as the output of a number of process-based global hydrological models (GHMs). The variance explained by our regression models ranged up to 90% and the performance of the models compared well with the performance of existing GHMs. Yet, our AF maps provide a level of spatial detail that cannot yet be achieved by current GHMs.
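
    The regression approach can be illustrated with a toy power-law model fitted in log space. The two predictors, the synthetic data and the recovered coefficients below are placeholders, not the fitted global model.

```python
import numpy as np

# Power-law regression AF = k * area^b1 * precip^b2, fitted as OLS in logs.
rng = np.random.default_rng(11)
n = 4000                                        # ~ number of gauged catchments
area = 10 ** rng.uniform(0, 6, n)               # catchment area, km^2
precip = rng.uniform(300, 2500, n)              # mean annual precipitation, mm
af = 1e-6 * area * precip * rng.lognormal(0, 0.3, n)  # synthetic AF, m^3/s

X = np.column_stack([np.ones(n), np.log(area), np.log(precip)])
beta, *_ = np.linalg.lstsq(X, np.log(af), rcond=None)
resid = np.log(af) - X @ beta
r2 = 1 - resid.var() / np.log(af).var()
print(beta, r2)   # recovers exponents near 1 with high explained variance
```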

  4. Means and Variances without Calculus

    Science.gov (United States)

    Kinney, John J.

    2005-01-01

    This article gives a method of finding discrete approximations to continuous probability density functions and shows examples of its use, allowing students without calculus access to the calculation of means and variances.

  5. Large area bulk superconductors

    Science.gov (United States)

    Miller, Dean J.; Field, Michael B.

    2002-01-01

    A bulk superconductor having a thickness of not less than about 100 microns is carried by a polycrystalline textured substrate having misorientation angles at the surface thereof not greater than about 15°; the bulk superconductor may have a thickness of not less than about 100 microns and a surface area of not less than about 50 cm^2. The textured substrate may have a thickness not less than about 10 microns and misorientation angles at the surface thereof not greater than about 15°. Also disclosed is a process of manufacturing the bulk superconductor and the polycrystalline biaxially textured substrate material.

  6. COVAR: Computer Program for Multifactor Relative Risks and Tests of Hypotheses Using a Variance-Covariance Matrix from Linear and Log-Linear Regression

    Directory of Open Access Journals (Sweden)

    Leif E. Peterson

    1997-11-01

    Full Text Available A computer program for multifactor relative risks, confidence limits, and tests of hypotheses using regression coefficients and a variance-covariance matrix obtained from a previous additive or multiplicative regression analysis is described in detail. Data used by the program can be stored and input from an external disk-file or entered via the keyboard. The output contains a list of the input data, point estimates of single or joint effects, confidence intervals and tests of hypotheses based on a minimum modified chi-square statistic. Availability of the program is also discussed.

  7. Beyond the Mean: Sensitivities of the Variance of Population Growth.

    Science.gov (United States)

    Trotter, Meredith V; Krishna-Kumar, Siddharth; Tuljapurkar, Shripad

    2013-03-01

    Populations in variable environments are described by both a mean growth rate and a variance of stochastic population growth. Increasing variance will increase the width of confidence bounds around estimates of population size, growth, probability of and time to quasi-extinction. However, traditional sensitivity analyses of stochastic matrix models only consider the sensitivity of the mean growth rate. We derive an exact method for calculating the sensitivity of the variance in population growth to changes in demographic parameters. Sensitivities of the variance also allow a new sensitivity calculation for the cumulative probability of quasi-extinction. We apply this new analysis tool to an empirical dataset on at-risk polar bears to demonstrate its utility in conservation biology. We find that in many cases a change in life history parameters will increase both the mean and variance of population growth of polar bears. This counterintuitive behaviour of the variance complicates predictions about overall population impacts of management interventions. Sensitivity calculations for cumulative extinction risk factor in changes to both mean and variance, providing a highly useful quantitative tool for conservation management. The mean stochastic growth rate and its sensitivities do not fully describe the dynamics of population growth. The use of variance sensitivities gives a more complete understanding of population dynamics and facilitates the calculation of new sensitivities for extinction processes.

  8. Evaluation of Mean and Variance Integrals without Integration

    Science.gov (United States)

    Joarder, A. H.; Omar, M. H.

    2007-01-01

    The mean and variance of some continuous distributions, in particular the exponentially decreasing probability distribution and the normal distribution, are considered. Since they involve integration by parts, many students do not feel comfortable. In this note, a technique is demonstrated for deriving mean and variance through differential…

  9. Approximate zero-variance Monte Carlo estimation of Markovian unreliability

    International Nuclear Information System (INIS)

    Delcoux, J.L.; Labeau, P.E.; Devooght, J.

    1997-01-01

    Monte Carlo simulation has become an important tool for the estimation of reliability characteristics, since conventional numerical methods are no longer efficient as the size of the system to be solved increases. However, evaluating by simulation the probability of occurrence of very rare events means playing a very large number of histories of the system, which leads to unacceptable computation times. Acceleration and variance reduction techniques have to be worked out. We show in this paper how to write the equations of Markovian reliability as a transport problem, and how the well-known zero-variance scheme can be adapted to this application. But such a method is always specific to the estimation of one quantity, while a Monte Carlo simulation allows one to perform simultaneously estimations of diverse quantities. Therefore, the estimation of one of them could be made more accurate while at the same time degrading the variance of the other estimations. We propound here a method to reduce simultaneously the variance for several quantities, by using probability laws that would lead to zero variance in the estimation of a mean of these quantities. Just like the zero-variance one, the method we propound is impossible to perform exactly. However, we show that simple approximations of it may be very efficient. (author)
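
    The zero-variance principle is easy to demonstrate in one dimension: sampling proportionally to the integrand times the density makes every weighted sample equal to the target mean, at the price of needing that mean in advance, which is why schemes such as the one above only approximate it. The integrand below is an illustrative choice.

```python
import numpy as np

# Toy setting: I = E[f(X)] with f(x) = x and X ~ Exp(1), so I = 1 and the
# zero-variance density q*(x) = f(x) p(x) / I = x e^{-x} is Gamma(2, 1).
rng = np.random.default_rng(9)
n = 10_000

x = rng.exponential(1.0, n)                     # ordinary Monte Carlo
plain = x                                       # f(x) = x

y = rng.gamma(2.0, 1.0, n)                      # samples from q*
zero_var = (y * np.exp(-y)) / (y * np.exp(-y))  # f(y) p(y) / q*(y) == 1

print(plain.mean(), plain.std(ddof=1))          # estimate ~1, spread ~1
print(zero_var.mean(), zero_var.std(ddof=1))    # exactly 1, spread 0
```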

  10. Continuous-Time Mean-Variance Portfolio Selection under the CEV Process

    Directory of Open Access Journals (Sweden)

    Hui-qiang Ma

    2014-01-01

    Full Text Available We consider a continuous-time mean-variance portfolio selection model when stock price follows the constant elasticity of variance (CEV process. The aim of this paper is to derive an optimal portfolio strategy and the efficient frontier. The mean-variance portfolio selection problem is formulated as a linearly constrained convex program problem. By employing the Lagrange multiplier method and stochastic optimal control theory, we obtain the optimal portfolio strategy and mean-variance efficient frontier analytically. The results show that the mean-variance efficient frontier is still a parabola in the mean-variance plane, and the optimal strategies depend not only on the total wealth but also on the stock price. Moreover, some numerical examples are given to analyze the sensitivity of the efficient frontier with respect to the elasticity parameter and to illustrate the results presented in this paper. The numerical results show that the price of risk decreases as the elasticity coefficient increases.

  11. Variance in binary stellar population synthesis

    Science.gov (United States)

    Breivik, Katelyn; Larson, Shane L.

    2016-03-01

    In the years preceding LISA, Milky Way compact binary population simulations can be used to inform the science capabilities of the mission. Galactic population simulation efforts generally focus on high fidelity models that require extensive computational power to produce a single simulated population for each model. Each simulated population represents an incomplete sample of the functions governing compact binary evolution, thus introducing variance from one simulation to another. We present a rapid Monte Carlo population simulation technique that can simulate thousands of populations in less than a week, thus allowing a full exploration of the variance associated with a binary stellar evolution model.

  12. Airbreathing Propulsion Fuels and Energy Exploratory Research and Development (APFEERD) Sub Task: Review of Bulk Physical Properties of Synthesized Hydrocarbon:Kerosenes and Blends

    Science.gov (United States)

    2017-06-01

    ... evaluation concludes, based on fundamental physical chemistry, that all hydrocarbon kerosenes that meet the minimum density requirement will have bulk... Keywords: alternative jet fuels; renewable jet fuel; fuel physical properties; fuel chemistry; fuel properties.

  13. A Mean variance analysis of arbitrage portfolios

    Science.gov (United States)

    Fang, Shuhong

    2007-03-01

    Based on the careful analysis of the definition of arbitrage portfolio and its return, the author presents a mean-variance analysis of the return of arbitrage portfolios, which implies that Korkie and Turtle's results ( B. Korkie, H.J. Turtle, A mean-variance analysis of self-financing portfolios, Manage. Sci. 48 (2002) 427-443) are misleading. A practical example is given to show the difference between the arbitrage portfolio frontier and the usual portfolio frontier.

  14. Mean-Variance Optimization in Markov Decision Processes

    OpenAIRE

    Mannor, Shie; Tsitsiklis, John N.

    2011-01-01

    We consider finite horizon Markov decision processes under performance measures that involve both the mean and the variance of the cumulative reward. We show that either randomized or history-based policies can improve performance. We prove that the complexity of computing a policy that maximizes the mean reward under a variance constraint is NP-hard for some cases, and strongly NP-hard for others. We finally offer pseudo-polynomial exact and approximation algorithms.

  15. Capturing Option Anomalies with a Variance-Dependent Pricing Kernel

    DEFF Research Database (Denmark)

    Christoffersen, Peter; Heston, Steven; Jacobs, Kris

    2013-01-01

    We develop a GARCH option model with a new pricing kernel allowing for a variance premium. While the pricing kernel is monotonic in the stock return and in variance, its projection onto the stock return is nonmonotonic. A negative variance premium makes it U shaped. We present new semiparametric evidence to confirm this U-shaped relationship between the risk-neutral and physical probability densities. The new pricing kernel substantially improves our ability to reconcile the time-series properties of stock returns with the cross-section of option prices. It provides a unified explanation for the implied volatility puzzle, the overreaction of long-term options to changes in short-term variance, and the fat tails of the risk-neutral return distribution relative to the physical distribution.

  16. Gender Variance and Educational Psychology: Implications for Practice

    Science.gov (United States)

    Yavuz, Carrie

    2016-01-01

    The area of gender variance appears to be more visible in both the media and everyday life. Within educational psychology literature gender variance remains underrepresented. The positioning of educational psychologists working across the three levels of child and family, school or establishment and education authority/council, means that they are…

  17. Demonstration of a zero-variance based scheme for variance reduction to a mini-core Monte Carlo calculation

    Energy Technology Data Exchange (ETDEWEB)

    Christoforou, Stavros, E-mail: stavros.christoforou@gmail.com [Kirinthou 17, 34100, Chalkida (Greece); Hoogenboom, J. Eduard, E-mail: j.e.hoogenboom@tudelft.nl [Department of Applied Sciences, Delft University of Technology (Netherlands)

    2011-07-01

    A zero-variance based scheme is implemented and tested in the MCNP5 Monte Carlo code. The scheme is applied to a mini-core reactor using the adjoint function obtained from a deterministic calculation for biasing the transport kernels. It is demonstrated that the variance of the k_eff estimate is halved compared to a standard criticality calculation. In addition, the biasing does not affect source distribution convergence of the system. However, since the code lacked optimisations for speed, we were not able to demonstrate an appropriate increase in the efficiency of the calculation, because of the higher CPU time cost. (author)

  18. Variance-in-Mean Effects of the Long Forward-Rate Slope

    DEFF Research Database (Denmark)

    Christiansen, Charlotte

    2005-01-01

    This paper contains an empirical analysis of the dependence of the long forward-rate slope on the long-rate variance. The long forward-rate slope and the long rate are described by a bivariate GARCH-in-mean model. In accordance with theory, a negative long-rate variance-in-mean effect for the long forward-rate slope is documented. Thus, the greater the long-rate variance, the steeper the long forward-rate curve slopes downward (the long forward-rate slope is negative). The variance-in-mean effect is both statistically and economically significant.

  19. Variance-based sensitivity indices for models with dependent inputs

    International Nuclear Information System (INIS)

    Mara, Thierry A.; Tarantola, Stefano

    2012-01-01

    Computational models are intensively used in engineering for risk analysis or prediction of future outcomes. Uncertainty and sensitivity analyses are of great help in these purposes. Although several methods exist to perform variance-based sensitivity analysis of model output with independent inputs only a few are proposed in the literature in the case of dependent inputs. This is explained by the fact that the theoretical framework for the independent case is set and a univocal set of variance-based sensitivity indices is defined. In the present work, we propose a set of variance-based sensitivity indices to perform sensitivity analysis of models with dependent inputs. These measures allow us to distinguish between the mutual dependent contribution and the independent contribution of an input to the model response variance. Their definition relies on a specific orthogonalisation of the inputs and ANOVA-representations of the model output. In the applications, we show the interest of the new sensitivity indices for model simplification setting. - Highlights: ► Uncertainty and sensitivity analyses are of great help in engineering. ► Several methods exist to perform variance-based sensitivity analysis of model output with independent inputs. ► We define a set of variance-based sensitivity indices for models with dependent inputs. ► Inputs mutual contributions are distinguished from their independent contributions. ► Analytical and computational tests are performed and discussed.
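
    For background, the standard pick-freeze estimator of first-order variance-based indices in the independent-input case, the setting this paper generalizes to dependent inputs, can be sketched as follows. The test function is an illustrative choice with one deliberately inert input.

```python
import numpy as np

# Pick-freeze Sobol estimator: correlate model outputs from two input
# samples that share only the coordinate under study.
rng = np.random.default_rng(21)
n, d = 200_000, 3
model = lambda x: x[:, 0] + 2 * x[:, 1] ** 2   # input x2 is inert on purpose

a = rng.uniform(-1, 1, (n, d))
b = rng.uniform(-1, 1, (n, d))
ya = model(a)

for i in range(d):
    mixed = b.copy()
    mixed[:, i] = a[:, i]                      # freeze input i from sample A
    s_i = np.cov(ya, model(mixed))[0, 1] / ya.var(ddof=1)
    print(i, round(s_i, 3))                    # ~0.48, ~0.52, ~0.0
```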

  20. Simultaneous Monte Carlo zero-variance estimates of several correlated means

    International Nuclear Information System (INIS)

    Booth, T.E.

    1997-08-01

    Zero variance procedures have been in existence since the dawn of Monte Carlo. Previous works all treat the problem of zero variance solutions for a single tally. One often wants to get low variance solutions to more than one tally. When the sets of random walks needed for two tallies are similar, it is more efficient to do zero variance biasing for both tallies in the same Monte Carlo run, instead of two separate runs. The theory presented here correlates the random walks of particles by the similarity of their tallies. Particles with dissimilar tallies rapidly become uncorrelated whereas particles with similar tallies will stay correlated through most of their random walk. The theory herein should allow practitioners to make efficient use of zero-variance biasing procedures in practical problems

  1. Variance swap payoffs, risk premia and extreme market conditions

    DEFF Research Database (Denmark)

    Rombouts, Jeroen V.K.; Stentoft, Lars; Violante, Francesco

    This paper estimates the Variance Risk Premium (VRP) directly from synthetic variance swap payoffs. Since variance swap payoffs are highly volatile, we extract the VRP by using signal extraction techniques based on a state-space representation of our model in combination with a simple economic ... The latter variables and the VRP generate different return predictability on the major US indices. A factor model is proposed to extract a market VRP which turns out to be priced when considering Fama and French portfolios.

  2. Estimating quadratic variation using realized variance

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Shephard, N.

    2002-01-01

    with a rather general SV model - which is a special case of the semimartingale model. Then QV is integrated variance and we can derive the asymptotic distribution of the RV and its rate of convergence. These results do not require us to specify a model for either the drift or volatility functions, although we have to impose some weak regularity assumptions. We illustrate the use of the limit theory on some exchange rate data and some stock data. We show that even with large values of M the RV is sometimes a quite noisy estimator of integrated variance. Copyright © 2002 John Wiley & Sons, Ltd.

  3. Dynamics of Variance Risk Premia, Investors' Sentiment and Return Predictability

    DEFF Research Database (Denmark)

    Rombouts, Jerome V.K.; Stentoft, Lars; Violante, Francesco

    We develop a joint framework linking the physical variance and its risk neutral expectation implying variance risk premia that are persistent, appropriately reacting to changes in level and variability of the variance and naturally satisfying the sign constraint. Using option market data and real ... events and only marginally by the premium associated with normal price fluctuations.

  4. Rising above the Minimum Wage.

    Science.gov (United States)

    Even, William; Macpherson, David

    An in-depth analysis was made of how quickly most people move up the wage scale from minimum wage, what factors influence their progress, and how minimum wage increases affect wage growth above the minimum. Very few workers remain at the minimum wage over the long run, according to this study of data drawn from the 1977-78 May Current Population…

  5. Developing bulk exchange spring magnets

    Science.gov (United States)

    Mccall, Scott K.; Kuntz, Joshua D.

    2017-06-27

    A method of making a bulk exchange spring magnet by providing a magnetically soft material, providing a hard magnetic material, and producing a composite of said magnetically soft material and said hard magnetic material to make the bulk exchange spring magnet. The step of producing a composite of magnetically soft material and hard magnetic material is accomplished by electrophoretic deposition of the magnetically soft material and the hard magnetic material to make the bulk exchange spring magnet.

  6. Estimating High-Frequency Based (Co-) Variances: A Unified Approach

    DEFF Research Database (Denmark)

    Voev, Valeri; Nolte, Ingmar

    We propose a unified framework for estimating integrated variances and covariances based on simple OLS regressions, allowing for a general market microstructure noise specification. We show that our estimators can outperform, in terms of the root mean squared error criterion, the most recent and commonly applied estimators, such as the realized kernels of Barndorff-Nielsen, Hansen, Lunde & Shephard (2006), the two-scales realized variance of Zhang, Mykland & Aït-Sahalia (2005), the Hayashi & Yoshida (2005) covariance estimator, and the realized variance and covariance with the optimal sampling...

  7. Modelling of bulk superconductor magnetization

    International Nuclear Information System (INIS)

    Ainslie, M D; Fujishiro, H

    2015-01-01

    This paper presents a topical review of the current state of the art in modelling the magnetization of bulk superconductors, including both (RE)BCO (where RE = rare earth or Y) and MgB2 materials. Such modelling is a powerful tool to understand the physical mechanisms of their magnetization, to assist in interpretation of experimental results, and to predict the performance of practical bulk superconductor-based devices, which is particularly important as many superconducting applications head towards the commercialization stage of their development in the coming years. In addition to the analytical and numerical techniques currently used by researchers for modelling such materials, the commonly used practical techniques to magnetize bulk superconductors are summarized with a particular focus on pulsed field magnetization (PFM), which is promising as a compact, mobile and relatively inexpensive magnetizing technique. A number of numerical models developed to analyse the issues related to PFM and optimise the technique are described in detail, including understanding the dynamics of the magnetic flux penetration and the influence of material inhomogeneities, thermal properties, pulse duration, magnitude and shape, and the shape of the magnetization coil(s). The effect of externally applied magnetic fields in different configurations on the attenuation of the trapped field is also discussed. A number of novel and hybrid bulk superconductor structures are described, including improved thermal conductivity structures and ferromagnet–superconductor structures, which have been designed to overcome some of the issues related to bulk superconductors and their magnetization and enhance the intrinsic properties of bulk superconductors acting as trapped field magnets. Finally, the use of hollow bulk cylinders/tubes for shielding is analysed. (topical review)

  8. Demonstration of a zero-variance based scheme for variance reduction to a mini-core Monte Carlo calculation

    International Nuclear Information System (INIS)

    Christoforou, Stavros; Hoogenboom, J. Eduard

    2011-01-01

    A zero-variance based scheme is implemented and tested in the MCNP5 Monte Carlo code. The scheme is applied to a mini-core reactor using the adjoint function obtained from a deterministic calculation for biasing the transport kernels. It is demonstrated that the variance of the k_eff estimate is halved compared to a standard criticality calculation. In addition, the biasing does not affect source distribution convergence of the system. However, since the code lacked optimisations for speed, we were not able to demonstrate an appropriate increase in the efficiency of the calculation, because of the higher CPU time cost. (author)

  9. Mining the bulk positron lifetime

    International Nuclear Information System (INIS)

    Aourag, H.; Guittom, A.

    2009-01-01

    We introduce a new approach to investigate the bulk positron lifetimes of new systems based on data-mining techniques. Through data mining of bulk positron lifetimes, we demonstrate the ability to predict the positron lifetimes of new semiconductors on the basis of available semiconductor data already studied. Informatics techniques have been applied to bulk positron lifetimes for different tetrahedrally bonded semiconductors in order to discover computational design rules. (copyright 2009 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  10. The Genealogical Consequences of Fecundity Variance Polymorphism

    Science.gov (United States)

    Taylor, Jesse E.

    2009-01-01

    The genealogical consequences of within-generation fecundity variance polymorphism are studied using coalescent processes structured by genetic backgrounds. I show that these processes have three distinctive features. The first is that the coalescent rates within backgrounds are not jointly proportional to the infinitesimal variance, but instead depend only on the frequencies and traits of genotypes containing each allele. Second, the coalescent processes at unlinked loci are correlated with the genealogy at the selected locus; i.e., fecundity variance polymorphism has a genomewide impact on genealogies. Third, in diploid models, there are infinitely many combinations of fecundity distributions that have the same diffusion approximation but distinct coalescent processes; i.e., in this class of models, ancestral processes and allele frequency dynamics are not in one-to-one correspondence. Similar properties are expected to hold in models that allow for heritable variation in other traits that affect the coalescent effective population size, such as sex ratio or fecundity and survival schedules. PMID:19433628

  11. Survival of Direct Posterior Composites With and Without a Bulk Fill Base.

    Science.gov (United States)

    McGuirk, C; Hussain, F; Millar, B J

    2017-09-01

    Direct composite restorations are increasingly popular, and a flowable bulk-fill base material (SDR, Dentsply) claims to minimise stress through a more flexible polymerisation process. This retrospective audit of restorations placed in general practice compares SDR-based restorations with conventional composite restorations. Restorations were all placed by one operator using a similar clinical technique and were audited as Group G, placed with a conventional layering composite (G-aenial, GC), and Group S, which had a bulk-fill base of SDR (Dentsply) covered with G-aenial (GC). Data regarding survival, post-operative sensitivity and mode of failure were recorded and analysed. In total, 54 Group S restorations and 71 Group G restorations were followed for a minimum of 24 months. Group S had 92.6% survival and Group G 93%. Group S was more prone to failure by tooth fracture (p=0.033). In both groups failure was more likely in larger cavities, both in those with an increased number of surfaces (p<0.001) and in those with cuspal coverage (p=0.004). There appears to be similar survival of the two techniques in the short term, although there were significantly more tooth fractures in teeth restored with SDR. Copyright© 2017 Dennis Barber Ltd.

  12. Employment effects of minimum wages

    OpenAIRE

    Neumark, David

    2014-01-01

    The potential benefits of higher minimum wages come from the higher wages for affected workers, some of whom are in low-income families. The potential downside is that a higher minimum wage may discourage employers from using the low-wage, low-skill workers that minimum wages are intended to help. Research findings are not unanimous, but evidence from many countries suggests that minimum wages reduce the jobs available to low-skill workers.

  13. On Mean-Variance Analysis

    OpenAIRE

    Li, Yang; Pirvu, Traian A

    2011-01-01

    This paper considers the mean variance portfolio management problem. We examine portfolios which contain both primary and derivative securities. The challenge in this context is due to the portfolio's nonlinearities. The delta-gamma approximation is employed to overcome it. Thus, the optimization problem is reduced to a well posed quadratic program. The methodology developed in this paper can also be applied to pricing and hedging in incomplete markets.
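
    The delta-gamma device mentioned above turns an option position's P&L into a quadratic form whose variance is available in closed form. The sketch below checks the closed form against Monte Carlo for a single underlying; the position parameters are illustrative.

```python
import numpy as np

# dP ~ delta*dS + 0.5*gamma*dS^2. For dS ~ N(0, sigma^2) the approximate
# P&L variance is delta^2 sigma^2 + gamma^2 sigma^4 / 2.
delta, gamma, sigma = 0.6, 0.05, 2.0   # illustrative position parameters

rng = np.random.default_rng(13)
ds = rng.normal(0.0, sigma, 1_000_000)
pnl = delta * ds + 0.5 * gamma * ds ** 2

print(pnl.var())                                              # Monte Carlo
print(delta ** 2 * sigma ** 2 + gamma ** 2 * sigma ** 4 / 2)  # closed form
```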

  14. Thermal conductivity engineering of bulk and one-dimensional Si-Ge nanoarchitectures.

    Science.gov (United States)

    Kandemir, Ali; Ozden, Ayberk; Cagin, Tahir; Sevik, Cem

    2017-01-01

    Various theoretical and experimental methods are utilized to investigate the thermal conductivity of nanostructured materials; this is a critical parameter to increase performance of thermoelectric devices. Among these methods, equilibrium molecular dynamics (EMD) is an accurate technique to predict lattice thermal conductivity. In this study, by means of systematic EMD simulations, thermal conductivity of bulk Si-Ge structures (pristine, alloy and superlattice) and their nanostructured one dimensional forms with square and circular cross-section geometries (asymmetric and symmetric) are calculated for different crystallographic directions. A comprehensive temperature analysis is evaluated for selected structures as well. The results show that one-dimensional structures are superior candidates in terms of their low lattice thermal conductivity and thermal conductivity tunability by nanostructuring, such as by diameter modulation, interface roughness, periodicity and number of interfaces. We find that thermal conductivity decreases with smaller diameters or cross section areas. Furthermore, interface roughness decreases thermal conductivity with a profound impact. Moreover, we predicted that there is a specific periodicity that gives minimum thermal conductivity in symmetric superlattice structures. The decreasing thermal conductivity is due to the reducing phonon movement in the system due to the effect of the number of interfaces that determine regimes of ballistic and wave transport phenomena. In some nanostructures, such as nanowire superlattices, thermal conductivity of the Si/Ge system can be reduced to nearly twice that of an amorphous silicon thermal conductivity. Additionally, it is found that one crystal orientation, ⟨100⟩, is better than the ⟨111⟩ crystal orientation in one-dimensional and bulk SiGe systems. Our results clearly point out the importance of lattice thermal conductivity

  15. Handling of bulk solids theory and practice

    CERN Document Server

    Shamlou, P A

    1990-01-01

    Handling of Bulk Solids provides a comprehensive discussion of the field of solids flow and handling in the process industries. Presentation of the subject follows classical lines of separate discussions for each topic, so each chapter is self-contained and can be read on its own. Topics discussed include bulk solids flow and handling properties; pressure profiles in bulk solids storage vessels; the design of storage silos for reliable discharge of bulk materials; gravity flow of particulate materials from storage vessels; pneumatic transportation of bulk solids; and the hazards of solid-mater

  16. Li-Doped Ionic Liquid Electrolytes: From Bulk Phase to Interfacial Behavior

    Science.gov (United States)

    Haskins, Justin B.; Lawson, John W.

    2016-01-01

    Ionic liquids have been proposed as candidate electrolytes for high-energy density, rechargeable batteries. We present an extensive computational analysis supported by experimental comparisons of the bulk and interfacial properties of a representative set of these electrolytes as a function of Li-salt doping. We begin by investigating the bulk electrolyte using quantum chemistry and ab initio molecular dynamics to elucidate the solvation structure of Li(+). MD simulations using the polarizable force field of Borodin and coworkers were then performed, from which we obtain an array of thermodynamic and transport properties. Excellent agreement is found with experiments for diffusion, ionic conductivity, and viscosity. Combining MD simulations with electronic structure computations, we computed the electrochemical window of the electrolytes across a range of Li(+)-doping levels and comment on the role of the liquid environment. Finally, we performed a suite of simulations of these Li-doped electrolytes at ideal electrified interfaces to evaluate the differential capacitance and the equilibrium Li(+) distribution in the double layer. The magnitude of differential capacitance is in good agreement with our experiments and exhibits the characteristic camel-shaped profile. In addition, the simulations reveal Li(+) to be highly localized to the second molecular layer of the double layer, which is supported by additional computations that find this layer to be a free energy minimum with respect to Li(+) translation.

  17. Modelling volatility by variance decomposition

    DEFF Research Database (Denmark)

    Amado, Cristina; Teräsvirta, Timo

    In this paper, we propose two parametric alternatives to the standard GARCH model. They allow the variance of the model to have a smooth time-varying structure of either additive or multiplicative type. The suggested parameterisations describe both nonlinearity and structural change in the condit...

  18. Variance heterogeneity in Saccharomyces cerevisiae expression data: trans-regulation and epistasis.

    Science.gov (United States)

    Nelson, Ronald M; Pettersson, Mats E; Li, Xidan; Carlborg, Örjan

    2013-01-01

    Here, we describe the results from the first variance heterogeneity Genome Wide Association Study (VGWAS) on yeast expression data. Using this forward genetics approach, we show that the genetic regulation of gene-expression in the budding yeast, Saccharomyces cerevisiae, includes mechanisms that can lead to variance heterogeneity in the expression between genotypes. Additionally, we performed a mean effect association study (GWAS). Comparing the mean and variance heterogeneity analyses, we find that the mean expression level is under genetic regulation from a larger absolute number of loci but that a higher proportion of the variance controlling loci were trans-regulated. Both mean and variance regulating loci cluster in regulatory hotspots that affect a large number of phenotypes; a single variance-controlling locus, mapping close to DIA2, was found to be involved in more than 10% of the significant associations. It has been suggested in the literature that variance-heterogeneity between the genotypes might be due to genetic interactions. We therefore screened the multi-locus genotype-phenotype maps for several traits where multiple associations were found, for indications of epistasis. Several examples of two and three locus genetic interactions were found to involve variance-controlling loci, with reports from the literature corroborating the functional connections between the loci. By using a new analytical approach to re-analyze a powerful existing dataset, we are thus able to both provide novel insights to the genetic mechanisms involved in the regulation of gene-expression in budding yeast and experimentally validate epistasis as an important mechanism underlying genetic variance-heterogeneity between genotypes.

  19. Ferromagnetic bulk glassy alloys

    International Nuclear Information System (INIS)

    Inoue, Akihisa; Makino, Akihiro; Mizushima, Takao

    2000-01-01

    This paper deals with the review on the formation, thermal stability and magnetic properties of the Fe-based bulk glassy alloys in as-cast bulk and melt-spun ribbon forms. A large supercooled liquid region over 50 K before crystallization was obtained in Fe-(Al, Ga)-(P, C, B, Si), Fe-(Cr, Mo, Nb)-(Al, Ga)-(P, C, B) and (Fe, Co, Ni)-Zr-M-B (M=Ti, Hf, V, Nb, Ta, Cr, Mo and W) systems and bulk glassy alloys were produced in a thickness range below 2 mm for the Fe-(Al, Ga)-(P, C, B, Si) system and 6 mm for the Fe-Co-(Zr, Nb, Ta)-(Mo, W)-B system by copper-mold casting. The ring-shaped glassy Fe-(Al, Ga)-(P, C, B, Si) alloys exhibit much better soft magnetic properties as compared with the ring-shaped alloy made from the melt-spun ribbon because of the formation of the unique domain structure. The good combination of high glass-forming ability and good soft magnetic properties indicates the possibility of future development as a new bulk glassy magnetic material

  20. A contribution to problems of clean transport of bulk materials

    Directory of Open Access Journals (Sweden)

    Fedora Jaroslav

    1996-03-01

    Full Text Available The lecture analyses the problems of development of the pipe conveyor with a rubber belt, the possibilities of its application in practice and the environmental aspects resulting from its application. The pipe conveyor is a new, perspective transport system. It enables transporting bulk materials (coal, crushed rock, coke, plant ash, fertilisers, limestone, lime, cellulose, salt, sugar, wheat and other materials) in specific operations (power plants, heating plants) with a minimum effect on the environment. The transported material is enclosed in the pipeline so that there is no escape of dust, smell or of the transported material itself. The lecture is aimed at: - the short description of the operating principle and design of the pipe conveyor, which was developed in the firm Matador Púchov in cooperation with the firm TEDO, - the analysis of experience in working some pipe conveyors which were under operation for a certain

  1. Genetic Variance in Homophobia: Evidence from Self- and Peer Reports.

    Science.gov (United States)

    Zapko-Willmes, Alexandra; Kandler, Christian

    2018-01-01

    The present twin study combined self- and peer assessments of twins' general homophobia targeting gay men in order to replicate previous behavior genetic findings across different rater perspectives and to disentangle self-rater-specific variance from common variance in self- and peer-reported homophobia (i.e., rater-consistent variance). We hypothesized rater-consistent variance in homophobia to be attributable to genetic and nonshared environmental effects, and self-rater-specific variance to be partially accounted for by genetic influences. A sample of 869 twins and 1329 peer raters completed a seven item scale containing cognitive, affective, and discriminatory homophobic tendencies. After correction for age and sex differences, we found most of the genetic contributions (62%) and significant nonshared environmental contributions (16%) to individual differences in self-reports on homophobia to be also reflected in peer-reported homophobia. A significant genetic component, however, was self-report-specific (38%), suggesting that self-assessments alone produce inflated heritability estimates to some degree. Different explanations are discussed.

  2. Predicting minimum uncertainties in the inversion of ocean color geophysical parameters based on Cramer-Rao bounds.

    Science.gov (United States)

    Jay, Sylvain; Guillaume, Mireille; Chami, Malik; Minghelli, Audrey; Deville, Yannick; Lafrance, Bruno; Serfaty, Véronique

    2018-01-22

    We present an analytical approach based on Cramer-Rao Bounds (CRBs) to investigate the uncertainties in estimated ocean color parameters resulting from the propagation of uncertainties in the bio-optical reflectance modeling through the inversion process. Based on given bio-optical and noise probabilistic models, CRBs can be computed efficiently for any set of ocean color parameters and any sensor configuration, directly providing the minimum estimation variance that can be possibly attained by any unbiased estimator of any targeted parameter. Here, CRBs are explicitly developed using (1) two water reflectance models corresponding to deep and shallow waters, resp., and (2) four probabilistic models describing the environmental noises observed within four Sentinel-2 MSI, HICO, Sentinel-3 OLCI and MODIS images, resp. For both deep and shallow waters, CRBs are shown to be consistent with the experimental estimation variances obtained using two published remote-sensing methods, while not requiring one to perform any inversion. CRBs are also used to investigate to what extent perfect a priori knowledge on one or several geophysical parameters can improve the estimation of remaining unknown parameters. For example, using pre-existing knowledge of bathymetry (e.g., derived from LiDAR) within the inversion is shown to greatly improve the retrieval of bottom cover for shallow waters. Finally, CRBs are shown to provide valuable information on the best estimation performances that may be achieved with the MSI, HICO, OLCI and MODIS configurations for a variety of oceanic, coastal and inland waters. CRBs are thus demonstrated to be an informative and efficient tool to characterize minimum uncertainties in inverted ocean color geophysical parameters.
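
    The CRB machinery is compact enough to sketch: invert the Fisher information built from the forward-model Jacobian and the noise covariance. The two-parameter forward model below is a stand-in, not one of the paper's bio-optical reflectance models.

```python
import numpy as np

def forward(theta, wl=np.linspace(400, 700, 50)):
    """Toy two-parameter 'reflectance' spectrum over wavelengths wl (nm)."""
    a, b = theta
    return a * np.exp(-wl / 500.0) + b

def crb(theta, noise_var, eps=1e-6):
    """CRB = inverse Fisher information for y = forward(theta) + n with
    n ~ N(0, noise_var * I), using a central-difference Jacobian."""
    p = len(theta)
    J = np.empty((forward(theta).size, p))
    for j in range(p):
        d = np.zeros(p)
        d[j] = eps
        J[:, j] = (forward(theta + d) - forward(theta - d)) / (2 * eps)
    fisher = J.T @ J / noise_var
    return np.linalg.inv(fisher)

bounds = crb(np.array([0.5, 0.05]), noise_var=1e-4)
print(np.sqrt(np.diag(bounds)))   # minimum attainable standard deviations
```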

  3. Decomposition of Variance for Spatial Cox Processes.

    Science.gov (United States)

    Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus

    2013-03-01

    Spatial Cox point processes provide a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models with additive or log-linear random intensity functions. We moreover consider a new and flexible class of pair correlation function models given in terms of normal variance mixture covariance functions. The proposed methodology is applied to point pattern data sets of locations of tropical rain forest trees.

  4. Grammatical and lexical variance in English

    CERN Document Server

    Quirk, Randolph

    2014-01-01

    Written by one of Britain's most distinguished linguists, this book is concerned with the phenomenon of variance in English grammar and vocabulary across regional, social, stylistic and temporal space.

  5. Variance decomposition in stochastic simulators.

    Science.gov (United States)

    Le Maître, O P; Knio, O M; Moraes, A

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
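
    As a concrete illustration of the variance-decomposition idea, the sketch below estimates first-order Sobol indices for a toy deterministic function with a standard pick-freeze estimator. It shows the orthogonal attribution of variance to individual inputs, but does not reproduce the paper's reformulation of reaction-channel Poisson noise.

```python
# Sketch: first-order Sobol indices via the pick-freeze (Saltelli) estimator.
# S_i is the fraction of output variance explained by input i alone.
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Toy response: input 0 dominates, input 1 is weak, input 2 is inert
    return 4.0 * x[:, 0] + x[:, 1] ** 2

n, d = 100_000, 3
A = rng.uniform(size=(n, d))
B = rng.uniform(size=(n, d))
fA, fB = model(A), model(B)
var_total = np.concatenate([fA, fB]).var()

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                       # "pick-freeze": column i from B
    S_i = np.mean(fB * (model(ABi) - fA)) / var_total
    print(f"S_{i} ~= {S_i:.3f}")
```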

  6. Variance decomposition in stochastic simulators

    Science.gov (United States)

    Le Maître, O. P.; Knio, O. M.; Moraes, A.

    2015-06-01

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  7. Variance decomposition in stochastic simulators

    Energy Technology Data Exchange (ETDEWEB)

    Le Maître, O. P., E-mail: olm@limsi.fr [LIMSI-CNRS, UPR 3251, Orsay (France); Knio, O. M., E-mail: knio@duke.edu [Department of Mechanical Engineering and Materials Science, Duke University, Durham, North Carolina 27708 (United States); Moraes, A., E-mail: alvaro.moraesgutierrez@kaust.edu.sa [King Abdullah University of Science and Technology, Thuwal (Saudi Arabia)

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  8. Variance-based Salt Body Reconstruction

    KAUST Repository

    Ovcharenko, Oleg

    2017-05-26

    Seismic inversions of salt bodies are challenging when updating velocity models based on Born approximation-inspired gradient methods. We propose a variance-based method for velocity model reconstruction in regions complicated by massive salt bodies. The novel idea lies in retrieving useful information from simultaneous updates corresponding to different single frequencies. Instead of the commonly used averaging of single-iteration monofrequency gradients, our algorithm iteratively reconstructs salt bodies in an outer loop based on updates from a set of multiple frequencies after a few iterations of full-waveform inversion. The variance among these updates is used to identify areas where considerable cycle-skipping occurs. In such areas, we update velocities by interpolating maximum velocities within a certain region. The result of several recursive interpolations is later used as a new starting model to improve results of conventional full-waveform inversion. An application on part of the BP 2004 model highlights the evolution of the proposed approach and demonstrates its effectiveness.

  9. Variance decomposition in stochastic simulators

    KAUST Repository

    Le Maî tre, O. P.; Knio, O. M.; Moraes, Alvaro

    2015-01-01

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  10. Minimum Wages and Poverty

    OpenAIRE

    Fields, Gary S.; Kanbur, Ravi

    2005-01-01

    Textbook analysis tells us that in a competitive labor market, the introduction of a minimum wage above the competitive equilibrium wage will cause unemployment. This paper makes two contributions to the basic theory of the minimum wage. First, we analyze the effects of a higher minimum wage in terms of poverty rather than in terms of unemployment. Second, we extend the standard textbook model to allow for income-sharing between the employed and the unemployed. We find that there are situation...

  11. Host nutrition alters the variance in parasite transmission potential.

    Science.gov (United States)

    Vale, Pedro F; Choisy, Marc; Little, Tom J

    2013-04-23

    The environmental conditions experienced by hosts are known to affect their mean parasite transmission potential. How different conditions may affect the variance of transmission potential has received less attention, but is an important question for disease management, especially if specific ecological contexts are more likely to foster a few extremely infectious hosts. Using the obligate-killing bacterium Pasteuria ramosa and its crustacean host Daphnia magna, we analysed how host nutrition affected the variance of individual parasite loads, and, therefore, transmission potential. Under low food, individual parasite loads showed similar mean and variance, following a Poisson distribution. By contrast, among well-nourished hosts, parasite loads were right-skewed and overdispersed, following a negative binomial distribution. Abundant food may, therefore, yield individuals causing potentially more transmission than the population average. Measuring both the mean and variance of individual parasite loads in controlled experimental infections may offer a useful way of revealing risk factors for potential highly infectious hosts.
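
    A sketch of the distributional contrast the abstract reports, on synthetic counts: a Poisson fit (variance equal to the mean) versus a method-of-moments negative binomial fit (overdispersed). The data and parameters are invented placeholders for measured spore loads.

```python
# Sketch: compare Poisson and negative binomial fits to individual parasite
# loads, echoing the low-food vs. well-fed contrast in the abstract.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
loads = rng.negative_binomial(n=2, p=0.1, size=200)   # overdispersed counts

mean, var = loads.mean(), loads.var(ddof=1)
print(f"mean={mean:.1f}, variance={var:.1f}, ratio={var / mean:.1f}")

# Poisson log-likelihood (single parameter: the mean)
ll_pois = stats.poisson.logpmf(loads, mean).sum()

# Negative binomial via method of moments: var = mu + mu^2 / k
k = mean**2 / (var - mean)
p = k / (k + mean)
ll_nb = stats.nbinom.logpmf(loads, k, p).sum()
print(f"logL Poisson={ll_pois:.1f}, logL NegBin={ll_nb:.1f}")
# A much higher NB likelihood signals overdispersion: a few hosts carry
# disproportionately many parasites.
```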

  12. Bulk-Fill Composites: Effectiveness of Cure With Poly- and Monowave Curing Lights and Modes.

    Science.gov (United States)

    Gan, J K; Yap, A U; Cheong, J W; Arista, N; Tan, Cbk

    This study compared the effectiveness of cure of bulk-fill composites using polywave light-emitting diode (LED; with various curing modes), monowave LED, and conventional halogen curing lights. The bulk-fill composites evaluated were Tetric N-Ceram bulk-fill (TNC), which contained a novel germanium photoinitiator (Ivocerin), and Smart Dentin Replacement (SDR). The composites were placed into black polyvinyl molds with cylindrical recesses of 4-mm height and 3-mm diameter and photopolymerized as follows: Bluephase N Polywave High (NH), 1200 mW/cm² (10 seconds); Bluephase N Polywave Low (NL), 650 mW/cm² (18.5 seconds); Bluephase N Polywave soft-start (NS), 0-650 mW/cm² (5 seconds) → 1200 mW/cm² (10 seconds); Bluephase N Monowave (NM), 800 mW/cm² (15 seconds); QHL75 (QH), 550 mW/cm² (21.8 seconds). Total energy output was fixed at 12,000 mJ/cm² for all lights/modes, with the exception of NS. The cured specimens were stored in a light-proof container at 37°C for 24 hours, and hardness (Knoop Hardness Number) of the top and bottom surfaces of the specimens was determined using a Knoop microhardness tester (n=6). Hardness data and bottom-to-top hardness ratios were subjected to statistical analysis using one-way analysis of variance/Scheffé's post hoc test at a significance level of 0.05. Hardness ratios ranged from 38.43% ± 5.19% to 49.25% ± 6.38% for TNC and 50.67% ± 1.54% to 67.62% ± 6.96% for SDR. For both bulk-fill composites, the highest hardness ratios were obtained with NM and the lowest hardness ratios with NL. While no significant difference in hardness ratios was observed between curing lights/modes for TNC, the hardness ratio obtained with NM was significantly higher than that obtained with NL for SDR.
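
    The statistical step here is a standard one-way ANOVA with a post hoc pairwise test. The sketch below reproduces that workflow on invented hardness-ratio data (n=6 per group, as in the study); scipy has no Scheffé test, so Tukey's HSD stands in for the post hoc comparison.

```python
# Sketch: one-way ANOVA across curing lights/modes plus post hoc pairwise
# comparisons. Hardness ratios are invented placeholders, not the study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
groups = {                                    # bottom-to-top hardness ratios (%)
    "NH": rng.normal(60, 4, 6),
    "NL": rng.normal(51, 4, 6),
    "NM": rng.normal(67, 4, 6),
}
F, p = stats.f_oneway(*groups.values())
print(f"ANOVA: F={F:.2f}, p={p:.4f}")

res = stats.tukey_hsd(*groups.values())       # post hoc pairwise comparisons
print(res)
```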

  13. Exploring variance in residential electricity consumption: Household features and building properties

    International Nuclear Information System (INIS)

    Bartusch, Cajsa; Odlare, Monica; Wallin, Fredrik; Wester, Lars

    2012-01-01

    Highlights: ► Statistical analyses of variance are of considerable value in identifying key indicators for policy update. ► Variance in residential electricity use is partly explained by household features. ► Variance in residential electricity use is partly explained by building properties. ► Household behavior has a profound impact on individual electricity use. -- Abstract: Improved means of controlling electricity consumption plays an important part in boosting energy efficiency in the Swedish power market. Developing policy instruments to that end requires more in-depth statistics on electricity use in the residential sector, among other things. The aim of the study has accordingly been to assess the extent of variance in annual electricity consumption in single-family homes as well as to estimate the impact of household features and building properties in this respect, using independent samples t-tests and one-way as well as univariate independent samples analyses of variance. Statistically significant variances associated with geographic area, heating system, number of family members, family composition, year of construction, electric water heater and electric underfloor heating have been established. The overall result of the analyses is nevertheless that variance in residential electricity consumption cannot be fully explained by independent variables related to household and building characteristics alone. As for the methodological approach, the results further suggest that methods for statistical analysis of variance are of considerable value in identifying key indicators for policy update and development.

  14. 75 FR 6151 - Minimum Capital

    Science.gov (United States)

    2010-02-08

    ... capital and reserve requirements to be issued by order or regulation with respect to a product or activity... minimum capital requirements. Section 1362(a) establishes a minimum capital level for the Enterprises... entities required under this section.\\6\\ \\3\\ The Bank Act's current minimum capital requirements apply to...

  15. A Pareto-Improving Minimum Wage

    OpenAIRE

    Eliav Danziger; Leif Danziger

    2014-01-01

    This paper shows that a graduated minimum wage, in contrast to a constant minimum wage, can provide a strict Pareto improvement over what can be achieved with an optimal income tax. The reason is that a graduated minimum wage requires high-productivity workers to work more to earn the same income as low-productivity workers, which makes it more difficult for the former to mimic the latter. In effect, a graduated minimum wage allows the low-productivity workers to benefit from second-degree pr...

  16. Discussion on variance reduction technique for shielding

    Energy Technology Data Exchange (ETDEWEB)

    Maekawa, Fujio [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1998-03-01

    As part of the engineering design activity of the International Thermonuclear Experimental Reactor (ITER), a shielding experiment on type 316 stainless steel (SS316) and on the compound system of SS316 and water was carried out using the D-T neutron source of FNS at the Japan Atomic Energy Research Institute. In these analyses, however, enormous working time and computing time were required to determine the Weight Window parameters, and the Weight Window variance reduction method of the MCNP code proved limiting and complicated to apply. To avoid this difficulty, the effectiveness of variance reduction by the cell importance method was investigated. The conditions of calculation in all cases are shown. As the results, the distribution of fractional standard deviation (FSD) of the neutron and gamma-ray fluxes along the shield depth is reported. There is an optimal importance assignment: when the importance is increased at the same rate as the attenuation of the neutron or gamma-ray flux, optimal variance reduction is achieved. (K.I.)

  17. Capturing option anomalies with a variance-dependent pricing kernel

    NARCIS (Netherlands)

    Christoffersen, P.; Heston, S.; Jacobs, K.

    2013-01-01

    We develop a GARCH option model with a variance premium by combining the Heston-Nandi (2000) dynamic with a new pricing kernel that nests Rubinstein (1976) and Brennan (1979). While the pricing kernel is monotonic in the stock return and in variance, its projection onto the stock return is

  18. 29 CFR 1904.38 - Variances from the recordkeeping rule.

    Science.gov (United States)

    2010-07-01

    ..., DEPARTMENT OF LABOR RECORDING AND REPORTING OCCUPATIONAL INJURIES AND ILLNESSES Other OSHA Injury and Illness... he or she finds appropriate. (iv) If the Assistant Secretary grants your variance petition, OSHA will... Secretary is reviewing your variance petition. (4) If I have already been cited by OSHA for not following...

  19. Analysis of ulnar variance as a risk factor for developing scaphoid nonunion.

    Science.gov (United States)

    Lirola-Palmero, S; Salvà-Coll, G; Terrades-Cladera, F J

    2015-01-01

    Ulnar variance may be a risk factor for developing scaphoid nonunion. A review was made of the posteroanterior wrist radiographs of 95 patients who were diagnosed with scaphoid fracture. All fractures with displacement of less than 1 mm treated conservatively were included. Ulnar variance was measured in standard posteroanterior wrist radiographs of all 95 patients. Eighteen patients (19%) developed scaphoid nonunion, with a mean ulnar variance of -1.34 (±0.85) mm (CI -2.25 to -0.41). Seventy-seven patients (81%) healed correctly, with a mean ulnar variance of -0.04 (±1.85) mm (CI -0.46 to 0.38). A significant difference was observed in the distribution of ulnar variance between the groups with ulnar variance less than -1 mm and greater than -1 mm. Patients with ulnar variance less than -1 mm had a greater risk of developing scaphoid nonunion, OR 4.58 (CI 1.51 to 13.89), p<.007. Copyright © 2014 SECOT. Published by Elsevier Espana. All rights reserved.

  20. Minimum Variance Beamforming for High Frame-Rate Ultrasound Imaging

    DEFF Research Database (Denmark)

    Holfort, Iben Kraglund; Gran, Fredrik; Jensen, Jørgen Arendt

    2007-01-01

    ... weights for each frequency sub-band. As opposed to the conventional Delay and Sum (DS) beamformer, this approach is dependent on the specific data. The performance of the proposed MV beamformer is tested on simulated synthetic aperture (SA) ultrasound data, obtained using Field II. For the simulations, a 7 MHz, 128-element, phased array transducer with lambda/2 spacing was used. Data are obtained using a single element as the transmitting aperture and all 128 elements as the receiving aperture. A full SA sequence consisting of 128 emissions was simulated by gliding the active transmitting element across the array. Data for 13 point targets and a circular cyst with a radius of 5 mm were simulated. The performance of the MV beamformer is compared to DS using boxcar weights and Hanning weights, and is quantified by the Full Width at Half Maximum (FWHM) and the peak side-lobe level (PSL) ...
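
    The heart of the method is the data-dependent minimum variance (Capon) weight computation. A minimal sketch, assuming a sample covariance of the received element signals plus diagonal loading and a broadside steering vector; the array data here are random placeholders rather than Field II simulations.

```python
# Sketch: minimum variance (Capon) apodization weights for one sub-band,
# w = R^{-1} a / (a^H R^{-1} a), with R the estimated element covariance
# and a the steering vector toward the focal point.
import numpy as np

rng = np.random.default_rng(3)
n_elem = 128
snapshots = (rng.standard_normal((n_elem, 200))
             + 1j * rng.standard_normal((n_elem, 200)))

R = snapshots @ snapshots.conj().T / snapshots.shape[1]
R += 1e-2 * np.trace(R).real / n_elem * np.eye(n_elem)  # diagonal loading

a = np.ones(n_elem, dtype=complex)            # broadside steering vector
Ri_a = np.linalg.solve(R, a)
w = Ri_a / (a.conj() @ Ri_a)                  # MV weights, unit gain at focus

beamformed = w.conj() @ snapshots             # adaptive sum over the aperture
print(np.abs(beamformed[:5]))
```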

  1. minimum variance estimation of yield parameters of rubber tree

    African Journals Online (AJOL)

    2013-03-01

    Mar 1, 2013 ... It is our opinion that Kalman filter is a robust estimator of the ... Kalman filter, parameter estimation, rubber clones, Chow failure test, autocorrelation, STAMP, data ...

  2. Decomposition of variance in terms of conditional means

    Directory of Open Access Journals (Sweden)

    Alessandro Figà Talamanca

    2013-05-01

    Full Text Available Two different sets of data are used to test an apparently new approach to the analysis of the variance of a numerical variable which depends on qualitative variables. We suggest that this approach be used to complement other existing techniques to study the interdependence of the variables involved. According to our method, the variance is expressed as a sum of orthogonal components, obtained as differences of conditional means, with respect to the qualitative characters. The resulting expression for the variance depends on the ordering in which the characters are considered. We suggest an algorithm which leads to an ordering which is deemed natural. The first set of data concerns the score achieved by a population of students on an entrance examination based on a multiple-choice test with 30 questions. In this case the qualitative characters are dyadic and correspond to correct or incorrect answer to each question. The second set of data concerns the delay to obtain the degree for a population of graduates of Italian universities. The variance in this case is analyzed with respect to a set of seven specific qualitative characters of the population studied (gender, previous education, working condition, parents' educational level, field of study, etc.).
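
    The decomposition rests on the law of total variance, Var(Y) = Var(E[Y|X]) + E[Var(Y|X)], applied character by character. A minimal sketch for a single dyadic character, with made-up score data:

```python
# Sketch: variance decomposition via conditional means for one qualitative
# character. "between" is the variance of the conditional means; "within"
# is the residual component.
import pandas as pd

df = pd.DataFrame({
    "correct_q1": [0, 1, 1, 0, 1, 1, 0, 1],   # dyadic character (wrong/right)
    "score":      [12, 25, 22, 10, 27, 24, 14, 21],
})
total = df["score"].var(ddof=0)
cond_means = df.groupby("correct_q1")["score"].transform("mean")
between = cond_means.var(ddof=0)              # Var(E[Y|X])
within = total - between                      # E[Var(Y|X)]
print(f"total={total:.2f}, between={between:.2f} "
      f"({between / total:.0%}), within={within:.2f}")
```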

  3. 42 CFR 456.522 - Content of request for variance.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Content of request for variance. 456.522 Section 456.522 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN... perform UR within the time requirements for which the variance is requested and its good faith efforts to...

  4. On the Endogeneity of the Mean-Variance Efficient Frontier.

    Science.gov (United States)

    Somerville, R. A.; O'Connell, Paul G. J.

    2002-01-01

    Explains that the endogeneity of the efficient frontier in the mean-variance model of portfolio selection is commonly obscured in portfolio selection literature and in widely used textbooks. Demonstrates endogeneity and discusses the impact of parameter changes on the mean-variance efficient frontier and on the beta coefficients of individual…

  5. Assessment of ulnar variance: a radiological investigation in a Dutch population

    Energy Technology Data Exchange (ETDEWEB)

    Schuurman, A.H. [Dept. of Plastic, Reconstructive and Hand Surgery, University Medical Centre, Utrecht (Netherlands); Dept. of Plastic Surgery, University Medical Centre, Utrecht (Netherlands); Maas, M.; Dijkstra, P.F. [Dept. of Radiology, Univ. of Amsterdam (Netherlands); Kauer, J.M.G. [Dept. of Anatomy and Embryology, Univ. of Nijmegen (Netherlands)

    2001-11-01

    Objective: A radiological study was performed to evaluate ulnar variance in 68 Dutch patients using an electronic digitizer compared with Palmer's concentric circle method. Using the digitizer method only, the effect of different wrist positions and grip on ulnar variance was then investigated. Finally the distribution of ulnar variance in the selected patients was investigated also using the digitizer method. Design and patients: All radiographs were performed with the wrist in a standard zero-rotation position (posteroanterior) and in supination (anteroposterior). Palmer's concentric circle method and an electronic digitizer connected to a personal computer were used to measure ulnar variance. The digitizer consists of a Plexiglas plate with an electronically activated grid beneath it. A radiograph is placed on the plate and a cursor activates a point on the grid. Three plots are marked on the radius and one plot on the most distal part of the ulnar head. The digitizer then determines the difference between a radius passing through the radius plots and the ulnar plot. Results and conclusions: Using the concentric circle method we found an ulna plus predominance, but an ulna minus predominance when using the digitizer method. Overall the ulnar variance distribution for Palmer's method was 41.9% ulna plus, 25.7% neutral and 32.4% ulna minus variance, and for the digitizer method was 40.4% ulna plus, 1.5% neutral and 58.1% ulna minus. The percentage ulnar variance greater than 1 mm on standard radiographs increased from 23% to 58% using the digitizer, with maximum grip, clearly demonstrating the (dynamic) effect of grip on ulnar variance. This almost threefold increase was found to be a significant difference. Significant differences were found between ulnar variance when different wrist positions were compared. (orig.)

  6. Minimum critical mass systems

    International Nuclear Information System (INIS)

    Dam, H. van; Leege, P.F.A. de

    1987-01-01

    An analysis is presented of thermal systems with minimum critical mass, based on the use of materials with optimum neutron moderating and reflecting properties. The optimum fissile material distributions in the systems are obtained by calculations with standard computer codes, extended with a routine for flat fuel importance search. It is shown that in the minimum critical mass configuration a considerable part of the fuel is positioned in the reflector region. For 239 Pu a minimum critical mass of 87 g is found, which is the lowest value reported hitherto. (author)

  7. Genetic control of residual variance of yearling weight in Nellore beef cattle.

    Science.gov (United States)

    Iung, L H S; Neves, H H R; Mulder, H A; Carvalheiro, R

    2017-04-01

    There is evidence for genetic variability in residual variance of livestock traits, which offers the potential for selection for increased uniformity of production. Different statistical approaches have been employed to study this topic; however, little is known about the concordance between them. The aim of our study was to investigate the genetic heterogeneity of residual variance on yearling weight (YW; 291.15 ± 46.67) in a Nellore beef cattle population; to compare the results of the statistical approaches, the two-step approach and the double hierarchical generalized linear model (DHGLM); and to evaluate the effectiveness of power transformation to accommodate scale differences. The comparison was based on genetic parameters, accuracy of EBV for residual variance, and cross-validation to assess predictive performance of both approaches. A total of 194,628 yearling weight records from 625 sires were used in the analysis. The results supported the hypothesis of genetic heterogeneity of residual variance on YW in Nellore beef cattle and the opportunity of selection, measured through the genetic coefficient of variation of residual variance (0.10 to 0.12 for the two-step approach and 0.17 for DHGLM, using an untransformed data set). However, low estimates of genetic variance associated with positive genetic correlations between mean and residual variance (about 0.20 for two-step and 0.76 for DHGLM for an untransformed data set) limit the genetic response to selection for uniformity of production while simultaneously increasing YW itself. Moreover, large sire families are needed to obtain accurate estimates of genetic merit for residual variance, as indicated by the low heritability estimates. A Box-Cox transformation was able to decrease the dependence of the variance on the mean and decreased the estimates of genetic parameters for residual variance. The transformation reduced but did not eliminate all the genetic heterogeneity of residual variance.

  8. A versatile omnibus test for detecting mean and variance heterogeneity.

    Science.gov (United States)

    Cao, Ying; Wei, Peng; Bailey, Matthew; Kauwe, John S K; Maxwell, Taylor J

    2014-01-01

    Recent research has revealed loci that display variance heterogeneity through various means such as biological disruption, linkage disequilibrium (LD), gene-by-gene (G × G), or gene-by-environment interaction. We propose a versatile likelihood ratio test that allows joint testing for mean and variance heterogeneity (LRT(MV)) or either effect alone (LRT(M) or LRT(V)) in the presence of covariates. Using extensive simulations for our method and others, we found that all parametric tests were sensitive to nonnormality regardless of any trait transformations. Coupling our test with the parametric bootstrap solves this issue. Using simulations and empirical data from a known mean-only functional variant, we demonstrate how LD can produce variance-heterogeneity loci (vQTL) in a predictable fashion based on differential allele frequencies, high D', and relatively low r² values. We propose that a joint test for mean and variance heterogeneity is more powerful than a variance-only test for detecting vQTL. This takes advantage of loci that also have mean effects without sacrificing much power to detect variance only effects. We discuss using vQTL as an approach to detect G × G interactions and also how vQTL are related to relationship loci, and how both can create prior hypothesis for each other and reveal the relationships between traits and possibly between components of a composite trait.
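
    A stripped-down sketch of the joint mean-variance likelihood ratio test: group-specific Gaussian means and variances versus a single shared pair, with 2(g-1) degrees of freedom. Covariates and the parametric bootstrap the abstract recommends for non-normal traits are omitted; the data are simulated.

```python
# Sketch: LRT_MV comparing per-group Gaussian (mean, variance) against a
# pooled model; the statistic is asymptotically chi-square with 2*(g-1) df.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
groups = [rng.normal(0.0, 1.0, 300),          # genotype AA
          rng.normal(0.2, 1.0, 300),          # genotype Aa
          rng.normal(0.4, 1.6, 300)]          # genotype aa: mean + variance shift

pooled = np.concatenate(groups)
ll_null = stats.norm.logpdf(pooled, pooled.mean(), pooled.std()).sum()
ll_alt = sum(stats.norm.logpdf(g, g.mean(), g.std()).sum() for g in groups)

lrt = 2 * (ll_alt - ll_null)
df = 2 * (len(groups) - 1)                    # extra mean + variance per group
print(f"LRT_MV={lrt:.1f}, df={df}, p={stats.chi2.sf(lrt, df):.2e}")
```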

  9. Coupling brane fields to bulk supergravity

    Energy Technology Data Exchange (ETDEWEB)

    Parameswaran, Susha L. [Uppsala Univ. (Sweden). Theoretical Physics; Schmidt, Jonas [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)

    2010-12-15

    In this note we present a simple, general prescription for coupling brane localized fields to bulk supergravity. We illustrate the procedure by considering 6D N=2 bulk supergravity on a 2D orbifold, with brane fields localized at the fixed points. The resulting action enjoys the full 6D N=2 symmetries in the bulk, and those of 4D N=1 supergravity at the brane positions. (orig.)

  10. Global Variance Risk Premium and Forex Return Predictability

    OpenAIRE

    Aloosh, Arash

    2014-01-01

    In a long-run risk model with stochastic volatility and frictionless markets, I express expected forex returns as a function of consumption growth variances and stock variance risk premiums (VRPs)—the difference between the risk-neutral and statistical expectations of market return variation. This provides a motivation for using the forward-looking information available in stock market volatility indices to predict forex returns. Empirically, I find that stock VRPs predict forex returns at a ...

  11. Variance components for body weight in Japanese quails (Coturnix japonica)

    Directory of Open Access Journals (Sweden)

    RO Resende

    2005-03-01

    Full Text Available The objective of this study was to estimate the variance components for body weight in Japanese quails by Bayesian procedures. The body weight at hatch (BWH) and at 7 (BW07), 14 (BW14), 21 (BW21) and 28 days of age (BW28) of 3,520 quails was recorded from August 2001 to June 2002. A multiple-trait animal model with additive genetic, maternal environment and residual effects was implemented by Gibbs sampling methodology. A single Gibbs sampling with 80,000 rounds was generated by the program MTGSAM (Multiple Trait Gibbs Sampling in Animal Model). Normal and inverted Wishart distributions were used as prior distributions for the random effects and the variance components, respectively. Variance components were estimated based on the 500 samples that were left after elimination of 30,000 rounds in the burn-in period and 100 rounds of each thinning interval. The posterior means of additive genetic variance components were 0.15; 4.18; 14.62; 27.18 and 32.68; the posterior means of maternal environment variance components were 0.23; 1.29; 2.76; 4.12 and 5.16; and the posterior means of residual variance components were 0.084; 6.43; 22.66; 31.21 and 30.85, at hatch, 7, 14, 21 and 28 days old, respectively. The posterior means of heritability were 0.33; 0.35; 0.36; 0.43 and 0.47 at hatch, 7, 14, 21 and 28 days old, respectively. These results indicate that heritability increased with age. On the other hand, after hatch there was a marked reduction in the maternal environment variance proportion of the phenotypic variance, whose estimates were 0.50; 0.11; 0.07; 0.07 and 0.08 for BWH, BW07, BW14, BW21 and BW28, respectively. The genetic correlation between weights at different ages was high, except for those estimates between BWH and weight at other ages. Changes in body weight of quails can be efficiently achieved by selection.
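
    The reported heritabilities follow directly from the quoted posterior means via h² = Va/(Va + Vm + Ve); the short check below reproduces them (up to rounding) along with the maternal variance proportions.

```python
# Check of the quoted posterior means: h^2 = Va / (Va + Vm + Ve) per age;
# printed values match the abstract's 0.33-0.47 and 0.50-0.08 up to rounding.
ages = ["hatch", "7 d", "14 d", "21 d", "28 d"]
va = [0.15, 4.18, 14.62, 27.18, 32.68]        # additive genetic variance
vm = [0.23, 1.29, 2.76, 4.12, 5.16]           # maternal environment variance
ve = [0.084, 6.43, 22.66, 31.21, 30.85]       # residual variance

for age, a, m, e in zip(ages, va, vm, ve):
    tot = a + m + e
    print(f"{age}: h2 = {a / tot:.2f}, maternal proportion = {m / tot:.2f}")
```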

  12. 29 CFR 1920.2 - Variances.

    Science.gov (United States)

    2010-07-01

    ...) PROCEDURE FOR VARIATIONS FROM SAFETY AND HEALTH REGULATIONS UNDER THE LONGSHOREMEN'S AND HARBOR WORKERS...) or 6(d) of the Williams-Steiger Occupational Safety and Health Act of 1970 (29 U.S.C. 655). The... under the Williams-Steiger Occupational Safety and Health Act of 1970, and any variance from §§ 1910.13...

  13. Zero-intelligence realized variance estimation

    NARCIS (Netherlands)

    Gatheral, J.; Oomen, R.C.A.

    2010-01-01

    Given a time series of intra-day tick-by-tick price data, how can realized variance be estimated? The obvious estimator—the sum of squared returns between trades—is biased by microstructure effects such as bid-ask bounce and so in the past, practitioners were advised to drop most of the data and

  14. Bulk viscosity of spin-one color superconductors

    Energy Technology Data Exchange (ETDEWEB)

    Sa' d, Basil A.

    2009-08-27

    The bulk viscosity of several quark matter phases is calculated. It is found that the effect of color superconductivity is not trivial; it may suppress or enhance the bulk viscosity depending on the critical temperature and the temperature at which the bulk viscosity is calculated. Also, it is found that the effect of neutrino-emitting Urca processes cannot be neglected in the consideration of the bulk viscosity of strange quark matter. The results for the bulk viscosity of strange quark matter are used to calculate the r-mode instability window of quark stars with several possible phases. It is shown that each possible phase has a different structure for the r-mode instability window. (orig.)

  15. Bulk viscosity of spin-one color superconductors

    International Nuclear Information System (INIS)

    Sa'd, Basil A.

    2009-01-01

    The bulk viscosity of several quark matter phases is calculated. It is found that the effect of color superconductivity is not trivial; it may suppress or enhance the bulk viscosity depending on the critical temperature and the temperature at which the bulk viscosity is calculated. Also, it is found that the effect of neutrino-emitting Urca processes cannot be neglected in the consideration of the bulk viscosity of strange quark matter. The results for the bulk viscosity of strange quark matter are used to calculate the r-mode instability window of quark stars with several possible phases. It is shown that each possible phase has a different structure for the r-mode instability window. (orig.)

  16. A Variance Distribution Model of Surface EMG Signals Based on Inverse Gamma Distribution.

    Science.gov (United States)

    Hayashi, Hideaki; Furui, Akira; Kurita, Yuichi; Tsuji, Toshio

    2017-11-01

    Objective: This paper describes the formulation of a surface electromyogram (EMG) model capable of representing the variance distribution of EMG signals. Methods: In the model, EMG signals are handled based on a Gaussian white noise process with a mean of zero for each variance value. EMG signal variance is taken as a random variable that follows inverse gamma distribution, allowing the representation of noise superimposed onto this variance. Variance distribution estimation based on marginal likelihood maximization is also outlined in this paper. The procedure can be approximated using rectified and smoothed EMG signals, thereby allowing the determination of distribution parameters in real time at low computational cost. Results: A simulation experiment was performed to evaluate the accuracy of distribution estimation using artificially generated EMG signals, with results demonstrating that the proposed model's accuracy is higher than that of maximum-likelihood-based estimation. Analysis of variance distribution using real EMG data also suggested a relationship between variance distribution and signal-dependent noise. Conclusion: The study reported here was conducted to examine the performance of a proposed surface EMG model capable of representing variance distribution and a related distribution parameter estimation method. Experiments using artificial and real EMG data demonstrated the validity of the model. Significance: Variance distribution estimated using the proposed model exhibits potential in the estimation of muscle force.

  17. The mean and variance of phylogenetic diversity under rarefaction.

    Science.gov (United States)

    Nipperess, David A; Matsen, Frederick A

    2013-06-01

    Phylogenetic diversity (PD) depends on sampling depth, which complicates the comparison of PD between samples of different depth. One approach to dealing with differing sample depth for a given diversity statistic is to rarefy, which means to take a random subset of a given size of the original sample. Exact analytical formulae for the mean and variance of species richness under rarefaction have existed for some time but no such solution exists for PD. We have derived exact formulae for the mean and variance of PD under rarefaction. We confirm that these formulae are correct by comparing the exact solution mean and variance to those calculated by repeated random (Monte Carlo) subsampling of a dataset of stem counts of woody shrubs of Toohey Forest, Queensland, Australia. We also demonstrate the application of the method using two examples: identifying hotspots of mammalian diversity in Australasian ecoregions, and characterising the human vaginal microbiome. There is a very high degree of correspondence between the analytical and random subsampling methods for calculating the mean and variance of PD under rarefaction, although the Monte Carlo method requires a large number of random draws to converge on the exact solution for the variance. Rarefaction of mammalian PD of ecoregions in Australasia to a common standard of 25 species reveals very different rank orderings of ecoregions, indicating quite different hotspots of diversity than those obtained for unrarefied PD. The application of these methods to the vaginal microbiome shows that a classical score used to quantify bacterial vaginosis is correlated with the shape of the rarefaction curve. The analytical formulae for the mean and variance of PD under rarefaction are both exact and more efficient than repeated subsampling. Rarefaction of PD allows for many applications where comparisons of samples of different depth are required.
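
    For species richness, the exact rarefaction expectation has the classical closed form E[S_n] = Σ_i [1 - C(N - N_i, n)/C(N, n)]; the paper derives the analogous branch-length-weighted mean and variance for PD. A sketch of the richness version on invented stem counts:

```python
# Sketch: exact expected species richness under rarefaction to n individuals.
# For PD, each branch contributes its length times the probability that at
# least one of its descendant species appears in the subsample.
from math import comb

counts = [50, 30, 10, 5, 3, 1, 1]             # illustrative stems per species
N, n = sum(counts), 25                        # total stems, rarefied sample size

e_richness = sum(1 - comb(N - Ni, n) / comb(N, n) for Ni in counts)
print(f"E[S_{n}] = {e_richness:.2f}")
```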

  18. 5 CFR 551.301 - Minimum wage.

    Science.gov (United States)

    2010-01-01

    ... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Minimum wage. 551.301 Section 551.301... FAIR LABOR STANDARDS ACT Minimum Wage Provisions Basic Provision § 551.301 Minimum wage. (a)(1) Except... employees wages at rates not less than the minimum wage specified in section 6(a)(1) of the Act for all...

  19. Using variances to comply with resource conservation and recovery act treatment standards

    International Nuclear Information System (INIS)

    Ranek, N.L.

    2002-01-01

    When a waste generated, treated, or disposed of at a site in the United States is classified as hazardous under the Resource Conservation and Recovery Act and is destined for land disposal, the waste manager responsible for that site must select an approach to comply with land disposal restrictions (LDR) treatment standards. This paper focuses on the approach of obtaining a variance from existing, applicable LDR treatment standards. It describes the types of available variances, which include (1) determination of equivalent treatment (DET); (2) treatability variance; and (3) treatment variance for contaminated soil. The process for obtaining each type of variance is also described. Data are presented showing that historically the U.S. Environmental Protection Agency (EPA) processed DET petitions within one year of their date of submission. However, a 1999 EPA policy change added public participation to the DET petition review, which may lengthen processing time in the future. Regarding site-specific treatability variances, data are presented showing an EPA processing time of between 10 and 16 months. Only one generically applicable treatability variance has been granted, which took 30 months to process. No treatment variances for contaminated soil, which were added to the federal LDR program in 1998, are identified as having been granted.

  20. Gini estimation under infinite variance

    NARCIS (Netherlands)

    A. Fontanari (Andrea); N.N. Taleb (Nassim Nicholas); P. Cirillo (Pasquale)

    2018-01-01

    We study the problems related to the estimation of the Gini index in presence of a fat-tailed data generating process, i.e. one in the stable distribution class with finite mean but infinite variance (i.e. with tail index α∈(1,2)). We show that, in such a case, the Gini coefficient

  1. Variance analysis of forecasted streamflow maxima in a wet temperate climate

    Science.gov (United States)

    Al Aamery, Nabil; Fox, James F.; Snyder, Mark; Chandramouli, Chandra V.

    2018-05-01

    Coupling global climate models, hydrologic models and extreme value analysis provides a method to forecast streamflow maxima; however, the elusive variance structure of the results hinders confidence in application. Directly correcting the bias of forecasts using the relative change between forecast and control simulations has been shown to marginalize hydrologic uncertainty, reduce model bias, and remove systematic variance when predicting mean monthly and mean annual streamflow, prompting our investigation for streamflow maxima. We assess the variance structure of streamflow maxima using realizations of emission scenario, global climate model type and project phase, downscaling methods, bias correction, extreme value methods, and hydrologic model inputs and parameterization. Results show that the relative change of streamflow maxima was not dependent on systematic variance from the annual maxima versus peak over threshold method applied, albeit we stress that researchers must strictly adhere to rules from extreme value theory when applying the peak over threshold method. Regardless of which method is applied, extreme value model fitting does add variance to the projection, and the variance is an increasing function of the return period. Unlike the relative change of mean streamflow, results show that the variance of the maxima's relative change was dependent on all climate model factors tested as well as hydrologic model inputs and calibration. Ensemble projections forecast an increase of streamflow maxima for 2050 with pronounced forecast standard error, including increases of +30(±21), +38(±34) and +51(±85)% for 2, 20 and 100 year streamflow events for the wet temperate region studied. The variance of maxima projections was dominated by climate model factors and extreme value analyses.

  2. Phenotypic variance explained by local ancestry in admixed African Americans.

    Science.gov (United States)

    Shriner, Daniel; Bentley, Amy R; Doumatey, Ayo P; Chen, Guanjie; Zhou, Jie; Adeyemo, Adebowale; Rotimi, Charles N

    2015-01-01

    We surveyed 26 quantitative traits and disease outcomes to understand the proportion of phenotypic variance explained by local ancestry in admixed African Americans. After inferring local ancestry as the number of African-ancestry chromosomes at hundreds of thousands of genotyped loci across all autosomes, we used a linear mixed effects model to estimate the variance explained by local ancestry in two large independent samples of unrelated African Americans. We found that local ancestry at major and polygenic effect genes can explain up to 20 and 8% of phenotypic variance, respectively. These findings provide evidence that most but not all additive genetic variance is explained by genetic markers undifferentiated by ancestry. These results also inform the proportion of health disparities due to genetic risk factors and the magnitude of error in association studies not controlling for local ancestry.

  3. Continuous-Time Mean-Variance Portfolio Selection: A Stochastic LQ Framework

    International Nuclear Information System (INIS)

    Zhou, X.Y.; Li, D.

    2000-01-01

    This paper is concerned with a continuous-time mean-variance portfolio selection model that is formulated as a bicriteria optimization problem. The objective is to maximize the expected terminal return and minimize the variance of the terminal wealth. By putting weights on the two criteria one obtains a single objective stochastic control problem which is however not in the standard form due to the variance term involved. It is shown that this nonstandard problem can be 'embedded' into a class of auxiliary stochastic linear-quadratic (LQ) problems. The stochastic LQ control model proves to be an appropriate and effective framework to study the mean-variance problem in light of the recent development on general stochastic LQ problems with indefinite control weighting matrices. This gives rise to the efficient frontier in a closed form for the original portfolio selection problem

  4. Replica approach to mean-variance portfolio optimization

    Science.gov (United States)

    Varga-Haszonits, Istvan; Caccioli, Fabio; Kondor, Imre

    2016-12-01

    We consider the problem of mean-variance portfolio optimization for a generic covariance matrix subject to the budget constraint and the constraint for the expected return, with the application of the replica method borrowed from the statistical physics of disordered systems. We find that the replica symmetry of the solution does not need to be assumed, but emerges as the unique solution of the optimization problem. We also check the stability of this solution and find that the eigenvalues of the Hessian are positive for r = N/T < 1, where N is the dimension of the portfolio and T the length of the time series used to estimate the covariance matrix. The optimal in-sample variance is found to vanish at the critical point, inversely proportional to the divergent estimation error.
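
    The vanishing in-sample variance near r = N/T = 1 is easy to reproduce numerically with the sample minimum-variance portfolio w = S⁻¹1/(1'S⁻¹1). The sketch below uses i.i.d. normal toy returns with identity true covariance, so the growing gap between in-sample and true variance is pure estimation error.

```python
# Sketch: as r = N/T grows toward 1, the in-sample variance of the sample
# minimum-variance portfolio collapses while its true variance blows up.
import numpy as np

rng = np.random.default_rng(5)
N = 50
for T in [500, 100, 60]:                      # r = N/T = 0.10, 0.50, 0.83
    X = rng.standard_normal((T, N))           # true covariance = identity
    S = np.cov(X, rowvar=False)               # sample covariance estimate
    ones = np.ones(N)
    w = np.linalg.solve(S, ones)
    w /= ones @ w                             # budget constraint w'1 = 1
    print(f"r={N/T:.2f}: in-sample var={w @ S @ w:.4f}, true var={w @ w:.4f}")
```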

  5. Realized Variance and Market Microstructure Noise

    DEFF Research Database (Denmark)

    Hansen, Peter R.; Lunde, Asger

    2006-01-01

    We study market microstructure noise in high-frequency data and analyze its implications for the realized variance (RV) under a general specification for the noise. We show that kernel-based estimators can unearth important characteristics of market microstructure noise and that a simple kernel-based estimator dominates the RV for the estimation of integrated variance (IV). An empirical analysis of the Dow Jones Industrial Average stocks reveals that market microstructure noise is time-dependent and correlated with increments in the efficient price. This has important implications for volatility estimation based on high-frequency data. Finally, we apply cointegration techniques to decompose transaction prices and bid-ask quotes into an estimate of the efficient price and noise. This framework enables us to study the dynamic effects on transaction prices and quotes caused by changes in the efficient ...
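
    A simulation sketch of the central bias mechanism: realized variance summed from noisy tick returns picks up roughly 2n times the noise variance, while sparse (e.g., 5-minute) sampling largely avoids it. Prices and noise levels below are invented, not the Dow Jones data.

```python
# Sketch: bias of tick-level realized variance (RV) under i.i.d. microstructure
# noise. The efficient log-price is a random walk with integrated variance
# 1e-4; tick RV is inflated by ~2*n*Var(noise), sparse RV much less so.
import numpy as np

rng = np.random.default_rng(6)
n = 23_400                                    # one observation per second, 6.5 h
iv = 1e-4                                     # daily integrated variance (1% vol)
efficient = np.cumsum(rng.normal(0.0, np.sqrt(iv / n), n))
observed = efficient + rng.normal(0.0, 1e-4, n)   # additive noise on log-price

rv_tick = np.sum(np.diff(observed) ** 2)      # biased upward by the noise
rv_5min = np.sum(np.diff(observed[::300]) ** 2)   # classical sparse-sampled RV
print(f"IV={iv:.1e}, RV(tick)={rv_tick:.1e}, RV(5min)={rv_5min:.1e}")
```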

  6. Polymerization Behavior and Mechanical Properties of High-Viscosity Bulk Fill and Low Shrinkage Resin Composites.

    Science.gov (United States)

    Shibasaki, S; Takamizawa, T; Nojiri, K; Imai, A; Tsujimoto, A; Endo, H; Suzuki, S; Suda, S; Barkmeier, W W; Latta, M A; Miyazaki, M

    The present study determined the mechanical properties and volumetric polymerization shrinkage of different categories of resin composite. Three high viscosity bulk fill resin composites were tested: Tetric EvoCeram Bulk Fill (TB, Ivoclar Vivadent), Filtek Bulk Fill posterior restorative (FB, 3M ESPE), and Sonic Fill (SF, Kerr Corp). Two low-shrinkage resin composites, Kalore (KL, GC Corp) and Filtek LS Posterior (LS, 3M ESPE), were used. Three conventional resin composites, Herculite Ultra (HU, Kerr Corp), Estelite Σ Quick (EQ, Tokuyama Dental), and Filtek Supreme Ultra (SU, 3M ESPE), were used as comparison materials. Following ISO Specification 4049, six specimens for each resin composite were used to determine flexural strength, elastic modulus, and resilience. Volumetric polymerization shrinkage was determined using a water-filled dilatometer. Data were evaluated using analysis of variance followed by Tukey's honestly significant difference test (α=0.05). The flexural strength of the resin composites ranged from 115.4 to 148.1 MPa, the elastic modulus ranged from 5.6 to 13.4 GPa, and the resilience ranged from 0.70 to 1.0 MJ/m³. There were significant differences in flexural properties between the materials but no clear outliers. Volumetric changes as a function of time over a duration of 180 seconds depended on the type of resin composite. However, for all the resin composites, apart from LS, volumetric shrinkage began soon after the start of light irradiation, and a rapid decrease in volume during light irradiation followed by a slower decrease was observed. The low shrinkage resin composites KL and LS showed significantly lower volumetric shrinkage than the other tested materials at the measuring point of 180 seconds. In contrast, the three bulk fill resin composites showed higher volumetric change than the other resin composites. The findings from this study provide clinicians with valuable information regarding the mechanical properties and

  7. Effect of preheating and light-curing unit on physicochemical properties of a bulk fill composite

    Directory of Open Access Journals (Sweden)

    Theobaldo JD

    2017-05-01

    Full Text Available Jéssica Dias Theobaldo,1 Flávio Henrique Baggio Aguiar,1 Núbia Inocencya Pavesi Pini,2 Débora Alves Nunes Leite Lima,1 Priscila Christiane Suzy Liporoni,3 Anderson Catelan3 1Department of Restorative Dentistry, Piracicaba Dental School, University of Campinas, Piracicaba, 2Ingá University Center, Maringá, 3Department of Dentistry, University of Taubaté, Taubaté, Brazil Objective: The aim of this study is to evaluate the effect of composite preheating and polymerization mode on degree of conversion (DC), microhardness (KHN), plasticization (P), and depth of polymerization (DP) of a bulk fill composite. Methods: Forty disc-shaped samples (n = 5) of a bulk fill composite were prepared (5 × 4 mm thick) and randomly divided into 4 groups according to light-curing unit (quartz–tungsten–halogen [QTH] or light-emitting diode [LED]) and preheating temperature (23 or 54 °C). A control group was prepared with a flowable composite at room temperature. DC was determined using a Fourier transform infrared spectrometer, KHN was measured with a Knoop indenter, P was evaluated by percentage reduction of hardness after 24 h of ethanol storage, and DP was obtained by bottom/top ratio. Data were statistically analyzed by analysis of variance and Tukey's test (α = 0.05). Results: Regardless of light-curing, the highest preheating temperature increased DC compared to room temperature on the bottom surface. LED showed a higher DC compared to QTH. Overall, DC was higher on the top surface than the bottom. KHN, P, and DP were not affected by curing mode and temperature, and the flowable composite showed similar KHN, and lower DC and P, compared to bulk fill. Conclusion: Composite preheating increased the polymerization degree of the 4-mm-increment bulk fill, but it led to a higher plasticization compared to the conventional flowable composite evaluated. Keywords: composite resins, physicochemical phenomena, polymerization, hardness, heating

  8. Spot Variance Path Estimation and its Application to High Frequency Jump Testing

    NARCIS (Netherlands)

    Bos, C.S.; Janus, P.; Koopman, S.J.

    2012-01-01

    This paper considers spot variance path estimation from datasets of intraday high-frequency asset prices in the presence of diurnal variance patterns, jumps, leverage effects, and microstructure noise. We rely on parametric and nonparametric methods. The estimated spot variance path can be used to

  9. ANALISIS PORTOFOLIO RESAMPLED EFFICIENT FRONTIER BERDASARKAN OPTIMASI MEAN-VARIANCE

    OpenAIRE

    Abdurakhman, Abdurakhman

    2008-01-01

    An appropriate asset allocation decision in portfolio investment can maximize return and/or minimize risk. The method most often used in portfolio optimization is the Markowitz Mean-Variance method. In practice, this method has the weakness of being rather unstable: small changes in the estimated input parameters cause large changes in the portfolio composition. For this reason, a portfolio optimization method was developed that can overcome the instability of the Mean-Variance method ...

  10. Mean-variance Optimal Reinsurance-investment Strategy in Continuous Time

    OpenAIRE

    Daheng Peng; Fang Zhang

    2017-01-01

    In this paper, the Lagrange method is used to solve the continuous-time mean-variance reinsurance-investment problem. Proportional reinsurance, multiple risky assets and a risk-free asset are considered jointly in the optimal strategy for insurers. By solving the backward stochastic differential equation for the Lagrange multiplier, we obtain the mean-variance optimal reinsurance-investment strategy and its efficient frontier in explicit form.

  11. The asymptotic variance of departures in critically loaded queues

    NARCIS (Netherlands)

    Al Hanbali, Ahmad; Mandjes, M.R.H.; Nazarathy, Y.; Whitt, W.

    2011-01-01

    We consider the asymptotic variance of the departure counting process D(t) of the GI/G/1 queue; D(t) denotes the number of departures up to time t. We focus on the case where the system load ϱ equals 1, and prove that the asymptotic variance rate satisfies lim_{t→∞} Var D(t)/t = λ(1 - 2/π)(c_a² + c_s²), where λ is the arrival rate and c_a², c_s² are the squared coefficients of variation of the interarrival and service times.

  12. Coupled bias-variance tradeoff for cross-pose face recognition.

    Science.gov (United States)

    Li, Annan; Shan, Shiguang; Gao, Wen

    2012-01-01

    Subspace-based face representation can be viewed as a regression problem. From this viewpoint, we first revisit the problem of recognizing faces across pose differences, which is a bottleneck in face recognition. We then propose a new approach for cross-pose face recognition using a regressor with a coupled bias-variance tradeoff. We found that striking a coupled balance between bias and variance in regression for different poses could improve the regressor-based cross-pose face representation, i.e., the regressor can be made more stable against pose differences. Based on this idea, ridge regression and lasso regression are explored. Experimental results on the CMU PIE, FERET, and Multi-PIE face databases show that the proposed bias-variance tradeoff can achieve considerable reinforcement in recognition performance.

  13. Monte Carlo variance reduction approaches for non-Boltzmann tallies

    International Nuclear Information System (INIS)

    Booth, T.E.

    1992-12-01

    Quantities that depend on the collective effects of groups of particles cannot be obtained from the standard Boltzmann transport equation. Monte Carlo estimates of these quantities are called non-Boltzmann tallies and have become increasingly important recently. Standard Monte Carlo variance reduction techniques were designed for tallies based on individual particles rather than groups of particles. Experience with non-Boltzmann tallies and analog Monte Carlo has demonstrated the severe limitations of analog Monte Carlo for many non-Boltzmann tallies. In fact, many calculations absolutely require variance reduction methods to achieve practical computation times. Three different approaches to variance reduction for non-Boltzmann tallies are described and shown to be unbiased. The advantages and disadvantages of each of the approaches are discussed

  14. 78 FR 72841 - List of Bulk Drug Substances That May Be Used in Pharmacy Compounding; Bulk Drug Substances That...

    Science.gov (United States)

    2013-12-04

    .... FDA-2013-N-1525] List of Bulk Drug Substances That May Be Used in Pharmacy Compounding; Bulk Drug... proposed rule to list bulk drug substances used in pharmacy compounding and preparing to develop a list of... Formulary monograph, if a monograph exists, and the United States Pharmacopoeia chapter on pharmacy...

  15. Bulk viscosity in holographic Lifshitz hydrodynamics

    International Nuclear Information System (INIS)

    Hoyos, Carlos; Kim, Bom Soo; Oz, Yaron

    2014-01-01

    We compute the bulk viscosity in holographic models dual to theories with Lifshitz scaling and/or hyperscaling violation, using a generalization of the bulk viscosity formula derived in arXiv:1103.1657 from the null focusing equation. We find that only a class of models with massive vector fields are truly Lifshitz scale invariant, and have a vanishing bulk viscosity. For other holographic models with scalars and/or massless vector fields we find a universal formula in terms of the dynamical exponent and the hyperscaling violation exponent

  16. An elementary components of variance analysis for multi-center quality control

    International Nuclear Information System (INIS)

    Munson, P.J.; Rodbard, D.

    1977-01-01

    The serious variability of RIA results from different laboratories indicates the need for multi-laboratory collaborative quality control (QC) studies. Statistical analysis methods for such studies using an 'analysis of variance with components of variance estimation' are discussed. This technique allocates the total variance into components corresponding to between-laboratory, between-assay, and residual or within-assay variability. Components of variance analysis also provides an intelligent way to combine the results of several QC samples run at different levels, from which we may decide whether any component varies systematically with dose level; if not, pooling of estimates becomes possible. We consider several possible relationships of standard deviation to the laboratory mean. Each relationship corresponds to an underlying statistical model, and an appropriate analysis technique. Tests for homogeneity of variance may be used to determine if an appropriate model has been chosen, although the exact functional relationship of standard deviation to lab mean may be difficult to establish. Appropriate graphical display of the data aids in visual understanding of the data. A plot of the ranked standard deviation vs. ranked laboratory mean is a convenient way to summarize a QC study. This plot also allows determination of the rank correlation, which indicates a net relationship of variance to laboratory mean. (orig.) [de
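
    For a balanced nested design, the allocation into between-laboratory, between-assay and within-assay components can be done with the classical method of moments; the sketch below uses synthetic data (the lab/assay/replicate counts and the generating variances are assumptions, and real RIA data would typically be transformed first, as the abstract notes):

      import numpy as np

      rng = np.random.default_rng(1)
      L, A, R = 6, 4, 3                      # labs, assays/lab, replicates/assay
      lab = rng.normal(0, 1.0, size=(L, 1, 1))
      assay = rng.normal(0, 0.5, size=(L, A, 1))
      y = 10.0 + lab + assay + rng.normal(0, 0.3, size=(L, A, R))

      grand = y.mean()
      lab_means = y.mean(axis=(1, 2))
      assay_means = y.mean(axis=2)

      # Mean squares for the nested layout: labs > assays(lab) > replicates.
      ms_lab = A * R * np.sum((lab_means - grand) ** 2) / (L - 1)
      ms_assay = R * np.sum((assay_means - lab_means[:, None]) ** 2) / (L * (A - 1))
      ms_within = np.sum((y - assay_means[:, :, None]) ** 2) / (L * A * (R - 1))

      # Method-of-moments variance components (truncated at zero).
      var_within = ms_within
      var_assay = max((ms_assay - ms_within) / R, 0.0)
      var_lab = max((ms_lab - ms_assay) / (A * R), 0.0)
      print(var_lab, var_assay, var_within)   # ~1.0, ~0.25, ~0.09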

  17. Explicit formulas for the variance of discounted life-cycle cost

    International Nuclear Information System (INIS)

    Noortwijk, Jan M. van

    2003-01-01

    In life-cycle costing analyses, optimal design is usually achieved by minimising the expected value of the discounted costs. As well as the expected value, the corresponding variance may be useful for estimating, for example, the uncertainty bounds of the calculated discounted costs. However, general explicit formulas for calculating the variance of the discounted costs over an unbounded time horizon are not yet available. In this paper, explicit formulas for this variance are presented. They can be easily implemented in software to optimise structural design and maintenance management. The use of the mathematical results is illustrated with some examples
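
    For the simplest special case of i.i.d. yearly costs with a fixed yearly discount factor, the unbounded-horizon mean and variance have closed forms that are easy to check by Monte Carlo (a hedged sketch under those assumptions; the paper treats general renewal-type cost processes, which this does not reproduce):

      import numpy as np

      a, mu, sigma = 0.95, 1.0, 0.5          # discount factor, cost mean, cost sd
      # S = sum_{k>=1} a^k C_k with i.i.d. C_k:
      mean_exact = mu * a / (1 - a)
      var_exact = sigma**2 * a**2 / (1 - a**2)

      rng = np.random.default_rng(2)
      n, horizon = 20_000, 300               # horizon long enough that a^k ~ 0
      disc = a ** np.arange(1, horizon + 1)
      totals = (rng.normal(mu, sigma, size=(n, horizon)) * disc).sum(axis=1)
      print(mean_exact, totals.mean())       # ~19.0
      print(var_exact, totals.var())         # ~2.31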

  18. Longitudinal and bulk viscosities of expanded rubidium

    International Nuclear Information System (INIS)

    Zaheri, Ali Hossein Mohammad; Srivastava, Sunita; Tankeshwar, K

    2003-01-01

    The first three non-vanishing sum rules for the bulk and longitudinal stress auto-correlation functions have been evaluated for liquid Rb at six thermodynamic states along the liquid-vapour coexistence curve. The Mori memory function formalism and the frequency sum rules have been used to calculate bulk and longitudinal viscosities. The results thus obtained for the ratio of bulk viscosity to shear viscosity have been compared with experimental and other theoretical predictions wherever available. The values of the bulk viscosity have been found to be greater than the corresponding values of the shear viscosity for all six thermodynamic states investigated here

  19. How does variance in fertility change over the demographic transition?

    Science.gov (United States)

    Hruschka, Daniel J; Burger, Oskar

    2016-04-19

    Most work on the human fertility transition has focused on declines in mean fertility. However, understanding changes in the variance of reproductive outcomes can be equally important for evolutionary questions about the heritability of fertility, individual determinants of fertility and changing patterns of reproductive skew. Here, we document how variance in completed fertility among women (45-49 years) differs across 200 surveys in 72 low- to middle-income countries where fertility transitions are currently in progress at various stages. Nearly all (91%) of samples exhibit variance consistent with a Poisson process of fertility, which places systematic, and often severe, theoretical upper bounds on the proportion of variance that can be attributed to individual differences. In contrast to the pattern of total variance, these upper bounds increase from high- to mid-fertility samples, then decline again as samples move from mid to low fertility. Notably, the lowest fertility samples often deviate from a Poisson process. This suggests that as populations move to low fertility their reproduction shifts from a rate-based process to a focus on an ideal number of children. We discuss the implications of these findings for predicting completed fertility from individual-level variables. © 2016 The Author(s).

  20. Mean-variance Optimal Reinsurance-investment Strategy in Continuous Time

    Directory of Open Access Journals (Sweden)

    Daheng Peng

    2017-10-01

    Full Text Available In this paper, the Lagrange method is used to solve the continuous-time mean-variance reinsurance-investment problem. Proportional reinsurance, multiple risky assets and a risk-free asset are considered synthetically in the optimal strategy for insurers. By solving the backward stochastic differential equation for the Lagrange multiplier, we obtain the mean-variance optimal reinsurance-investment strategy and its efficient frontier in explicit forms.

  1. Increased gender variance in autism spectrum disorders and attention deficit hyperactivity disorder.

    Science.gov (United States)

    Strang, John F; Kenworthy, Lauren; Dominska, Aleksandra; Sokoloff, Jennifer; Kenealy, Laura E; Berl, Madison; Walsh, Karin; Menvielle, Edgardo; Slesaransky-Poe, Graciela; Kim, Kyung-Eun; Luong-Tran, Caroline; Meagher, Haley; Wallace, Gregory L

    2014-11-01

    Evidence suggests over-representation of autism spectrum disorders (ASDs) and behavioral difficulties among people referred for gender issues, but rates of the wish to be the other gender (gender variance) among different neurodevelopmental disorders are unknown. This chart review study explored rates of gender variance as reported by parents on the Child Behavior Checklist (CBCL) in children with different neurodevelopmental disorders: ASD (N = 147, 24 females and 123 males), attention deficit hyperactivity disorder (ADHD; N = 126, 38 females and 88 males), or a medical neurodevelopmental disorder (N = 116, 57 females and 59 males), were compared with two non-referred groups [control sample (N = 165, 61 females and 104 males) and non-referred participants in the CBCL standardization sample (N = 1,605, 754 females and 851 males)]. Significantly greater proportions of participants with ASD (5.4%) or ADHD (4.8%) had parent reported gender variance than in the combined medical group (1.7%) or non-referred comparison groups (0-0.7%). As compared to non-referred comparisons, participants with ASD were 7.59 times more likely to express gender variance; participants with ADHD were 6.64 times more likely to express gender variance. The medical neurodevelopmental disorder group did not differ from non-referred samples in likelihood to express gender variance. Gender variance was related to elevated emotional symptoms in ADHD, but not in ASD. After accounting for sex ratio differences between the neurodevelopmental disorder and non-referred comparison groups, gender variance occurred equally in females and males.

  2. Physiological minimum temperatures for root growth in seven common European broad-leaved tree species.

    Science.gov (United States)

    Schenker, Gabriela; Lenz, Armando; Körner, Christian; Hoch, Günter

    2014-03-01

    Temperature is the most important factor driving the cold edge distribution limit of temperate trees. Here, we identified the minimum temperatures for root growth in seven broad-leaved tree species, compared them with the species' natural elevational limits and identified morphological changes in roots produced near their physiological cold limit. Seedlings were exposed to a vertical soil-temperature gradient from 20 to 2 °C along the rooting zone for 18 weeks. In all species, the bulk of roots was produced at temperatures above 5 °C. However, the absolute minimum temperatures for root growth differed among species between 2.3 and 4.2 °C, with those species that reach their natural distribution limits at higher elevations also tending to have lower thermal limits for root tissue formation. In all investigated species, the roots produced at temperatures close to the thermal limit were pale, thick, unbranched and of reduced mechanical strength. Across species, the specific root length (m g⁻¹ root) was reduced by, on average, 60% at temperatures below 7 °C. A significant correlation of minimum temperatures for root growth with the natural high elevation limits of the investigated species indicates species-specific thermal requirements for basic physiological processes. Although these limits are not necessarily directly causative for the upper distribution limit of a species, they seem to belong to a syndrome of adaptive processes for life at low temperatures. The anatomical changes at the cold limit likely hint at the mechanisms impeding meristematic activity at low temperatures.

  3. An elementary components of variance analysis for multi-centre quality control

    International Nuclear Information System (INIS)

    Munson, P.J.; Rodbard, D.

    1978-01-01

    The serious variability of RIA results from different laboratories indicates the need for multi-laboratory collaborative quality-control (QC) studies. Simple graphical display of data in the form of histograms is useful but insufficient. The paper discusses statistical analysis methods for such studies using an ''analysis of variance with components of variance estimation''. This technique allocates the total variance into components corresponding to between-laboratory, between-assay, and residual or within-assay variability. Problems with RIA data, e.g. severe non-uniformity of variance and/or departure from a normal distribution, violate some of the usual assumptions underlying analysis of variance. In order to correct these problems, it is often necessary to transform the data before analysis by using a logarithmic, square-root, percentile, ranking, RIDIT, ''Studentizing'' or other transformation. Non-metric transformations such as ranks or percentiles protect against the undue influence of outlying observations, but discard much intrinsic information. Several possible relationships of standard deviation to the laboratory mean are considered. Each relationship corresponds to an underlying statistical model and an appropriate analysis technique. Tests for homogeneity of variance may be used to determine whether an appropriate model has been chosen, although the exact functional relationship of standard deviation to laboratory mean may be difficult to establish. Appropriate graphical display aids visual understanding of the data. A plot of the ranked standard deviation versus ranked laboratory mean is a convenient way to summarize a QC study. This plot also allows determination of the rank correlation, which indicates a net relationship of variance to laboratory mean

  4. Using variance structure to quantify responses to perturbation in fish catches

    Science.gov (United States)

    Vidal, Tiffany E.; Irwin, Brian J.; Wagner, Tyler; Rudstam, Lars G.; Jackson, James R.; Bence, James R.

    2017-01-01

    We present a case study evaluation of gill-net catches of Walleye Sander vitreus to assess potential effects of large-scale changes in Oneida Lake, New York, including the disruption of trophic interactions by double-crested cormorants Phalacrocorax auritus and invasive dreissenid mussels. We used the empirical long-term gill-net time series and a negative binomial linear mixed model to partition the variability in catches into spatial and coherent temporal variance components, hypothesizing that variance partitioning can help quantify spatiotemporal variability and determine whether variance structure differs before and after large-scale perturbations. We found that the mean catch and the total variability of catches decreased following perturbation but that not all sampling locations responded in a consistent manner. There was also evidence of some spatial homogenization concurrent with a restructuring of the relative productivity of individual sites. Specifically, offshore sites generally became more productive following the estimated break point in the gill-net time series. These results provide support for the idea that variance structure is responsive to large-scale perturbations; therefore, variance components have potential utility as statistical indicators of response to a changing environment more broadly. The modeling approach described herein is flexible and would be transferable to other systems and metrics. For example, variance partitioning could be used to examine responses to alternative management regimes, to compare variability across physiographic regions, and to describe differences among climate zones. Understanding how individual variance components respond to perturbation may yield finer-scale insights into ecological shifts than focusing on patterns in the mean responses or total variability alone.

  5. A mean–variance objective for robust production optimization in uncertain geological scenarios

    DEFF Research Database (Denmark)

    Capolei, Andrea; Suwartadi, Eka; Foss, Bjarne

    2014-01-01

    directly. In the mean–variance bi-criterion objective function, risk appears directly; it also considers an ensemble of reservoir models, and has robust optimization as a special extreme case. The mean–variance objective is common for portfolio optimization problems in finance. The Markowitz portfolio...... optimization problem is the original and simplest example of a mean–variance criterion for mitigating risk. Risk is mitigated in oil production by including both the expected NPV (mean of NPV) and the risk (variance of NPV) for the ensemble of possible reservoir models. With the inclusion of the risk...

  6. Minimum income protection in the Netherlands

    NARCIS (Netherlands)

    van Peijpe, T.

    2009-01-01

    This article offers an overview of the Dutch legal system of minimum income protection through collective bargaining, social security, and statutory minimum wages. In addition to collective agreements, the Dutch statutory minimum wage offers income protection to a small number of workers. Its

  7. Asymptotic variance of grey-scale surface area estimators

    DEFF Research Database (Denmark)

    Svane, Anne Marie

    Grey-scale local algorithms have been suggested as a fast way of estimating surface area from grey-scale digital images. Their asymptotic mean has already been described. In this paper, the asymptotic behaviour of the variance is studied in isotropic and sufficiently smooth settings, resulting...... in a general asymptotic bound. For compact convex sets with nowhere vanishing Gaussian curvature, the asymptotics can be described more explicitly. As in the case of volume estimators, the variance is decomposed into a lattice sum and an oscillating term of at most the same magnitude....

  8. Locality, bulk equations of motion and the conformal bootstrap

    Energy Technology Data Exchange (ETDEWEB)

    Kabat, Daniel [Department of Physics and Astronomy, Lehman College, City University of New York,250 Bedford Park Blvd. W, Bronx NY 10468 (United States); Lifschytz, Gilad [Department of Mathematics, Faculty of Natural Science, University of Haifa,199 Aba Khoushy Ave., Haifa 31905 (Israel)

    2016-10-18

    We develop an approach to construct local bulk operators in a CFT to order 1/N². Since 4-point functions are not fixed by conformal invariance we use the OPE to categorize possible forms for a bulk operator. Using previous results on 3-point functions we construct a local bulk operator in each OPE channel. We then impose the condition that the bulk operators constructed in different channels agree, and hence give rise to a well-defined bulk operator. We refer to this condition as the “bulk bootstrap.” We argue and explicitly show in some examples that the bulk bootstrap leads to some of the same results as the regular conformal bootstrap. In fact the bulk bootstrap provides an easier way to determine some CFT data, since it does not require knowing the form of the conformal blocks. This analysis clarifies previous results on the relation between bulk locality and the bootstrap for theories with a 1/N expansion, and it identifies a simple and direct way in which OPE coefficients and anomalous dimensions determine the bulk equations of motion to order 1/N².

  9. Prediction-error variance in Bayesian model updating: a comparative study

    Science.gov (United States)

    Asadollahi, Parisa; Li, Jian; Huang, Yong

    2017-04-01

    In Bayesian model updating, the likelihood function is commonly formulated by stochastic embedding, in which the maximum information entropy probability model of the prediction error variances plays an important role; it is a Gaussian distribution subject to the first two moments as constraints. The selection of prediction error variances can be formulated as a model class selection problem, which automatically involves a trade-off between the average data-fit of the model class and the information it extracts from the data. Therefore, it is critical for the robustness of the updating of the structural model, especially in the presence of modeling errors. To date, three ways of considering prediction error variances have been seen in the literature: 1) setting constant values empirically, 2) estimating them based on the goodness-of-fit of the measured data, and 3) updating them as uncertain parameters by applying Bayes' Theorem at the model class level. In this paper, the effect of different strategies to deal with the prediction error variances on the model updating performance is investigated explicitly. A six-story shear building model with six uncertain stiffness parameters is employed as an illustrative example. Transitional Markov Chain Monte Carlo is used to draw samples of the posterior probability density function of the structural model parameters as well as the uncertain prediction variances. The different levels of modeling uncertainty and complexity are modeled through three FE models, including a true model, a model with more complexity, and a model with modeling error. Bayesian updating is performed for the three FE models considering the three aforementioned treatments of the prediction error variances. The effect of the number of measurements on the model updating performance is also examined in the study. The results are compared based on model class assessment and indicate that updating the prediction error variances as uncertain parameters at the model

  10. Estimation of noise-free variance to measure heterogeneity.

    Directory of Open Access Journals (Sweden)

    Tilo Winkler

    Full Text Available Variance is a statistical parameter used to characterize heterogeneity or variability in data sets. However, measurements commonly include noise, as random errors superimposed on the actual value, which may substantially increase the variance compared to a noise-free data set. Our aim was to develop and validate a method to estimate noise-free spatial heterogeneity of pulmonary perfusion using dynamic positron emission tomography (PET) scans. On theoretical grounds, we demonstrate a linear relationship between the total variance of a data set derived from averages of n multiple measurements, and the reciprocal of n. Using multiple measurements with varying n yields estimates of the linear relationship, including the noise-free variance as the constant parameter. In PET images, n is proportional to the number of registered decay events, and the variance of the image is typically normalized by the square of its mean value, yielding a squared coefficient of variation (CV²). The method was evaluated with a Jaszczak phantom as reference spatial heterogeneity (CV_r²) for comparison with our estimate of noise-free or 'true' heterogeneity (CV_t²). We found that CV_t² was only 5.4% higher than CV_r². Additional evaluations were conducted on 38 PET scans of pulmonary perfusion using ¹³NN-saline injection. The mean CV_t² was 0.10 (range: 0.03-0.30), while the mean CV² including noise was 0.24 (range: 0.10-0.59). CV_t² was on average 41.5% of the CV² measured including noise (range: 17.8-71.2%). The reproducibility of CV_t² was evaluated using three repeated PET scans from five subjects. Individual CV_t² were within 16% of each subject's mean, and paired t-tests revealed no difference among the results from the three consecutive PET scans. In conclusion, our method provides reliable noise-free estimates of CV_t² in PET scans, and may be useful for similar statistical problems in experimental data.
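
    The core of the method is a straight-line extrapolation: the variance of an n-average is the true variance plus the noise variance divided by n, so regressing observed CV² on 1/n gives the noise-free CV² as the intercept. A synthetic sketch (all numbers below are made-up assumptions; in the paper n is proportional to registered decay events):

      import numpy as np

      rng = np.random.default_rng(3)
      true_signal = rng.normal(10.0, 2.0, size=500)      # spatial heterogeneity
      ns = np.array([1, 2, 4, 8, 16, 32])                # measurements averaged

      cv2 = []
      for n in ns:
          # Averaging n noisy measurements shrinks the noise variance by 1/n.
          noisy = true_signal + rng.normal(0, 5.0, size=(n, 500)).mean(axis=0)
          cv2.append(noisy.var() / noisy.mean() ** 2)

      slope, intercept = np.polyfit(1.0 / ns, cv2, 1)
      print(intercept)                                   # noise-free CV^2, ~0.04
      print((true_signal.std() / true_signal.mean())**2) # ground truth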

  11. A characterization of optimal portfolios under the tail mean-variance criterion

    OpenAIRE

    Owadally, I.; Landsman, Z.

    2013-01-01

    The tail mean–variance model was recently introduced for use in risk management and portfolio choice; it involves a criterion that focuses on the risk of rare but large losses, which is particularly important when losses have heavy-tailed distributions. If returns or losses follow a multivariate elliptical distribution, the use of risk measures that satisfy certain well-known properties is equivalent to risk management in the classical mean–variance framework. The tail mean–variance criterion...

  12. Gender variance in childhood and sexual orientation in adulthood: a prospective study.

    Science.gov (United States)

    Steensma, Thomas D; van der Ende, Jan; Verhulst, Frank C; Cohen-Kettenis, Peggy T

    2013-11-01

    Several retrospective and prospective studies have reported on the association between childhood gender variance and sexual orientation and gender discomfort in adulthood. In most of the retrospective studies, samples were drawn from the general population. The samples in the prospective studies consisted of clinically referred children. In understanding the extent to which the association applies for the general population, prospective studies using random samples are needed. This prospective study examined the association between childhood gender variance, and sexual orientation and gender discomfort in adulthood in the general population. In 1983, we measured childhood gender variance, in 406 boys and 473 girls. In 2007, sexual orientation and gender discomfort were assessed. Childhood gender variance was measured with two items from the Child Behavior Checklist/4-18. Sexual orientation was measured for four parameters of sexual orientation (attraction, fantasy, behavior, and identity). Gender discomfort was assessed by four questions (unhappiness and/or uncertainty about one's gender, wish or desire to be of the other gender, and consideration of living in the role of the other gender). For both men and women, the presence of childhood gender variance was associated with homosexuality for all four parameters of sexual orientation, but not with bisexuality. The report of adulthood homosexuality was 8 to 15 times higher for participants with a history of gender variance (10.2% to 12.2%), compared to participants without a history of gender variance (1.2% to 1.7%). The presence of childhood gender variance was not significantly associated with gender discomfort in adulthood. This study clearly showed a significant association between childhood gender variance and a homosexual sexual orientation in adulthood in the general population. In contrast to the findings in clinically referred gender-variant children, the presence of a homosexual sexual orientation in

  13. 29 CFR 1926.2 - Variances from safety and health standards.

    Science.gov (United States)

    2010-07-01

    ... from safety and health standards. (a) Variances from standards which are, or may be, published in this... 29 Labor 8 2010-07-01 2010-07-01 false Variances from safety and health standards. 1926.2 Section 1926.2 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION...

  14. Allowing variance may enlarge the safe operating space for exploited ecosystems.

    Science.gov (United States)

    Carpenter, Stephen R; Brock, William A; Folke, Carl; van Nes, Egbert H; Scheffer, Marten

    2015-11-17

    Variable flows of food, water, or other ecosystem services complicate planning. Management strategies that decrease variability and increase predictability may therefore be preferred. However, actions to decrease variance over short timescales (2-4 y), when applied continuously, may lead to long-term ecosystem changes with adverse consequences. We investigated the effects of managing short-term variance in three well-understood models of ecosystem services: lake eutrophication, harvest of a wild population, and yield of domestic herbivores on a rangeland. In all cases, actions to decrease variance can increase the risk of crossing critical ecosystem thresholds, resulting in less desirable ecosystem states. Managing to decrease short-term variance creates ecosystem fragility by changing the boundaries of safe operating spaces, suppressing information needed for adaptive management, cancelling signals of declining resilience, and removing pressures that may build tolerance of stress. Thus, the management of variance interacts strongly and inseparably with the management of resilience. By allowing for variation, learning, and flexibility while observing change, managers can detect opportunities and problems as they develop while sustaining the capacity to deal with them.

  15. Temperature variance study in Monte-Carlo photon transport theory

    International Nuclear Information System (INIS)

    Giorla, J.

    1985-10-01

    We study different Monte-Carlo methods for solving radiative transfer problems, and particularly Fleck's Monte-Carlo method. We first give the different time-discretization schemes and the corresponding stability criteria. Then we write the temperature variance as a function of the variances of temperature and absorbed energy at the previous time step. Finally we obtain some stability criteria for the Monte-Carlo method in the stationary case [fr

  16. Bulk temperature measurement in thermally striped pipe flows

    International Nuclear Information System (INIS)

    Lemure, N.; Olvera, J.R.; Ruggles, A.E.

    1995-12-01

    The hot leg flows in some Pressurized Water Reactor (PWR) designs have a temperature distribution across the pipe cross-section. This condition is often referred to as a thermally striped flow. Here, the bulk temperature measurement of pipe flows with thermal striping is explored. An experiment is conducted to examine the feasibility of using temperature measurements on the external surface of the pipe to estimate the bulk temperature of the flow. Simple mixing models are used to characterize the development of the temperature profile in the flow. Simple averaging techniques and a backward-propagation neural network are used to predict bulk temperature from the external temperature measurements. Accurate bulk temperatures can be predicted. However, some temperature distributions in the flow effectively mask the bulk temperature from the wall and cause significant error in the bulk temperature predicted using this technique

  17. Transfer points of belt conveyors operating with unfavorable bulk

    Energy Technology Data Exchange (ETDEWEB)

    Goehring, H [Technische Universitaet, Dresden (German Democratic Republic)

    1989-06-01

    Describes the design of belt conveyor chutes that transfer bulk material from one conveyor to another in surface mines. Conveyor belt velocity is a significant parameter. Unfavorable chute design may lead to bulk flow congestion, bulk velocity losses, etc. The bulk flow process is analyzed; bulk flow velocities, belt inclinations and bulk feeding from 2 conveyors into one chute are taken into account. Conventional chutes have parabolic belt impact walls. An improved version with divided impact walls is proposed that maintains a relatively high bulk velocity, reduces friction at chute walls and decreases wear and dirt build-up. The design of the improved chute is explained. It is built to adapt to existing structures without major modifications. The angle between 2 belt conveyors can be up to 90 degrees; the best bulk transfer is noted at conveyor angles below 60 degrees. Various graphs and schemes are provided. 6 refs.

  18. Study of the variance of a Monte Carlo calculation. Application to weighting; Etude de la variance d'un calcul de Monte Carlo. Application a la ponderation

    Energy Technology Data Exchange (ETDEWEB)

    Lanore, Jeanne-Marie [Commissariat a l' Energie Atomique - CEA, Centre d' Etudes Nucleaires de Fontenay-aux-Roses, Direction des Piles Atomiques, Departement des Etudes de Piles, Service d' Etudes de Protections de Piles (France)

    1969-04-15

    One of the main difficulties in Monte Carlo computations is the estimation of the results variance. Generally, only an apparent variance can be observed over a few calculations, often very different from the actual variance. By studying a large number of short calculations, the authors have tried to evaluate the real variance, and then to apply the obtained results to the optimization of the computations. The program used is the Poker one-dimensional Monte Carlo program. Calculations are performed in two types of fictitious environments: a body with constant cross section, without absorption, where all shocks are elastic and isotropic; a body with variable cross section (presenting a very pronounced peak and hole), with an anisotropy for high energy elastic shocks, and with the possibility of inelastic shocks (this body presents all the features that can appear in a real case)

  19. rf Quantum Capacitance of the Topological Insulator Bi2Se3 in the Bulk Depleted Regime for Field-Effect Transistors

    Science.gov (United States)

    Inhofer, A.; Duffy, J.; Boukhicha, M.; Bocquillon, E.; Palomo, J.; Watanabe, K.; Taniguchi, T.; Estève, I.; Berroir, J. M.; Fève, G.; Plaçais, B.; Assaf, B. A.

    2018-02-01

    A metal-dielectric topological-insulator capacitor device based on hexagonal-boron-nitride- (h-BN) encapsulated CVD-grown Bi2Se3 is realized and investigated in the radio-frequency regime. The rf quantum capacitance and device resistance are extracted for frequencies as high as 10 GHz and studied as a function of the applied gate voltage. The superior quality h-BN gate dielectric combined with the optimized transport characteristics of CVD-grown Bi2Se3 (n ∼ 10¹⁸ cm⁻³ in 8 nm) on h-BN allow us to attain a bulk depleted regime by dielectric gating. A quantum-capacitance minimum and a linear variation of the capacitance with the chemical potential are observed, revealing a Dirac regime. The topological surface state in proximity to the gate is seen to reach charge neutrality, but the bottom surface state remains charged and capacitively coupled to the top via the insulating bulk. Our work paves the way toward implementation of topological materials in rf devices.

  20. Adjustment of heterogenous variances and a calving year effect in ...

    African Journals Online (AJOL)

    Data at the beginning and at the end of lactation period, have higher variances than tests in the middle of the lactation. Furthermore, first lactations have lower mean and variances compared to second and third lactations. This is a deviation from the basic assumptions required for the application of repeatability models.

  1. Estimating Predictive Variance for Statistical Gas Distribution Modelling

    International Nuclear Information System (INIS)

    Lilienthal, Achim J.; Asadi, Sahar; Reggente, Matteo

    2009-01-01

    Recent publications in statistical gas distribution modelling have proposed algorithms that model the mean and variance of a distribution. This paper argues that estimating the predictive concentration variance entails not only a gradual improvement but is rather a significant step to advance the field. This is, first, because the models much better fit the particular structure of gas distributions, which exhibit strong fluctuations with considerable spatial variations as a result of the intermittent character of gas dispersal. Second, because estimating the predictive variance allows one to evaluate the model quality in terms of the data likelihood. This offers a solution to the problem of ground truth evaluation, which has always been a critical issue for gas distribution modelling. It also enables solid comparisons of different modelling approaches, and provides the means to learn meta parameters of the model, to determine when the model should be updated or re-initialised, or to suggest new measurement locations based on the current model. We also point out directions of related ongoing or potential future research work.

  2. Estimating integrated variance in the presence of microstructure noise using linear regression

    Science.gov (United States)

    Holý, Vladimír

    2017-07-01

    Using financial high-frequency data to estimate the integrated variance of asset prices is beneficial, but as the number of observations increases, so-called microstructure noise occurs. This noise can significantly bias the realized variance estimator. We propose a method for estimating the integrated variance that is robust to microstructure noise, as well as for testing for the presence of the noise. Our method utilizes linear regression in which realized variances estimated from different data subsamples act as the dependent variable while the number of observations acts as the explanatory variable. We compare the proposed estimator with other methods on simulated data for several microstructure noise structures.
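
    Under the usual i.i.d.-noise model, the expected realized variance grows linearly in the number of observations, E[RV(n)] = IV + 2nω², which is what makes the regression work; a synthetic sketch (the paper's subsampling scheme is simplified here, and all parameter values are assumptions):

      import numpy as np

      rng = np.random.default_rng(4)
      N = 23400                                  # one trading day of 1 s steps
      sigma, omega = 0.01, 0.0005                # daily diffusion vol, noise sd
      X = np.cumsum(rng.normal(0, sigma / np.sqrt(N), size=N))  # efficient price
      Y = X + rng.normal(0, omega, size=N)       # observed log-price with noise

      ns, rvs = [], []
      for step in (1, 2, 5, 10, 20, 60):         # several sampling frequencies
          p = Y[::step]
          ns.append(len(p) - 1)
          rvs.append(np.sum(np.diff(p) ** 2))

      slope, intercept = np.polyfit(ns, rvs, 1)
      print(intercept, sigma**2)                 # intercept ~ IV = 1e-4
      print(slope / 2)                           # ~ omega^2 = 2.5e-7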

  3. Individual and collective bodies: using measures of variance and association in contextual epidemiology.

    Science.gov (United States)

    Merlo, J; Ohlsson, H; Lynch, K F; Chaix, B; Subramanian, S V

    2009-12-01

    Social epidemiology investigates both individuals and their collectives. Although the limits that define the individual bodies are very apparent, the collective body's geographical or cultural limits (eg "neighbourhood") are more difficult to discern. Also, epidemiologists normally investigate causation as changes in group means. However, many variables of interest in epidemiology may cause a change in the variance of the distribution of the dependent variable. In spite of that, variance is normally considered a measure of uncertainty or a nuisance rather than a source of substantive information. This reasoning is also true in many multilevel investigations, whereas understanding the distribution of variance across levels should be fundamental. This means-centric reductionism is mostly concerned with risk factors and creates a paradoxical situation, as social medicine is not only interested in increasing the (mean) health of the population, but also in understanding and decreasing inappropriate health and health care inequalities (variance). Critical essay and literature review. The present study promotes (a) the application of measures of variance and clustering to evaluate the boundaries one uses in defining collective levels of analysis (eg neighbourhoods), (b) the combined use of measures of variance and means-centric measures of association, and (c) the investigation of causes of health variation (variance-altering causation). Both measures of variance and means-centric measures of association need to be included when performing contextual analyses. The variance approach, a new aspect of contextual analysis that cannot be interpreted in means-centric terms, allows perspectives to be expanded.

  4. Genetic heterogeneity of within-family variance of body weight in Atlantic salmon (Salmo salar).

    Science.gov (United States)

    Sonesson, Anna K; Odegård, Jørgen; Rönnegård, Lars

    2013-10-17

    Canalization is defined as the stability of a genotype against minor variations in both environment and genetics. Genetic variation in degree of canalization causes heterogeneity of within-family variance. The aims of this study are twofold: (1) quantify genetic heterogeneity of (within-family) residual variance in Atlantic salmon and (2) test whether the observed heterogeneity of (within-family) residual variance can be explained by simple scaling effects. Analysis of body weight in Atlantic salmon using a double hierarchical generalized linear model (DHGLM) revealed substantial heterogeneity of within-family variance. The 95% prediction interval for within-family variance ranged from ~0.4 to 1.2 kg², implying that the within-family variance of the most extreme high families is expected to be approximately three times larger than the extreme low families. For cross-sectional data, DHGLM with an animal mean sub-model resulted in severe bias, while a corresponding sire-dam model was appropriate. Heterogeneity of variance was not sensitive to Box-Cox transformations of phenotypes, which implies that heterogeneity of variance exists beyond what would be expected from simple scaling effects. Substantial heterogeneity of within-family variance was found for body weight in Atlantic salmon. A tendency towards higher variance with higher means (scaling effects) was observed, but heterogeneity of within-family variance existed beyond what could be explained by simple scaling effects. For cross-sectional data, using the animal mean sub-model in the DHGLM resulted in biased estimates of variance components, which differed substantially both from a standard linear mean animal model and a sire-dam DHGLM model. Although genetic differences in canalization were observed, selection for increased canalization is difficult, because there is limited individual information for the variance sub-model, especially when based on cross-sectional data. Furthermore, potential macro

  5. The derivative based variance sensitivity analysis for the distribution parameters and its computation

    International Nuclear Information System (INIS)

    Wang, Pan; Lu, Zhenzhou; Ren, Bo; Cheng, Lei

    2013-01-01

    The output variance is an important measure for the performance of a structural system, and it is always influenced by the distribution parameters of inputs. In order to identify the influential distribution parameters and make clear how those distribution parameters influence the output variance, this work presents the derivative based variance sensitivity decomposition according to Sobol′s variance decomposition, and proposes the derivative based main and total sensitivity indices. By transforming the various-order derivative variance contributions into the form of expectations via a kernel function, the proposed main and total sensitivity indices can be seen as the “by-product” of Sobol′s variance based sensitivity analysis without any additional output evaluation. Since Sobol′s variance based sensitivity indices have been computed efficiently by the sparse grid integration method, this work also employs the sparse grid integration method to compute the derivative based main and total sensitivity indices. Several examples are used to demonstrate the rationality of the proposed sensitivity indices and the accuracy of the applied method

  6. A Mean-Variance Criterion for Economic Model Predictive Control of Stochastic Linear Systems

    DEFF Research Database (Denmark)

    Sokoler, Leo Emil; Dammann, Bernd; Madsen, Henrik

    2014-01-01

    , the tractability of the resulting optimal control problem is addressed. We use a power management case study to compare different variations of the mean-variance strategy with EMPC based on the certainty equivalence principle. The certainty equivalence strategy is much more computationally efficient than the mean......-variance strategies, but it does not account for the variance of the uncertain parameters. Open-loop simulations suggest that a single-stage mean-variance approach yields a significantly lower operating cost than the certainty equivalence strategy. In closed-loop, the single-stage formulation is overly conservative...... be modified to perform almost as well as the two-stage mean-variance formulation. Nevertheless, we argue that the mean-variance approach can be used both as a strategy for evaluating less computationally demanding methods such as the certainty equivalence method, and as an individual control strategy when...

  7. Automatic Bayes Factors for Testing Equality- and Inequality-Constrained Hypotheses on Variances.

    Science.gov (United States)

    Böing-Messing, Florian; Mulder, Joris

    2018-05-03

    In comparing characteristics of independent populations, researchers frequently expect a certain structure of the population variances. These expectations can be formulated as hypotheses with equality and/or inequality constraints on the variances. In this article, we consider the Bayes factor for testing such (in)equality-constrained hypotheses on variances. Application of Bayes factors requires specification of a prior under every hypothesis to be tested. However, specifying subjective priors for variances based on prior information is a difficult task. We therefore consider so-called automatic or default Bayes factors. These methods avoid the need for the user to specify priors by using information from the sample data. We present three automatic Bayes factors for testing variances. The first is a Bayes factor with equal priors on all variances, where the priors are specified automatically using a small share of the information in the sample data. The second is the fractional Bayes factor, where a fraction of the likelihood is used for automatic prior specification. The third is an adjustment of the fractional Bayes factor such that the parsimony of inequality-constrained hypotheses is properly taken into account. The Bayes factors are evaluated by investigating different properties such as information consistency and large sample consistency. Based on this evaluation, it is concluded that the adjusted fractional Bayes factor is generally recommendable for testing equality- and inequality-constrained hypotheses on variances.

  8. THERMAL HYDRAULIC SAFETY ANALYSIS OF BULK SHIELDING KARTINI REACTOR

    Directory of Open Access Journals (Sweden)

    Azizul Khakim

    2015-10-01

    Full Text Available THERMAL HYDRAULIC SAFETY ANALYSIS OF BULK SHIELDING KARTINI REACTOR. The bulk shielding is a facility integrated with the Kartini reactor which is used for temporary storage of spent fuels. The facility is one of the structures, systems and components (SSCs) important to safety. Among the safety functions of fuel handling and storage are to prevent uncontrolled criticality accidents and to limit increases in fuel temperature. Safety analyses should, at least, cover neutronic and thermal hydraulic analyses of the bulk shielding. The thermal hydraulic analysis is intended to ensure that heat removal and the process of spent fuel cooling take place adequately and that no heat accumulation challenges the fuel integrity. The validated code PARET/ANL was used to analyse cooling in natural convection mode. The calculation results show that the natural convection cooling mode is adequate for removing the residual heat without causing a significant rise in fuel temperature. Keywords: bulk shielding, spent fuel, natural convection, PARET.

  9. Superductile bulk metallic glass

    International Nuclear Information System (INIS)

    Yao, K.F.; Ruan, F.; Yang, Y.Q.; Chen, N.

    2006-01-01

    Usually, monolithic bulk metallic glasses undergo inhomogeneous plastic deformation and exhibit poor ductility (<2%) at room temperature. We report a newly developed Pd-Si binary bulk metallic glass, which exhibits a uniform plastic deformation and a large plastic engineering strain of 82% and a plastic true strain of 170%, together with initial strain hardening, slight strain softening and final strain hardening characteristics. The uniform shear deformation and the ultrahigh plasticity are mainly attributed to strain hardening, which results from the nanoscale inhomogeneity due to liquid phase separation. The formed nanoscale inhomogeneity will hinder, deflect, and bifurcate the propagation of shear bands

  10. UV spectral fingerprinting and analysis of variance-principal component analysis: a useful tool for characterizing sources of variance in plant materials.

    Science.gov (United States)

    Luthria, Devanand L; Mukhopadhyay, Sudarsan; Robbins, Rebecca J; Finley, John W; Banuelos, Gary S; Harnly, James M

    2008-07-23

    UV spectral fingerprints, in combination with analysis of variance-principal components analysis (ANOVA-PCA), can differentiate between cultivars and growing conditions (or treatments) and can be used to identify sources of variance. Broccoli samples, composed of two cultivars, were grown under seven different conditions or treatments (four levels of Se-enriched irrigation waters, organic farming, and conventional farming with 100 and 80% irrigation based on crop evaporation and transpiration rate). Freeze-dried powdered samples were extracted with methanol-water (60:40, v/v) and analyzed with no prior separation. Spectral fingerprints were acquired for the UV region (220-380 nm) using a 50-fold dilution of the extract. ANOVA-PCA was used to construct subset matrices that permitted easy verification of the hypothesis that cultivar and treatment contributed to a difference in the chemical expression of the broccoli. The sums of the squares of the same matrices were used to show that cultivar, treatment, and analytical repeatability contributed 30.5, 68.3, and 1.2% of the variance, respectively.

  11. Adaptive color halftoning for minimum perceived error using the blue noise mask

    Science.gov (United States)

    Yu, Qing; Parker, Kevin J.

    1997-04-01

    Color halftoning using a conventional screen requires careful selection of screen angles to avoid Moiré patterns. An obvious advantage of halftoning using a blue noise mask (BNM) is that no conventional screen angles or Moiré patterns are produced. However, a simple strategy of employing the same BNM on all color planes is unacceptable in cases where a small registration error can cause objectionable color shifts. In a previous paper by Yao and Parker, strategies were presented for shifting or inverting the BNM as well as using mutually exclusive BNMs for different color planes. In this paper, the above schemes are studied in CIE-LAB color space in terms of root mean square error and variance for the luminance channel and chrominance channel, respectively. We demonstrate that the dot-on-dot scheme results in minimum chrominance error but maximum luminance error, the 4-mask scheme results in minimum luminance error but maximum chrominance error, and the shift scheme falls in between. Based on this study, we propose a new adaptive color halftoning algorithm that takes colorimetric color reproduction into account by applying two mutually exclusive BNMs on two different color planes and applying an adaptive scheme on the other planes to reduce color error. We show that by having one adaptive color channel, we obtain increased flexibility to manipulate the output so as to reduce colorimetric error while permitting customization to specific printing hardware.

  12. Levine's guide to SPSS for analysis of variance

    CERN Document Server

    Braver, Sanford L; Page, Melanie

    2003-01-01

    A greatly expanded and heavily revised second edition, this popular guide provides instructions and clear examples for running analyses of variance (ANOVA) and several other related statistical tests of significance with SPSS. No other guide offers the program statements required for the more advanced tests in analysis of variance. All of the programs in the book can be run using any version of SPSS, including versions 11 and 11.5. A table at the end of the preface indicates where each type of analysis (e.g., simple comparisons) can be found for each type of design (e.g., mixed two-factor desi

  13. Monoterpene oxidation in an oxidative flow reactor: SOA yields and the relationship between bulk gas-phase properties and organic aerosol growth

    Science.gov (United States)

    Friedman, B.; Link, M.; Farmer, D.

    2016-12-01

    We use an oxidative flow reactor (OFR) to determine the secondary organic aerosol (SOA) yields of five monoterpenes (alpha-pinene, beta-pinene, limonene, sabinene, and terpinolene) at a range of OH exposures. These OH exposures correspond to aging timescales of a few hours to seven days. We further determine how SOA yields of beta-pinene and alpha-pinene vary as a function of seed particle type (organic vs. inorganic) and seed particle mass concentration. We hypothesize that the monoterpene structure largely accounts for the observed variance in SOA yields for the different monoterpenes. We also use high-resolution time-of-flight chemical ionization mass spectrometry to calculate the bulk gas-phase properties (O:C and H:C) of the monoterpene oxidation systems as a function of oxidant concentrations. Bulk gas-phase properties can be compared to the SOA yields to assess the capability of the precursor gas-phase species to inform the SOA yields of each monoterpene oxidation system. We find that the extent of oxygenated precursor gas-phase species corresponds to SOA yield.

  14. An efficient sampling approach for variance-based sensitivity analysis based on the law of total variance in the successive intervals without overlapping

    Science.gov (United States)

    Yun, Wanying; Lu, Zhenzhou; Jiang, Xian

    2018-06-01

    To efficiently execute the variance-based global sensitivity analysis, the law of total variance in the successive intervals without overlapping is proved at first, on which an efficient space-partition sampling-based approach is subsequently proposed in this paper. Through partitioning the sample points of output into different subsets according to different inputs, the proposed approach can efficiently evaluate all the main effects concurrently by one group of sample points. In addition, there is no need for optimizing the partition scheme in the proposed approach. The maximum length of subintervals is decreased by increasing the number of sample points of model input variables in the proposed approach, which guarantees the convergence condition of the space-partition approach well. Furthermore, a new interpretation on the thought of partition is illuminated from the perspective of the variance ratio function. Finally, three test examples and one engineering application are employed to demonstrate the accuracy, efficiency and robustness of the proposed approach.
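
    A bare-bones version of the partition idea for first-order (main-effect) indices: sort the single sample set by one input, split it into equal-count bins, and apply the law of total variance, S_i ≈ Var(E[Y|X_i]) / Var(Y). The sketch below uses the Ishigami test function (an illustrative choice; the paper's treatment of interval lengths and convergence is more careful than this):

      import numpy as np

      rng = np.random.default_rng(5)
      n, bins = 100_000, 50
      x1, x2, x3 = rng.uniform(-np.pi, np.pi, size=(3, n))
      y = np.sin(x1) + 7.0 * np.sin(x2) ** 2 + 0.1 * x3**4 * np.sin(x1)

      def main_effect(x, y, bins):
          # Equal-count partition of the samples along x; the variance of the
          # within-bin means approximates Var(E[Y|X]).
          y_sorted = y[np.argsort(x)]
          groups = np.array_split(y_sorted, bins)
          means = np.array([g.mean() for g in groups])
          return means.var() / y.var()

      for i, x in enumerate((x1, x2, x3), 1):
          print(f"S{i} ~", round(main_effect(x, y, bins), 3))
      # Analytic Ishigami values: S1 ~ 0.314, S2 ~ 0.442, S3 = 0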

  15. Acetabular Reconstruction with the Burch-Schneider Antiprotrusio Cage and Bulk Allografts: Minimum 10-Year Follow-Up Results

    Directory of Open Access Journals (Sweden)

    Dario Regis

    2014-01-01

    Full Text Available Reconstruction of severe pelvic bone loss is a challenging problem in hip revision surgery. Between January 1992 and December 2000, 97 hips with periprosthetic osteolysis underwent acetabular revision using bulk allografts and the Burch-Schneider antiprotrusio cage (APC. Twenty-nine patients (32 implants died for unrelated causes without additional surgery. Sixty-five hips were available for clinical and radiographic assessment at an average follow-up of 14.6 years (range, 10.0 to 18.9 years. There were 16 male and 49 female patients, aged from 29 to 83 (median, 60 years, with Paprosky IIIA (27 cases and IIIB (38 cases acetabular bone defects. Nine cages required rerevision because of infection (3, aseptic loosening (5, and flange breakage (1. The average Harris hip score improved from 33.1 points preoperatively to 75.6 points at follow-up (P<0.001. Radiographically, graft incorporation and cage stability were detected in 48 and 52 hips, respectively. The cumulative survival rates at 18.9 years with removal for any reason or X-ray migration of the cage and aseptic or radiographic loosening as the end points were 80.0% and 84.6%, respectively. The use of the Burch-Schneider APC and massive allografts is an effective technique for the reconstructive treatment of extensive acetabular bone loss with long-lasting survival.

  16. Bulk viscosity of molecular fluids

    Science.gov (United States)

    Jaeger, Frederike; Matar, Omar K.; Müller, Erich A.

    2018-05-01

    The bulk viscosity of molecular models of gases and liquids is determined by molecular simulations as a combination of a dilute gas contribution, arising due to the relaxation of internal degrees of freedom, and a configurational contribution, due to the presence of intermolecular interactions. The dilute gas contribution is evaluated using experimental data for the relaxation times of vibrational and rotational degrees of freedom. The configurational part is calculated using Green-Kubo relations for the fluctuations of the pressure tensor obtained from equilibrium microcanonical molecular dynamics simulations. As a benchmark, the Lennard-Jones fluid is studied. Both atomistic and coarse-grained force fields for water, CO2, and n-decane are considered and tested for their accuracy, and where possible, compared to experimental data. The dilute gas contribution to the bulk viscosity is seen to be significant only in the cases when intramolecular relaxation times are in the μs range, and for low vibrational wave numbers (<1000 cm⁻¹); this explains the abnormally high values of bulk viscosity reported for CO2. In all other cases studied, the dilute gas contribution is negligible and the configurational contribution dominates the overall behavior. In particular, the configurational term is responsible for the enhancement of the bulk viscosity near the critical point.
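
    The configurational contribution described above comes from a Green-Kubo integral of the autocorrelation of pressure fluctuations, kappa = V/(kB·T) ∫ ⟨δp(0)δp(t)⟩ dt. A schematic sketch follows (the `pressure` array here is a random stand-in, and V, T, dt are assumed values; a real calculation would use the instantaneous pressure trace from an equilibrium MD trajectory):

      import numpy as np

      kB = 1.380649e-23                      # Boltzmann constant, J/K

      def bulk_viscosity(pressure, V, T, dt):
          # Green-Kubo: integrate the autocorrelation of pressure fluctuations.
          dp = pressure - pressure.mean()
          m = min(2000, len(dp) // 2)        # truncate the correlation window
          acf = np.array([np.mean(dp[:len(dp) - k] * dp[k:]) for k in range(m)])
          return V / (kB * T) * np.trapz(acf, dx=dt)

      rng = np.random.default_rng(6)
      pressure = 1.0e5 + rng.normal(0, 50.0, size=20_000)  # placeholder signal
      print(bulk_viscosity(pressure, V=1e-26, T=300.0, dt=1e-15))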

  17. A load factor based mean-variance analysis for fuel diversification

    Energy Technology Data Exchange (ETDEWEB)

    Gotham, Douglas; Preckel, Paul; Ruangpattana, Suriya [State Utility Forecasting Group, Purdue University, West Lafayette, IN (United States); Muthuraman, Kumar [McCombs School of Business, University of Texas, Austin, TX (United States); Rardin, Ronald [Department of Industrial Engineering, University of Arkansas, Fayetteville, AR (United States)

    2009-03-15

    Fuel diversification implies the selection of a mix of generation technologies for long-term electricity generation. The goal is to strike a good balance between reduced costs and reduced risk. The method of analysis that has been advocated and adopted for such studies is the mean-variance portfolio analysis pioneered by Markowitz (Markowitz, H., 1952. Portfolio selection. Journal of Finance 7(1) 77-91). However the standard mean-variance methodology, does not account for the ability of various fuels/technologies to adapt to varying loads. Such analysis often provides results that are easily dismissed by regulators and practitioners as unacceptable, since load cycles play critical roles in fuel selection. To account for such issues and still retain the convenience and elegance of the mean-variance approach, we propose a variant of the mean-variance analysis using the decomposition of the load into various types and utilizing the load factors of each load type. We also illustrate the approach using data for the state of Indiana and demonstrate the ability of the model in providing useful insights. (author)
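
    The Markowitz building block being adapted is the closed-form minimum-variance mix, w = Σ⁻¹1 / (1ᵀΣ⁻¹1); the sketch below applies it to a hypothetical covariance matrix of generation costs (the paper's load-type decomposition and load factors are not reproduced):

      import numpy as np

      # Hypothetical cost-risk covariance for three generation technologies,
      # e.g. gas, coal, nuclear (illustrative numbers only).
      Sigma = np.array([[0.040, 0.006, 0.002],
                        [0.006, 0.010, 0.001],
                        [0.002, 0.001, 0.003]])
      ones = np.ones(len(Sigma))
      w = np.linalg.solve(Sigma, ones)
      w /= w.sum()                          # w = Sigma^-1 1 / (1' Sigma^-1 1)
      print(w, w @ Sigma @ w)               # fuel-mix weights, portfolio variance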

  18. Understanding the Minimum Wage: Issues and Answers.

    Science.gov (United States)

    Employment Policies Inst. Foundation, Washington, DC.

    This booklet, which is designed to clarify facts regarding the minimum wage's impact on marketplace economics, contains a total of 31 questions and answers pertaining to the following topics: relationship between minimum wages and poverty; impacts of changes in the minimum wage on welfare reform; and possible effects of changes in the minimum wage…

  19. Youth minimum wages and youth employment

    NARCIS (Netherlands)

    Marimpi, Maria; Koning, Pierre

    2018-01-01

    This paper performs a cross-country level analysis on the impact of the level of specific youth minimum wages on the labor market performance of young individuals. We use information on the use and level of youth minimum wages, as compared to the level of adult minimum wages as well as to the median

  20. Analysis of Gene Expression Variance in Schizophrenia Using Structural Equation Modeling

    Directory of Open Access Journals (Sweden)

    Anna A. Igolkina

    2018-06-01

    Full Text Available Schizophrenia (SCZ) is a psychiatric disorder of unknown etiology. There is evidence suggesting that aberrations in neurodevelopment are a significant attribute of schizophrenia pathogenesis and progression. To identify biologically relevant molecular abnormalities affecting neurodevelopment in SCZ, we used cultured neural progenitor cells derived from olfactory neuroepithelium (CNON) cells. Here, we tested the hypothesis that variance in gene expression differs between individuals from SCZ and control groups. In CNON cells, variance in gene expression was significantly higher in SCZ samples in comparison with control samples. Variance in gene expression was enriched in five molecular pathways: serine biosynthesis, PI3K-Akt, MAPK, neurotrophin and focal adhesion. More than 14% of variance in disease status was explained within the logistic regression model (C-value = 0.70) by predictors accounting for gene expression in 69 genes from these five pathways. Structural equation modeling (SEM) was applied to explore how the structure of these five pathways was altered between SCZ patients and controls. Four out of five pathways showed differences in the estimated relationships among genes: between KRAS and NF1, and KRAS and SOS1 in the MAPK pathway; between PSPH and SHMT2 in serine biosynthesis; between AKT3 and TSC2 in the PI3K-Akt signaling pathway; and between CRK and RAPGEF1 in the focal adhesion pathway. Our analysis provides evidence that variance in gene expression is an important characteristic of SCZ, and SEM is a promising method for uncovering altered relationships between specific genes, thus suggesting affected gene regulation associated with the disease. We identified altered gene-gene interactions in pathways enriched for genes with increased variance in expression in SCZ. These pathways and loci were previously implicated in SCZ, providing further support for the hypothesis that gene expression variance plays an important role in the etiology
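
    The paper's pipeline (SEM plus logistic regression) is more involved, but the core comparison, whether per-gene expression variance differs between groups, can be illustrated with a routine variance test. A hedged sketch on synthetic data (all values hypothetical, not from the study):

```python
import numpy as np
from scipy.stats import levene

rng = np.random.default_rng(0)
control = rng.normal(5.0, 1.0, size=100)  # hypothetical expression, one gene
scz     = rng.normal(5.0, 1.5, size=100)  # same mean, larger spread

# Brown-Forsythe variant (median-centered) is robust to non-normality
stat, p = levene(control, scz, center="median")
print(f"W = {stat:.2f}, p = {p:.3g}")
```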

  1. Discretization of space and time: determining the values of minimum length and minimum time

    OpenAIRE

    Roatta, Luca

    2017-01-01

    Assuming that space and time can only have discrete values, we obtain expressions for the minimum length and the minimum time interval. These values are found to coincide exactly with the Planck length and the Planck time, except for the presence of h instead of ħ.
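
    On the abstract's description, the claimed expressions would be the usual Planck formulas with h in place of ħ, hence larger by a factor of √(2π) ≈ 2.51:

```latex
l_{\min} = \sqrt{\frac{hG}{c^{3}}} = \sqrt{2\pi}\, l_{P},
\qquad
t_{\min} = \frac{l_{\min}}{c} = \sqrt{\frac{hG}{c^{5}}} = \sqrt{2\pi}\, t_{P},
\quad\text{with } l_{P} = \sqrt{\frac{\hbar G}{c^{3}}},\;
 t_{P} = \sqrt{\frac{\hbar G}{c^{5}}}.
```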

  2. Minimum wage development in the Russian Federation

    OpenAIRE

    Bolsheva, Anna

    2012-01-01

    The aim of this paper is to analyze the effectiveness of the minimum wage policy at the national level in Russia and its impact on living standards in the country. The analysis showed that the national minimum wage in Russia does not serve its original purpose of protecting the lowest wage earners and has no substantial effect on poverty reduction. The national subsistence minimum is too low and cannot be considered an adequate criterion for the setting of the minimum wage. The minimum wage d...

  3. Mixed emotions: Sensitivity to facial variance in a crowd of faces.

    Science.gov (United States)

    Haberman, Jason; Lee, Pegan; Whitney, David

    2015-01-01

    The visual system automatically represents summary information from crowds of faces, such as the average expression. This is a useful heuristic insofar as it provides critical information about the state of the world, not simply information about the state of one individual. However, the average alone is not sufficient for making decisions about how to respond to a crowd. The variance or heterogeneity of the crowd--the mixture of emotions--conveys information about the reliability of the average, essential for determining whether the average can be trusted. Despite its importance, the representation of variance within a crowd of faces has yet to be examined. This is addressed here in three experiments. In the first experiment, observers viewed a sample set of faces that varied in emotion, and then adjusted a subsequent set to match the variance of the sample set. To isolate variance as the summary statistic of interest, the average emotion of both sets was random. Results suggested that observers had information regarding crowd variance. The second experiment verified that this was indeed a uniquely high-level phenomenon, as observers were unable to derive the variance of an inverted set of faces as precisely as an upright set of faces. The third experiment replicated and extended the first two experiments using method-of-constant-stimuli. Together, these results show that the visual system is sensitive to emergent information about the emotional heterogeneity, or ambivalence, in crowds of faces.

  4. On Stabilizing the Variance of Dynamic Functional Brain Connectivity Time Series.

    Science.gov (United States)

    Thompson, William Hedley; Fransson, Peter

    2016-12-01

    Assessment of dynamic functional brain connectivity based on functional magnetic resonance imaging (fMRI) data is an increasingly popular strategy to investigate temporal dynamics of the brain's large-scale network architecture. Current practice when deriving connectivity estimates over time is to use the Fisher transformation, which aims to stabilize the variance of correlation values that fluctuate around varying true correlation values. It is, however, unclear how well the stabilization of signal variance performed by the Fisher transformation works for each connectivity time series, when the true correlation is assumed to be fluctuating. This is of importance because many subsequent analyses either assume or perform better when the time series have stable variance or adhere to an approximate Gaussian distribution. In this article, using simulations and analysis of resting-state fMRI data, we analyze the effect of applying different variance stabilization strategies on connectivity time series. We focus our investigation on the Fisher transformation, the Box-Cox (BC) transformation and an approach that combines both transformations. Our results show that, if the intention of stabilizing the variance is to use metrics on the time series where stable variance or a Gaussian distribution is desired (e.g., clustering), the Fisher transformation is not optimal and may even skew connectivity time series away from being Gaussian. Furthermore, we show that the suboptimal performance of the Fisher transformation can be substantially improved by including an additional BC transformation after the dynamic functional connectivity time series has been Fisher transformed.
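
    The combined strategy the article favors is easy to prototype. A minimal sketch, not the authors' implementation (the clip bounds and shift constant are assumptions, the latter to satisfy Box-Cox's positivity requirement):

```python
import numpy as np
from scipy.stats import boxcox

def stabilize_connectivity(corr_series):
    """Fisher z-transform of a sliding-window correlation series,
    followed by a Box-Cox transform with maximum-likelihood lambda."""
    z = np.arctanh(np.clip(corr_series, -0.999, 0.999))  # Fisher transform
    shift = 1e-6 - z.min() if z.min() <= 0 else 0.0      # Box-Cox needs x > 0
    z_bc, lam = boxcox(z + shift)
    return z_bc, lam
```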

  5. Origin and consequences of the relationship between protein mean and variance.

    Science.gov (United States)

    Vallania, Francesco Luigi Massimo; Sherman, Marc; Goodwin, Zane; Mogno, Ilaria; Cohen, Barak Alon; Mitra, Robi David

    2014-01-01

    Cell-to-cell variance in protein levels (noise) is a ubiquitous phenomenon that can increase fitness by generating phenotypic differences within clonal populations of cells. An important challenge is to identify the specific molecular events that control noise. This task is complicated by the strong dependence of a protein's cell-to-cell variance on its mean expression level through a power-law-like relationship (σ² ∝ μ^1.69). Here, we dissect the nature of this relationship using a stochastic model parameterized with experimentally measured values. This framework naturally recapitulates the power-law-like relationship (σ² ∝ μ^1.6) and accurately predicts protein variance across the yeast proteome (r² = 0.935). Using this model we identified two distinct mechanisms by which protein variance can be increased. Variables that affect promoter activation, such as nucleosome positioning, increase protein variance by changing the exponent of the power-law relationship. In contrast, variables that affect processes downstream of promoter activation, such as mRNA and protein synthesis, increase protein variance in a mean-dependent manner following the power-law. We verified our findings experimentally using an inducible gene expression system in yeast. We conclude that the power-law-like relationship between noise and protein mean is due to the kinetics of promoter activation. Our results provide a framework for understanding how molecular processes shape stochastic variation across the genome.
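
    In the simplest view, the reported exponent is a straight-line fit in log-log space. A toy sketch with synthetic data (the constants are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
mu = rng.lognormal(3, 1, 500)                       # per-protein mean levels
var = 0.5 * mu**1.69 * rng.lognormal(0, 0.1, 500)   # synthetic sigma^2 = a*mu^b

# Slope of log(var) vs log(mu) estimates the power-law exponent b
slope, intercept = np.polyfit(np.log(mu), np.log(var), 1)
print(slope)  # recovers an exponent close to 1.69
```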

  6. Unified bulk-boundary correspondence for band insulators

    Science.gov (United States)

    Rhim, Jun-Won; Bardarson, Jens H.; Slager, Robert-Jan

    2018-03-01

    The bulk-boundary correspondence, a topic of intensive research interest over the past decades, is one of the quintessential ideas in the physics of topological quantum matter. Nevertheless, it has not been proven in all generality and has in certain scenarios even been shown to fail, depending on the boundary profiles of the terminated system. Here, we introduce bulk numbers that capture the exact number of in-gap modes, without any such subtleties in one spatial dimension. Similarly, based on these 1D bulk numbers, we define a new 2D winding number, which we call the pole winding number, that specifies the number of robust metallic surface bands in the gap as well as their topological character. The underlying general methodology relies on a simple continuous extrapolation from the bulk to the boundary, while tracking the evolution of the poles of the Green's function in the vicinity of the bulk band edges. As a main result we find that all the obtained numbers can be applied to the known insulating phases in a unified manner regardless of the specific symmetries. Additionally, from a computational point of view, these numbers can be effectively evaluated without any gauge fixing problems. In particular, we directly apply our bulk-boundary correspondence construction to various systems, including 1D examples without a traditional bulk-boundary correspondence, and predict the existence of boundary modes on various experimentally studied graphene edges, such as open boundaries and grain boundaries. Finally, we sketch the 3D generalization of the pole winding number in the context of topological insulators.

  7. Variance Swap Replication: Discrete or Continuous?

    Directory of Open Access Journals (Sweden)

    Fabien Le Floc’h

    2018-02-01

    Full Text Available The popular replication formula to price variance swaps assumes continuity of traded option strikes. In practice, however, there is only a discrete set of option strikes traded on the market. We present here different discrete replication strategies and explain why the continuous replication price is more relevant.
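
    For context, the continuous formula the abstract refers to is usually discretized as a strip of out-of-the-money options weighted by ΔK/K². A minimal sketch of that baseline (after Demeterfi et al. 1999), not of the paper's refined discrete strategies:

```python
import numpy as np

def variance_swap_strike(strikes, otm_prices, r, T):
    """Fair variance strike K_var ~ (2/T) e^{rT} * sum dK/K^2 * Q(K),
    with Q(K) the out-of-the-money option price (puts below the forward,
    calls above). Strikes must be sorted in increasing order."""
    K = np.asarray(strikes, float)
    Q = np.asarray(otm_prices, float)
    dK = np.gradient(K)  # central strike spacings, one-sided at the ends
    return (2.0 / T) * np.exp(r * T) * np.sum(dK / K**2 * Q)
```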

  8. Impact of Damping Uncertainty on SEA Model Response Variance

    Science.gov (United States)

    Schiller, Noah; Cabell, Randolph; Grosveld, Ferdinand

    2010-01-01

    Statistical Energy Analysis (SEA) is commonly used to predict high-frequency vibroacoustic levels. This statistical approach provides the mean response over an ensemble of random subsystems that share the same gross system properties such as density, size, and damping. Recently, techniques have been developed to predict the ensemble variance as well as the mean response. However, these techniques do not account for uncertainties in the system properties. In the present paper, uncertainty in the damping loss factor is propagated through SEA to obtain more realistic prediction bounds that account for both ensemble and damping variance. The analysis is performed on a floor-equipped cylindrical test article that resembles an aircraft fuselage. Realistic bounds on the damping loss factor are determined from measurements acquired on the sidewall of the test article. The analysis demonstrates that uncertainties in damping have the potential to significantly impact the mean and variance of the predicted response.

  9. The Impact of Jump Distributions on the Implied Volatility of Variance

    DEFF Research Database (Denmark)

    Nicolato, Elisa; Pisani, Camilla; Pedersen, David Sloth

    2017-01-01

    We consider a tractable affine stochastic volatility model that generalizes the seminal Heston (1993) model by augmenting it with jumps in the instantaneous variance process. In this framework, we consider both realized variance options and VIX options, and we examine the impact of the distribution of jumps on the associated implied volatility smile. We provide sufficient conditions for the asymptotic behavior of the implied volatility of variance for small and large strikes. In particular, by selecting alternative jump distributions, we show that one can obtain fundamentally different shapes...

  10. 46 CFR 148.04-23 - Unslaked lime in bulk.

    Science.gov (United States)

    2010-10-01

    ... HAZARDOUS MATERIALS IN BULK, Special Additional Requirements for Certain Material. § 148.04-23 Unslaked lime in bulk. (a) Unslaked lime in bulk must be transported in unmanned, all steel, double-hulled barges...

  11. Replication Variance Estimation under Two-phase Sampling in the Presence of Non-response

    Directory of Open Access Journals (Sweden)

    Muqaddas Javed

    2014-09-01

    Full Text Available Kim and Yu (2011) discussed a replication variance estimator for two-phase stratified sampling. In this paper, estimators for the mean are proposed for two-phase stratified sampling under different situations of non-response at the first phase and the second phase. The expressions for the variances of these estimators are derived. Furthermore, replication-based jackknife variance estimators of these variances are also derived. A simulation study has been conducted to investigate the performance of the suggested estimators.
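
    The delete-one jackknife underlying such replication variance estimators is compact enough to show in full; the design-specific replicate weights of two-phase stratified sampling are omitted from this sketch.

```python
import numpy as np

def jackknife_variance(estimator, sample):
    """Delete-one jackknife variance of an arbitrary estimator:
    v = (n-1)/n * sum_i (theta_(i) - mean(theta_(.)))^2."""
    sample = np.asarray(sample, float)
    n = len(sample)
    theta = np.array([estimator(np.delete(sample, i)) for i in range(n)])
    return (n - 1) / n * np.sum((theta - theta.mean()) ** 2)

# For the sample mean this reduces exactly to s^2/n:
print(jackknife_variance(np.mean, np.random.default_rng(2).normal(size=50)))
```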

  12. Minimum emittance of three-bend achromats

    International Nuclear Information System (INIS)

    Li Xiaoyu; Xu Gang

    2012-01-01

    The minimum emittance of three-bend achromats (TBAs) can be calculated with mathematical software while ignoring the actual magnet lattice in the matching condition of the dispersion function in phase space. The minimum scaling factors of two kinds of widely used TBA lattices are obtained. The relationship between the lengths and the radii of the three dipoles in a TBA is then obtained, as is the minimum scaling factor when the TBA lattice achieves its minimum emittance. The procedure of analysis and the results can be widely applied to achromat lattices, because the calculation is not restricted by the actual lattice. (authors)

  13. How the Weak Variance of Momentum Can Turn Out to be Negative

    Science.gov (United States)

    Feyereisen, M. R.

    2015-05-01

    Weak values are average quantities, therefore investigating their associated variance is crucial in understanding their place in quantum mechanics. We develop the concept of a position-postselected weak variance of momentum as cohesively as possible, building primarily on material from Moyal (Mathematical Proceedings of the Cambridge Philosophical Society, Cambridge University Press, Cambridge, 1949) and Sonego (Found Phys 21(10):1135, 1991). The weak variance is defined in terms of the Wigner function, using a standard construction from probability theory. We show this corresponds to a measurable quantity, which is not itself a weak value. It also leads naturally to a connection between the imaginary part of the weak value of momentum and the quantum potential. We study how the negativity of the Wigner function causes negative weak variances, and the implications this has on a class of `subquantum' theories. We also discuss the role of weak variances in studying determinism, deriving the classical limit from a variational principle.

  14. Variance gradients and uncertainty budgets for nonlinear measurement functions with independent inputs

    International Nuclear Information System (INIS)

    Campanelli, Mark; Kacker, Raghu; Kessel, Rüdiger

    2013-01-01

    A novel variance-based measure for global sensitivity analysis, termed a variance gradient (VG), is presented for constructing uncertainty budgets under the Guide to the Expression of Uncertainty in Measurement (GUM) framework for nonlinear measurement functions with independent inputs. The motivation behind VGs is the desire of metrologists to understand which inputs' variance reductions would most effectively reduce the variance of the measurand. VGs are particularly useful when the application of the first supplement to the GUM is indicated because of the inadequacy of measurement function linearization. However, VGs reduce to a commonly understood variance decomposition in the case of a linear(ized) measurement function with independent inputs for which the original GUM readily applies. The usefulness of VGs is illustrated by application to an example from the first supplement to the GUM, as well as to the benchmark Ishigami function. A comparison of VGs to other available sensitivity measures is made. (paper)

  15. Variance in parametric images: direct estimation from parametric projections

    International Nuclear Information System (INIS)

    Maguire, R.P.; Leenders, K.L.; Spyrou, N.M.

    2000-01-01

    Recent work has shown that it is possible to apply linear kinetic models to dynamic projection data in PET in order to calculate parameter projections. These can subsequently be back-projected to form parametric images - maps of parameters of physiological interest. Critical to the application of these maps, to test for significant changes between normal and pathophysiology, is an assessment of the statistical uncertainty. In this context, parametric images also include simple integral images from, e.g., [O-15]-water used to calculate statistical parametric maps (SPMs). This paper revisits the concept of parameter projections and presents a more general formulation of the parameter projection derivation as well as a method to estimate parameter variance in projection space, showing which analysis methods (models) can be used. Using simulated pharmacokinetic image data we show that a method based on an analysis in projection space inherently calculates the mathematically rigorous pixel variance. This results in an estimation which is as accurate as either estimating variance in image space during model fitting, or estimation by comparison across sets of parametric images - as might be done between individuals in a group pharmacokinetic PET study. The method based on projections has, however, a higher computational efficiency, and is also shown to be more precise, as reflected in smooth variance distribution images when compared to the other methods. (author)

  16. Electron and positron contributions to the displacement per atom profile in bulk multi-walled carbon nanotube material irradiated with gamma rays

    International Nuclear Information System (INIS)

    Leyva Fabelo, Antonio; Pinnera Hernandez, Ibrahin; Leyva Pernia, Diana

    2013-01-01

    The electron and positron contributions to the effective atom displacement cross-section in multi-walled carbon nanotube bulk materials exposed to gamma rays were calculated. The physical properties and the displacement threshold energy value reported in the literature for this material were taken into account. Then, using mathematical simulation of photon and particle transport in matter, the electron and positron energy flux distributions within the irradiated object were also calculated. Finally, considering both results, the atom displacement damage profiles inside the analyzed bulk carbon nanotube material were determined. The individual contribution from each type of secondary particle generated by the photon interactions was specified. An increasing behavior of the displacement cross-sections over the whole range of particle energies studied was observed. The minimum kinetic energy values of the particles that make the single and multiple atom displacement processes probabilistically possible were determined. The importance of the positron contribution to the total number of point defects generated during the interaction of gamma rays with the studied materials was confirmed.

  17. A geometric approach to multiperiod mean variance optimization of assets and liabilities

    OpenAIRE

    Leippold, Markus; Trojani, Fabio; Vanini, Paolo

    2005-01-01

    We present a geometric approach to discrete-time multiperiod mean-variance portfolio optimization that largely simplifies the mathematical analysis and the economic interpretation of such model settings. We show that multiperiod mean-variance optimal policies can be decomposed into an orthogonal set of basis strategies, each having a clear economic interpretation. This implies that the corresponding multiperiod mean-variance frontiers are spanned by an orthogonal basis of dynamic returns. Spec...

  18. Bulk solitary waves in elastic solids

    Science.gov (United States)

    Samsonov, A. M.; Dreiden, G. V.; Semenova, I. V.; Shvartz, A. G.

    2015-10-01

    A short and object-oriented conspectus of bulk solitary wave theory, numerical simulations and real experiments in condensed matter is given. Upon a brief description of the soliton history and development we focus on bulk solitary waves of strain, also known as waves of density and, sometimes, as elastic and/or acoustic solitons. We consider the problem of nonlinear bulk wave generation and detection in basic structural elements, rods, plates and shells, that are exhaustively studied and widely used in physics and engineering. However, this is mostly valid for linear elasticity, whereas the dynamic nonlinear theory of these elements is still far from complete. In order to show how nonlinear waves can be used in various applications, we studied solitary elastic wave propagation along lengthy wave guides, and remarkably small attenuation of elastic solitons was proven in physical experiments. Both the theory and the generation of a strain soliton in a shell, however, remained unsolved problems until recently, and we consider in more detail the nonlinear bulk wave propagation in a shell. We studied an axially symmetric deformation of an infinite nonlinearly elastic cylindrical shell without torsion. The problem for bulk longitudinal waves is shown to be reducible to one equation, if a relation between the transversal displacement and the longitudinal strain is found. It is found that both the 1+1D and even the 1+2D problems for long travelling waves in nonlinear solids can be reduced to the Weierstrass equation for elliptic functions, which provides the solitary wave solutions as appropriate limits. We show that the accuracy of the boundary conditions on free lateral surfaces is of crucial importance for the solution, derive the single equation for the longitudinal nonlinear strain wave and show that the equation has, amongst others, a bidirectional solitary wave solution, which led us to successful physical experiments. We first observed the compression solitary wave in the

  19. Module 13: Bulk Packaging Shipments by Highway

    International Nuclear Information System (INIS)

    Przybylski, J.L.

    1994-07-01

    The Hazardous Materials Modular Training Program provides participating United States Department of Energy (DOE) sites with a basic, yet comprehensive, hazardous materials transportation training program for use onsite. This program may be used to assist individual program entities to satisfy the general awareness, safety training, and function-specific training requirements addressed in Code of Federal Regulations (CFR), Title 49, Part 172, Subpart H, "Training." Module 13, Bulk Packaging Shipments by Highway, is a supplement to the Basic Hazardous Materials Workshop. It focuses on bulk shipments of hazardous materials by highway mode, which have additional or unique requirements beyond those addressed in the ten-module core program. Attendance in this course of instruction should be limited to those individuals with work experience in transporting hazardous materials utilizing bulk packagings who have completed the Basic Hazardous Materials Workshop or an equivalent. Participants will become familiar with the rules and regulations governing the transportation by highway of hazardous materials in bulk packagings and will demonstrate the application of these requirements through work projects and examination

  20. 27 CFR 20.191 - Bulk articles.

    Science.gov (United States)

    2010-04-01

    ... Users of Specially Denatured Spirits, Operations by Users. § 20.191 Bulk articles. Users who convey articles in containers exceeding one gallon may provide the recipient with a photocopy of subpart G of this...

  1. Mean-variance portfolio selection and efficient frontier for defined contribution pension schemes

    DEFF Research Database (Denmark)

    Højgaard, Bjarne; Vigna, Elena

    We solve a mean-variance portfolio selection problem in the accumulation phase of a defined contribution pension scheme. The efficient frontier, which is found for the 2 asset case as well as the n + 1 asset case, gives the member the possibility to decide his own risk/reward profile. The mean ... as a mean-variance optimization problem. It is shown that the corresponding mean and variance of the final fund belong to the efficient frontier and also the opposite, that each point on the efficient frontier corresponds to a target-based optimization problem. Furthermore, numerical results indicate that the largely adopted lifestyle strategy seems to be very far from being efficient in the mean-variance setting.

  2. ASYMMETRY OF MARKET RETURNS AND THE MEAN VARIANCE FRONTIER

    OpenAIRE

    SENGUPTA, Jati K.; PARK, Hyung S.

    1994-01-01

    The hypothesis that skewness and asymmetry have no significant impact on the mean-variance frontier is found to be strongly violated by monthly U.S. data over the period January 1965 through December 1974. This result raises serious doubts about whether common market portfolios such as the S&P 500, value-weighted and equal-weighted returns can serve as suitable proxies for mean-variance efficient portfolios in the CAPM framework. A new test for assessing the impact of skewness on the variance fr...

  3. Bulk metallic glass matrix composites

    International Nuclear Information System (INIS)

    Choi-Yim, H.; Johnson, W.L.

    1997-01-01

    Composites with a bulk metallic glass matrix were synthesized and characterized. This was made possible by the recent development of bulk metallic glasses that exhibit high resistance to crystallization in the undercooled liquid state. In this letter, experimental methods for processing metallic glass composites are introduced. Three different bulk metallic glass forming alloys were used as the matrix materials. Both ceramics and metals were introduced as reinforcement into the metallic glass. The metallic glass matrix remained amorphous after adding up to a 30 vol% fraction of particles or short wires. X-ray diffraction patterns of the composites show only peaks from the second phase particles superimposed on the broad diffuse maxima from the amorphous phase. Optical micrographs reveal uniformly distributed particles in the matrix. The glass transition of the amorphous matrix and the crystallization behavior of the composites were studied by calorimetric methods. copyright 1997 American Institute of Physics

  4. Global Gravity Wave Variances from Aura MLS: Characteristics and Interpretation

    Science.gov (United States)

    2008-12-01

    slight longitudinal variations, with secondary high-latitude peaks occurring over Greenland and Europe. As the QBO changes to the westerly phase, the ... equatorial GW temperature variances from suborbital data (e.g., Eckermann et al. 1995). The extratropical wave variances are generally larger in the ... emanating from tropopause altitudes, presumably radiated from tropospheric jet stream instabilities associated with baroclinic storm systems.

  5. A New Approach for Predicting the Variance of Random Decrement Functions

    DEFF Research Database (Denmark)

    Asmussen, J. C.; Brincker, Rune

    For zero-mean Gaussian distributed processes, the RD functions are proportional to the correlation functions of the processes. If a linear structure is loaded by Gaussian white noise, the modal parameters can be extracted from the correlation functions of the response alone. One of the weaknesses of the RD technique is that no consistent approach to estimating the variance of the RD functions is known. Only approximate relations are available, which can only be used under special conditions. The variance of the RD functions contains valuable information about the accuracy of the estimates. Furthermore, the variance can be used as a basis for deciding how many time lags from the RD functions should be used in the modal parameter extraction procedure. This paper suggests a new method for estimating the variance of the RD functions. The method is consistent in the sense that the accuracy of the approach...

  6. Use of genomic models to study genetic control of environmental variance

    DEFF Research Database (Denmark)

    Yang, Ye; Christensen, Ole Fredslund; Sorensen, Daniel

    2011-01-01

    The genomic model commonly found in the literature, with marker effects affecting mean only, is extended to investigate putative effects at the level of the environmental variance. Two classes of models are proposed and their behaviour, studied using simulated data, indicates that they are capable of detecting genetic variation at the level of mean and variance. Implementation is via Markov chain Monte Carlo (McMC) algorithms. The models are compared in terms of a measure of global fit, in their ability to detect QTL effects and in terms of their predictive power. The models are subsequently fitted to back fat thickness data in pigs. The analysis of back fat thickness shows that the data support genomic models with effects on the mean but not on the variance. The relative sizes of experiment necessary to detect effects on mean and variance are discussed, as is an extension of the McMC algorithm

  7. A New Approach for Predicting the Variance of Random Decrement Functions

    DEFF Research Database (Denmark)

    Asmussen, J. C.; Brincker, Rune

    1998-01-01

    For zero-mean Gaussian distributed processes, the RD functions are proportional to the correlation functions of the processes. If a linear structure is loaded by Gaussian white noise, the modal parameters can be extracted from the correlation functions of the response alone. One of the weaknesses of the RD technique is that no consistent approach to estimating the variance of the RD functions is known. Only approximate relations are available, which can only be used under special conditions. The variance of the RD functions contains valuable information about the accuracy of the estimates. Furthermore, the variance can be used as a basis for deciding how many time lags from the RD functions should be used in the modal parameter extraction procedure. This paper suggests a new method for estimating the variance of the RD functions. The method is consistent in the sense that the accuracy of the approach...

  8. 30 CFR 57.19021 - Minimum rope strength.

    Science.gov (United States)

    2010-07-01

    ... feet: Minimum Value=Static Load×(7.0−0.001L) For rope lengths 3,000 feet or greater: Minimum Value=Static Load×4.0. (b) Friction drum ropes. For rope lengths less than 4,000 feet: Minimum Value=Static Load×(7.0−0.0005L) For rope lengths 4,000 feet or greater: Minimum Value=Static Load×5.0. (c) Tail...
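
    The formulas quoted above translate directly into code. A small helper (the label of paragraph (a) is truncated in the record, so it is simply treated here as the non-friction-drum case):

```python
def minimum_rope_value(static_load, rope_length_ft, friction_drum=False):
    """Minimum rope strength per 30 CFR 57.19021 (56.19021 and 77.1431
    state the same formulas); the result carries the static load's units."""
    L = rope_length_ft
    if friction_drum:  # paragraph (b)
        factor = 7.0 - 0.0005 * L if L < 4000 else 5.0
    else:              # paragraph (a)
        factor = 7.0 - 0.001 * L if L < 3000 else 4.0
    return static_load * factor

# e.g. minimum_rope_value(10.0, 2000) -> 10 * (7.0 - 2.0) = 50.0
```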

  9. 30 CFR 56.19021 - Minimum rope strength.

    Science.gov (United States)

    2010-07-01

    ... feet: Minimum Value=Static Load×(7.0-0.001L) For rope lengths 3,000 feet or greater: Minimum Value=Static Load×4.0 (b) Friction drum ropes. For rope lengths less than 4,000 feet: Minimum Value=Static Load×(7.0-0.0005L) For rope lengths 4,000 feet or greater: Minimum Value=Static Load×5.0 (c) Tail ropes...

  10. Some novel inequalities for fuzzy variables on the variance and its rational upper bound

    Directory of Open Access Journals (Sweden)

    Xiajie Yi

    2016-02-01

    Full Text Available Variance is of great significance in measuring the degree of deviation, which has gained extensive usage in many fields in practical scenarios. The definition of the variance on the basis of the credibility measure was first put forward in 2002. Following this idea, the calculation of the accurate value of the variance for some special fuzzy variables, like the symmetric and asymmetric triangular fuzzy numbers and the Gaussian fuzzy numbers, is presented in this paper, which turns out to be far more complicated. Thus, in order to better implement variance in real-life projects like risk control and quality management, we suggest a rational upper bound of the variance based on an inequality, together with its calculation formula, which can largely simplify the calculation process within a reasonable range. Meanwhile, some discussions between the variance and its rational upper bound are presented to show the rationality of the latter. Furthermore, two inequalities regarding the rational upper bound of variance and standard deviation of the sum of two fuzzy variables and their individual variances and standard deviations are proved. Subsequently, some numerical examples are illustrated to show the effectiveness and the feasibility of the proposed inequalities.

  11. A class of multi-period semi-variance portfolio for petroleum exploration and development

    Science.gov (United States)

    Guo, Qiulin; Li, Jianzhong; Zou, Caineng; Guo, Yujuan; Yan, Wei

    2012-10-01

    Variance is substituted by semi-variance in Markowitz's portfolio selection model. For dynamic valuation of exploration and development projects, one-period portfolio selection is extended to multi-period. In this article, a class of multi-period semi-variance exploration and development portfolio models is formulated. In addition, a hybrid genetic algorithm, which makes use of the position displacement strategy of the particle swarm optimiser as a mutation operation, is applied to solve the multi-period semi-variance model. For this class of portfolio model, numerical results show that the model is effective and feasible.
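
    The substitution the first sentence describes is simple to state in code: only shortfalls below a target enter the risk measure. A minimal single-period sketch (the multi-period model and the hybrid genetic algorithm are beyond this fragment):

```python
import numpy as np

def semi_variance(returns, target=None):
    """Downside semi-variance: mean squared shortfall below a target
    (the mean return by default)."""
    r = np.asarray(returns, float)
    t = r.mean() if target is None else target
    shortfall = np.minimum(r - t, 0.0)  # zero when above the target
    return np.mean(shortfall ** 2)

print(semi_variance([0.10, -0.05, 0.03, -0.12, 0.07]))
```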

  12. Bayesian evaluation of constrained hypotheses on variances of multiple independent groups

    NARCIS (Netherlands)

    Böing-Messing, F.; van Assen, M.A.L.M.; Hofman, A.D.; Hoijtink, H.; Mulder, J.

    2017-01-01

    Research has shown that independent groups often differ not only in their means, but also in their variances. Comparing and testing variances is therefore of crucial importance to understand the effect of a grouping variable on an outcome variable. Researchers may have specific expectations

  13. Development of a treatability variance guidance document for US DOE mixed-waste streams

    International Nuclear Information System (INIS)

    Scheuer, N.; Spikula, R.; Harms, T.

    1990-03-01

    In response to the US Department of Energy's (DOE's) anticipated need for variances from the Resource Conservation and Recovery Act (RCRA) Land Disposal Restrictions (LDRs), a treatability variance guidance document was prepared. The guidance manual is for use by DOE facilities and operations offices. The manual was prepared as a part of an ongoing effort by DOE-EH to provide guidance for the operations offices and facilities to comply with the RCRA (LDRs). A treatability variance is an alternative treatment standard granted by EPA for a restricted waste. Such a variance is not an exemption from the requirements of the LDRs, but rather is an alternative treatment standard that must be met before land disposal. The manual, Guidance For Obtaining Variance From the Treatment Standards of the RCRA Land Disposal Restrictions (1), leads the reader through the process of evaluating whether a variance from the treatment standard is a viable approach and through the data-gathering and data-evaluation processes required to develop a petition requesting a variance. The DOE review and coordination process is also described and model language for use in petitions for DOE radioactive mixed waste (RMW) is provided. The guidance manual focuses on RMW streams, however the manual also is applicable to nonmixed, hazardous waste streams. 4 refs

  14. Complex differential variance angiography with noise-bias correction for optical coherence tomography of the retina.

    Science.gov (United States)

    Braaf, Boy; Donner, Sabine; Nam, Ahhyun S; Bouma, Brett E; Vakoc, Benjamin J

    2018-02-01

    Complex differential variance (CDV) provides phase-sensitive angiographic imaging for optical coherence tomography (OCT) with immunity to phase instabilities of the imaging system and small-scale axial bulk motion. However, as in all angiographic methods, measurement noise can result in erroneous indications of blood flow that confuse the interpretation of angiographic images. In this paper, a modified CDV algorithm that corrects for this noise bias is presented. This is achieved by normalizing the CDV signal by analytically derived upper and lower limits. The noise-bias-corrected CDV algorithm was implemented in an experimental 1 μm wavelength OCT system for retinal imaging that used an eye-tracking scanning laser ophthalmoscope at 815 nm for compensation of lateral eye motion. The noise-bias correction improved CDV imaging of blood flow in tissue layers with a low signal-to-noise ratio and suppressed false indications of blood flow outside the tissue. In addition, the CDV signal normalization suppressed noise induced by galvanometer scanning errors and small-scale lateral motion. High-quality cross-sectional and motion-corrected en face angiograms of the retina and choroid are presented.
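
    For orientation, a minimal sketch of the baseline CDV contrast of Nam et al. (2014), on which this paper builds; the analytic noise-bias normalization introduced here is not reproduced, and the array shapes and window choice are assumptions:

```python
import numpy as np
from scipy.ndimage import convolve

def cdv_baseline(frame1, frame2, window=5):
    """Baseline complex differential variance between two repeated complex
    OCT B-scans (depth x lateral): one minus the magnitude of the axially
    windowed inter-frame correlation, normalized by the windowed intensity.
    The axial window confers the immunity to small axial bulk motion."""
    kernel = np.hanning(window)[:, None]            # axial weighting
    smooth = lambda x: convolve(x, kernel, mode="nearest")
    num = frame1 * np.conj(frame2)
    num_s = smooth(num.real) + 1j * smooth(num.imag)
    den_s = smooth(0.5 * (np.abs(frame1) ** 2 + np.abs(frame2) ** 2))
    return np.sqrt(np.clip(1.0 - np.abs(num_s) / den_s, 0.0, 1.0))
```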

  15. 30 CFR 77.1431 - Minimum rope strength.

    Science.gov (United States)

    2010-07-01

    ... feet: Minimum Value=Static Load×(7.0−0.001L) For rope lengths 3,000 feet or greater: Minimum Value=Static Load×4.0 (b) Friction drum ropes. For rope lengths less than 4,000 feet: Minimum Value=Static Load×(7.0−0.0005L) For rope lengths 4,000 feet or greater: Minimum Value=Static Load×5.0 (c) Tail ropes...

  16. On the noise variance of a digital mammography system

    International Nuclear Information System (INIS)

    Burgess, Arthur

    2004-01-01

    A recent paper by Cooper et al. [Med. Phys. 30, 2614-2621 (2003)] contains some apparently anomalous results concerning the relationship between pixel variance and x-ray exposure for a digital mammography system. They found an unexpected peak in a display domain pixel variance plot as a function of 1/mAs (their Fig. 5) with a decrease in the range corresponding to high display data values, corresponding to low x-ray exposures. As they pointed out, if the detector response is linear in exposure and the transformation from raw to display data scales is logarithmic, then pixel variance should be a monotonically increasing function in the figure. They concluded that the total system transfer curve, between input exposure and display image data values, is not logarithmic over the full exposure range. They separated data analysis into two regions and plotted the logarithm of display image pixel variance as a function of the logarithm of the mAs used to produce the phantom images. They found a slope of minus one for high mAs values and concluded that the transfer function is logarithmic in this region. They found a slope of 0.6 for the low mAs region and concluded that the transfer curve was neither linear nor logarithmic for low exposure values. It is known that the digital mammography system investigated by Cooper et al. has a linear relationship between exposure and raw data values [Vedantham et al., Med. Phys. 27, 558-567 (2000)]. The purpose of this paper is to show that the variance effect found by Cooper et al. (their Fig. 5) arises because the transformation from the raw data scale (14 bits) to the display scale (12 bits), for the digital mammography system they investigated, is not logarithmic for raw data values less than about 300 (display data values greater than about 3300). At low raw data values the transformation is linear and prevents over-ranging of the display data scale. Parametric models for the two transformations will be presented. Results of pixel
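
    The slope arguments in this discussion follow from first-order error propagation through the display transform; a sketch under the stated assumptions (logarithmic display segment, quantum-limited raw noise with Var(R) proportional to R and R proportional to mAs):

```latex
% Delta-method propagation through the display transform D(R):
\mathrm{Var}(D) \approx \left(\frac{dD}{dR}\right)^{2}\mathrm{Var}(R).
% Logarithmic segment D = a \ln R + b with \mathrm{Var}(R) = cR,
% R \propto \mathrm{mAs}:
\mathrm{Var}(D) \approx \frac{a^{2}}{R^{2}}\, cR = \frac{a^{2}c}{R}
  \propto \frac{1}{\mathrm{mAs}} \quad\text{(log-log slope } -1\text{)}.
% Linear low-exposure segment D = \alpha R + \beta instead gives
\mathrm{Var}(D) \approx \alpha^{2} cR \propto \mathrm{mAs},
% which breaks the slope of -1 at low exposures.
```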

  17. Bulk-viscosity-driven asymmetric inflationary universe

    International Nuclear Information System (INIS)

    Waga, I.; Lima, J.A.S.; Portugal, R.

    1987-01-01

    A primordial net bosonic charge is introduced in the context of bulk-viscosity-driven inflationary models. The analysis is carried out from a macroscopic point of view in the framework of causal thermodynamic theory. The conditions for having exponential and generalized inflation are obtained. A phenomenological expression for the bulk viscosity coefficient is also derived. (author) [pt

  18. Simulation of bulk aerosol direct radiative effects and its climatic feedbacks in South Africa using RegCM4

    Science.gov (United States)

    Tesfaye, M.; Botai, J.; Sivakumar, V.; Mengistu Tsidu, G.; Rautenbach, C. J. deW.; Moja, Shadung J.

    2016-05-01

    In this study, 12-year runs of the Regional Climate Model (RegCM4) have been used to analyze bulk aerosol radiative effects and their climatic feedbacks in South Africa. Due to the geographical locations of the aerosol potential source regions and the regional dynamics, the South African aerosol spatial distribution has a unique character. Across the west and southwest areas, desert dust particles are dominant. However, sulfate and carbonaceous aerosols are primarily distributed over the east and northern regions of the country. Analysis of the Radiative Effects (RE) shows that in South Africa the bulk aerosols play a role in reducing the net radiation absorbed by the surface via enhancing the net radiative heating in the atmosphere. Hence, across all seasons, the bulk aerosol-radiation-climate interaction induced statistically significant positive feedback on the net atmospheric heating rate. Over the western and central parts of South Africa, the overall radiative feedback of bulk aerosol predominantly induces statistically significant Cloud Cover (CC) enhancements, whereas over the east and southeast coastal areas it induces minimal reductions in CC. The CC enhancement and RE of aerosols jointly induce radiative cooling at the surface, which in turn results in the reduction of Surface Temperature (ST: up to -1 K) and Surface Sensible Heat Flux (SSHF: up to -24 W/m2). The ST and SSHF decreases cause a weakening of the convectively driven turbulence and surface buoyancy fluxes, which leads to a reduction of the boundary layer height, surface pressure enhancement and dynamical changes. Throughout the year, the maximum values of the direct and semi-direct effects of bulk aerosol were found in areas of South Africa dominated by desert dust particles. This signals the need for a strategic regional plan to reduce dust production and to monitor dust dispersion, as well as the need for further research on different

  19. Variance of a product with application to uranium estimation

    International Nuclear Information System (INIS)

    Lowe, V.W.; Waterman, M.S.

    1976-01-01

    The U in a container can either be determined directly by NDA or by estimating the weight of material in the container and the concentration of U in this material. It is important to examine the statistical properties of estimating the amount of U by multiplying the estimates of weight and concentration. The variance of the product determines the accuracy of the estimate of the amount of uranium. This paper examines the properties of estimates of the variance of the product of two random variables
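
    For independent weight X and concentration Y, the exact moments of the product are standard and give the flavor of the analysis; the independent case shown here is the textbook benchmark (correlated estimates require additional covariance terms):

```latex
\mathrm{E}[XY] = \mu_X \mu_Y, \qquad
\mathrm{Var}(XY) = \sigma_X^{2}\sigma_Y^{2}
                 + \mu_X^{2}\sigma_Y^{2}
                 + \mu_Y^{2}\sigma_X^{2}
\quad\text{(X, Y independent)}.
```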

  20. A Phosphate Minimum in the Oxygen Minimum Zone (OMZ) off Peru

    Science.gov (United States)

    Paulmier, A.; Giraud, M.; Sudre, J.; Jonca, J.; Leon, V.; Moron, O.; Dewitte, B.; Lavik, G.; Grasse, P.; Frank, M.; Stramma, L.; Garcon, V.

    2016-02-01

    The Oxygen Minimum Zone (OMZ) off Peru is known to be associated with the advection of Equatorial SubSurface Waters (ESSW), rich in nutrients and poor in oxygen, through the Peru-Chile UnderCurrent (PCUC), but this circulation remains to be refined within the OMZ. During the Pelágico cruise in November-December 2010, measurements of phosphate revealed the presence of a phosphate minimum (Pmin) at various hydrographic stations, which has not been explained so far and could be associated with a specific water mass. This Pmin, localized in a relatively constant layer (...), shows a mean vertical phosphate decrease of 0.6 µM, although it is highly variable, between 0.1 and 2.2 µM. On average, these Pmin are associated with a predominant mixing of SubTropical Under- and Surface Waters (STUW and STSW: approximately 20 and 40%, respectively) within ESSW (approximately 25%), complemented evenly by overlying (ESW, TSW: 8%) and underlying waters (AAIW, SPDW: 7%). The hypotheses and mechanisms leading to the Pmin formation in the OMZ are further explored and discussed, considering the regional physical contribution associated with various circulation pathways ventilating the OMZ and the local biogeochemical contribution, including potential diazotrophic activity.

  1. Torsional shear flow of granular materials: shear localization and minimum energy principle

    Science.gov (United States)

    Artoni, Riccardo; Richard, Patrick

    2018-01-01

    The rheological properties of granular matter submitted to torsional shear are investigated numerically by means of the discrete element method. The shear cell is made of a cylinder filled with grains, which are sheared by a bumpy bottom and submitted to a vertical pressure applied at the top. Regimes differing in their strain localization features are observed. They originate from the competition between dissipation at the sidewalls and dissipation in the bulk of the system. The effects of (i) the applied pressure, (ii) sidewall friction, and (iii) angular velocity are investigated. A model based on the purely local μ(I)-rheology and a minimum energy principle is able to capture the effect of the two former quantities but unable to account for the effect of the latter. Although an ad hoc modification of the model allows all the numerical results to be reproduced, our results point out the need for an alternative rheology.
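
    The fit parameters are not given in the abstract, but the standard form of the local μ(I) rheology invoked above (Jop, Forterre and Pouliquen, 2006) is:

```latex
\mu(I) = \mu_s + \frac{\mu_2 - \mu_s}{1 + I_0/I},
\qquad
I = \frac{\dot{\gamma}\, d}{\sqrt{P/\rho_s}}
```

    with grain diameter d, grain density ρ_s, confining pressure P, shear rate γ̇, and fitted constants μ_s, μ_2 and I_0.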

  2. Accounting for non-stationary variance in geostatistical mapping of soil properties

    NARCIS (Netherlands)

    Wadoux, Alexandre M.J.C.; Brus, Dick J.; Heuvelink, Gerard B.M.

    2018-01-01

    Simple and ordinary kriging assume a constant mean and variance of the soil variable of interest. This assumption is often implausible because the mean and/or variance are linked to terrain attributes, parent material or other soil forming factors. In kriging with external drift (KED)

  3. Bulk velocity extraction for nano-scale Newtonian flows

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Wenfei, E-mail: zwenfei@gmail.com [Key Laboratory of Mechanical Reliability for Heavy Equipments and Large Structures of Hebei Province, Yanshan University, Qinhuangdao 066004 (China); Sun, Hongyu [Key Laboratory of Mechanical Reliability for Heavy Equipments and Large Structures of Hebei Province, Yanshan University, Qinhuangdao 066004 (China)

    2012-04-16

    The conventional velocity extraction algorithm in the MDS method has difficulty determining small flow velocities. This study proposes a new method to calculate the bulk velocity in nano-flows. Based on Newton's law of viscosity, the flow velocity can be obtained by numerical integration of the calculated viscosities and shear stresses. This new method overcomes the difficulty encountered in the conventional MDS method and improves the stability of the computational process. Numerical results show that the method is effective for the extraction of bulk velocity, whether the bulk velocity is large or small. -- Highlights: ► Proposed a new method to calculate the bulk velocity in nano-flows. ► It is effective for the extraction of small bulk velocity. ► The accuracy, convergence and stability of the new method are good.
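
    The integration step the abstract describes can be sketched directly: with Newton's law of viscosity, du/dy = τ/μ, so the profile follows by quadrature. A hedged illustration (names and discretization are assumptions, not the authors' code):

```python
import numpy as np

def velocity_profile(y, shear_stress, viscosity, u_wall=0.0):
    """Integrate du/dy = tau/mu across the channel (cumulative trapezoid),
    recovering u(y) from computed stresses and viscosities instead of
    averaging noisy particle velocities directly."""
    dudy = np.asarray(shear_stress, float) / np.asarray(viscosity, float)
    increments = 0.5 * (dudy[1:] + dudy[:-1]) * np.diff(y)
    return u_wall + np.concatenate(([0.0], np.cumsum(increments)))

# Bulk (cross-section averaged) velocity:
# u = velocity_profile(y, tau, mu); u_bulk = np.trapz(u, y) / (y[-1] - y[0])
```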

  4. Bulk velocity extraction for nano-scale Newtonian flows

    International Nuclear Information System (INIS)

    Zhang, Wenfei; Sun, Hongyu

    2012-01-01

    The conventional velocity extraction algorithm in the MDS method has difficulty determining small flow velocities. This study proposes a new method to calculate the bulk velocity in nano-flows. Based on Newton's law of viscosity, the flow velocity can be obtained by numerical integration of the calculated viscosities and shear stresses. This new method overcomes the difficulty encountered in the conventional MDS method and improves the stability of the computational process. Numerical results show that the method is effective for the extraction of bulk velocity, whether the bulk velocity is large or small. -- Highlights: ► Proposed a new method to calculate the bulk velocity in nano-flows. ► It is effective for the extraction of small bulk velocity. ► The accuracy, convergence and stability of the new method are good.

  5. Ulnar variance: its relationship to ulnar foveal morphology and forearm kinematics.

    Science.gov (United States)

    Kataoka, Toshiyuki; Moritomo, Hisao; Omokawa, Shohei; Iida, Akio; Murase, Tsuyoshi; Sugamoto, Kazuomi

    2012-04-01

    It is unclear how individual differences in the anatomy of the distal ulna affect kinematics and pathology of the distal radioulnar joint. This study evaluated how ulnar variance relates to ulnar foveal morphology and the pronosupination axis of the forearm. We performed 3-dimensional computed tomography studies in vivo on 28 forearms in maximum supination and pronation to determine the anatomical center of the ulnar distal pole and the forearm pronosupination axis. We calculated the forearm pronosupination axis using a markerless bone registration technique, which determined the pronosupination center as the point where the axis emerges on the distal ulnar surface. We measured the depth of the anatomical center and classified it into 2 types: concave, with a depth of 0.8 mm or more, and flat, with a depth less than 0.8 mm. We examined whether ulnar variance correlated with foveal type and the distance between anatomical and pronosupination centers. A total of 18 cases had a concave-type fovea surrounded by the C-shaped articular facet of the distal pole, and 10 had a flat-type fovea with a flat surface without evident central depression. Ulnar variance of the flat type was 3.5 ± 1.2 mm, which was significantly greater than the 1.2 ± 1.1 mm of the concave type. Ulnar variance positively correlated with distance between the anatomical and pronosupination centers. Flat-type ulnar heads have a significantly greater ulnar variance than concave types. The pronosupination axis passes through the ulnar head more medially and farther from the anatomical center with increasing ulnar variance. This study suggests that ulnar variance is related in part to foveal morphology and pronosupination axis. This information provides a starting point for future studies investigating how foveal morphology relates to distal ulnar problems. Copyright © 2012 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.

  6. Brane Lorentz symmetry from Lorentz breaking in the bulk

    Energy Technology Data Exchange (ETDEWEB)

    Bertolami, O [Departamento de Fisica, Instituto Superior Tecnico, Avenida Rovisco Pais 1, 1049-001 Lisbon (Portugal); Carvalho, C [Departamento de Fisica, Instituto Superior Tecnico, Avenida Rovisco Pais 1, 1049-001 Lisbon (Portugal)

    2007-05-15

    We propose the mechanism of spontaneous symmetry breaking of a bulk vector field as a way to generate the selection of bulk dimensions invisible to the standard model confined to the brane. By assigning a nonvanishing vacuum value to the vector field, a direction is singled out in the bulk vacuum, thus breaking the bulk Lorentz symmetry. We present the condition for induced Lorentz symmetry on the brane, as phenomenologically required.

  7. The efficiency of the crude oil markets: Evidence from variance ratio tests

    Energy Technology Data Exchange (ETDEWEB)

    Charles, Amelie, E-mail: acharles@audencia.co [Audencia Nantes, School of Management, 8 route de la Joneliere, 44312 Nantes (France); Darne, Olivier, E-mail: olivier.darne@univ-nantes.f [LEMNA, University of Nantes, IEMN-IAE, Chemin de la Censive du Tertre, 44322 Nantes (France)

    2009-11-15

    This study examines the random walk hypothesis for the crude oil markets, using daily data over the period 1982-2008. The weak-form efficient market hypothesis for two crude oil markets (UK Brent and US West Texas Intermediate) is tested with non-parametric variance ratio tests developed by [Wright J.H., 2000. Alternative variance-ratio tests using ranks and signs. Journal of Business and Economic Statistics, 18, 1-9] and [Belaire-Franch J. and Contreras D., 2004. Ranks and signs-based multiple variance ratio tests. Working paper, Department of Economic Analysis, University of Valencia] as well as the wild-bootstrap variance ratio tests suggested by [Kim, J.H., 2006. Wild bootstrapping variance ratio tests. Economics Letters, 92, 38-43]. We find that the Brent crude oil market is weak-form efficient while the WTI crude oil market seems to be inefficient over the 1994-2008 sub-period, suggesting that deregulation has not improved the efficiency of the WTI crude oil market in the sense of making returns less predictable.
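
    The basic statistic behind all the tests cited above is the Lo-MacKinlay variance ratio; the ranks/signs and wild-bootstrap versions refine its inference. A bare-bones sketch (no bias correction or standard errors):

```python
import numpy as np

def variance_ratio(log_prices, q):
    """VR(q): variance of q-period returns divided by q times the variance
    of 1-period returns; VR(q) = 1 under a random walk."""
    p = np.asarray(log_prices, float)
    r1 = np.diff(p)                 # 1-period log returns
    rq = p[q:] - p[:-q]             # overlapping q-period log returns
    mu = r1.mean()
    var1 = np.mean((r1 - mu) ** 2)
    varq = np.mean((rq - q * mu) ** 2) / q
    return varq / var1
```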

  8. The efficiency of the crude oil markets. Evidence from variance ratio tests

    International Nuclear Information System (INIS)

    Charles, Amelie; Darne, Olivier

    2009-01-01

    This study examines the random walk hypothesis for the crude oil markets, using daily data over the period 1982-2008. The weak-form efficient market hypothesis for two crude oil markets (UK Brent and US West Texas Intermediate) is tested with non-parametric variance ratio tests developed by [Wright J.H., 2000. Alternative variance-ratio tests using ranks and signs. Journal of Business and Economic Statistics, 18, 1-9] and [Belaire-Franch J. and Contreras D., 2004. Ranks and signs-based multiple variance ratio tests. Working paper, Department of Economic Analysis, University of Valencia] as well as the wild-bootstrap variance ratio tests suggested by [Kim, J.H., 2006. Wild bootstrapping variance ratio tests. Economics Letters, 92, 38-43]. We find that the Brent crude oil market is weak-form efficient while the WTI crude oil market seems to be inefficient over the 1994-2008 sub-period, suggesting that deregulation has not improved the efficiency of the WTI crude oil market in the sense of making returns less predictable. (author)

  9. The efficiency of the crude oil markets. Evidence from variance ratio tests

    Energy Technology Data Exchange (ETDEWEB)

    Charles, Amelie [Audencia Nantes, School of Management, 8 route de la Joneliere, 44312 Nantes (France); Darne, Olivier [LEMNA, University of Nantes, IEMN-IAE, Chemin de la Censive du Tertre, 44322 Nantes (France)

    2009-11-15

    This study examines the random walk hypothesis for the crude oil markets, using daily data over the period 1982-2008. The weak-form efficient market hypothesis for two crude oil markets (UK Brent and US West Texas Intermediate) is tested with the non-parametric variance ratio tests developed by [Wright J.H., 2000. Alternative variance-ratio tests using ranks and signs. Journal of Business and Economic Statistics, 18, 1-9] and [Belaire-Franch J. and Contreras D., 2004. Ranks and signs-based multiple variance ratio tests. Working paper, Department of Economic Analysis, University of Valencia], as well as the wild-bootstrap variance ratio tests suggested by [Kim, J.H., 2006. Wild bootstrapping variance ratio tests. Economics Letters, 92, 38-43]. We find that the Brent crude oil market is weak-form efficient, while the WTI crude oil market appears to be inefficient over the 1994-2008 sub-period, suggesting that deregulation has not improved the efficiency of the WTI crude oil market in the sense of making returns less predictable. (author)

  10. Hydrograph variances over different timescales in hydropower production networks

    Science.gov (United States)

    Zmijewski, Nicholas; Wörman, Anders

    2016-08-01

    The operation of water reservoirs involves a spectrum of timescales based on the distribution of stream flow travel times between reservoirs, as well as the technical, environmental, and social constraints imposed on the operation. In this research, a hydrodynamically based description of the flow between hydropower stations was implemented to study the relative importance of wave diffusion on the spectrum of hydrograph variance in a regulated watershed. Using spectral decomposition of the effluence hydrograph of a watershed, an exact expression for the variance in the outflow response was derived as a function of the trends of hydraulic and geomorphologic dispersion and the management of production and reservoirs. We show that the power spectra of the time series involved follow nearly fractal patterns, which facilitates examination of the relative importance of wave diffusion and possible changes in production demand on the outflow spectrum. The exact spectral solution can also identify statistical bounds of future demand patterns due to limitations in storage capacity. The impact of the hydraulic description of the stream flow on the reservoir discharge was examined for a given power demand in River Dalälven, Sweden, as a function of a stream flow Peclet number. The regulation of hydropower production on the River Dalälven generally increased the short-term variance in the effluence hydrograph, whereas wave diffusion decreased the short-term variance, as a result of current production objectives.
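
    The spectral decomposition used here rests on Parseval's identity: the variance of a discharge time series equals the integral of its power spectral density, so variance can be attributed to frequency bands. A minimal numerical illustration on a synthetic discharge record (all values hypothetical, not the Dalälven data):

        import numpy as np
        from scipy.signal import periodogram

        rng = np.random.default_rng(1)
        t = np.arange(24 * 365)                    # one year of hourly samples
        # synthetic regulated discharge: seasonal cycle plus short-term noise
        q = 100 + 20 * np.sin(2 * np.pi * t / t.size) + 5 * rng.standard_normal(t.size)

        f, psd = periodogram(q, fs=1.0, detrend="constant")
        var_spectral = np.sum(psd) * (f[1] - f[0])   # Parseval: integral of the PSD
        print(var_spectral, q.var())                 # the two numbers agree closely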

  11. Large-scale HTS bulks for magnetic application

    Science.gov (United States)

    Werfel, Frank N.; Floegel-Delor, Uta; Riedel, Thomas; Goebel, Bernd; Rothfeld, Rolf; Schirrmeister, Peter; Wippich, Dieter

    2013-01-01

    ATZ Company has constructed about 130 HTS magnet systems using high-Tc bulk magnets. A key feature in scaling up is the fabrication of melt-textured, multi-seeded large YBCO bulks with three to eight seeds. Apart from levitation, magnetization, trapped field and hysteresis, we review system engineering parameters of HTS magnetic linear and rotational bearings, such as compactness, cryogenics, power density, efficiency and robust construction. We examine mobile compact YBCO bulk magnet platforms cooled with LN2 and a Stirling cryo-cooler for demonstrator use. Compact cryostats for Maglev train operation contain 24 pieces of 3-seed bulks and can levitate 2500-3000 N at 10 mm above a permanent magnet (PM) track. The effective magnetic distance of the thermally insulated bulks is only 2 mm; the stored 2.5 l of LN2 allows more than 24 h of operation without refilling. 34 HTS Maglev vacuum cryostats have been manufactured, tested and operated in Germany, China and Brazil. The magnetic levitation load-to-weight ratio is more than 15, and by assembling groups of HTS cryostats under vehicles, total levitated loads of up to 5 t above a magnetic track have been achieved.

  12. Characterisation of bulk solids

    Energy Technology Data Exchange (ETDEWEB)

    D. McGlinchey [Glasgow Caledonian University, Glasgow (United Kingdom). Centre for Industrial Bulk Solids Handling

    2005-07-01

    Handling of powders and bulk solids is a critical industrial technology across a broad spectrum of industries, including minerals processing. With contributions from leading authors in their respective fields, this book provides the reader with a sound understanding of the techniques, importance and application of particulate materials characterisation. It covers the fundamental characteristics of individual particles and bulk particulate materials, and includes discussion of a wide range of measurement techniques, and the use of material characteristics in design and industrial practice. Contents: Characterising particle properties; Powder mechanics and rheology; Characterisation for hopper and stockpile design; Fluidization behaviour; Characterisation for pneumatic conveyor design; Explosiblility; 'Designer' particle characteristics; Current industrial practice; and Future trends. 130 ills.

  13. Effect of fiber inserts on gingival margin microleakage of Class II bulk-fill composite resin restorations.

    Science.gov (United States)

    Shafiei, Fereshteh; Doozandeh, Maryam; Karimi, Vahid

    2018-01-01

    This study evaluated the effect of fiber inserts combined with composite resins on enamel and dentin margin microleakage. The fiber inserts were used with high-viscosity (x-tra fil) and low-viscosity (x-tra base) bulk-fill composite resins, as well as with conventional composite resins (Grandio and Grandio Flow). In 96 sound, recently extracted molars, 2 standardized Class II cavities were prepared. The teeth were randomly divided into 8 groups of 12 teeth each, based on composite resin type and the presence or absence of fiber inserts: groups 1 and 2, x-tra fil with and without fiber inserts, respectively; groups 3 and 4, x-tra base with and without fiber inserts; groups 5 and 6, Grandio with and without fiber inserts; and groups 7 and 8, Grandio Flow liner (gingival floor)/Grandio (remainder of cavity) with and without fiber inserts. In all the groups, a 2-step etch-and-rinse adhesive was used. The specimens were processed with a dye penetration technique to determine microleakage percentages. Data were analyzed with analysis of variance, Tukey, and t tests. There was significantly less leakage at the enamel margins than at the dentin margins. Fiber reinforcement significantly decreased enamel microleakage in all the groups, with no significant differences among the groups. Concerning dentin microleakage, there were no significant differences among the 4 groups without fiber inserts, while a significant difference was detected in groups 2 (x-tra fil plus fiber) and 8 (Grandio Flow plus fiber/Grandio). Fibers significantly improved dentin sealing in groups 2 and 8. These findings suggest that a fiber insert reinforcing bulk-fill and conventional composite resins might improve enamel sealing in shallow Class II cavities. The effect of fiber reinforcement on the dentin margins of deep cavities depended on the viscosity of the composite resins; fiber reinforcement was effective for flowable bulk-fill and conventional composite resin restorations.
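
    The statistical pipeline named in the record (one-way analysis of variance followed by Tukey post hoc comparisons) can be sketched as follows; the group names and microleakage percentages are hypothetical, not the study's data:

        import numpy as np
        from scipy.stats import f_oneway
        from statsmodels.stats.multicomp import pairwise_tukeyhsd

        rng = np.random.default_rng(2)
        # hypothetical dentin microleakage percentages, 12 teeth per group
        groups = {
            "xtra_fil_fiber": rng.normal(20, 5, 12),
            "xtra_fil": rng.normal(35, 5, 12),
            "grandio_flow_fiber": rng.normal(22, 5, 12),
            "grandio_flow": rng.normal(34, 5, 12),
        }

        F, p = f_oneway(*groups.values())      # one-way analysis of variance
        print(f"ANOVA: F={F:.2f}, p={p:.4f}")

        # Tukey HSD identifies which pairs of groups differ
        values = np.concatenate(list(groups.values()))
        labels = np.repeat(list(groups.keys()), 12)
        print(pairwise_tukeyhsd(values, labels))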

  14. Bulk viscosity and cosmological evolution

    International Nuclear Information System (INIS)

    Beesham, A.

    1996-01-01

    In a recent interesting paper, Pimentel and Diaz-Rivera (Nuovo Cimento B, 109 (1994) 1317) derived several solutions with bulk viscosity in homogeneous and isotropic cosmological models and discussed the properties of these solutions. In this paper the authors relate the solutions of Pimentel and Diaz-Rivera by simple transformations to solutions previously published in the literature, showing that all the solutions can be derived from the known existing ones. Drawbacks to these approaches to studying bulk viscosity are pointed out, and better approaches are indicated

  15. Variance reduction methods applied to deep-penetration problems

    International Nuclear Information System (INIS)

    Cramer, S.N.

    1984-01-01

    All deep-penetration Monte Carlo calculations require variance reduction methods. Before beginning with a detailed approach to these methods, several general comments concerning deep-penetration calculations by Monte Carlo, the associated variance reduction, and the similarities and differences of these with regard to non-deep-penetration problems will be addressed. The experienced practitioner of Monte Carlo methods will easily find exceptions to any of these generalities, but it is felt that these comments will aid the novice in understanding some of the basic ideas and nomenclature. Also, from a practical point of view, the discussions and developments presented are oriented toward use of the computer codes which are presented in segments of this Monte Carlo course
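
    As a toy illustration of why such methods are indispensable, consider estimating the probability that a particle crosses an absorbing slab ten mean free paths thick. Analog sampling almost never scores; biasing the path-length distribution and carrying statistical weights recovers the answer with far lower variance. The sketch below uses a pure-absorber model with made-up parameters, not the codes of this course:

        import numpy as np

        rng = np.random.default_rng(3)
        sigma, d, n = 1.0, 10.0, 100_000     # cross section, slab depth, histories
        exact = np.exp(-sigma * d)           # ~4.5e-5: a deep-penetration problem

        # Analog Monte Carlo: sample true free paths, score 1 on penetration
        x = rng.exponential(1 / sigma, n)
        analog = (x > d).astype(float)

        # Importance sampling: stretch path lengths, score the weight f(x)/g(x)
        sigma_b = 0.1                        # biased (smaller) cross section
        y = rng.exponential(1 / sigma_b, n)
        w = (sigma * np.exp(-sigma * y)) / (sigma_b * np.exp(-sigma_b * y))
        biased = w * (y > d)

        for name, s in [("analog", analog), ("biased", biased)]:
            print(f"{name}: mean={s.mean():.2e} (exact {exact:.2e}), var={s.var():.2e}")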

  16. Cumulative prospect theory and mean variance analysis. A rigorous comparison

    OpenAIRE

    Hens, Thorsten; Mayer, Janos

    2012-01-01

    We compare asset allocations derived for cumulative prospect theory (CPT) based on two different methods: maximizing CPT along the mean-variance efficient frontier and maximizing it without that restriction. We find that with normally distributed returns the difference is negligible. However, using standard asset allocation data of pension funds the difference is considerable. Moreover, with derivatives like call options the restriction to the mean-variance efficient frontier results in a siza...
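
    The restricted approach searches only over frontier portfolios, which have a closed form once means and covariances are fixed. A minimal sketch of that frontier computation (three hypothetical assets, short sales allowed; standard Markowitz algebra, not the authors' code):

        import numpy as np

        def frontier_portfolio(mu, cov, m):
            # minimum-variance weights with w'1 = 1 and w'mu = m (Lagrange solution)
            ones = np.ones_like(mu)
            inv = np.linalg.inv(cov)
            a, b, c = ones @ inv @ ones, ones @ inv @ mu, mu @ inv @ mu
            delta = a * c - b * b
            return ((c - b * m) * (inv @ ones) + (a * m - b) * (inv @ mu)) / delta

        mu = np.array([0.05, 0.08, 0.12])              # hypothetical mean returns
        cov = np.array([[0.04, 0.01, 0.00],
                        [0.01, 0.09, 0.02],
                        [0.00, 0.02, 0.16]])           # hypothetical covariances
        w = frontier_portfolio(mu, cov, 0.09)
        print(w, w.sum(), w @ mu)   # weights sum to 1 and hit the target mean

    Maximizing CPT along the frontier then reduces to a one-dimensional search over the target mean m.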

  17. Variance in exposed perturbations impairs retention of visuomotor adaptation.

    Science.gov (United States)

    Canaveral, Cesar Augusto; Danion, Frédéric; Berrigan, Félix; Bernier, Pierre-Michel

    2017-11-01

    Sensorimotor control requires an accurate estimate of the state of the body. The brain optimizes state estimation by combining sensory signals with predictions of the sensory consequences of motor commands using a forward model. Given that both sensory signals and predictions are uncertain (i.e., noisy), the brain optimally weights the relative reliance on each source of information during adaptation. In support, it is known that uncertainty in the sensory predictions influences the rate and generalization of visuomotor adaptation. We investigated whether uncertainty in the sensory predictions affects the retention of a new visuomotor relationship. This was done by exposing three separate groups to a visuomotor rotation whose mean was common at 15° counterclockwise but whose variance around the mean differed (i.e., SD of 0°, 3.2°, or 4.5°). Retention was assessed by measuring the persistence of the adapted behavior in a no-vision phase. Results revealed that mean reach direction late in adaptation was similar across groups, suggesting it depended mainly on the mean of exposed rotations and was robust to differences in variance. However, retention differed across groups, with higher levels of variance being associated with a more rapid reversion toward nonadapted behavior. A control experiment ruled out the possibility that differences in retention were accounted for by differences in success rates. Exposure to variable rotations may have increased the uncertainty in sensory predictions, making the adapted forward model more labile and susceptible to change or decay. NEW & NOTEWORTHY The brain predicts the sensory consequences of motor commands through a forward model. These predictions are subject to uncertainty. We use visuomotor adaptation and modulate uncertainty in the sensory predictions by manipulating the variance in exposed rotations. Results reveal that variance does not influence the final extent of adaptation but selectively impairs the retention of

  18. Variance Reduction Techniques in Monte Carlo Methods

    NARCIS (Netherlands)

    Kleijnen, Jack P.C.; Ridder, A.A.N.; Rubinstein, R.Y.

    2010-01-01

    Monte Carlo methods are simulation algorithms to estimate a numerical quantity in a statistical model of a real system. These algorithms are executed by computer programs. Variance reduction techniques (VRT) are needed, even though computer speed has been increasing dramatically, ever since the
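
    One of the simplest such techniques is antithetic variates: pairing each uniform draw u with its mirror 1 - u induces negative correlation between paired outputs and shrinks the variance of the averaged estimator. A minimal sketch on a textbook integral (illustrative, not taken from the chapter):

        import numpy as np

        rng = np.random.default_rng(4)
        n = 100_000

        # Plain Monte Carlo estimate of E[exp(U)], U ~ Uniform(0,1); exact value e - 1
        u = rng.random(n)
        plain = np.exp(u)

        # Antithetic variates: average exp(u) with exp(1 - u) for each draw
        half = rng.random(n // 2)
        anti = 0.5 * (np.exp(half) + np.exp(1.0 - half))

        print(f"plain:      mean={plain.mean():.5f}, estimator var={plain.var() / n:.2e}")
        print(f"antithetic: mean={anti.mean():.5f}, estimator var={anti.var() / (n // 2):.2e}")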

  19. Decomposition of variance for spatial Cox processes

    DEFF Research Database (Denmark)

    Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus

    Spatial Cox point processes provide a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models...

  20. Decomposition of variance for spatial Cox processes

    DEFF Research Database (Denmark)

    Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus

    2013-01-01

    Spatial Cox point processes provide a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models...

  1. Variance risk premia in CO_2 markets: A political perspective

    International Nuclear Information System (INIS)

    Reckling, Dennis

    2016-01-01

    The European Commission discusses the change of free allocation plans to guarantee a stable market equilibrium. Selling over-allocated contracts effectively depreciates prices and negates the effect intended by the regulator to establish a stable price mechanism for CO_2 assets. Our paper investigates mispricing and allocation issues by quantitatively analyzing variance risk premia of CO_2 markets over the course of changing regimes (Phase I-III) for three different assets (European Union Allowances, Certified Emissions Reductions and European Reduction Units). The paper gives recommendations to regulatory bodies on how to most effectively cap overall carbon dioxide emissions. The analysis of an enriched dataset, comprising not only additional CO_2 assets but also data from the European Energy Exchange, shows that variance risk premia are equal to a sample average of 0.69 for European Union Allowances (EUA), 0.17 for Certified Emissions Reductions (CER) and 0.81 for European Reduction Units (ERU). We identify the existence of a common risk factor across different assets that justifies the presence of risk premia. Various policy implications with regard to gaining investors' confidence in the market are reviewed. Consequently, we recommend the implementation of a price collar approach to support stable prices for emission allowances. - Highlights: •Enriched dataset covering all three political phases of the CO_2 markets. •Clear policy implications for regulators to most effectively cap the overall CO_2 emissions pool. •Applying a cross-asset benchmark index for variance beta estimation. •CER contracts analyzed with respect to variance risk premia for the first time. •Increased forecasting accuracy for CO_2 asset returns by using variance risk premia.
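
    A variance risk premium is commonly measured as the gap between the variance priced into options (risk-neutral expected variance) and the variance subsequently realized by the asset. A minimal sketch of that computation on synthetic data (the series, window length and sign convention are illustrative assumptions, not the paper's exact estimator):

        import numpy as np

        def variance_risk_premium(implied_var, returns, window=21):
            # realized variance per window: sum of squared daily returns
            rv = np.array([np.sum(returns[i:i + window] ** 2)
                           for i in range(0, len(returns) - window, window)])
            return implied_var[:len(rv)] - rv     # premium = implied - realized

        rng = np.random.default_rng(5)
        returns = 0.02 * rng.standard_normal(252)        # synthetic daily EUA returns
        implied = np.full(12, (0.02 ** 2) * 21 * 1.3)    # implied variance 30% above realized
        print(variance_risk_premium(implied, returns).mean())  # positive premium on average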

  2. Perspective projection for variance pose face recognition from camera calibration

    Science.gov (United States)

    Fakhir, M. M.; Woo, W. L.; Chambers, J. A.; Dlay, S. S.

    2016-04-01

    Variance pose is an important research topic in face recognition. The alteration of distance parameters across variance-pose face features is challenging. We provide a solution to this problem using perspective projection for variance-pose face recognition. Our method infers the intrinsic camera parameters of the image, which enable the projection of the image plane into 3D. After this, face-box tracking and eye-centre detection can be performed using our novel technique to verify the virtual face feature measurements. The coordinate system of the perspective projection for face tracking allows the holistic dimensions of the face to be fixed in different orientations. Training on frontal images and the remaining poses from the FERET database determines the distance from the centre of the eyes to the corner of the face box. The recognition system compares the gallery of images against different poses. The system initially utilises information on the position of both eyes, then focuses principally on the closest eye in order to gather data with greater reliability. Differentiation between the distances and positions of the right and left eyes is a unique feature of our work, with our algorithm outperforming other state-of-the-art algorithms and thus enabling stable measurement in variance pose for each individual.
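
    The geometric core of such a method is the pinhole perspective projection: with an inferred intrinsic matrix K and a camera pose (R, t), 3D face landmarks map to pixel coordinates. A minimal sketch with hypothetical intrinsics and eye-centre coordinates (not the paper's calibration values):

        import numpy as np

        def project(K, R, t, X):
            # world -> camera coordinates, perspective division, then intrinsics
            Xc = X @ R.T + t
            uv = Xc[:, :2] / Xc[:, 2:3]
            return uv @ K[:2, :2].T + K[:2, 2]

        K = np.array([[800.0, 0.0, 320.0],      # hypothetical focal lengths and
                      [0.0, 800.0, 240.0],      # principal point, in pixels
                      [0.0, 0.0, 1.0]])
        eyes = np.array([[-0.03, 0.0, 0.6],     # left/right eye centres, metres
                         [0.03, 0.0, 0.6]])
        print(project(K, np.eye(3), np.zeros(3), eyes))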

  3. Magnetic resonance study of bulk and thin film EuTiO3

    International Nuclear Information System (INIS)

    Laguta, V V; Kamba, S; Maryško, M; Andrzejewski, B; Kachlík, M; Maca, K; Lee, J H; Schlom, D G

    2017-01-01

    Magnetic resonance spectra of EuTiO3 in both bulk and thin film form were taken at temperatures from 3-350 K and microwave frequencies of 9.2-9.8 and 34 GHz. In the paramagnetic phase, magnetic resonance spectra are determined by magnetic dipole and exchange interactions between Eu^2+ spins. In the film, a large contribution arises from the demagnetization field. From detailed analysis of the linewidth and its temperature dependence, the parameters of spin-spin interactions were determined: the exchange frequency is 10.5 GHz and the estimated critical exponent of the spin correlation length is ≈0.4. In the bulk samples, the spectra exhibited a distinct minimum in the linewidth at the Néel temperature, T_N ≈ 5.5 K, while the resonance field practically does not change even on cooling below T_N. This is indicative of a small magnetic anisotropy, ∼320 G, in the antiferromagnetic phase. In the film, the magnetic resonance spectrum is split below T_N into several components due to excitation of magnetostatic modes, corresponding to a non-uniform precession of the magnetization. Moreover, the film was observed to degrade over two years. This was manifested by an increase of defects and a change in the domain structure. The saturated magnetization in the film, estimated from the magnetic resonance spectrum, was about 900 emu cm^-3 or 5.5 µB per unit cell at T = 3.5 K. (paper)

  4. A CFT perspective on gravitational dressing and bulk locality

    Energy Technology Data Exchange (ETDEWEB)

    Lewkowycz, Aitor; Turiaci, Gustavo J. [Physics Department, Princeton University,Princeton, NJ 08544 (United States); Verlinde, Herman [Physics Department, Princeton University,Princeton, NJ 08544 (United States); Princeton Center for Theoretical Science, Princeton University,Princeton, NJ 08544 (United States)

    2017-01-02

    We revisit the construction of local bulk operators in AdS/CFT with special focus on gravitational dressing and its consequences for bulk locality. Specializing to 2+1-dimensions, we investigate these issues via the proposed identification between bulk operators and cross-cap boundary states. We obtain explicit expressions for correlation functions of bulk fields with boundary stress tensor insertions, and find that they are free of non-local branch cuts but do have non-local poles. We recover the HKLL recipe for restoring bulk locality for interacting fields as the outcome of a natural CFT crossing condition. We show that, in a suitable gauge, the cross-cap states solve the bulk wave equation for general background geometries, and satisfy a conformal Ward identity analogous to a soft graviton theorem. Virasoro symmetry, the large N conformal bootstrap and the uniformization theorem all play a key role in our derivations.

  5. Radiation-hardened bulk CMOS technology

    International Nuclear Information System (INIS)

    Dawes, W.R. Jr.; Habing, D.H.

    1979-01-01

    The evolutionary development of a radiation-hardened bulk CMOS technology is reviewed. The metal gate hardened CMOS status is summarized, including both radiation and reliability data. The development of a radiation-hardened bulk silicon gate process which was successfully implemented to a commercial microprocessor family and applied to a new, radiation-hardened, LSI standard cell family is also discussed. The cell family is reviewed and preliminary characterization data is presented. Finally, a brief comparison of the various radiation-hardened technologies with regard to performance, reliability, and availability is made

  6. Variance-to-mean method generalized by linear difference filter technique

    International Nuclear Information System (INIS)

    Hashimoto, Kengo; Ohsaki, Hiroshi; Horiguchi, Tetsuo; Yamane, Yoshihiro; Shiroya, Seiji

    1998-01-01

    The conventional variance-to-mean method (Feynman-α method) suffers seriously from divergence of the variance under transient conditions such as a reactor power drift. Strictly speaking, then, the use of the Feynman-α method is restricted to a steady state. To apply the method to more practical uses, it is desirable to overcome this kind of difficulty. For this purpose, we propose the use of a higher-order difference filter technique to reduce the effect of the reactor power drift, and derive several new formulae taking account of the filtering. The capability of the proposed formulae was demonstrated through experiments in the Kyoto University Critical Assembly. The experimental results indicate that the divergence of the variance can be effectively suppressed by the filtering technique, and that a higher-order filter becomes necessary with increasing variation rate in power.
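
    The quantity at stake is the Feynman-Y statistic, the variance-to-mean ratio of gated neutron counts minus one, which vanishes for an uncorrelated (Poisson) source. The sketch below shows the drift problem and a difference filter in the spirit of the record; the renormalization by the central binomial coefficient assumes independent gates and is an illustrative simplification, not the paper's exact formulae:

        import numpy as np
        from math import comb

        def feynman_y(counts, order=0):
            # order > 0: apply an order-th difference filter to suppress slow drift;
            # the variance of the d-th difference of i.i.d. data is C(2d, d) times
            # the raw variance, hence the renormalization below.
            c = np.diff(counts, n=order) if order else np.asarray(counts, float)
            return np.var(c) / (comb(2 * order, order) * np.mean(counts)) - 1.0

        rng = np.random.default_rng(6)
        drift = np.linspace(100, 120, 10_000)   # simulated slow power drift
        counts = rng.poisson(drift)             # gated counts from a Poisson source
        print(feynman_y(counts, order=0))       # biased well above 0 by the drift
        print(feynman_y(counts, order=1))       # near 0 once the drift is filtered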

  7. Estimation of (co)variances for genomic regions of flexible sizes

    DEFF Research Database (Denmark)

    Sørensen, Lars P; Janss, Luc; Madsen, Per

    2012-01-01

    BACKGROUND: Multi-trait genomic models in a Bayesian context can be used to estimate genomic (co)variances, either for a complete genome or for genomic regions (e.g. per chromosome), for the purpose of multi-trait genomic selection or to gain further insight into the genomic architecture of related traits... A model with a common prior distribution for the marker allele substitution effects, with the hyperparameters of this prior distribution estimated from the progeny means data, was used. From the Markov chain Monte Carlo samples of the allele substitution effects, genomic (co)variances were calculated on a whole-genome level... There was a clear difference in the region-wise patterns of genomic correlation among combinations of traits, with distinctive peaks indicating the presence of pleiotropic QTL. CONCLUSIONS: The results show that it is possible to estimate, genome-wide and region-wise, genomic (co)variances...

  8. Is residual memory variance a valid method for quantifying cognitive reserve? A longitudinal application

    Science.gov (United States)

    Zahodne, Laura B.; Manly, Jennifer J.; Brickman, Adam M.; Narkhede, Atul; Griffith, Erica Y.; Guzman, Vanessa A.; Schupf, Nicole; Stern, Yaakov

    2016-01-01

    Cognitive reserve describes the mismatch between brain integrity and cognitive performance. Older adults with high cognitive reserve are more resilient to age-related brain pathology. Traditionally, cognitive reserve is indexed indirectly via static proxy variables (e.g., years of education). More recently, cross-sectional studies have suggested that reserve can be expressed as residual variance in episodic memory performance that remains after accounting for demographic factors and brain pathology (whole brain, hippocampal, and white matter hyperintensity volumes). The present study extends these methods to a longitudinal framework in a community-based cohort of 244 older adults who underwent two comprehensive neuropsychological and structural magnetic resonance imaging sessions over 4.6 years. On average, residual memory variance decreased over time, consistent with the idea that cognitive reserve is depleted over time. Individual differences in change in residual memory variance predicted incident dementia, independent of baseline residual memory variance. Multiple-group latent difference score models revealed tighter coupling between brain and language changes among individuals with decreasing residual memory variance. These results suggest that changes in residual memory variance may capture a dynamic aspect of cognitive reserve and could be a useful way to summarize individual cognitive responses to brain changes. Change in residual memory variance among initially non-demented older adults was a better predictor of incident dementia than residual memory variance measured at one time-point. PMID:26348002
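
    Operationally, the residual approach regresses memory scores on demographics and brain measures and keeps what is left over as the reserve estimate. A minimal sketch on synthetic stand-in data (all variables and coefficients hypothetical, not the cohort data):

        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(7)
        n = 244
        X = np.column_stack([
            rng.normal(75, 6, n),       # age, years
            rng.normal(12, 3, n),       # education, years
            rng.normal(1100, 90, n),    # whole brain volume, cm^3
            rng.normal(6.5, 0.7, n),    # hippocampal volume, cm^3
            rng.normal(5, 4, n),        # white matter hyperintensity volume, cm^3
        ])
        memory = X @ np.array([-0.05, 0.3, 0.01, 1.2, -0.2]) + rng.normal(0, 2, n)

        # residual memory variance: memory performance left unexplained by
        # demographics and structural brain measures
        residual = memory - LinearRegression().fit(X, memory).predict(X)
        print(residual[:5])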

  9. A study of heterogeneity of environmental variance for slaughter weight in pigs

    DEFF Research Database (Denmark)

    Ibánez-Escriche, N; Varona, L; Sorensen, D

    2008-01-01

    This work presents an analysis of heterogeneity of environmental variance for slaughter weight (175 days) in pigs. This heterogeneity is associated with systematic and additive genetic effects. The model also postulates the presence of additive genetic effects affecting the mean and environmental... variance. The study reveals the presence of genetic variation at the level of the mean and the variance, but an absence of correlation, or a small negative correlation, between both types of additive genetic effects. In addition, we show that both the additive genetic effects on the mean and those... on environmental variance have an important influence upon the future economic performance of selected individuals...

  10. Large-scale HTS bulks for magnetic application

    International Nuclear Information System (INIS)

    Werfel, Frank N.; Floegel-Delor, Uta; Riedel, Thomas; Goebel, Bernd; Rothfeld, Rolf; Schirrmeister, Peter; Wippich, Dieter

    2013-01-01

    Highlights: ► ATZ Company has constructed about 130 HTS magnet systems. ► Multi-seeded YBCO bulks pave the way for large-scale application. ► Levitation platforms demonstrate “superconductivity” to a great public audience (100 years anniversary). ► HTS magnetic bearings show forces up to 1 t. ► Modular HTS maglev vacuum cryostats are tested for train demonstrators in Brazil, China and Germany. -- Abstract: ATZ Company has constructed about 130 HTS magnet systems using high-Tc bulk magnets. A key feature in scaling up is the fabrication of melt-textured, multi-seeded large YBCO bulks with three to eight seeds. Apart from levitation, magnetization, trapped field and hysteresis, we review system engineering parameters of HTS magnetic linear and rotational bearings, such as compactness, cryogenics, power density, efficiency and robust construction. We examine mobile compact YBCO bulk magnet platforms cooled with LN2 and a Stirling cryo-cooler for demonstrator use. Compact cryostats for Maglev train operation contain 24 pieces of 3-seed bulks and can levitate 2500–3000 N at 10 mm above a permanent magnet (PM) track. The effective magnetic distance of the thermally insulated bulks is only 2 mm; the stored 2.5 l of LN2 allows more than 24 h of operation without refilling. 34 HTS Maglev vacuum cryostats have been manufactured, tested and operated in Germany, China and Brazil. The magnetic levitation load-to-weight ratio is more than 15, and by assembling groups of HTS cryostats under vehicles, total levitated loads of up to 5 t above a magnetic track have been achieved.

  11. Large-scale HTS bulks for magnetic application

    Energy Technology Data Exchange (ETDEWEB)

    Werfel, Frank N., E-mail: werfel@t-online.de [Adelwitz Technologiezentrum GmbH (ATZ), Rittergut Adelwitz 16, 04886 Arzberg-Adelwitz (Germany); Floegel-Delor, Uta; Riedel, Thomas; Goebel, Bernd; Rothfeld, Rolf; Schirrmeister, Peter; Wippich, Dieter [Adelwitz Technologiezentrum GmbH (ATZ), Rittergut Adelwitz 16, 04886 Arzberg-Adelwitz (Germany)

    2013-01-15

    Highlights: ► ATZ Company has constructed about 130 HTS magnet systems. ► Multi-seeded YBCO bulks pave the way for large-scale application. ► Levitation platforms demonstrate “superconductivity” to a great public audience (100 years anniversary). ► HTS magnetic bearings show forces up to 1 t. ► Modular HTS maglev vacuum cryostats are tested for train demonstrators in Brazil, China and Germany. -- Abstract: ATZ Company has constructed about 130 HTS magnet systems using high-Tc bulk magnets. A key feature in scaling up is the fabrication of melt-textured, multi-seeded large YBCO bulks with three to eight seeds. Apart from levitation, magnetization, trapped field and hysteresis, we review system engineering parameters of HTS magnetic linear and rotational bearings, such as compactness, cryogenics, power density, efficiency and robust construction. We examine mobile compact YBCO bulk magnet platforms cooled with LN2 and a Stirling cryo-cooler for demonstrator use. Compact cryostats for Maglev train operation contain 24 pieces of 3-seed bulks and can levitate 2500–3000 N at 10 mm above a permanent magnet (PM) track. The effective magnetic distance of the thermally insulated bulks is only 2 mm; the stored 2.5 l of LN2 allows more than 24 h of operation without refilling. 34 HTS Maglev vacuum cryostats have been manufactured, tested and operated in Germany, China and Brazil. The magnetic levitation load-to-weight ratio is more than 15, and by assembling groups of HTS cryostats under vehicles, total levitated loads of up to 5 t above a magnetic track have been achieved.

  12. 12 CFR 564.4 - Minimum appraisal standards.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 5 2010-01-01 2010-01-01 false Minimum appraisal standards. 564.4 Section 564.4 Banks and Banking OFFICE OF THRIFT SUPERVISION, DEPARTMENT OF THE TREASURY APPRAISALS § 564.4 Minimum appraisal standards. For federally related transactions, all appraisals shall, at a minimum: (a...

  13. Bulk-edge correspondence in topological transport and pumping

    Science.gov (United States)

    Imura, Ken-Ichiro; Yoshimura, Yukinori; Fukui, Takahiro; Hatsugai, Yasuhiro

    2018-03-01

    The bulk-edge correspondence (BEC) refers to a one-to-one relation between the bulk and edge properties that is ubiquitous in topologically nontrivial systems. Depending on the setup, BEC manifests in different forms and governs the spectral and transport properties of topological insulators and semimetals. Although the topological pump is theoretically old, BEC in the pump has been established only recently [1], motivated by state-of-the-art experiments using cold atoms [2, 3]. The center of mass (CM) of a system with boundaries shows a sequence of quantized jumps in the adiabatic limit associated with the edge states. Although the bulk is adiabatic, the edge is inevitably non-adiabatic in the experimental setup or in any numerical simulation. Still, the pumped charge is quantized and carried by the bulk. Its quantization is guaranteed by a compensation between the bulk and edges. We show that in the presence of disorder the pumped charge continues to be quantized despite the appearance of non-quantized jumps.

  14. The minimum wage in the Czech enterprises

    OpenAIRE

    Eva Lajtkepová

    2010-01-01

    Although the statutory minimum wage is not a new category, in the Czech Republic we encounter the definition and regulation of a minimum wage for the first time in the 1990 amendment to Act No. 65/1965 Coll., the Labour Code. The specific amount of the minimum wage and the conditions of its operation were then subsequently determined by government regulation in February 1991. Since that time, the value of the minimum wage has been adjusted fifteen times (the last increase was in January 2007). ...

  15. Biological Variance in Agricultural Products. Theoretical Considerations

    NARCIS (Netherlands)

    Tijskens, L.M.M.; Konopacki, P.

    2003-01-01

    The food that we eat is uniform neither in shape or appearance nor in internal composition or content. As technology became increasingly important, the presence of biological variance in our food became more and more of a nuisance. Techniques and procedures (statistical, technical) were

  16. Decomposition of variance for spatial Cox processes

    DEFF Research Database (Denmark)

    Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus

    Spatial Cox point processes provide a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models with additive...

  17. Regime shifts in mean-variance efficient frontiers: some international evidence

    OpenAIRE

    Massimo Guidolin; Federica Ria

    2010-01-01

    Regime switching models have been assuming a central role in financial applications because of their well-known ability to capture the presence of rich non-linear patterns in the joint distribution of asset returns. This paper examines how the presence of regimes in means, variances, and correlations of asset returns translates into explicit dynamics of the Markowitz mean-variance frontier. In particular, the paper shows both theoretically and through an application to international equity po...

  18. Minimum Wages and Regional Disparity: An analysis on the evolution of price-adjusted minimum wages and their effects on firm profitability (Japanese)

    OpenAIRE

    MORIKAWA Masayuki

    2013-01-01

    This paper, using prefecture level panel data, empirically analyzes 1) the recent evolution of price-adjusted regional minimum wages and 2) the effects of minimum wages on firm profitability. As a result of rapid increases in minimum wages in the metropolitan areas since 2007, the regional disparity of nominal minimum wages has been widening. However, the disparity of price-adjusted minimum wages has been shrinking. According to the analysis of the effects of minimum wages on profitability us...

  19. 29 CFR 794.131 - “Customer * * * engaged in bulk distribution”.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 3 2010-07-01 2010-07-01 false “Customer * * * engaged in bulk distribution”. 794.131... Sales Made to Other Bulk Distributors § 794.131 “Customer * * * engaged in bulk distribution”. A sale to a customer of an enterprise engaged in the wholesale or bulk distribution of petroleum products will...

  20. Magnetic levitation systems using a high-Tc superconducting bulk magnet

    Energy Technology Data Exchange (ETDEWEB)

    Ohsaki, Hiroyuki [Dept. of Electrical Engineering, Univ. of Tokyo (Japan); Kitahara, Hirotaka [Dept. of Electrical Engineering, Univ. of Tokyo (Japan); Masada, Eisuke [Dept. of Electrical Engineering, Univ. of Tokyo (Japan)

    1996-12-31

    Recent development of high-performance high-Tc bulk superconductors is making their application in devices based on electromagnetic forces feasible. We have studied electromagnetic levitation systems using high-Tc bulk superconducting material. In this paper, after an overview of superconducting magnetic levitation systems with an emphasis on high-Tc bulk superconductor applications, experimental results for high-Tc bulk EMS levitation and FEM analysis results for magnetic gradient levitation using bulk superconductors are described. Problems to be solved for their application are also discussed. (orig.)

  1. Bulk-memory processor for data acquisition

    International Nuclear Information System (INIS)

    Nelson, R.O.; McMillan, D.E.; Sunier, J.W.; Meier, M.; Poore, R.V.

    1981-01-01

    To meet the diverse needs and data rate requirements at the Van de Graaff and Weapons Neutron Research (WNR) facilities, a bulk memory system has been implemented which includes a fast and flexible processor. This bulk memory processor (BMP) utilizes bit slice and microcode techniques and features a 24 bit wide internal architecture allowing direct addressing of up to 16 megawords of memory and histogramming up to 16 million counts per channel without overflow. The BMP is interfaced to the MOSTEK MK 8000 bulk memory system and to the standard MODCOMP computer I/O bus. Coding for the BMP both at the microcode level and with macro instructions is supported. The generalized data acquisition system has been extended to support the BMP in a manner transparent to the user

  2. The pricing of long and short run variance and correlation risk in stock returns

    NARCIS (Netherlands)

    Cosemans, M.

    2011-01-01

    This paper studies the pricing of long and short run variance and correlation risk. The predictive power of the market variance risk premium for returns is driven by the correlation risk premium and the systematic part of individual variance premia. Furthermore, I find that aggregate volatility risk

  3. A Bias and Variance Analysis for Multistep-Ahead Time Series Forecasting.

    Science.gov (United States)

    Ben Taieb, Souhaib; Atiya, Amir F

    2016-01-01

    Multistep-ahead forecasts can either be produced recursively by iterating a one-step-ahead time series model or directly by estimating a separate model for each forecast horizon. In addition, there are other strategies; some of them combine aspects of both aforementioned concepts. In this paper, we present a comprehensive investigation into the bias and variance behavior of multistep-ahead forecasting strategies. We provide a detailed review of the different multistep-ahead strategies. Subsequently, we perform a theoretical study that derives the bias and variance for a number of forecasting strategies. Finally, we conduct a Monte Carlo experimental study that compares and evaluates the bias and variance performance of the different strategies. From the theoretical and the simulation studies, we analyze the effect of different factors, such as the forecast horizon and the time series length, on the bias and variance components, and on the different multistep-ahead strategies. Several lessons are learned, and recommendations are given concerning the advantages, disadvantages, and best conditions of use of each strategy.
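
    The two basic strategies compared in such studies can be reproduced in a few lines: fit a one-step model and iterate it h times (recursive), or regress the h-step-ahead value directly on the present one (direct), then compare the bias and variance of the forecast errors over Monte Carlo replications. A minimal sketch on a simulated AR(1) process (parameters illustrative, not the paper's experimental design):

        import numpy as np

        rng = np.random.default_rng(8)
        phi, h, n, reps = 0.8, 3, 500, 200   # AR coefficient, horizon, length, replications
        rec_err, dir_err = [], []

        for _ in range(reps):
            y = np.zeros(n + h)
            for t in range(1, n + h):
                y[t] = phi * y[t - 1] + rng.standard_normal()
            train, target = y[:n], y[n - 1 + h]

            # recursive: estimate the one-step coefficient, iterate it h times
            a = train[1:] @ train[:-1] / (train[:-1] @ train[:-1])
            rec_err.append(a ** h * train[-1] - target)

            # direct: regress y_{t+h} on y_t and forecast in a single step
            b = train[h:] @ train[:-h] / (train[:-h] @ train[:-h])
            dir_err.append(b * train[-1] - target)

        for name, err in [("recursive", np.array(rec_err)), ("direct", np.array(dir_err))]:
            print(f"{name}: bias={err.mean():+.3f}, variance={err.var():.3f}")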

  4. 41 CFR 50-201.1101 - Minimum wages.

    Science.gov (United States)

    2010-07-01

    ... 41 Public Contracts and Property Management 1 2010-07-01 2010-07-01 true Minimum wages. 50-201... Contracts PUBLIC CONTRACTS, DEPARTMENT OF LABOR 201-GENERAL REGULATIONS § 50-201.1101 Minimum wages. Determinations of prevailing minimum wages or changes therein will be published in the Federal Register by the...

  5. Variance inflation in high dimensional Support Vector Machines

    DEFF Research Database (Denmark)

    Abrahamsen, Trine Julie; Hansen, Lars Kai

    2013-01-01

    Many important machine learning models, supervised and unsupervised, are based on simple Euclidean distance or orthogonal projection in a high dimensional feature space. When estimating such models from small training sets we face the problem that the span of the training data set input vectors... follow a different probability law with less variance. While the problem and basic means to reconstruct and deflate are well understood in unsupervised learning, the case of supervised learning is less well understood. We here investigate the effect of variance inflation in supervised learning including... the case of Support Vector Machines (SVMs) and we propose a non-parametric scheme to restore proper generalizability. We illustrate the algorithm and its ability to restore performance on a wide range of benchmark data sets.

  6. Bulk density calculations from prompt gamma ray yield

    International Nuclear Information System (INIS)

    Naqvi, A.A.; Nagadi, M.M.; Al-Amoudi, O.S.B.; Maslehuddin, M.

    2006-01-01

    Full text: The gamma ray yield from a Prompt Gamma ray Neutron Activation Analysis (PGNAA) setup is a linear function of element concentration and neutron flux in a sample with constant bulk density. If the sample bulk density varies as well, then the element concentration and the neutron flux have a nonlinear correlation with the gamma ray yield [1]. The measurement of gamma ray yield non-linearity from samples and a standard can be used to estimate the bulk density of the samples. In this study the prompt gamma ray yield from Blast Furnace Slag, Fly Ash, Silica Fume and Superpozz cement samples has been measured as a function of their calcium and silicon concentration using the KFUPM accelerator-based PGNAA setup [2]. Due to the different bulk densities of the blended cement samples, the measured gamma ray yields have a nonlinear correlation with the calcium and silicon concentration of the samples. The non-linearity in the yield was observed to increase with gamma ray energy and element concentration. The bulk densities of the cement samples were calculated from the ratio of the gamma ray yield from blended cement to that from a Portland cement standard. The calculated bulk densities are in good agreement with published data. The results of this study will be presented

  7. Meta-omic signatures of microbial metal and nitrogen cycling in marine oxygen minimum zones

    Directory of Open Access Journals (Sweden)

    Jennifer B. Glass

    2015-09-01

    Full Text Available. Iron (Fe) and copper (Cu) are essential cofactors for microbial metalloenzymes, but little is known about the metalloenzyme inventory of anaerobic marine microbial communities despite their importance to the nitrogen cycle. We compared dissolved O2, NO3-, NO2-, Fe and Cu concentrations with nucleic acid sequences encoding Fe- and Cu-binding proteins in 21 metagenomes and 9 metatranscriptomes from Eastern Tropical North and South Pacific oxygen minimum zones and 7 metagenomes from the Bermuda Atlantic Time-series Station. Dissolved Fe concentrations increased sharply at upper oxic-anoxic transition zones, with the highest Fe:Cu molar ratio (1.8) occurring at the anoxic core of the Eastern Tropical North Pacific oxygen minimum zone and matching the predicted maximum ratio based on data from diverse ocean sites. The relative abundance of genes encoding Fe-binding proteins was negatively correlated with O2, driven by significant increases in genes encoding Fe-proteins involved in dissimilatory nitrogen metabolisms under anoxia. Transcripts encoding cytochrome c oxidase, the Fe- and Cu-containing terminal reductase in aerobic respiration, were positively correlated with O2 content. A comparison of the taxonomy of genes encoding Fe- and Cu-binding vs. bulk proteins in OMZs revealed that Planctomycetes represented a higher percentage of Fe genes while Thaumarchaeota represented a higher percentage of Cu genes, particularly at oxyclines. These results are broadly consistent with the higher relative abundance of genes encoding Fe-proteins in the genome of a marine planctomycete vs. the higher relative abundance of genes encoding Cu-proteins in the genome of a marine thaumarchaeote. These findings highlight the importance of metalloenzymes for microbial processes in oxygen minimum zones and suggest preferential Cu use in oxic habitats with Cu > Fe vs. preferential Fe use in anoxic niches with Fe > Cu.

  8. Application of Steenbeck's minimum principle for three-dimensional modelling of DC arc plasma torches

    International Nuclear Information System (INIS)

    Li Heping; Pfender, E; Chen, Xi

    2003-01-01

    In this paper, physical/mathematical models for the three-dimensional, quasi-steady modelling of the plasma flow and heat transfer inside a non-transferred DC arc plasma torch are described in detail. Steenbeck's minimum principle (Finkelnburg W and Maecker H 1956 Electric arcs and thermal plasmas Encyclopedia of Physics vol XXII (Berlin: Springer)) is employed to determine the axial position of the anode arc-root at the anode surface. This principle postulates a minimum arc voltage for a given arc current, working gas flow rate, and torch configuration. The modelling results show that the temperature and flow fields inside the DC non-transferred arc plasma torch exhibit significant three-dimensional features. The anode arc-root attachment position and the arc shape predicted by employing Steenbeck's minimum principle are reasonably consistent with experimental observations. The thermal efficiency and the torch power distribution are also calculated in this paper. The results show that the thermal efficiency of the torch always ranges from 30% to 45%, i.e. more than half of the total power input is taken away by the cathode and anode cooling water. The special heat transfer mechanisms at the plasma-anode interface, such as electron condensation, electron enthalpy and radiative heat transfer from the bulk plasma to the anode inner surface, are taken into account in this paper. The calculated results show that besides convective heat transfer, the contributions of electron condensation, electron enthalpy and radiation to the anode heat transfer are also important (∼30% for the parameter range of interest in this paper). Additional effects, such as the non-local thermodynamic equilibrium plasma state near the electrodes, transient phenomena, etc, need to be considered in future physical/mathematical models, including corresponding measurements

  9. Minimum Wage Laws and the Distribution of Employment.

    Science.gov (United States)

    Lang, Kevin

    The desirability of raising the minimum wage long revolved around just one question: the effect of higher minimum wages on the overall level of employment. An even more critical effect of the minimum wage rests on the composition of employment--who gets the minimum wage job. An examination of employment in eating and drinking establishments…

  10. Studying Variance in the Galactic Ultra-compact Binary Population

    Science.gov (United States)

    Larson, Shane; Breivik, Katelyn

    2017-01-01

    In the years preceding LISA, Milky Way compact binary population simulations can be used to inform the science capabilities of the mission. Galactic population simulation efforts generally focus on high fidelity models that require extensive computational power to produce a single simulated population for each model. Each simulated population represents an incomplete sample of the functions governing compact binary evolution, thus introducing variance from one simulation to another. We present a rapid Monte Carlo population simulation technique that can simulate thousands of populations on week-long timescales, thus allowing a full exploration of the variance associated with a binary stellar evolution model.

  11. 29 CFR 505.3 - Prevailing minimum compensation.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 3 2010-07-01 2010-07-01 false Prevailing minimum compensation. 505.3 Section 505.3 Labor... HUMANITIES § 505.3 Prevailing minimum compensation. (a)(1) In the absence of an alternative determination...)(2) of this section, the prevailing minimum compensation required to be paid under the Act to the...

  12. Variance estimates for transport in stochastic media by means of the master equation

    International Nuclear Information System (INIS)

    Pautz, S. D.; Franke, B. C.; Prinja, A. K.

    2013-01-01

    The master equation has been used to examine properties of transport in stochastic media. It has been shown previously that not only may the Levermore-Pomraning (LP) model be derived from the master equation for a description of ensemble-averaged transport quantities, but also that equations describing higher-order statistical moments may be obtained. We examine in greater detail the equations governing the second moments of the distribution of the angular fluxes, from which variances may be computed. We introduce a simple closure for these equations, as well as several models for estimating the variances of derived transport quantities. We revisit previous benchmarks for transport in stochastic media in order to examine the error of these new variance models. We find, not surprisingly, that the errors in these variance estimates are at least as large as the corresponding estimates of the average, and sometimes much larger. We also identify patterns in these variance estimates that may help guide the construction of more accurate models. (authors)

  13. 7 CFR 58.313 - Print and bulk packaging rooms.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 3 2010-01-01 2010-01-01 false Print and bulk packaging rooms. 58.313 Section 58.313 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards....313 Print and bulk packaging rooms. Rooms used for packaging print or bulk butter and related products...

  14. Force measurements for levitated bulk superconductors

    International Nuclear Information System (INIS)

    Tachi, Y.; Sawa, K.; Iwasa, Y.; Nagashima, K.; Otani, T.; Miyamoto, T.; Tomita, M.; Murakami, M.

    2000-01-01

    We have developed a force measurement system which enables us to directly measure the levitation force of levitated bulk superconductors. Experimental data of the levitation forces were compared with the results of numerical simulation based on the levitation model that we deduced in our previous paper. They were in fairly good agreement, which confirms that our levitation model can be applied to the force analyses for levitated bulk superconductors. (author)

  15. Force measurements for levitated bulk superconductors

    Energy Technology Data Exchange (ETDEWEB)

    Tachi, Y. [Department of Electrical Engineering, Keio University, 3-14-1 Hiyoshi, Kohoku-ku, Yokohama (Japan); ISTEC, Superconductivity Research Laboratory, 1-16-25 Shibaura, Minato-ku, Tokyo (Japan). E-mail: tachi at istec.or.jp; Uemura, N. [Department of Electrical Engineering, Keio University, 3-14-1 Hiyoshi, Kohoku-ku, Yokohama (Japan); ISTEC, Superconductivity Research Laboratory, 1-16-25 Shibaura, Minato-ku, Tokyo (Japan); Sawa, K. [Department of Electrical Engineering, Keio University, 3-14-1 Hiyoshi, Kohoku-ku, Yokohama (Japan); Iwasa, Y. [Francis Bitter Magnet Laboratory, Massachusetts Institute of Technology, Cambridge, MA (United States); Nagashima, K. [Railway Technical Research Institute, Hikari-cho, Kokubunji-shi, Tokyo (Japan); Otani, T.; Miyamoto, T.; Tomita, M.; Murakami, M. [ISTEC, Superconductivity Research Laboratory, 1-16-25 Shibaura, Minato-ku, Tokyo (Japan)

    2000-06-01

    We have developed a force measurement system which enables us to directly measure the levitation force of levitated bulk superconductors. Experimental data of the levitation forces were compared with the results of numerical simulation based on the levitation model that we deduced in our previous paper. They were in fairly good agreement, which confirms that our levitation model can be applied to the force analyses for levitated bulk superconductors. (author)

  16. Markov switching mean-variance frontier dynamics: theory and international evidence

    OpenAIRE

    M. Guidolin; F. Ria

    2010-01-01

    It is well-known that regime switching models are able to capture the presence of rich non-linear patterns in the joint distribution of asset returns. After reviewing key concepts and technical issues related to specifying, estimating, and using multivariate Markov switching models in financial applications, in this paper we map the presence of regimes in means, variances, and covariances of asset returns into explicit dynamics of the Markowitz mean-variance frontier. In particular, we show b...

  17. Visual SLAM Using Variance Grid Maps

    Science.gov (United States)

    Howard, Andrew B.; Marks, Tim K.

    2011-01-01

    An algorithm denoted Gamma-SLAM performs further processing, in real time, of preprocessed digitized images acquired by a stereoscopic pair of electronic cameras aboard an off-road robotic ground vehicle to build accurate maps of the terrain and determine the location of the vehicle with respect to the maps. Part of the name of the algorithm reflects the fact that the process of building the maps and determining the location with respect to them is denoted simultaneous localization and mapping (SLAM). Most prior real-time SLAM algorithms have been limited in applicability to (1) systems equipped with scanning laser range finders as the primary sensors in (2) indoor environments (or relatively simply structured outdoor environments). The few prior vision-based SLAM algorithms have been feature-based and not suitable for real-time applications and, hence, not suitable for autonomous navigation on irregularly structured terrain. The Gamma-SLAM algorithm incorporates two key innovations: Visual odometry (in contradistinction to wheel odometry) is used to estimate the motion of the vehicle. An elevation variance map (in contradistinction to an occupancy or an elevation map) is used to represent the terrain. The Gamma-SLAM algorithm makes use of a Rao-Blackwellized particle filter (RBPF) from Bayesian estimation theory for maintaining a distribution over poses and maps. The core idea of the RBPF approach is that the SLAM problem can be factored into two parts: (1) finding the distribution over robot trajectories, and (2) finding the map conditioned on any given trajectory. The factorization involves the use of a particle filter in which each particle encodes both a possible trajectory and a map conditioned on that trajectory. The base estimate of the trajectory is derived from visual odometry, and the map conditioned on that trajectory is a Cartesian grid of elevation variances. In comparison with traditional occupancy or elevation grid maps, the grid elevation variance
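
    The elevation variance map at the heart of the algorithm can be maintained online: each stereo elevation measurement falling in a grid cell updates that cell's running mean and variance. A minimal sketch using Welford's one-pass update (an illustrative re-implementation, not the Gamma-SLAM source code):

        import numpy as np

        class VarianceGridMap:
            """Per-cell running mean and variance of terrain elevation."""

            def __init__(self, shape):
                self.n = np.zeros(shape)       # measurements per cell
                self.mean = np.zeros(shape)    # running elevation mean
                self.m2 = np.zeros(shape)      # running sum of squared deviations

            def update(self, i, j, z):
                # Welford's one-pass update with a new elevation z for cell (i, j)
                self.n[i, j] += 1
                d = z - self.mean[i, j]
                self.mean[i, j] += d / self.n[i, j]
                self.m2[i, j] += d * (z - self.mean[i, j])

            def variance(self, i, j):
                return self.m2[i, j] / self.n[i, j] if self.n[i, j] > 1 else np.inf

        grid = VarianceGridMap((100, 100))
        for z in [1.0, 1.2, 0.9, 1.1]:         # four stereo hits on one cell
            grid.update(10, 20, z)
        print(grid.mean[10, 20], grid.variance(10, 20))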

  18. Comparing Variance/Covariance and Historical Simulation in the Context of the Financial Crisis – Do Extreme Movements Have an Influence onto Portfolio Selection?

    Directory of Open Access Journals (Sweden)

    Svend Reuse

    2010-09-01

    Full Text Available. Portfolio theory and the basic ideas of Markowitz have been extended in the recent past by alternative risk models such as historical simulation or even copula functions. The central question of this paper is whether these approaches lead to different results compared to the classical variance/covariance approach. Therefore, empirical data of the last 10 years are analysed. Both approaches are compared in the special context of the financial crisis. The worst-case optimization and the Value at Risk (VaR) are defined in order to determine the minimum risk portfolio before and after the financial crisis. The result is that the financial crisis has nearly no impact on the portfolio, but the two approaches lead to different results.
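
    For a single return series, the two risk models reduce to two VaR estimators: a quantile of a fitted normal distribution (variance/covariance) versus an empirical quantile of observed returns (historical simulation). A minimal sketch on synthetic fat-tailed returns (confidence level and data are illustrative):

        import numpy as np
        from scipy.stats import norm

        def var_parametric(returns, alpha=0.99):
            # variance/covariance VaR: quantile of a fitted normal distribution
            return -(returns.mean() + norm.ppf(1 - alpha) * returns.std())

        def var_historical(returns, alpha=0.99):
            # historical simulation VaR: empirical quantile of observed returns
            return -np.quantile(returns, 1 - alpha)

        rng = np.random.default_rng(9)
        returns = 0.01 * rng.standard_t(3, 2500)   # fat tails, as in crisis data
        print(f"variance/covariance VaR: {var_parametric(returns):.4f}")
        print(f"historical VaR:          {var_historical(returns):.4f}")

    With fat-tailed data the historical quantile typically exceeds the normal approximation, illustrating how the two approaches can diverge.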

  19. Temporal variance reverses the impact of high mean intensity of stress in climate change experiments.

    Science.gov (United States)

    Benedetti-Cecchi, Lisandro; Bertocci, Iacopo; Vaselli, Stefano; Maggi, Elena

    2006-10-01

    Extreme climate events produce simultaneous changes to the mean and to the variance of climatic variables over ecological time scales. While several studies have investigated how ecological systems respond to changes in mean values of climate variables, the combined effects of mean and variance are poorly understood. We examined the response of low-shore assemblages of algae and invertebrates of rocky seashores in the northwest Mediterranean to factorial manipulations of mean intensity and temporal variance of aerial exposure, a type of disturbance whose intensity and temporal patterning of occurrence are predicted to change with changing climate conditions. Effects of variance were often in the opposite direction of those elicited by changes in the mean. Increasing aerial exposure at regular intervals had negative effects both on diversity of assemblages and on percent cover of filamentous and coarsely branched algae, but greater temporal variance drastically reduced these effects. The opposite was observed for the abundance of barnacles and encrusting coralline algae, where high temporal variance of aerial exposure either reversed a positive effect of mean intensity (barnacles) or caused a negative effect that did not occur under low temporal variance (encrusting algae). These results provide the first experimental evidence that changes in mean intensity and temporal variance of climatic variables affect natural assemblages of species interactively, suggesting that high temporal variance may mitigate the ecological impacts of ongoing and predicted climate changes.

  20. Genetic and environmental variance in content dimensions of the MMPI.

    Science.gov (United States)

    Rose, R J

    1988-08-01

    To evaluate genetic and environmental variance in the Minnesota Multiphasic Personality Inventory (MMPI), I studied nine factor scales identified in the first item factor analysis of normal adult MMPIs in a sample of 820 adolescent and young adult co-twins. Conventional twin comparisons documented heritable variance in six of the nine MMPI factors (Neuroticism, Psychoticism, Extraversion, Somatic Complaints, Inadequacy, and Cynicism), whereas significant influence from shared environmental experience was found for four factors (Masculinity versus Femininity, Extraversion, Religious Orthodoxy, and Intellectual Interests). Genetic variance in the nine factors was more evident in results from twin sisters than in those from twin brothers, and a developmental-genetic analysis, using hierarchical multiple regressions of double-entry matrices of the twins' raw data, revealed that in four MMPI factor scales, genetic effects were significantly modulated by age or gender or their interaction during the developmental period from early adolescence to early adulthood.

  1. Is residual memory variance a valid method for quantifying cognitive reserve? A longitudinal application.

    Science.gov (United States)

    Zahodne, Laura B; Manly, Jennifer J; Brickman, Adam M; Narkhede, Atul; Griffith, Erica Y; Guzman, Vanessa A; Schupf, Nicole; Stern, Yaakov

    2015-10-01

    Cognitive reserve describes the mismatch between brain integrity and cognitive performance. Older adults with high cognitive reserve are more resilient to age-related brain pathology. Traditionally, cognitive reserve is indexed indirectly via static proxy variables (e.g., years of education). More recently, cross-sectional studies have suggested that reserve can be expressed as residual variance in episodic memory performance that remains after accounting for demographic factors and brain pathology (whole brain, hippocampal, and white matter hyperintensity volumes). The present study extends these methods to a longitudinal framework in a community-based cohort of 244 older adults who underwent two comprehensive neuropsychological and structural magnetic resonance imaging sessions over 4.6 years. On average, residual memory variance decreased over time, consistent with the idea that cognitive reserve is depleted over time. Individual differences in change in residual memory variance predicted incident dementia, independent of baseline residual memory variance. Multiple-group latent difference score models revealed tighter coupling between brain and language changes among individuals with decreasing residual memory variance. These results suggest that changes in residual memory variance may capture a dynamic aspect of cognitive reserve and could be a useful way to summarize individual cognitive responses to brain changes. Change in residual memory variance among initially non-demented older adults was a better predictor of incident dementia than residual memory variance measured at one time-point.
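
    The residual approach described above can be sketched numerically. The following Python fragment, with simulated data and illustrative predictor names, regresses memory scores on demographics and brain measures and keeps the residual as the reserve score:

        import numpy as np

        # Residual "reserve" score: memory performance not explained by
        # demographics and brain measures. Data are simulated; names illustrative.
        rng = np.random.default_rng(2)
        n = 244
        X = np.column_stack([np.ones(n),             # intercept
                             rng.normal(75, 6, n),   # age
                             rng.normal(12, 3, n),   # years of education
                             rng.normal(0, 1, n),    # whole-brain volume (z)
                             rng.normal(0, 1, n),    # hippocampal volume (z)
                             rng.normal(0, 1, n)])   # WMH volume (z)
        memory = X @ np.array([0.0, -0.03, 0.10, 0.25, 0.40, -0.20]) \
                 + rng.normal(0, 1, n)

        beta, *_ = np.linalg.lstsq(X, memory, rcond=None)
        residual_reserve = memory - X @ beta  # higher = better memory than predicted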

  2. Heritability, variance components and genetic advance of some ...

    African Journals Online (AJOL)

    Heritability, variance components and genetic advance of some yield and yield related traits in Ethiopian ... African Journal of Biotechnology ... randomized complete block design at Adet Agricultural Research Station in 2008 cropping season.

  3. Modality-Driven Classification and Visualization of Ensemble Variance

    Energy Technology Data Exchange (ETDEWEB)

    Bensema, Kevin; Gosink, Luke; Obermaier, Harald; Joy, Kenneth I.

    2016-10-01

    Advances in computational power now enable domain scientists to address conceptual and parametric uncertainty by running simulations multiple times in order to sufficiently sample the uncertain input space. While this approach helps address conceptual and parametric uncertainties, the ensemble datasets produced by this technique present a special challenge to visualization researchers as the ensemble dataset records a distribution of possible values for each location in the domain. Contemporary visualization approaches that rely solely on summary statistics (e.g., mean and variance) cannot convey the detailed information encoded in ensemble distributions that are paramount to ensemble analysis; summary statistics provide no information about modality classification and modality persistence. To address this problem, we propose a novel technique that classifies high-variance locations based on the modality of the distribution of ensemble predictions. Additionally, we develop a set of confidence metrics to inform the end-user of the quality of fit between the distribution at a given location and its assigned class. We apply a similar method to time-varying ensembles to illustrate the relationship between peak variance and bimodal or multimodal behavior. These classification schemes enable a deeper understanding of the behavior of the ensemble members by distinguishing between distributions that can be described by a single tendency and distributions which reflect divergent trends in the ensemble.
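
    As a hedged sketch of modality classification: one simple way to classify a location's ensemble distribution (not necessarily the authors' algorithm) is to compare Gaussian mixtures of increasing order by BIC:

        import numpy as np
        from sklearn.mixture import GaussianMixture

        # Classify the modality of an ensemble's distribution at one grid location
        # by comparing BIC across Gaussian mixtures with 1..3 components; a
        # stand-in for the paper's scheme, used here purely for illustration.
        rng = np.random.default_rng(3)
        ensemble = np.concatenate([rng.normal(-2, 0.5, 40), rng.normal(2, 0.5, 40)])

        X = ensemble.reshape(-1, 1)
        bics = [GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
                for k in (1, 2, 3)]
        modality = int(np.argmin(bics)) + 1  # component count with lowest BIC
        print(f"classified modality: {modality}")  # bimodal sample -> 2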

  4. A stereoscopic look into the bulk

    Energy Technology Data Exchange (ETDEWEB)

    Czech, Bartłomiej; Lamprou, Lampros; McCandlish, Samuel; Mosk, Benjamin [Stanford Institute for Theoretical Physics, Department of Physics, Stanford University,Stanford, CA 94305 (United States); Sully, James [Theory Group, SLAC National Accelerator LaboratoryMenlo Park, CA 94025 (United States)

    2016-07-26

    We present the foundation for a holographic dictionary with depth perception. The dictionary consists of natural CFT operators whose duals are simple, diffeomorphism-invariant bulk operators. The CFT operators of interest are the “OPE blocks,” contributions to the OPE from a single conformal family. In holographic theories, we show that the OPE blocks are dual at leading order in 1/N to integrals of effective bulk fields along geodesics or homogeneous minimal surfaces in anti-de Sitter space. One widely studied example of an OPE block is the modular Hamiltonian, which is dual to the fluctuation in the area of a minimal surface. Thus, our operators pave the way for generalizing the Ryu-Takayanagi relation to other bulk fields. Although the OPE blocks are non-local operators in the CFT, they admit a simple geometric description as fields in kinematic space — the space of pairs of CFT points. We develop the tools for constructing local bulk operators in terms of these non-local objects. The OPE blocks also allow for conceptually clean and technically simple derivations of many results known in the literature, including linearized Einstein’s equations and the relation between conformal blocks and geodesic Witten diagrams.

  5. Do Some Workers Have Minimum Wage Careers?

    Science.gov (United States)

    Carrington, William J.; Fallick, Bruce C.

    2001-01-01

    Most workers who begin their careers in minimum-wage jobs eventually gain more experience and move on to higher paying jobs. However, more than 8% of workers spend at least half of their first 10 working years in minimum wage jobs. Those more likely to have minimum wage careers are less educated, minorities, women with young children, and those…

  6. Does the Minimum Wage Affect Welfare Caseloads?

    Science.gov (United States)

    Page, Marianne E.; Spetz, Joanne; Millar, Jane

    2005-01-01

    Although minimum wages are advocated as a policy that will help the poor, few studies have examined their effect on poor families. This paper uses variation in minimum wages across states and over time to estimate the impact of minimum wage legislation on welfare caseloads. We find that the elasticity of the welfare caseload with respect to the…

  7. 29 CFR 4.159 - General minimum wage.

    Science.gov (United States)

    2010-07-01

    Section 4.159 (General minimum wage), 29 CFR, Labor, Office of... The Act, in section 2(b)(1), provides generally that no contractor or subcontractor... a contract less than the minimum wage specified under section 6(a)(1) of the Fair Labor Standards...

  8. The variance of the locally measured Hubble parameter explained with different estimators

    DEFF Research Database (Denmark)

    Odderskov, Io Sandberg Hess; Hannestad, Steen; Brandbyge, Jacob

    2017-01-01

    We study the expected variance of measurements of the Hubble constant, H0, as calculated in either linear perturbation theory or using non-linear velocity power spectra derived from N-body simulations. We compare the variance with that obtained by carrying out mock observations in the N-body simulations, and show that the estimator typically used for the local Hubble constant in studies based on perturbation theory is different from the one used in studies based on N-body simulations. The latter gives larger weight to distant sources, which explains why studies based on N-body simulations tend to obtain a smaller variance than that found from studies based on the power spectrum. Although both approaches result in a variance too small to explain the discrepancy between the value of H0 from CMB measurements and the value measured in the local universe, these considerations are important in light...

  9. Variance Risk Premia on Stocks and Bonds

    DEFF Research Database (Denmark)

    Mueller, Philippe; Sabtchevsky, Petar; Vedolin, Andrea

    Investors in fixed income markets are willing to pay a very large premium to be hedged against shocks in expected volatility and the size of this premium can be studied through variance swaps. Using thirty years of option and high-frequency data, we document the following novel stylized facts...

  10. Holographic bulk reconstruction with α' corrections

    Science.gov (United States)

    Roy, Shubho R.; Sarkar, Debajyoti

    2017-10-01

    We outline a holographic recipe to reconstruct α' corrections to anti-de Sitter (AdS) (quantum) gravity from an underlying CFT in the strictly planar limit (N → ∞). Assuming that the boundary CFT can be solved in principle to all orders of the 't Hooft coupling λ, for scalar primary operators, the λ^(-1) expansion of the conformal dimensions can be mapped to higher curvature corrections of the dual bulk scalar field action. Furthermore, for the metric perturbations in the bulk, the AdS/CFT operator-field isomorphism forces these corrections to be of the Lovelock type. We demonstrate this by reconstructing the coefficient of the leading Lovelock correction, also known as the Gauss-Bonnet term, in a bulk AdS gravity action using the expression of the stress-tensor two-point function up to subleading order in λ^(-1).

  11. 19 CFR 151.24 - Unlading facilities for bulk sugar.

    Science.gov (United States)

    2010-04-01

    Section 151.24 (Unlading facilities for bulk sugar), 19 CFR, Customs Duties, Department of the Treasury (continued): Examination, Sampling, and Testing of Merchandise; Sugars, Sirups, and Molasses. When dutiable sugar is to be imported in bulk, a full...

  12. Enhancement of surface magnetism due to bulk bond dilution

    International Nuclear Information System (INIS)

    Tsallis, C.; Sarmento, E.F.; Albuquerque, E.L. de

    1985-01-01

    Within a renormalization group scheme, the phase diagram of a semi-infinite simple cubic Ising ferromagnet is discussed, with arbitrary surface and bulk coupling constants, and including possible dilution of the bulk bonds. It is found that dilution facilitates the appearance of surface magnetism in the absence of bulk magnetism. (Author) [pt

  13. Renormalization group approach to causal bulk viscous cosmological models

    International Nuclear Information System (INIS)

    Belinchon, J A; Harko, T; Mak, M K

    2002-01-01

    The renormalization group method is applied to the study of homogeneous and flat Friedmann-Robertson-Walker type universes, filled with a causal bulk viscous cosmological fluid. The starting point of the study is the consideration of the scaling properties of the gravitational field equations, the causal evolution equation of the bulk viscous pressure and the equations of state. The requirement of scale invariance imposes strong constraints on the temporal evolution of the bulk viscosity coefficient, temperature and relaxation time, thus leading to the possibility of obtaining the bulk viscosity coefficient-energy density dependence. For a cosmological model with bulk viscosity coefficient proportional to the Hubble parameter, we perform the analysis of the renormalization group flow around the scale-invariant fixed point, thereby obtaining the long-time behaviour of the scale factor

  14. Spectroscopic and Mechanical Properties of a New Generation of Bulk Fill Composites.

    Science.gov (United States)

    Monterubbianesi, Riccardo; Orsini, Giovanna; Tosi, Giorgio; Conti, Carla; Librando, Vito; Procaccini, Maurizio; Putignano, Angelo

    2016-01-01

    Objectives: The aims of this study were to evaluate in vitro the degree of conversion and the microhardness properties of five bulk fill resin composites; in addition, the performance of two curing lamps used for composite polymerization was also analyzed. Materials and Methods: The following five resin-based bulk fill composites were tested: SureFil SDR®, Fill Up!™, Filtek™, SonicFill™, and SonicFill2™. Samples of 4 mm in thickness were prepared using Teflon molds filled in one increment and light-polymerized using two LED power units. Ten samples for each composite were cured using Elipar S10 and 10 using Demi Ultra. Additional samples of SonicFill2 (3 and 5 mm thick) were also tested. The degree of conversion (DC) was determined by Raman spectroscopy, while the Vickers microhardness (VMH) was evaluated using a microhardness tester. The experimental evaluation was carried out on top and bottom sides, immediately after curing (t0), and, on the bottom, after 24 h (t24). Two-way analysis of variance was applied to evaluate DC and VMH values. In all analyses, the level of significance was set at p < 0.05. All composites recorded satisfactory DCs on top and bottom sides. At t0, the top of SDR and SonicFill2 showed the highest DC values (85.56 ± 9.52 and 85.47 ± 1.90, respectively), when cured using Elipar S10; using Demi Ultra, SonicFill2 showed the highest DC values (90.53 ± 2.18). At t0, the highest DC values of bottom sides were recorded by SDR (84.64 ± 11.68), when cured using Elipar S10, and Filtek (81.52 ± 4.14), using Demi Ultra. On top sides, the Demi Ultra lamp showed significantly higher DCs compared with the Elipar S10 (p < 0.05). […] composites showed higher VMH than the flowable or dual-curing composites.

  15. On Mean-Variance Hedging of Bond Options with Stochastic Risk Premium Factor

    NARCIS (Netherlands)

    Aihara, ShinIchi; Bagchi, Arunabha; Kumar, Suresh K.

    2014-01-01

    We consider the mean-variance hedging problem for pricing bond options using the yield curve as the observation. The model considered contains infinite-dimensional noise sources with a stochastically varying risk premium. Hence our model is incomplete. We consider mean-variance hedging under the

  16. Development of superconductor bulk for superconductor bearing

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Chan Joong; Jun, Byung Hyuk; Park, Soon Dong (and others)

    2008-08-15

    Current carrying capacity is one of the most important issues in the consideration of superconductor bulk materials for engineering applications. There are numerous applications of Y-Ba-Cu-O (YBCO) bulk superconductors, e.g. magnetic levitation trains, flywheel energy storage systems, levitation transportation, lunar telescopes, centrifugal devices, magnetic shielding materials, bulk magnets, etc. Accordingly, it is necessary to obtain YBCO materials in the form of large single crystals without the weak-link problem. A top seeded melt growth (TSMG) process was used to fabricate single crystal YBCO bulk superconductors. The seeded infiltration and growth (IG) technique is also a very promising method for the synthesis of large, single-grain YBCO bulk superconductors with good superconducting properties. 5 wt.% Ag-doped Y211 green compacts were sintered at 900 °C to 1200 °C and then single crystal YBCO was fabricated by an infiltration method. A refinement and uniform distribution of the Y211 particles in the Y123 matrix were achieved by sintering the Ag-doped samples. This enhancement of the critical current density was ascribable to a fine dispersion of the Y211 particles, a low porosity and the presence of Ag particles. In addition, we have designed and manufactured a large YBCO single domain with a levitation force of 10-13 kg/cm² using the TSMG processing technique.

  17. Problems of variance reduction in the simulation of random variables

    International Nuclear Information System (INIS)

    Lessi, O.

    1987-01-01

    The definition of the uniform linear generator is given and some of the most commonly used tests to evaluate the uniformity and the independence of the obtained determinations are listed. The problem of calculating, through simulation, some moment W of a function of a random variable is taken into account. The Monte Carlo method enables the moment W to be estimated and the estimator variance to be obtained. Some techniques for the construction of other estimators of W with a reduced variance are introduced.
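
    A minimal example of one such reduced-variance construction, antithetic variates, with an invented integrand:

        import numpy as np

        # Variance reduction by antithetic variates for W = E[g(U)], U ~ U(0,1).
        # Pairing U with 1-U induces negative correlation and cuts the estimator
        # variance whenever g is monotone (here g(u) = exp(u)); each antithetic
        # value below uses two function evaluations.
        rng = np.random.default_rng(4)
        g = np.exp
        n = 100_000

        u = rng.random(n)
        plain = g(u)                                          # crude Monte Carlo
        anti = 0.5 * (g(u[: n // 2]) + g(1.0 - u[: n // 2]))  # antithetic pairs

        print(f"crude MC  : mean={plain.mean():.5f}, var={plain.var(ddof=1):.2e}")
        print(f"antithetic: mean={anti.mean():.5f}, var={anti.var(ddof=1):.2e}")
        # exact value is e - 1 = 1.71828...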

  18. Mean-variance portfolio allocation with a value at risk constraint

    OpenAIRE

    Enrique Sentana

    2001-01-01

    In this paper, I first provide a simple unifying approach to static Mean-Variance analysis and Value at Risk, which highlights their similarities and differences. Then I use it to explain how fund managers can take investment decisions that satisfy the VaR restrictions imposed on them by regulators, within the well-known Mean-Variance allocation framework. I do so by introducing a new type of line to the usual mean-standard deviation diagram, called IsoVaR, which represents all the portfolios ...

  19. Use of containers to carry bulk and break bulk commodities and its impact on gulf region ports and international trade.

    Science.gov (United States)

    2014-08-01

    The University of New Orleans Transportation Institute was tasked by the Louisiana Transportation Research Center (LTRC) in mid-2012 to assess the use of containers to transport bulk and break bulk commodities and to determine what their impact would...

  20. Materials processing and machine applications of bulk HTS

    Science.gov (United States)

    Miki, M.; Felder, B.; Tsuzuki, K.; Xu, Y.; Deng, Z.; Izumi, M.; Hayakawa, H.; Morita, M.; Teshima, H.

    2010-12-01

    We report a refrigeration system for rotating machines associated with the enhancement of the trapped magnetic flux of bulk high-temperature superconductor (HTS) field poles. A novel cryogenic system was designed and fabricated. It is composed of a low-loss rotary joint connecting the rotor and a closed-cycle thermosiphon under a GM cryocooler using a refrigerant. Condensed neon gas was adopted as a suitable cryogen for the operation of HTS rotating machines with field poles composed of RE-Ba-Cu-O family materials, where RE is a rare-earth metal. Regarding the materials processing of the bulk HTS, the addition of magnetic particles to GdBa2Cu3O7−δ (Gd123) bulk superconductors achieved an increase of more than 20% in the trapped magnetic flux density at liquid nitrogen temperature. Field-pole Gd123 bulks up to 46 mm in diameter were synthesized with the addition of Fe-B alloy magnetic particles and assembled into the synchronous machine rotor to be tested. Successful cooling of the magnetized rotor field poles down to 35 K and low-output-power rotating operation up to 720 rpm were achieved in the test machine with eight field-pole bulks. The present results provide a substantial basis for a prototype rotating-machinery system employing HTS bulks.

  1. Materials processing and machine applications of bulk HTS

    Energy Technology Data Exchange (ETDEWEB)

    Miki, M; Felder, B; Tsuzuki, K; Xu, Y; Deng, Z; Izumi, M [Department of Marine Electronics and Mechanical Engineering, Tokyo University of Marine Science and Technology, 2-1-6, Etchu-jima, Koto-ku, Tokyo 135-8533 (Japan); Hayakawa, H [Kitano Seiki Co. Ltd, 7-17-3, Chuo, Ohta-ku, Tokyo 143-0024 (Japan); Morita, M; Teshima, H, E-mail: d082025@kaiyodai.ac.j [Nippon Steel Co. Ltd, 20-1, Shintomi, Huttsu-shi, Chiba 293-8511 (Japan)

    2010-12-15

    We report a refrigeration system for rotating machines associated with the enhancement of the trapped magnetic flux of bulk high-temperature superconductor (HTS) field poles. A novel cryogenic system was designed and fabricated. It is composed of a low-loss rotary joint connecting the rotor and a closed-cycle thermosiphon under a GM cryocooler using a refrigerant. Condensed neon gas was adopted as a suitable cryogen for the operation of HTS rotating machines with field poles composed of RE-Ba-Cu-O family materials, where RE is a rare-earth metal. Regarding the materials processing of the bulk HTS, the addition of magnetic particles to GdBa2Cu3O7−δ (Gd123) bulk superconductors achieved an increase of more than 20% in the trapped magnetic flux density at liquid nitrogen temperature. Field-pole Gd123 bulks up to 46 mm in diameter were synthesized with the addition of Fe-B alloy magnetic particles and assembled into the synchronous machine rotor to be tested. Successful cooling of the magnetized rotor field poles down to 35 K and low-output-power rotating operation up to 720 rpm were achieved in the test machine with eight field-pole bulks. The present results provide a substantial basis for a prototype rotating-machinery system employing HTS bulks.

  2. Variance-based sensitivity analysis for wastewater treatment plant modelling.

    Science.gov (United States)

    Cosenza, Alida; Mannina, Giorgio; Vanrolleghem, Peter A; Neumann, Marc B

    2014-02-01

    Global sensitivity analysis (GSA) is a valuable tool to support the use of mathematical models that characterise technical or natural systems. In the field of wastewater modelling, most of the recent applications of GSA use either regression-based methods, which require close to linear relationships between the model outputs and model factors, or screening methods, which only yield qualitative results. However, due to the characteristics of membrane bioreactors (MBR) (non-linear kinetics, complexity, etc.) there is an interest in adequately quantifying the effects of non-linearity and interactions. This can be achieved with variance-based sensitivity analysis methods. In this paper, the Extended Fourier Amplitude Sensitivity Testing (Extended-FAST) method is applied to an integrated activated sludge model (ASM2d) for an MBR system including microbial product formation and physical separation processes. Twenty-one model outputs located throughout the different sections of the bioreactor and 79 model factors are considered. Significant interactions among the model factors are found. Contrary to previous GSA studies for ASM models, we find the relationship between variables and factors to be non-linear and non-additive. By analysing the pattern of the variance decomposition along the plant, the model factors having the highest variance contributions were identified. This study demonstrates the usefulness of variance-based methods in membrane bioreactor modelling where, due to the presence of membranes and operating conditions different from those typically found in conventional activated sludge systems, several highly non-linear effects are present. Further, the results highlight the relevant role played by a modelling approach for MBRs that simultaneously takes biological and physical processes into account.
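
    For readers unfamiliar with variance-based indices, the following self-contained Python sketch estimates first-order indices S_i = Var(E[Y|X_i])/Var(Y) with a pick-freeze Monte Carlo estimator; it is a stand-in for the Extended-FAST method used in the paper, and the model is invented:

        import numpy as np

        def model(x):  # toy 3-factor "plant" model, purely illustrative
            return x[:, 0] + 2.0 * x[:, 1] ** 2 + 0.5 * x[:, 0] * x[:, 2]

        rng = np.random.default_rng(5)
        N, d = 50_000, 3
        A, B = rng.random((N, d)), rng.random((N, d))
        yA, yB = model(A), model(B)
        varY = np.var(np.concatenate([yA, yB]), ddof=1)

        for i in range(d):
            C = B.copy(); C[:, i] = A[:, i]   # freeze factor i at A's values
            # mean(yA*(yC - yB)) estimates Var(E[Y|X_i]) (Saltelli-type estimator)
            S_i = np.mean(yA * (model(C) - yB)) / varY
            print(f"S_{i + 1} = {S_i:.3f}")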

  3. Experimental investigations of the minimum ignition energy and the minimum ignition temperature of inert and combustible dust cloud mixtures

    Energy Technology Data Exchange (ETDEWEB)

    Addai, Emmanuel Kwasi, E-mail: emmanueladdai41@yahoo.com; Gabel, Dieter; Krause, Ulrich

    2016-04-15

    Highlights: • Ignition sensitivity of a highly flammable dust decreases upon addition of inert dust. • Minimum ignition temperature of a highly flammable dust increases as the inert concentration increases. • Minimum ignition energy of a highly flammable dust increases as the inert concentration increases. • The permissible range for the inert mixture to minimize the ignition risk lies between 60 and 80%. - Abstract: The risks associated with dust explosions still exist in industries that either process or handle combustible dust. This explosion risk can be prevented or mitigated by applying the principle of inherent safety (moderation). This is achieved by adding an inert material to a highly combustible material in order to decrease the ignition sensitivity of the combustible dust. The present paper deals with the experimental investigation of the influence of adding an inert dust on the minimum ignition energy and the minimum ignition temperature of combustible/inert dust mixtures. The experimental investigation was carried out using two laboratory-scale devices: the Hartmann apparatus for the minimum ignition energy test and the Godbert-Greenwald furnace for the minimum ignition temperature test. Various amounts of three inert materials (magnesium oxide, ammonium sulphate and sand) were mixed with six combustible dusts (brown coal, lycopodium, toner, niacin, corn starch and high density polyethylene). Generally, increasing the inert material concentration increases the minimum ignition energy as well as the minimum ignition temperature until a threshold is reached where no ignition is obtained. The permissible range for the inert mixture to minimize the ignition risk lies between 60 and 80%.

  4. Experimental investigations of the minimum ignition energy and the minimum ignition temperature of inert and combustible dust cloud mixtures

    International Nuclear Information System (INIS)

    Addai, Emmanuel Kwasi; Gabel, Dieter; Krause, Ulrich

    2016-01-01

    Highlights: • Ignition sensitivity of a highly flammable dust decreases upon addition of inert dust. • Minimum ignition temperature of a highly flammable dust increases as the inert concentration increases. • Minimum ignition energy of a highly flammable dust increases as the inert concentration increases. • The permissible range for the inert mixture to minimize the ignition risk lies between 60 and 80%. - Abstract: The risks associated with dust explosions still exist in industries that either process or handle combustible dust. This explosion risk can be prevented or mitigated by applying the principle of inherent safety (moderation). This is achieved by adding an inert material to a highly combustible material in order to decrease the ignition sensitivity of the combustible dust. The present paper deals with the experimental investigation of the influence of adding an inert dust on the minimum ignition energy and the minimum ignition temperature of combustible/inert dust mixtures. The experimental investigation was carried out using two laboratory-scale devices: the Hartmann apparatus for the minimum ignition energy test and the Godbert-Greenwald furnace for the minimum ignition temperature test. Various amounts of three inert materials (magnesium oxide, ammonium sulphate and sand) were mixed with six combustible dusts (brown coal, lycopodium, toner, niacin, corn starch and high density polyethylene). Generally, increasing the inert material concentration increases the minimum ignition energy as well as the minimum ignition temperature until a threshold is reached where no ignition is obtained. The permissible range for the inert mixture to minimize the ignition risk lies between 60 and 80%.

  5. Microtensile bond strength of bulk-fill restorative composites to dentin.

    Science.gov (United States)

    Mandava, Jyothi; Vegesna, Divya-Prasanna; Ravi, Ravichandra; Boddeda, Mohan-Rao; Uppalapati, Lakshman-Varma; Ghazanfaruddin, M D

    2017-08-01

    To facilitate the easier placement of direct resin composite in deeper cavities, bulk fill composites have been introduced. The mechanical stability of fillings in stress-bearing areas restored with bulk-fill resin composites is still open to question, since long-term clinical studies are not available so far. Thus, the objective of the study was to evaluate and compare the microtensile bond strength of three bulk-fill restorative composites with a nanohybrid composite. Class I cavities were prepared on sixty extracted mandibular molars. Teeth were divided into 4 groups (n = 15 each) and in group I, the prepared cavities were restored with nanohybrid (Filtek Z250 XT) restorative composite in an incremental manner. In groups II, III and IV, the bulk-fill composites (Filtek, Tetric EvoCeram, X-tra fil bulk-fill restoratives) were placed as a 4 mm single increment and light cured. The restored teeth were subjected to thermocycling and bond strength testing was done using an Instron testing machine. The mode of failure was assessed by scanning electron microscopy (SEM). The bond strength values obtained in megapascals (MPa) were subjected to statistical analysis using SPSS/PC version 20 software. One-way ANOVA was used for groupwise comparison of the bond strength. Tukey's post hoc test was used for pairwise comparisons among the groups. The highest mean bond strength was achieved with the Filtek bulk-fill restorative, showing a statistically significant difference with Tetric EvoCeram bulk-fill (p < 0.05) and the other composites. Adhesive failures are mostly observed with X-tra fil bulk fill composites, whereas mixed failures are more common with other bulk fill composites. Bulk-fill composites exhibited adequate bond strength to dentin and can be considered as the restorative material of choice in posterior stress-bearing areas. Key words: Bond strength, Bulk-fill restoratives, Configuration factor, Polymerization shrinkage.

  6. Fundamentals of exploratory analysis of variance

    CERN Document Server

    Hoaglin, David C; Tukey, John W

    2009-01-01

    The analysis of variance is presented as an exploratory component of data analysis, while retaining the customary least squares fitting methods. Balanced data layouts are used to reveal key ideas and techniques for exploration. The approach emphasizes both the individual observations and the separate parts that the analysis produces. Most chapters include exercises and the appendices give selected percentage points of the Gaussian, t, F, chi-squared and studentized range distributions.

  7. On the Likely Utility of Hybrid Weights Optimized for Variances in Hybrid Error Covariance Models

    Science.gov (United States)

    Satterfield, E.; Hodyss, D.; Kuhl, D.; Bishop, C. H.

    2017-12-01

    Because of imperfections in ensemble data assimilation schemes, one cannot assume that the ensemble covariance is equal to the true error covariance of a forecast. Previous work demonstrated how information about the distribution of true error variances given an ensemble sample variance can be revealed from an archive of (observation-minus-forecast, ensemble-variance) data pairs. Here, we derive a simple and intuitively compelling formula to obtain the mean of this distribution of true error variances given an ensemble sample variance from (observation-minus-forecast, ensemble-variance) data pairs produced by a single run of a data assimilation system. This formula takes the form of a Hybrid weighted average of the climatological forecast error variance and the ensemble sample variance. Here, we test the extent to which these readily obtainable weights can be used to rapidly optimize the covariance weights used in Hybrid data assimilation systems that employ weighted averages of static covariance models and flow-dependent ensemble based covariance models. Univariate data assimilation and multi-variate cycling ensemble data assimilation are considered. In both cases, it is found that our computationally efficient formula gives Hybrid weights that closely approximate the optimal weights found through the simple but computationally expensive process of testing every plausible combination of weights.
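
    The idea of weighting a static variance against an ensemble variance can be illustrated with a toy search over weights; the paper derives the weights analytically, and everything below is simulated:

        import numpy as np

        # Weight a static (climatological) error variance against a flow-dependent
        # ensemble variance: pick the weight w that best predicts squared
        # innovations (observation-minus-forecast)^2 from archived data pairs.
        # E[(o - f)^2] equals the true error variance here; observation error is
        # ignored for simplicity.
        rng = np.random.default_rng(6)
        n = 10_000
        true_var = 0.5 + rng.gamma(2.0, 0.25, n)          # unknown true variance
        ens_var = true_var * rng.gamma(5.0, 0.2, n)       # noisy ensemble estimate
        innov2 = rng.normal(0.0, np.sqrt(true_var)) ** 2  # (o - f)^2 realizations

        clim_var = ens_var.mean()                         # static background term
        weights = np.linspace(0.0, 1.0, 101)
        mse = [np.mean((w * ens_var + (1 - w) * clim_var - innov2) ** 2)
               for w in weights]
        print(f"best hybrid weight on ensemble variance: {weights[np.argmin(mse)]:.2f}")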

  8. A new variance stabilizing transformation for gene expression data analysis.

    Science.gov (United States)

    Kelmansky, Diana M; Martínez, Elena J; Leiva, Víctor

    2013-12-01

    In this paper, we introduce a new family of power transformations, which has the generalized logarithm as one of its members, in the same manner as the usual logarithm belongs to the family of Box-Cox power transformations. Although the new family has been developed for analyzing gene expression data, it allows a wider scope of mean-variance related data to be reached. We study the analytical properties of the new family of transformations, as well as the mean-variance relationships that are stabilized by using its members. We propose a methodology based on this new family, which includes a simple strategy for selecting the family member adequate for a data set. We evaluate the finite sample behavior of different classical and robust estimators based on this strategy by Monte Carlo simulations. We analyze real genomic data by using the proposed transformation to empirically show how the new methodology allows the variance of these data to be stabilized.
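
    The generalized logarithm mentioned above, as a hedged sketch; the offset constant c is data-dependent, and the value used here is arbitrary:

        import numpy as np

        # Generalized logarithm ("glog"), one member of the family of power
        # transformations discussed above: glog_c(x) = log((x + sqrt(x^2 + c))/2).
        # For c = 0 it reduces to log(x); for c > 0 it stays defined at and below
        # zero and stabilizes the variance of data whose variance grows with the
        # mean, as is typical for expression intensities.
        def glog(x, c=1.0):
            x = np.asarray(x, dtype=float)
            return np.log((x + np.sqrt(x ** 2 + c)) / 2.0)

        print(glog([0.0, 1.0, 100.0], c=4.0))  # well-defined even at x = 0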

  9. Pricing perpetual American options under multiscale stochastic elasticity of variance

    International Nuclear Information System (INIS)

    Yoon, Ji-Hun

    2015-01-01

    Highlights: • We study the effects of the stochastic elasticity of variance on perpetual American options. • Our SEV model consists of a fast mean-reverting factor and a slow mean-reverting factor. • A slow scale factor has a very significant impact on the option price. • We analyze option price structures through the market prices of elasticity risk. - Abstract: This paper studies pricing of perpetual American options under a constant elasticity of variance type of underlying asset price model where the constant elasticity is replaced by a fast mean-reverting Ornstein–Uhlenbeck process and a slowly varying diffusion process. By using a multiscale asymptotic analysis, we find the impact of the stochastic elasticity of variance on the option prices and the optimal exercise prices with respect to model parameters. Our results enhance the existing option price structures in view of flexibility and applicability through the market prices of elasticity risk

  10. New Minimum Wage Research: A Symposium.

    Science.gov (United States)

    Ehrenberg, Ronald G.; And Others

    1992-01-01

    Includes "Introduction" (Ehrenberg); "Effect of the Minimum Wage [MW] on the Fast-Food Industry" (Katz, Krueger); "Using Regional Variation in Wages to Measure Effects of the Federal MW" (Card); "Do MWs Reduce Employment?" (Card); "Employment Effects of Minimum and Subminimum Wages" (Neumark,…

  11. Teaching the Minimum Wage in Econ 101 in Light of the New Economics of the Minimum Wage.

    Science.gov (United States)

    Krueger, Alan B.

    2001-01-01

    Argues that the recent controversy over the effect of the minimum wage on employment offers an opportunity for teaching introductory economics. Examines eight textbooks to determine topic coverage but finds little consensus. Describes how minimum wage effects should be taught. (RLH)

  12. Improved magnetic-field homogeneity of NMR HTS bulk magnet using a new stacking structure and insertion of an HTS film cylinder into a bulk bore

    International Nuclear Information System (INIS)

    Itoh, Yoshitaka; Yanagi, Yousuke; Nakamura, Takashi

    2017-01-01

    A new type of superconducting bulk magnet for compact nuclear magnetic resonance (NMR) devices with high magnetic-field homogeneity has been developed by inserting an HTS film cylinder into a bulk superconductor bore. Annular 60 mmϕ Eu-Ba-Cu-O bulk superconductors with a larger inner diameter (ID) of 36 mm were sandwiched between bulk superconductors with a smaller ID of 28 mm, and the total height of the bulk superconductor set was made to be 120 mm. The inner height of the central wide bore space was optimized by magnetic-field simulation so that the influence of the bulk superconductor's paramagnetic moment on the applied field homogeneity was minimized during the magnetization process. An HTS film cylinder, in which Gd-Ba-Cu-O tapes were wound helically in three layers around a copper cylinder, was inserted into the bulk bore in order to compensate for the inhomogeneous field trapped by the bulk superconductor. The superconducting bulk magnet composed of the above bulk superconductor set and the film cylinder was cooled by a GM pulse tube refrigerator and magnetized at 4.747 T using the field cooling (FC) method and a conventional superconducting coil magnet adjusted to below 0.5 ppm in magnetic-field homogeneity. The NMR measurement was conducted on an H2O sample with a diameter of 6.9 mm and a length of 10 mm by setting the sample in the center of the 20 mm ID room-temperature bore of the bulk magnet. The magnetic-field homogeneity derived from the full width at half maximum (FWHM) of the 1H spectrum of H2O was 0.45 ppm. We confirmed that the HTS film inner cylinder was effective in maintaining the homogeneity of the magnetic field applied in the magnetization process, and as a result, a magnetic field with a homogeneity of less than 1 ppm can be generated in the bore of the bulk magnet without using shim coils. (author)

  13. Analysis of a genetically structured variance heterogeneity model using the Box-Cox transformation.

    Science.gov (United States)

    Yang, Ye; Christensen, Ole F; Sorensen, Daniel

    2011-02-01

    Over recent years, statistical support for the presence of genetic factors operating at the level of the environmental variance has come from fitting a genetically structured heterogeneous variance model to field or experimental data in various species. Misleading results may arise due to skewness of the marginal distribution of the data. To investigate how the scale of measurement affects inferences, the genetically structured heterogeneous variance model is extended to accommodate the family of Box-Cox transformations. Litter size data in rabbits and pigs that had previously been analysed in the untransformed scale were reanalysed in a scale equal to the mode of the marginal posterior distribution of the Box-Cox parameter. In the rabbit data, the statistical evidence for a genetic component at the level of the environmental variance is considerably weaker than that resulting from an analysis in the original metric. In the pig data, the statistical evidence is stronger, but the coefficient of correlation between additive genetic effects affecting mean and variance changes sign, compared to the results in the untransformed scale. The study confirms that inferences on variances can be strongly affected by the presence of asymmetry in the distribution of data. We recommend that to avoid one important source of spurious inferences, future work seeking support for a genetic component acting on environmental variation using a parametric approach based on normality assumptions confirms that these are met.
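
    A minimal illustration of the Box-Cox step; scipy's maximum-likelihood estimate of lambda stands in for the posterior mode used in the study, and the data are simulated:

        import numpy as np
        from scipy import stats

        # Re-expressing a skewed trait before variance modelling: scipy.stats.boxcox
        # returns both the transformed data and the maximum-likelihood lambda.
        rng = np.random.default_rng(7)
        litter_size = rng.gamma(shape=9.0, scale=1.2, size=500)  # right-skewed toy data

        transformed, lam = stats.boxcox(litter_size)
        print(f"ML estimate of Box-Cox lambda: {lam:.2f}")
        print(f"skewness before: {stats.skew(litter_size):.2f}, "
              f"after: {stats.skew(transformed):.2f}")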

  14. Methods to estimate the between‐study variance and its uncertainty in meta‐analysis†

    Science.gov (United States)

    Jackson, Dan; Viechtbauer, Wolfgang; Bender, Ralf; Bowden, Jack; Knapp, Guido; Kuss, Oliver; Higgins, Julian PT; Langan, Dean; Salanti, Georgia

    2015-01-01

    Meta-analyses are typically used to estimate the overall mean of an outcome of interest. However, inference about between-study variability, which is typically modelled using a between-study variance parameter, is usually an additional aim. The DerSimonian and Laird method, currently widely used by default to estimate the between-study variance, has long been challenged. Our aim is to identify known methods for estimation of the between-study variance and its corresponding uncertainty, and to summarise the simulation and empirical evidence that compares them. We identified 16 estimators for the between-study variance, seven methods to calculate confidence intervals, and several comparative studies. Simulation studies suggest that for both dichotomous and continuous data the estimator proposed by Paule and Mandel and, for continuous data, the restricted maximum likelihood estimator are better alternatives for estimating the between-study variance. Based on the scenarios and results presented in the published studies, we recommend the Q-profile method and the alternative approach based on a ‘generalised Cochran between-study variance statistic’ to compute corresponding confidence intervals around the resulting estimates. Our recommendations are based on a qualitative evaluation of the existing literature and expert consensus. Evidence-based recommendations require an extensive simulation study where all methods would be compared under the same scenarios. PMID:26332144
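
    For reference, the DerSimonian and Laird moment estimator discussed above fits in a few lines; the effect sizes and variances below are invented:

        import numpy as np

        # DerSimonian-Laird moment estimator of the between-study variance tau^2.
        # y are study effect estimates, v their within-study variances.
        y = np.array([0.30, 0.10, 0.45, 0.25, 0.05])
        v = np.array([0.01, 0.02, 0.015, 0.03, 0.01])

        w = 1.0 / v                                  # fixed-effect weights
        ybar = np.sum(w * y) / np.sum(w)
        Q = np.sum(w * (y - ybar) ** 2)              # Cochran's Q
        k = len(y)
        tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

        w_re = 1.0 / (v + tau2)                      # random-effects weights
        mu_re = np.sum(w_re * y) / np.sum(w_re)
        print(f"tau^2 = {tau2:.4f}, random-effects mean = {mu_re:.3f}")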

  15. Unit-of-Use Versus Traditional Bulk Packaging

    Directory of Open Access Journals (Sweden)

    Tiffany So

    2012-01-01

    Full Text Available Background: The choice between unit-of-use and traditional bulk packaging in the US has long been a continuous debate for drug manufacturers and pharmacies seeking the most efficient and safest practices. Understanding the benefits of unit-of-use packaging over bulk packaging for US drug manufacturers, in terms of workflow efficiency, economic costs and medication safety in the pharmacy, is sometimes challenging. Methods: A study comparing the time saved using unit-of-use packaging versus bulk packaging was examined. Prices of unit-of-use and bulk packages were compared using the Red Book: Pharmacy’s Fundamental Reference. Other articles were reviewed on the topics of counterfeiting, safe labeling, and implementation of unit-of-use packaging. Lastly, a cost-saving study was reviewed showing how medication adherence, due to improved packaging, could be cost-effective for patients. Results: When examining time, costs, medication adherence, and counterfeiting arguments, unit-of-use packaging proved to be beneficial for patients in all these terms.

  16. Unit-of-Use Versus Traditional Bulk Packaging

    Directory of Open Access Journals (Sweden)

    Tiffany So

    2012-01-01

    Full Text Available Background: The choice between unit-of-use and traditional bulk packaging in the US has long been a continuous debate for drug manufacturers and pharmacies seeking the most efficient and safest practices. Understanding the benefits of unit-of-use packaging over bulk packaging for US drug manufacturers, in terms of workflow efficiency, economic costs and medication safety in the pharmacy, is sometimes challenging. Methods: A study comparing the time saved using unit-of-use packaging versus bulk packaging was examined. Prices of unit-of-use and bulk packages were compared using the Red Book: Pharmacy's Fundamental Reference. Other articles were reviewed on the topics of counterfeiting, safe labeling, and implementation of unit-of-use packaging. Lastly, a cost-saving study was reviewed showing how medication adherence, due to improved packaging, could be cost-effective for patients. Results: When examining time, costs, medication adherence, and counterfeiting arguments, unit-of-use packaging proved to be beneficial for patients in all these terms.   Type: Student Project

  17. Bulk viscosity in 2SC quark matter

    International Nuclear Information System (INIS)

    Alford, Mark G; Schmitt, Andreas

    2007-01-01

    The bulk viscosity of three-flavour colour-superconducting quark matter originating from the nonleptonic process u + s ↔ u + d is computed. It is assumed that up and down quarks form Cooper pairs while the strange quark remains unpaired (2SC phase). A general derivation of the rate of strangeness production is presented, involving contributions from a multitude of different subprocesses, including subprocesses that involve different numbers of gapped quarks as well as creation and annihilation of particles in the condensate. The rate is then used to compute the bulk viscosity as a function of the temperature, for an external oscillation frequency typical of a compact star r-mode. We find that, for temperatures far below the critical temperature Tc for 2SC pairing, the bulk viscosity of colour-superconducting quark matter is suppressed relative to that of unpaired quark matter, but for T ≳ Tc/30 the colour-superconducting quark matter has a higher bulk viscosity. This is potentially relevant for the suppression of r-mode instabilities early in the life of a compact star

  18. Hexaferrite multiferroics: from bulk to thick films

    Science.gov (United States)

    Koutzarova, T.; Ghelev, Ch; Peneva, P.; Georgieva, B.; Kolev, S.; Vertruyen, B.; Closset, R.

    2018-03-01

    We report studies of the structural and microstructural properties of Sr3Co2Fe24O41 in bulk form and as thick films. The precursor powders for the bulk form were prepared following the sol-gel auto-combustion method. The prepared pellets were synthesized at 1200 °C to produce Sr3Co2Fe24O41. The XRD spectra of the bulks showed the characteristic peaks corresponding to the Z-type hexaferrite structure as a main phase and secondary phases of CoFe2O4 and Sr3Fe2O7-x. The microstructure analysis of the cross-section of the bulk pellets revealed a hexagonal sheet structure. Large areas were observed of packages of hexagonal sheets in which the separate hexagonal particles were ordered along the c axis. Sr3Co2Fe24O41 thick films were deposited from a suspension containing the Sr3Co2Fe24O41 powder. The microstructural analysis of the thick films showed that the particles had the perfect hexagonal shape typical of hexaferrites.

  19. 30 CFR 75.1431 - Minimum rope strength.

    Science.gov (United States)

    2010-07-01

    ..., including rotation resistant). For rope lengths less than 3,000 feet: Minimum Value = Static Load × (7.0 − 0.001L). For rope lengths 3,000 feet or greater: Minimum Value = Static Load × 4.0. (b) Friction drum ropes. For rope lengths less than 4,000 feet: Minimum Value = Static Load × (7.0 − 0.0005L). For rope lengths 4,000 feet...
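
    The quoted formulas translate directly into code; a sketch follows (the drum-type names are taken from the record, and the case for friction drum ropes of 4,000 feet or more is truncated in the record, so it is left unimplemented):

        def minimum_rope_value(static_load, rope_length_ft, drum_type="winding"):
            """Minimum rope strength per the 30 CFR 75.1431 formulas quoted above.

            drum_type: "winding" (incl. rotation-resistant) or "friction".
            Returns the minimum value in the same units as static_load.
            """
            if drum_type == "winding":
                factor = 7.0 - 0.001 * rope_length_ft if rope_length_ft < 3000 else 4.0
            elif drum_type == "friction":
                if rope_length_ft >= 4000:
                    raise ValueError("case for >= 4,000 ft is truncated in the record")
                factor = 7.0 - 0.0005 * rope_length_ft
            else:
                raise ValueError("unknown drum type")
            return static_load * factor

        # 7.0 - 0.001*2000 = 5.0, so a 10,000 lb static load needs 50,000 lb
        print(minimum_rope_value(static_load=10_000, rope_length_ft=2_000))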

  20. Genetic variance in micro-environmental sensitivity for milk and milk quality in Walloon Holstein cattle.

    Science.gov (United States)

    Vandenplas, J; Bastin, C; Gengler, N; Mulder, H A

    2013-09-01

    Animals that are robust to environmental changes are desirable in the current dairy industry. Genetic differences in micro-environmental sensitivity can be studied through heterogeneity of residual variance between animals. However, residual variance between animals is usually assumed to be homogeneous in traditional genetic evaluations. The aim of this study was to investigate genetic heterogeneity of residual variance by estimating variance components of the residual variance for milk yield, somatic cell score, contents in milk (g/dL) of 2 groups of milk fatty acids (i.e., saturated and unsaturated fatty acids), and the content in milk of one individual fatty acid (i.e., oleic acid, C18:1 cis-9), for first-parity Holstein cows in the Walloon Region of Belgium. A total of 146,027 test-day records from 26,887 cows in 747 herds were available. All cows had at least 3 records and a known sire. These sires had at least 10 cows with records and each herd × test-day had at least 5 cows. The 5 traits were analyzed separately based on fixed lactation curve and random regression test-day models for the mean. Estimation of variance components was performed by iteratively running an expectation-maximization REML algorithm implemented through double hierarchical generalized linear models. Based on fixed lactation curve test-day mean models, heritability for residual variance ranged between 1.01×10^-3 and 4.17×10^-3 for all traits. The genetic standard deviation in residual variance (i.e., approximately the genetic coefficient of variation of residual variance) ranged between 0.12 and 0.17. Therefore, some genetic variance in micro-environmental sensitivity existed in the Walloon Holstein dairy cattle for the 5 studied traits. The standard deviations due to herd × test-day and permanent environment in residual variance ranged between 0.36 and 0.45 for the herd × test-day effect and between 0.55 and 0.97 for the permanent environmental effect. Therefore, nongenetic effects also

  1. Variance estimation for complex indicators of poverty and inequality using linearization techniques

    Directory of Open Access Journals (Sweden)

    Guillaume Osier

    2009-12-01

    Full Text Available The paper presents the Eurostat experience in calculating measures of precision, including standard errors, confidence intervals and design effect coefficients - the ratio of the variance of a statistic with the actual sample design to the variance of that statistic with a simple random sample of the same size - for the "Laeken" indicators, that is, a set of complex indicators of poverty and inequality which had been set out in the framework of the EU-SILC project (European Statistics on Income and Living Conditions). The Taylor linearization method (Tepping, 1968; Woodruff, 1971; Wolter, 1985; Tillé, 2000) is a well-established method to obtain variance estimators for nonlinear statistics such as ratios, correlation or regression coefficients. It consists of approximating a nonlinear statistic with a linear function of the observations by using first-order Taylor series expansions. Then, an easily found variance estimator of the linear approximation is used as an estimator of the variance of the nonlinear statistic. Although the Taylor linearization method handles all the nonlinear statistics which can be expressed as a smooth function of estimated totals, the approach fails to encompass the "Laeken" indicators since the latter have more complex mathematical expressions. Consequently, a generalized linearization method (Deville, 1999), which relies on the concept of the influence function (Hampel, Ronchetti, Rousseeuw and Stahel, 1986), has been implemented. After presenting the EU-SILC instrument and the main target indicators for which variance estimates are needed, the paper elaborates on the main features of the linearization approach based on influence functions. Ultimately, estimated standard errors, confidence intervals and design effect coefficients obtained from this approach are presented and discussed.
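
    A toy example of the basic Taylor linearization step for a ratio estimator under simple random sampling; the data are simulated and, unlike real EU-SILC estimators, no design weights are carried:

        import numpy as np

        # Taylor linearization for the variance of a ratio R = ybar/xbar: replace
        # the nonlinear statistic by the linearized variable
        # z_i = (y_i - R*x_i)/xbar and use the ordinary variance of the mean of z.
        rng = np.random.default_rng(8)
        n = 1_000
        x = rng.gamma(4.0, 500.0, n)              # denominator variable
        y = 0.3 * x + rng.normal(0.0, 50.0, n)    # numerator variable

        R = y.mean() / x.mean()
        z = (y - R * x) / x.mean()                # first-order Taylor linearization
        se_R = z.std(ddof=1) / np.sqrt(n)
        print(f"R = {R:.4f}, linearized SE = {se_R:.5f}")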

  2. Quantifying Dustiness, Specific Allergens, and Endotoxin in Bulk Soya Imports

    Directory of Open Access Journals (Sweden)

    Howard J. Mason

    2017-11-01

    Full Text Available Soya is an important bulk agricultural product often transported by sea as chipped beans and/or the bean husks after pelletisation. There are proven allergens in both forms. Bulk handling of soya imports can generate air pollution containing dust, allergens, and pyrogens, posing health risks to dockside workers and surrounding populations. Using an International Organization for Standardization (ISO) standardised rotating drum dustiness test on seven imported soya bulks, we compared the generated levels of dust and two major soya allergens in three particle sizes related to respiratory health. Extractable levels of allergen and endotoxin from the bulks showed 30-60 fold differences, with levels of one allergen (hydrophobic seed protein) and endotoxin higher in husk. The generated levels of dust and allergens in the three particle sizes also showed very wide variations between bulks, with aerosolised levels of allergen influenced by both the inherent dustiness and the extractable allergen in each bulk. Percentage allergen aerosolised from pelletised husk, often assumed to be of low dustiness, was not lower after transportation than that from chipped beans. Thus, not all soya bulks pose the same inhalation health risk, which reinforces the importance of controlling dust generation from handling all soya bulk to a level as low as reasonably practicable.

  3. The role of respondents’ comfort for variance in stated choice surveys

    DEFF Research Database (Denmark)

    Emang, Diana; Lundhede, Thomas; Thorsen, Bo Jellesmark

    2017-01-01

    Preference elicitation among outdoor recreational users is subject to measurement errors that depend, in part, on survey planning. This study uses data from a choice experiment survey on recreational SCUBA diving to investigate whether self-reported information on respondents’ comfort when they complete surveys correlates with the error variance in stated choice models of their responses. Comfort-related variables are included in the scale functions of the scaled multinomial logit models. The hypothesis was that higher comfort reduces error variance in answers, as revealed by a higher scale parameter, and vice versa. Information on, e.g., sleep and time since eating (higher comfort) correlated with scale heterogeneity, and produced lower error variance when controlled for in the model. That respondents’ comfort may influence choice behavior suggests that knowledge of the respondents’ activity...

  4. Theory of bulk-surface coupling in topological insulator films

    Science.gov (United States)

    Saha, Kush; Garate, Ion

    2014-12-01

    We present a quantitative microscopic theory of the disorder- and phonon-induced coupling between surface and bulk states in doped topological insulator films. We find a simple mathematical structure for the surface-to-bulk scattering matrix elements and confirm the importance of bulk-surface coupling in transport and photoemission experiments, assessing its dependence on temperature, carrier density, film thickness, and particle-hole asymmetry.

  5. Surface conduction of topological Dirac electrons in bulk insulating Bi2Se3

    Science.gov (United States)

    Fuhrer, Michael

    2013-03-01

    The three dimensional strong topological insulator (STI) is a new phase of electronic matter which is distinct from ordinary insulators in that it supports on its surface a conducting two-dimensional surface state whose existence is guaranteed by topology. I will discuss experiments on the STI material Bi2Se3, which has a bulk bandgap of 300 meV, much greater than room temperature, and a single topological surface state with a massless Dirac dispersion. Field effect transistors consisting of thin (3-20 nm) Bi2Se3 are fabricated from mechanically exfoliated from single crystals, and electrochemical and/or chemical gating methods are used to move the Fermi energy into the bulk bandgap, revealing the ambipolar gapless nature of transport in the Bi2Se3 surface states. The minimum conductivity of the topological surface state is understood within the self-consistent theory of Dirac electrons in the presence of charged impurities. The intrinsic finite-temperature resistivity of the topological surface state due to electron-acoustic phonon scattering is measured to be ~60 times larger than that of graphene largely due to the smaller Fermi and sound velocities in Bi2Se3, which will have implications for topological electronic devices operating at room temperature. As samples are made thinner, coherent coupling of the top and bottom topological surfaces is observed through the magnitude of the weak anti-localization correction to the conductivity, and, in the thinnest Bi2Se3 samples (~ 3 nm), in thermally-activated conductivity reflecting the opening of a bandgap.

  6. Fatigue and corrosion of a Pd-based bulk metallic glass in various environments

    Energy Technology Data Exchange (ETDEWEB)

    Watanabe, L.Y. [East Los Angeles College, Monterey Park, CA 91754 (United States); Roberts, S.N. [Keck Laboratory of Materials Science, California Institute of Technology, Pasadena, CA 91125 (United States); Baca, N. [Department of Chemistry and Biochemistry, California State University Northridge, Northridge, CA 91330 (United States); Wiest, A. [Naval Surface Warfare Center, Norco, CA (United States); Garrett, S.J. [Department of Chemistry and Biochemistry, California State University Northridge, Northridge, CA 91330 (United States); Conner, R.D., E-mail: rdconner@csun.edu [Department of Manufacturing Systems Engineering and Management, California State University Northridge, 18111 Nordhoff St., Mail Code 8295, Northridge, CA 91330 (United States)

    2013-10-15

    Bulk metallic glasses (BMGs) possess attractive properties for biomedical applications, including high strength, hardness and corrosion resistance, and low elastic modulus. In this study, we conduct rotating beam fatigue tests on Pd43Ni10Cu27P20 bulk metallic glass in air and Eagle's medium (EM) and measure the corrosion resistance of the alloy by submersion in acidic and basic electrolytes. Fatigue results are compared to those of commonly used biometals in EM. Rotating beam fatigue tests conducted in air and in Eagle's medium show no deterioration in fatigue properties in this potentially corrosive environment out to 10^7 cycles. A specimen size effect is revealed when comparing fatigue results to those of a similar alloy of larger minimum dimensions. Corrosion tests show that the alloy is not affected by highly basic (NaOH) or saline (NaCl) solutions, nor by EM, and is affected by chlorinated acidic solutions (HCl) to a lesser extent than other commonly used biometals. Corrosion in HCl initiates with selective leaching of late transition metals, followed by dissolution of Pd. - Highlights: • Fatigue limit of 600 MPa with no deterioration when exposed to Eagle's medium. • Fatigue shows a sample size effect. • Pd-based BMG is unaffected by saline or strong basic solutions. • Pd-based BMG is substantially more resistant to chlorinated acids than CoCrMo, 316L stainless, or Ti6Al4V alloys. • Corrosion shows selective leaching of late transition metals, followed by Pd and P.

  7. Fluctuations in atomic collision cascades - variance and correlations in sputtering and defect distributions

    International Nuclear Information System (INIS)

    Chakarova, R.; Pazsit, I.

    1997-01-01

    Fluctuation phenomena are investigated in various collision processes, i.e. ion bombardment induced sputtering and defect creation. The mean and variance of the sputter yield and of the numbers of vacancies and interstitials are calculated as functions of the ion energy and the ion-target mass ratio. It is found that the relative variance of the defects in half-spaces and the relative variance of the sputter yield are not monotonic functions of the mass ratio. Two-point correlation functions in the depth variable, as well as of the sputtered energy, are also calculated. These functions help in interpreting the behaviour of the relative variances of the integrated quantities, as well as in understanding the cascade dynamics. All calculations are based on Lindhard power-law cross sections and use a binary collision Monte Carlo algorithm. 30 refs, 25 figs

  8. Fluctuations in atomic collision cascades - variance and correlations in sputtering and defect distributions

    Energy Technology Data Exchange (ETDEWEB)

    Chakarova, R.; Pazsit, I.

    1997-01-01

    Fluctuation phenomena are investigated in various collision processes, i.e. ion bombardment induced sputtering and defect creation. The mean and variance of the sputter yield and of the numbers of vacancies and interstitials are calculated as functions of the ion energy and the ion-target mass ratio. It is found that the relative variance of the defects in half-spaces and the relative variance of the sputter yield are not monotonic functions of the mass ratio. Two-point correlation functions in the depth variable, as well as of the sputtered energy, are also calculated. These functions help in interpreting the behaviour of the relative variances of the integrated quantities, as well as in understanding the cascade dynamics. All calculations are based on Lindhard power-law cross sections and use a binary collision Monte Carlo algorithm. 30 refs, 25 figs.

  9. Surface barrier and bulk pinning in MgB$_2$ superconductor

    OpenAIRE

    Pissas, M.; Moraitakis, E.; Stamopoulos, D.; Papavassiliou, G.; Psycharis, V.; Koutandos, S.

    2001-01-01

    We present a modified method of preparation of the new superconductor MgB$_2$. The polycrystalline samples were characterized using x-ray and magnetic measurements. The surface barriers control the isothermal magnetization loops in powder samples. In bulk as-prepared samples we always observed symmetric magnetization loops, indicative of the presence of a bulk pinning mechanism. Magnetic relaxation measurements in the bulk sample reveal a crossover from surface-barrier to bulk pinning.

  10. On discrete stochastic processes with long-lasting time dependence in the variance

    Science.gov (United States)

    Queirós, S. M. D.

    2008-11-01

    In this manuscript, we analytically and numerically study the statistical properties of a heteroskedastic process based on the celebrated ARCH generator of random variables, whose variance is defined by a memory of q_m-exponential form (which reduces to the ordinary exponential, e^x, for q_m = 1). Specifically, we inspect the self-correlation function of the squared random variables as well as the kurtosis. In addition, by numerical procedures, we infer the stationary probability density function of both the heteroskedastic random variables and the variance, the multiscaling properties, the first-passage time distribution, and the dependence degree. Finally, we introduce an asymmetric variance version of the model that enables us to reproduce the so-called leverage effect in financial markets.
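
    As a concrete illustration of the kind of process the record builds on, the following Python sketch simulates the plain ARCH(1) generator and checks two of the fingerprints discussed above, excess kurtosis and correlated squared values. The parameter values are arbitrary, and the long-lasting q_m-exponential memory of the paper is not included.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(4)
        n, a0, a1 = 100_000, 0.5, 0.4
        x = np.zeros(n)
        for t in range(1, n):
            sigma2 = a0 + a1 * x[t - 1] ** 2          # conditional variance
            x[t] = np.sqrt(sigma2) * rng.standard_normal()

        print("kurtosis of x         :", stats.kurtosis(x, fisher=False))
        x2 = x ** 2
        print("lag-1 autocorr of x^2 :", np.corrcoef(x2[:-1], x2[1:])[0, 1])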

  11. A Cure for Variance Inflation in High Dimensional Kernel Principal Component Analysis

    DEFF Research Database (Denmark)

    Abrahamsen, Trine Julie; Hansen, Lars Kai

    2011-01-01

    Small sample high-dimensional principal component analysis (PCA) suffers from variance inflation and lack of generalizability. It has earlier been pointed out that a simple leave-one-out variance renormalization scheme can cure the problem. In this paper we generalize the cure in two directions: first, we propose a computationally less intensive approximate leave-one-out estimator; secondly, we show that variance inflation is also present in kernel principal component analysis (kPCA) and we provide a non-parametric renormalization scheme which can quite efficiently restore generalizability in kPCA. As for PCA, our analysis also suggests a simplified approximate expression.

  12. Handling nonnormality and variance heterogeneity for quantitative sublethal toxicity tests.

    Science.gov (United States)

    Ritz, Christian; Van der Vliet, Leana

    2009-09-01

    The advantages of using regression-based techniques to derive endpoints from environmental toxicity data are clear, and slowly, this superior analytical technique is gaining acceptance. As use of regression-based analysis becomes more widespread, some of the associated nuances and potential problems come into sharper focus. Looking at data sets that cover a broad spectrum of standard test species, we noticed that some model fits to data failed to meet two key assumptions (variance homogeneity and normality) that are necessary for correct statistical analysis via regression-based techniques. Failure to meet these assumptions is often caused by reduced variance at the concentrations showing severe adverse effects. Although commonly used with linear regression analysis, transformation of the response variable alone is not appropriate when fitting data using nonlinear regression techniques. Through analysis of sample data sets, including Lemna minor, Eisenia andrei (terrestrial earthworm), and algae, we show that both the so-called Box-Cox transformation and use of the Poisson distribution can help to correct variance heterogeneity and nonnormality and so allow nonlinear regression analysis to be implemented. Both the Box-Cox transformation and the Poisson distribution can be readily implemented into existing protocols for statistical analysis. By correcting for nonnormality and variance heterogeneity, these two statistical tools can be used to encourage the transition to regression-based analysis and the deprecation of less desirable and less flexible analytical techniques, such as linear interpolation.
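
    The Box-Cox step described above can be sketched in a few lines of Python with scipy. The data below are synthetic and merely mimic a response whose spread shrinks at high-effect concentrations; they are not the record's test-species data sets.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        conc = np.repeat([1.0, 2.0, 4.0, 8.0], 25)     # exposure concentrations
        # Synthetic response whose spread shrinks at high-effect concentrations
        y = 100.0 / (1.0 + (conc / 3.0) ** 2) + rng.normal(0.0, 10.0 / conc)
        y = np.clip(y, 0.1, None)                      # Box-Cox requires y > 0

        y_bc, lam = stats.boxcox(y)                    # MLE of the Box-Cox lambda
        print(f"estimated lambda = {lam:.2f}")

        # Levene's test for variance homogeneity, before and after transforming
        raw = [y[conc == c] for c in np.unique(conc)]
        bc = [y_bc[conc == c] for c in np.unique(conc)]
        print("raw     p =", stats.levene(*raw).pvalue)
        print("box-cox p =", stats.levene(*bc).pvalue)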

  13. Analysis of force variance for a continuous miner drum using the Design of Experiments method

    Energy Technology Data Exchange (ETDEWEB)

    S. Somanchi; V.J. Kecojevic; C.J. Bise [Pennsylvania State University, University Park, PA (United States)

    2006-06-15

    Continuous miners (CMs) are excavating machines designed to extract a variety of minerals by underground mining. The variance in force experienced by the cutting drum is a very important aspect that must be considered during drum design. A uniform variance essentially means that an equal load is applied on the individual cutting bits and this, in turn, enables better cutting action, greater efficiency, and longer bit and machine life. There are certain input parameters used in the drum design whose exact relationships with force variance are not clearly understood. This paper determines (1) the factors that have a significant effect on the force variance of the drum and (2) the values that can be assigned to these factors to minimize the force variance. A computer program, Continuous Miner Drum (CMD), was developed in collaboration with Kennametal, Inc. to facilitate the mechanical design of CM drums. CMD also facilitated data collection for determining significant factors affecting force variance. Six input parameters, including centre pitch, outer pitch, balance angle, shift angle, set angle and relative angle, were tested at two levels. Trials were configured using the Design of Experiments (DoE) method, where a 2^6 full-factorial experimental design was selected to investigate the effect of these factors on force variance. Results from the analysis show that all parameters except balance angle, as well as their interactions, significantly affect the force variance.
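
    A 2^6 full-factorial layout like the one described is easy to reproduce. The Python sketch below builds the 64-run design and estimates main effects from a hypothetical response; the factor names come from the record, but the response function is invented, and this is not the CMD program's calculation.

        from itertools import product

        import numpy as np

        factors = ["centre_pitch", "outer_pitch", "balance_angle",
                   "shift_angle", "set_angle", "relative_angle"]
        design = np.array(list(product([-1, 1], repeat=len(factors))))  # 64 runs

        rng = np.random.default_rng(0)
        # Hypothetical response; balance_angle (column 2) is given no effect,
        # echoing the record's finding that it was not significant.
        response = (3.0 * design[:, 0] - 2.0 * design[:, 1] + 1.5 * design[:, 3]
                    + 1.0 * design[:, 4] + 0.5 * design[:, 5]
                    + rng.normal(0.0, 0.5, len(design)))

        # Main effect of a two-level factor: mean(high runs) - mean(low runs)
        for name, column in zip(factors, design.T):
            effect = response[column == 1].mean() - response[column == -1].mean()
            print(f"{name:15s} effect = {effect:+.2f}")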

  14. Longitudinal and bulk viscosities of Lennard-Jones fluids

    Science.gov (United States)

    Tankeshwar, K.; Pathak, K. N.; Ranganathan, S.

    1996-12-01

    Expressions for the longitudinal and bulk viscosities have been derived using Green-Kubo formulae involving the time integral of the longitudinal and bulk stress autocorrelation functions. The time evolution of the stress autocorrelation functions is determined using the Mori formalism and a memory function which is obtained from the Mori equation of motion. The memory function is of hyperbolic secant form and involves two parameters which are related to the microscopic sum rules of the respective autocorrelation function. We have derived expressions for the zeroth-, second- and fourth-order sum rules of the longitudinal and bulk stress autocorrelation functions. These involve static correlation functions of up to four particles. The final expressions have been put in a form suitable for numerical calculations using low-order decoupling approximations. Numerical results have been obtained for the sum rules of the longitudinal and bulk stress autocorrelation functions. These have been used to calculate the longitudinal and bulk viscosities and the time evolution of the longitudinal stress autocorrelation function of Lennard-Jones fluids over wide ranges of densities and temperatures. We have compared our results with the available computer simulation data and found reasonable agreement.
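
    The computational core of the Green-Kubo approach, integrating a stress autocorrelation function over time, can be illustrated with a toy calculation in Python. The sech-shaped ACF below is only a stand-in suggested by the hyperbolic-secant memory function mentioned in the record, and all thermodynamic prefactors are dropped.

        import numpy as np

        t = np.linspace(0.0, 20.0, 4001)       # time grid (reduced units)
        tau = 1.5                               # assumed relaxation time
        acf = 1.0 / np.cosh(t / tau)            # model sech-shaped stress ACF

        # Trapezoidal Green-Kubo integral (viscosity up to a dropped prefactor)
        eta = np.sum(0.5 * (acf[1:] + acf[:-1]) * np.diff(t))
        print(f"integrated ACF : {eta:.4f}")
        # Analytic check: the integral of sech(t/tau) over [0, inf) is (pi/2)*tau
        print(f"analytic value : {np.pi / 2 * tau:.4f}")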

  15. 78 FR 14122 - Revocation of Permanent Variances

    Science.gov (United States)

    2013-03-04

    ... Douglas Fir planking had to have at least a 1,900 fiber stress and 1,900,000 modulus of elasticity, while the Yellow Pine planking had to have at least 2,500 fiber stress and 2,000,000 modulus of elasticity... the permanent variances, and affected employees, to submit written data, views, and arguments...

  16. Feed chute geometry for minimum belt wear

    Energy Technology Data Exchange (ETDEWEB)

    Roberts, A W; Wiche, S J [University of Newcastle, Newcastle, NSW (Australia). Centre for Bulk Solids and Particulate Technologies

    1998-09-01

    The paper is concerned with the feeding and transfer of bulk solids in conveyor belt operation. The paper focuses on chute design where the objective is to prevent spillage and minimise both chute and belt wear. It is shown that these objectives may be met through correct dynamic design of the chute and by directing the flow of bulk solids onto the belt at an acceptable incidence angle. The aim is to match the tangential velocity component of the feed velocity as close as possible to the belt velocity. At the same time, it is necessary to limit the impact pressure due to the change in momentum of the bulk solid as it feeds onto the belt. 2 refs., 8 figs.

  17. Gap-related trapped magnetic flux dependence between single and combined bulk superconductors

    International Nuclear Information System (INIS)

    Deng, Z.; Miki, M.; Felder, B.; Tsuzuki, K.; Shinohara, N.; Uetake, T.; Izumi, M.

    2011-01-01

    Highlights: → Rectangular YBCO bulks realize a compact combination. → The gap effect is taken into account in the trapped flux density mapping. → The trapped-flux dependence between single and combined bulks is gap related. → It is possible to estimate the total magnetic flux of bulk combinations. - Abstract: Aiming at examining the trapped-flux dependence between single and combined bulk superconductors for field-pole applications, three rectangular Y1.65Ba2Cu3O7-x (YBCO) bulks in a compact combination were employed to investigate the trapped-flux characteristics of single and combined bulks with a field-cooling magnetization (FCM) method. A gap-related dependence was found between them. At lower gaps of 1 mm and 5 mm, the peak trapped fields and total magnetic flux of the combined bulks are both smaller than the additive values of each single bulk, which can be ascribed to the demagnetizing influence of the field generated around each bulk by the adjacent ones. At larger gaps, such as 10 mm, the situation is reversed: the combined bulks attain bigger peak trapped fields as well as total magnetic flux, which indicates that the magnetic field of the bulk combination reaches higher gaps, thanks to the bigger magnetic energy compared with the single bulk. The presented results show that, on the one hand, it is possible to estimate the total trapped magnetic flux of combined bulks by an approximately additive method over each single bulk while considering a demagnetization factor; on the other hand, the performance of combined bulks will be superior to the addition of each single bulk at larger gaps, and is thus preferable for large-scale magnet applications.

  18. Gap-related trapped magnetic flux dependence between single and combined bulk superconductors

    Energy Technology Data Exchange (ETDEWEB)

    Deng, Z., E-mail: zgdeng@gmail.co [Laboratory of Applied Physics, Department of Marine Electronics and Mechanical Engineering, Tokyo University of Marine Science and Technology, Tokyo 135-8533 (Japan); Miki, M.; Felder, B.; Tsuzuki, K.; Shinohara, N.; Uetake, T.; Izumi, M. [Laboratory of Applied Physics, Department of Marine Electronics and Mechanical Engineering, Tokyo University of Marine Science and Technology, Tokyo 135-8533 (Japan)

    2011-05-15

    Highlights: → Rectangular YBCO bulks realize a compact combination. → The gap effect is taken into account in the trapped flux density mapping. → The trapped-flux dependence between single and combined bulks is gap related. → It is possible to estimate the total magnetic flux of bulk combinations. - Abstract: Aiming at examining the trapped-flux dependence between single and combined bulk superconductors for field-pole applications, three rectangular Y1.65Ba2Cu3O7-x (YBCO) bulks in a compact combination were employed to investigate the trapped-flux characteristics of single and combined bulks with a field-cooling magnetization (FCM) method. A gap-related dependence was found between them. At lower gaps of 1 mm and 5 mm, the peak trapped fields and total magnetic flux of the combined bulks are both smaller than the additive values of each single bulk, which can be ascribed to the demagnetizing influence of the field generated around each bulk by the adjacent ones. At larger gaps, such as 10 mm, the situation is reversed: the combined bulks attain bigger peak trapped fields as well as total magnetic flux, which indicates that the magnetic field of the bulk combination reaches higher gaps, thanks to the bigger magnetic energy compared with the single bulk. The presented results show that, on the one hand, it is possible to estimate the total trapped magnetic flux of combined bulks by an approximately additive method over each single bulk while considering a demagnetization factor; on the other hand, the performance of combined bulks will be superior to the addition of each single bulk at larger gaps, and is thus preferable for large-scale magnet applications.

  19. Optimal control of LQG problem with an explicit trade-off between mean and variance

    Science.gov (United States)

    Qian, Fucai; Xie, Guo; Liu, Ding; Xie, Wenfang

    2011-12-01

    For discrete-time linear-quadratic Gaussian (LQG) control problems, a utility function on the expectation and the variance of the conventional performance index is considered. The utility function is viewed as an overall objective of the system and can perform the optimal trade-off between the mean and the variance of the performance index. The nonlinear utility function is first converted into an auxiliary parameter optimisation problem over the expectation and the variance. Then an optimal closed-loop feedback controller for the nonseparable mean-variance minimisation problem is designed by nonlinear mathematical programming. Finally, simulation results are given to verify the effectiveness of the algorithm developed in this article.

  20. Onset of bulk pinning in BSCCO single crystals

    Science.gov (United States)

    van der Beek, C. J.; Indenbom, M. V.; Berseth, V.; Li, T. W.; Benoit, W.

    1996-11-01

    The long growth defects often found in Bi2Sr2CaCu2O8 “single” crystals effectively weaken the geometrical barrier and lower the field of first flux penetration. This means that the intrinsic (bulk) magnetic properties can be more easily accessed using magnetic measurements. Thus, the onset of strong flux pinning in the sample bulk is determined to lie at T ≈ 40 K, independent of whether the field strength is above or below the field of the second peak in the magnetisation.

  1. Estimation Methods for Non-Homogeneous Regression - Minimum CRPS vs Maximum Likelihood

    Science.gov (United States)

    Gebetsberger, Manuel; Messner, Jakob W.; Mayr, Georg J.; Zeileis, Achim

    2017-04-01

    Non-homogeneous regression models are widely used to statistically post-process numerical weather prediction models. Such regression models correct for errors in mean and variance and are capable of forecasting a full probability distribution. In order to estimate the corresponding regression coefficients, CRPS minimization has been performed in many meteorological post-processing studies over the last decade. In contrast to maximum likelihood estimation, CRPS minimization is claimed to yield more calibrated forecasts. Theoretically, both scoring rules used as an optimization score should be able to locate a similar and unknown optimum. Discrepancies might result from a wrong distributional assumption about the observed quantity. To address this theoretical concept, this study compares maximum likelihood and minimum CRPS estimation for different distributional assumptions. First, a synthetic case study shows that, for an appropriate distributional assumption, both estimation methods yield similar regression coefficients. The log-likelihood estimator is slightly more efficient. A real-world case study for surface temperature forecasts at different sites in Europe confirms these results but shows that surface temperature does not always follow the classical assumption of a Gaussian distribution. KEYWORDS: ensemble post-processing, maximum likelihood estimation, CRPS minimization, probabilistic temperature forecasting, distributional regression models
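
    Both estimators are simple to compare on synthetic Gaussian data, using the closed-form CRPS of a normal distribution. The Python sketch below fits a constant mean and spread by maximum likelihood and by minimum CRPS; with a correct distributional assumption the two sets of coefficients should come out close, matching the study's synthetic-case finding. No post-processing covariates are included.

        import numpy as np
        from scipy import optimize, stats

        rng = np.random.default_rng(42)
        y = rng.normal(2.0, 1.5, 500)           # stand-in "observations"

        def crps_normal(mu, sigma, y):
            # Closed-form CRPS of a N(mu, sigma^2) forecast (Gneiting-Raftery)
            z = (y - mu) / sigma
            return sigma * (z * (2 * stats.norm.cdf(z) - 1)
                            + 2 * stats.norm.pdf(z) - 1 / np.sqrt(np.pi))

        def neg_loglik(p):
            return -stats.norm.logpdf(y, p[0], np.exp(p[1])).sum()

        def mean_crps(p):
            return crps_normal(p[0], np.exp(p[1]), y).mean()

        ml = optimize.minimize(neg_loglik, x0=[0.0, 0.0])
        cr = optimize.minimize(mean_crps, x0=[0.0, 0.0])
        print("ML  : mu, sigma =", ml.x[0], np.exp(ml.x[1]))
        print("CRPS: mu, sigma =", cr.x[0], np.exp(cr.x[1]))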

  2. Deterministic mean-variance-optimal consumption and investment

    DEFF Research Database (Denmark)

    Christiansen, Marcus; Steffensen, Mogens

    2013-01-01

    In dynamic optimal consumption–investment problems one typically aims to find an optimal control from the set of adapted processes. This is also the natural starting point in case of a mean-variance objective. In contrast, we solve the optimization problem with the special feature that the consumption rate and the investment proportion are constrained to be deterministic processes. As a result we get rid of a series of unwanted features of the stochastic solution, including diffusive consumption, satisfaction points and consistency problems. Deterministic strategies typically appear in unit-linked life insurance contracts, where the life-cycle investment strategy is age dependent but wealth independent. We explain how optimal deterministic strategies can be found numerically and present an example from life insurance where we compare the optimal solution with suboptimal deterministic strategies...

  3. Variance estimation in the analysis of microarray data

    KAUST Repository

    Wang, Yuedong; Ma, Yanyuan; Carroll, Raymond J.

    2009-01-01

    Microarrays are one of the most widely used high throughput technologies. One of the main problems in the area is that conventional estimates of the variances that are required in the t-statistic and other statistics are unreliable owing

  4. AN ADAPTIVE OPTIMAL KALMAN FILTER FOR STOCHASTIC VIBRATION CONTROL SYSTEM WITH UNKNOWN NOISE VARIANCES

    Institute of Scientific and Technical Information of China (English)

    Li Shu; Zhuo Jiashou; Ren Qingwen

    2000-01-01

    In this paper, an optimal criterion is presented for an adaptive Kalman filter in a control system with unknown variances of stochastic vibration, by constructing a function of the noise variances and minimizing that function. We solve for the model and measurement variances by using the DFP optimization method, guaranteeing that the results of the Kalman filter are optimized. Finally, the control of vibration can be implemented by the LQG method.
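
    The paper's DFP-based variance optimization is not reproduced here, but the flavour of an adaptive Kalman filter can be shown with a scalar Python example in which the unknown measurement-noise variance is re-estimated from recent innovations. The moving-average adaptation rule is a simplified stand-in, not the authors' criterion.

        import numpy as np

        rng = np.random.default_rng(7)
        n, Q, R_true = 500, 0.01, 0.25
        x_true = np.cumsum(rng.normal(0.0, np.sqrt(Q), n))      # random walk
        z = x_true + rng.normal(0.0, np.sqrt(R_true), n)        # measurements

        x, P, R = 0.0, 1.0, 1.0        # R starts deliberately wrong
        innovations = []
        for k in range(n):
            P_pred = P + Q                     # predict (state transition F = 1)
            nu = z[k] - x                      # innovation
            K = P_pred / (P_pred + R)          # Kalman gain
            x = x + K * nu                     # update state estimate
            P = (1 - K) * P_pred               # update estimate variance
            innovations.append(nu)
            if len(innovations) >= 30:
                # Innovation variance should match P_pred + R; re-solve for R
                C = np.var(innovations[-30:])
                R = max(C - P_pred, 1e-6)

        print(f"adapted R = {R:.3f} (true R = {R_true})")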

  5. 30 CFR 281.30 - Minimum royalty.

    Science.gov (United States)

    2010-07-01

    Title 30, Mineral Resources - MINERALS MANAGEMENT SERVICE, DEPARTMENT OF THE INTERIOR - OFFSHORE LEASING OF MINERALS OTHER THAN OIL, GAS, AND SULPHUR IN THE OUTER CONTINENTAL SHELF - Financial Considerations - § 281.30 Minimum royalty...

  6. The drift-diffusion interpretation of the electron current within the organic semiconductor characterized by the bulk single energy trap level

    Science.gov (United States)

    Cvikl, B.

    2010-01-01

    The closed solution for the internal electric field and the total charge density, derived in the drift-diffusion approximation for a model of a single-layer organic semiconductor structure characterized by a bulk shallow single trap-charge energy level, is presented. The solutions for two examples of electric field boundary conditions are tested on room-temperature current density-voltage data of the electron-conducting aluminum/tris(8-hydroxyquinoline) aluminum/calcium structure [W. Brütting et al., Synth. Met. 122, 99 (2001)], for which j_exp ∝ V_a^3.4 within the bias interval 0.4 V ≤ V_a ≤ 7 V. In each case investigated, the apparent electron mobility determined at a given bias is distributed within a given, finite interval of values. The bias dependence of the logarithm of the lower limit of these values, i.e., their minimum, is found in each case to be, to a good approximation, proportional to the square root of the applied electric field. On account of the bias dependence incorporated in the minimum value of the apparent electron mobility, the spatial distribution of the organic bulk electric field as well as the total charge density turn out to be bias independent. The first case investigated is based on the boundary condition of zero electric field at the electron injection interface. It is shown that for minimum-valued apparent mobilities a strong but finite accumulation of electrons close to the anode is obtained, which characterizes the inverted space charge limited current (SCLC) effect. The second example refers to an internal electric field allowing for self-adjustment of its boundary values. The total electron charge density is then found typically to be U shaped, and may, depending on the parameters, peak at both or at either Alq3 boundary. It is in this example that the proper SCLC effect is consequently predicted. In each of the above two cases, the calculations predict minimum values of the electron apparent mobility which substantially...

  7. State cigarette minimum price laws - United States, 2009.

    Science.gov (United States)

    2010-04-09

    Cigarette price increases reduce the demand for cigarettes and thereby reduce smoking prevalence, cigarette consumption, and youth initiation of smoking. Excise tax increases are the most effective government intervention to increase the price of cigarettes, but cigarette manufacturers use trade discounts, coupons, and other promotions to counteract the effects of these tax increases and appeal to price-sensitive smokers. State cigarette minimum price laws, initiated by states in the 1940s and 1950s to protect tobacco retailers from predatory business practices, typically require a minimum percentage markup to be added to the wholesale and/or retail price. If a statute prohibits trade discounts from the minimum price calculation, these laws have the potential to counteract discounting by cigarette manufacturers. To assess the status of cigarette minimum price laws in the United States, CDC surveyed state statutes and identified those states with minimum price laws in effect as of December 31, 2009. This report summarizes the results of that survey, which determined that 25 states had minimum price laws for cigarettes (median wholesale markup: 4.00%; median retail markup: 8.00%), and seven of those states also expressly prohibited the use of trade discounts in the minimum retail price calculation. Minimum price laws can help prevent trade discounting from eroding the positive effects of state excise tax increases and higher cigarette prices on public health.
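
    The mechanics of the percentage markups can be sketched in a few lines of Python, using the median markups reported above. The dollar figures are invented, and actual statutory formulas vary by state; the sketch only shows why prohibiting trade discounts from the calculation keeps the price floor from eroding.

        list_price = 5.00          # hypothetical manufacturer price per pack
        trade_discount = 0.50      # hypothetical per-pack trade discount
        wholesale_markup = 0.04    # median wholesale markup (4.00%)
        retail_markup = 0.08       # median retail markup (8.00%)

        def minimum_retail(base):
            # Apply the statutory markups at each distribution tier
            wholesale = base * (1 + wholesale_markup)
            return wholesale * (1 + retail_markup)

        # Discount allowed in the calculation: the price floor falls with it
        print(f"floor with discount   : {minimum_retail(list_price - trade_discount):.2f}")
        # Discount expressly prohibited: the floor stays anchored to list price
        print(f"floor, discount barred: {minimum_retail(list_price):.2f}")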

  8. Decomposing variation in male reproductive success: age-specific variances and covariances through extra-pair and within-pair reproduction.

    Science.gov (United States)

    Lebigre, Christophe; Arcese, Peter; Reid, Jane M

    2013-07-01

    Age-specific variances and covariances in reproductive success shape the total variance in lifetime reproductive success (LRS), age-specific opportunities for selection, and population demographic variance and effective size. Age-specific (co)variances in reproductive success achieved through different reproductive routes must therefore be quantified to predict population, phenotypic and evolutionary dynamics in age-structured populations. While numerous studies have quantified age-specific variation in mean reproductive success, age-specific variances and covariances in reproductive success, and the contributions of different reproductive routes to these (co)variances, have not been comprehensively quantified in natural populations. We applied 'additive' and 'independent' methods of variance decomposition to complete data describing apparent (social) and realised (genetic) age-specific reproductive success across 11 cohorts of socially monogamous but genetically polygynandrous song sparrows (Melospiza melodia). We thereby quantified age-specific (co)variances in male within-pair and extra-pair reproductive success (WPRS and EPRS) and the contributions of these (co)variances to the total variances in age-specific reproductive success and LRS. 'Additive' decomposition showed that within-age and among-age (co)variances in WPRS across males aged 2-4 years contributed most to the total variance in LRS. Age-specific (co)variances in EPRS contributed relatively little. However, extra-pair reproduction altered age-specific variances in reproductive success relative to the social mating system, and hence altered the relative contributions of age-specific reproductive success to the total variance in LRS. 'Independent' decomposition showed that the (co)variances in age-specific WPRS, EPRS and total reproductive success, and the resulting opportunities for selection, varied substantially across males that survived to each age. Furthermore, extra-pair reproduction increased

  9. On the bulk viscosity of relativistic matter

    International Nuclear Information System (INIS)

    Canuto, V.; Hsieh, S.-H.

    1978-01-01

    An expression for the bulk viscosity coefficient in terms of the trace of the hydrodynamic energy-stress tensor is derived from the Kubo formula. This, along with a field-theoretic model of an interacting system of scalar particles, suggests that at high temperatures the bulk viscosity tends to zero, contrary to the often quoted results of Iso, Mori and Namiki. (author)

  10. Some asymptotic theory for variance function smoothing | Kibua ...

    African Journals Online (AJOL)

    Simple selection of the smoothing parameter is suggested. Both homoscedastic and heteroscedastic regression models are considered. Keywords: Asymptotic, Smoothing, Kernel, Bandwidth, Bias, Variance, Mean squared error, Homoscedastic, Heteroscedastic. East African Journal of Statistics Vol. 1 (1) 2005: pp. 9-22

  11. Properties of realized variance under alternative sampling schemes

    NARCIS (Netherlands)

    Oomen, R.C.A.

    2006-01-01

    This paper investigates the statistical properties of the realized variance estimator in the presence of market microstructure noise. Different from the existing literature, the analysis relies on a pure jump process for high frequency security prices and explicitly distinguishes among alternative
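
    The basic realized variance estimator, and the way microstructure noise biases it at high sampling frequencies, can be demonstrated with a short Python simulation. The sketch uses a pure diffusion plus i.i.d. noise rather than the paper's pure jump process, so it only illustrates the estimator itself.

        import numpy as np

        rng = np.random.default_rng(3)
        n = 23_400                               # one "day" of 1-second steps
        sigma_day = 0.02                         # true daily volatility
        p = np.cumsum(rng.normal(0.0, sigma_day / np.sqrt(n), n))  # log-price
        p_noisy = p + rng.normal(0.0, 0.0005, n)  # i.i.d. microstructure noise

        for step in (1, 5, 60, 300):             # sample every `step` seconds
            r = np.diff(p_noisy[::step])
            print(f"every {step:3d}s: RV = {np.sum(r**2):.6f} "
                  f"(true IV = {sigma_day**2:.6f})")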

  12. Micro benchtop optics by bulk silicon micromachining

    Science.gov (United States)

    Lee, Abraham P.; Pocha, Michael D.; McConaghy, Charles F.; Deri, Robert J.

    2000-01-01

    Micromachining of bulk silicon utilizing the parallel etching characteristics of bulk silicon and integrating the parallel etch planes of silicon with silicon wafer bonding and impurity doping, enables the fabrication of on-chip optics with in situ aligned etched grooves for optical fibers, micro-lenses, photodiodes, and laser diodes. Other optical components that can be microfabricated and integrated include semi-transparent beam splitters, micro-optical scanners, pinholes, optical gratings, micro-optical filters, etc. Micromachining of bulk silicon utilizing the parallel etching characteristics thereof can be utilized to develop miniaturization of bio-instrumentation such as wavelength monitoring by fluorescence spectrometers, and other miniaturized optical systems such as Fabry-Perot interferometry for filtering of wavelengths, tunable cavity lasers, micro-holography modules, and wavelength splitters for optical communication systems.

  13. Analysis of a genetically structured variance heterogeneity model using the Box-Cox transformation

    DEFF Research Database (Denmark)

    Yang, Ye; Christensen, Ole Fredslund; Sorensen, Daniel

    2011-01-01

    ...of the marginal distribution of the data. To investigate how the scale of measurement affects inferences, the genetically structured heterogeneous variance model is extended to accommodate the family of Box–Cox transformations. Litter size data in rabbits and pigs that had previously been analysed in the untransformed scale were reanalysed in a scale equal to the mode of the marginal posterior distribution of the Box–Cox parameter. In the rabbit data, the statistical evidence for a genetic component at the level of the environmental variance is considerably weaker than that resulting from an analysis in the original metric. In the pig data, the statistical evidence is stronger, but the coefficient of correlation between additive genetic effects affecting mean and variance changes sign, compared to the results in the untransformed scale. The study confirms that inferences on variances can be strongly affected...

  14. Gap-related trapped magnetic flux dependence between single and combined bulk superconductors

    Science.gov (United States)

    Deng, Z.; Miki, M.; Felder, B.; Tsuzuki, K.; Shinohara, N.; Uetake, T.; Izumi, M.

    2011-05-01

    Aiming at examining the trapped-flux dependence between single and combined bulk superconductors for field-pole applications, three rectangular Y1.65Ba2Cu3O7-x (YBCO) bulks in a compact combination were employed to investigate the trapped-flux characteristics of single and combined bulks with a field-cooling magnetization (FCM) method. A gap-related dependence was found between them. At lower gaps of 1 mm and 5 mm, the peak trapped fields and total magnetic flux of the combined bulks are both smaller than the additive values of each single bulk, which can be ascribed to the demagnetizing influence of the field generated around each bulk by the adjacent ones. At larger gaps, such as 10 mm, the situation is reversed: the combined bulks attain bigger peak trapped fields as well as total magnetic flux, which indicates that the magnetic field of the bulk combination reaches higher gaps, thanks to the bigger magnetic energy compared with the single bulk. The presented results show that, on the one hand, it is possible to estimate the total trapped magnetic flux of combined bulks by an approximately additive method over each single bulk while considering a demagnetization factor; on the other hand, the performance of combined bulks will be superior to the addition of each single bulk at larger gaps, and is thus preferable for large-scale magnet applications.

  15. Synthesis of Bulk Superconducting Magnesium Diboride

    Directory of Open Access Journals (Sweden)

    Margie Olbinado

    2002-06-01

    Bulk polycrystalline superconducting magnesium diboride, MgB2, samples were successfully prepared via a one-step sintering program at 750°C in pure Argon at a pressure of 1 atm. Both electrical resistivity and magnetic susceptibility measurements confirmed the superconductivity of the material at 39 K, with a transition width of 5 K. The polycrystalline nature, granular morphology, and composition of the sintered bulk material were confirmed using X-ray diffractometry (XRD), scanning electron microscopy (SEM), and energy-dispersive X-ray analysis (EDX).

  16. Right on Target, or Is it? The Role of Distributional Shape in Variance Targeting

    Directory of Open Access Journals (Sweden)

    Stanislav Anatolyev

    2015-08-01

    Estimation of GARCH models can be simplified by augmenting quasi-maximum likelihood (QML) estimation with variance targeting, which reduces the degree of parameterization and facilitates estimation. We compare the two approaches and investigate, via simulations, how non-normality features of the return distribution affect the quality of estimation of the volatility equation and corresponding value-at-risk predictions. We find that most GARCH coefficients and associated predictions are more precisely estimated when no variance targeting is employed. Bias properties are exacerbated for a heavier-tailed distribution of standardized returns, while the distributional asymmetry has little or moderate impact, with these phenomena tending to be more pronounced under variance targeting. Some effects further intensify if one uses ML based on a leptokurtic distribution in place of normal QML. The sample size also has a more favorable effect on estimation precision when no variance targeting is used. Thus, if computational costs are not prohibitive, variance targeting should probably be avoided.
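
    Variance targeting is straightforward to sketch for a GARCH(1,1): the intercept omega is tied to the sample variance, leaving only (alpha, beta) to be optimized by QML. The following Python sketch simulates a series and re-estimates it under the targeting constraint; the parameter values and optimizer settings are illustrative choices, not the paper's simulation design.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)

        def simulate_garch(n, omega, alpha, beta):
            r, s2 = np.empty(n), omega / (1 - alpha - beta)
            for t in range(n):
                r[t] = np.sqrt(s2) * rng.standard_normal()
                s2 = omega + alpha * r[t] ** 2 + beta * s2
            return r

        r = simulate_garch(5000, omega=0.05, alpha=0.08, beta=0.90)
        target = r.var()                   # the variance target

        def neg_qll(params):
            alpha, beta = params
            if alpha < 0 or beta < 0 or alpha + beta >= 0.999:
                return 1e10                # crude stationarity barrier
            omega = target * (1 - alpha - beta)   # targeting constraint
            s2, ll = target, 0.0
            for x in r:
                ll += -0.5 * (np.log(s2) + x ** 2 / s2)
                s2 = omega + alpha * x ** 2 + beta * s2
            return -ll

        res = minimize(neg_qll, x0=[0.05, 0.80], method="Nelder-Mead")
        alpha, beta = res.x
        print("alpha, beta =", alpha, beta)
        print("implied omega =", target * (1 - alpha - beta))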

  17. 9 CFR 147.51 - Authorized laboratory minimum requirements.

    Science.gov (United States)

    2010-01-01

    Title 9, Animals and Animal Products - ANIMAL AND PLANT HEALTH INSPECTION SERVICE... - Authorized Laboratories and Approved Tests - § 147.51 Authorized laboratory minimum requirements. These minimum...

  18. Delocalization of brane gravity by a bulk black hole

    International Nuclear Information System (INIS)

    Seahra, Sanjeev S; Clarkson, Chris; Maartens, Roy

    2005-01-01

    We investigate the analogue of the Randall-Sundrum braneworld in the case when the bulk contains a black hole. Instead of the static vacuum Minkowski brane of the RS model, we have an Einstein static vacuum brane. We find that the presence of the bulk black hole has a dramatic effect on the gravity that is felt by brane observers. In the RS model, the 5D graviton has a stable localized zero mode that reproduces 4D gravity on the brane at low energies. With a bulk black hole, there is no such solution: gravity is delocalized by the 5D horizon. However, the brane does support a discrete spectrum of metastable massive bound states, or quasinormal modes, as was recently shown to be the case in the RS scenario. These states should dominate the high-frequency component of the bulk gravity wave spectrum on a cosmological brane. We expect our results to generalize to any bulk spacetime containing a Killing horizon. (letter to the editor)

  19. Bond-diluted interface between semi-infinite Potts bulks: criticality

    International Nuclear Information System (INIS)

    Cavalcanti, S.B.; Tsallis, C.

    1986-01-01

    Within a real-space renormalisation-group framework, we discuss the criticality of a system constituted by two (not necessarily equal) semi-infinite ferromagnetic q-state Potts bulks separated by an interface. This interface is a bond-diluted Potts ferromagnet with a coupling constant which is in general different from those of both bulks. The phase diagram presents four physically different phases, namely the paramagnetic one, and the surface, single-bulk and double-bulk ferromagnetic ones. These various phases determine a multicritical surface which contains a higher-order multicritical line. The critical concentration p_c, that is, the concentration of interface bonds above which surface magnetic ordering is possible even if the bulks are disordered, is determined. An interesting feature that emerges is that p_c varies continuously with J_1/J_s and J_2/J_s. The standard two-dimensional percolation concentration is recovered for J_1 = J_2 = 0. (author) [pt

  20. Variance analysis refines overhead cost control.

    Science.gov (United States)

    Cooper, J C; Suver, J D

    1992-02-01

    Many healthcare organizations may not fully realize the benefits of standard cost accounting techniques because they fail to routinely report volume variances in their internal reports. If overhead allocation is routinely reported on internal reports, managers can determine whether billing remains current or lost charges occur. Healthcare organizations' use of standard costing techniques can lead to more realistic performance measurements and information system improvements that alert management to losses from unrecovered overhead in time for corrective action.
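
    The volume-variance computation the article recommends reporting amounts to comparing overhead applied at actual volume with overhead budgeted at planned volume. The numbers in the following Python sketch are invented for illustration.

        budgeted_volume = 10_000      # planned procedures (hypothetical)
        actual_volume = 9_200         # procedures actually performed
        overhead_rate = 12.50         # budgeted fixed overhead per procedure

        applied = actual_volume * overhead_rate
        budgeted = budgeted_volume * overhead_rate
        volume_variance = applied - budgeted   # negative = unrecovered overhead
        print(f"volume variance: {volume_variance:+,.2f}")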

  1. Geometric representation of the mean-variance-skewness portfolio frontier based upon the shortage function

    OpenAIRE

    Kerstens, Kristiaan; Mounier, Amine; Van de Woestyne, Ignace

    2008-01-01

    The literature suggests that investors prefer portfolios based on mean, variance and skewness rather than portfolios based on mean-variance (MV) criteria solely. Furthermore, a small variety of methods have been proposed to determine mean-variance-skewness (MVS) optimal portfolios. Recently, the shortage function has been introduced as a measure of efficiency, allowing to characterize MVS optimal portfolios using non-parametric mathematical programming tools. While tracing the MV portfolio fro...
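
    The shortage-function machinery itself is beyond a snippet, but the mean-variance building block it extends can be shown directly: the global minimum-variance portfolio has the closed form w = Σ^{-1}1 / (1'Σ^{-1}1). The covariance matrix in the Python sketch below is hypothetical.

        import numpy as np

        # Hypothetical 3-asset covariance matrix
        Sigma = np.array([[0.040, 0.006, 0.012],
                          [0.006, 0.090, 0.010],
                          [0.012, 0.010, 0.160]])
        ones = np.ones(len(Sigma))
        w = np.linalg.solve(Sigma, ones)
        w /= w.sum()                   # normalize so the weights sum to one
        print("GMV weights       :", np.round(w, 3))
        print("portfolio variance:", w @ Sigma @ w)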

  2. Experimental investigations of the minimum ignition energy and the minimum ignition temperature of inert and combustible dust cloud mixtures.

    Science.gov (United States)

    Addai, Emmanuel Kwasi; Gabel, Dieter; Krause, Ulrich

    2016-04-15

    The risks associated with dust explosions still exist in industries that either process or handle combustible dust. This explosion risk could be prevented or mitigated by applying the principle of inherent safety (moderation), achieved by adding an inert material to a highly combustible material in order to decrease the ignition sensitivity of the combustible dust. The presented paper deals with the experimental investigation of the influence of adding an inert dust on the minimum ignition energy and the minimum ignition temperature of combustible/inert dust mixtures. The experimental investigation was done in two laboratory-scale apparatus: the Hartmann apparatus and the Godbert-Greenwald furnace, for the minimum ignition energy and the minimum ignition temperature tests respectively. This was achieved by mixing various amounts of three inert materials (magnesium oxide, ammonium sulphate and sand) and six combustible dusts (brown coal, lycopodium, toner, niacin, corn starch and high-density polyethylene). Generally, increasing the inert material concentration increases the minimum ignition energy as well as the minimum ignition temperature, until a threshold is reached where no ignition is obtained. The permissible range for the inert mixture to minimize the ignition risk lies between 60 and 80%.

  3. Is fMRI "noise" really noise? Resting state nuisance regressors remove variance with network structure.

    Science.gov (United States)

    Bright, Molly G; Murphy, Kevin

    2015-07-01

    Noise correction is a critical step towards accurate mapping of resting state BOLD fMRI connectivity. Noise sources related to head motion or physiology are typically modelled by nuisance regressors, and a generalised linear model is applied to regress out the associated signal variance. In this study, we use independent component analysis (ICA) to characterise the data variance typically discarded in this pre-processing stage in a cohort of 12 healthy volunteers. The signal variance removed by 24, 12, 6, or only 3 head motion parameters demonstrated network structure typically associated with functional connectivity, and certain networks were discernable in the variance extracted by as few as 2 physiologic regressors. Simulated nuisance regressors, unrelated to the true data noise, also removed variance with network structure, indicating that any group of regressors that randomly sample variance may remove highly structured "signal" as well as "noise." Furthermore, to support this we demonstrate that random sampling of the original data variance continues to exhibit robust network structure, even when as few as 10% of the original volumes are considered. Finally, we examine the diminishing returns of increasing the number of nuisance regressors used in pre-processing, showing that excessive use of motion regressors may do little better than chance in removing variance within a functional network. It remains an open challenge to understand the balance between the benefits and confounds of noise correction using nuisance regressors.
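
    The regression step itself is ordinary least squares: build a design matrix from the nuisance time series, fit, and keep the residual. The Python sketch below shows the mechanics on synthetic data; real pipelines operate voxel-wise on 4D images, and the regressors here are random stand-ins for motion traces.

        import numpy as np

        rng = np.random.default_rng(11)
        T = 200                                     # number of volumes
        nuisance = rng.standard_normal((T, 6))      # stand-in motion traces
        signal = np.sin(np.linspace(0.0, 8 * np.pi, T))   # "network" signal
        leak = np.array([0.5, -0.3, 0.2, 0.1, -0.4, 0.25])
        voxel = signal + nuisance @ leak + 0.3 * rng.standard_normal(T)

        X = np.column_stack([np.ones(T), nuisance])       # design + intercept
        beta, *_ = np.linalg.lstsq(X, voxel, rcond=None)
        cleaned = voxel - X @ beta                        # residual time series
        print("corr(cleaned, true signal):",
              np.corrcoef(cleaned, signal)[0, 1])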

  4. Improved estimation of the variance in Monte Carlo criticality calculations

    International Nuclear Information System (INIS)

    Hoogenboom, J. Eduard

    2008-01-01

    Results for the effective multiplication factor in Monte Carlo criticality calculations are often obtained from averages over a number of cycles or batches after convergence of the fission source distribution to the fundamental mode. The standard deviation of the effective multiplication factor is then also obtained from the k_eff results over these cycles. As the number of cycles will be rather small, the estimate of the variance or standard deviation in k_eff will not be very reliable, certainly not for the first few cycles after source convergence. In this paper the statistics for k_eff are based on the generation of new fission neutron weights during each history in a cycle. It is shown that this gives much more reliable results for the standard deviation even after a small number of cycles. Attention is also paid to the variance of the variance (VoV) and the standard deviation of the standard deviation. A derivation is given of how to obtain an unbiased estimate of the VoV, even for a small number of samples. (authors)
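
    The unreliability of a standard deviation estimated from few cycles is easy to visualize. In the Python sketch below, i.i.d. normal draws stand in for per-cycle k_eff values (real cycles are also correlated, which biases the naive estimate further), and the spread of the estimated standard deviation across repeated experiments is tabulated.

        import numpy as np

        rng = np.random.default_rng(5)
        true_sd = 0.002                  # assumed per-cycle spread of k_eff
        for n_cycles in (10, 30, 100, 1000):
            sds = [rng.normal(1.0, true_sd, n_cycles).std(ddof=1)
                   for _ in range(2000)]
            print(f"{n_cycles:5d} cycles: sd estimate = {np.mean(sds):.5f} "
                  f"+/- {np.std(sds):.5f}  (true {true_sd})")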

  5. Improved estimation of the variance in Monte Carlo criticality calculations

    Energy Technology Data Exchange (ETDEWEB)

    Hoogenboom, J. Eduard [Delft University of Technology, Delft (Netherlands)

    2008-07-01

    Results for the effective multiplication factor in Monte Carlo criticality calculations are often obtained from averages over a number of cycles or batches after convergence of the fission source distribution to the fundamental mode. The standard deviation of the effective multiplication factor is then also obtained from the k_eff results over these cycles. As the number of cycles will be rather small, the estimate of the variance or standard deviation in k_eff will not be very reliable, certainly not for the first few cycles after source convergence. In this paper the statistics for k_eff are based on the generation of new fission neutron weights during each history in a cycle. It is shown that this gives much more reliable results for the standard deviation even after a small number of cycles. Attention is also paid to the variance of the variance (VoV) and the standard deviation of the standard deviation. A derivation is given of how to obtain an unbiased estimate of the VoV, even for a small number of samples. (authors)

  6. A general transform for variance reduction in Monte Carlo simulations

    International Nuclear Information System (INIS)

    Becker, T.L.; Larsen, E.W.

    2011-01-01

    This paper describes a general transform to reduce the variance of the Monte Carlo estimate of some desired solution, such as flux or biological dose. This transform implicitly includes many standard variance reduction techniques, including source biasing, collision biasing, the exponential transform for path-length stretching, and weight windows. Rather than optimizing each of these techniques separately or choosing semi-empirical biasing parameters based on the experience of a seasoned Monte Carlo practitioner, this General Transform unites all these variance reduction techniques to achieve one objective: a distribution of Monte Carlo particles that attempts to optimize the desired solution. Specifically, this transform allows Monte Carlo particles to be distributed according to the user's specification by using information obtained from a computationally inexpensive deterministic simulation of the problem. For this reason, we consider the General Transform to be a hybrid Monte Carlo/deterministic method. The numerical results confirm that the General Transform distributes particles according to the user-specified distribution and generally provides reasonable results for shielding applications. (author)
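
    Source biasing is an instance of importance sampling, which is easy to show on a toy rare-event problem: sample from a biased density q, weight each score by p/q, and the estimator remains unbiased while the tail is sampled far more often. The Python sketch below is an illustration of that principle, not a transport calculation.

        import numpy as np

        rng = np.random.default_rng(2)
        n, threshold = 100_000, 10.0
        score = lambda x: (x > threshold).astype(float)   # rare-event tally

        x_p = rng.exponential(1.0, n)     # analog sampling from p = Exp(1)
        x_q = rng.exponential(10.0, n)    # biased sampling from q = Exp(mean 10)
        w = np.exp(-x_q) / (np.exp(-x_q / 10.0) / 10.0)   # weights p(x)/q(x)

        print("exact    :", np.exp(-threshold))
        print("analog   :", score(x_p).mean())
        print("weighted :", (score(x_q) * w).mean())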

  7. Effects of bulk charged impurities on the bulk and surface transport in three-dimensional topological insulators

    Energy Technology Data Exchange (ETDEWEB)

    Skinner, B.; Chen, T.; Shklovskii, B. I., E-mail: shklovsk@physics.spa.umn.edu [University of Minnesota, Fine Theoretical Physics Institute (United States)

    2013-09-15

    In the three-dimensional topological insulator (TI), the physics of doped semiconductors exists literally side-by-side with the physics of ultrarelativistic Dirac fermions. This unusual pairing creates a novel playground for studying the interplay between disorder and electronic transport. In this mini-review, we focus on the disorder caused by the three-dimensionally distributed charged impurities that are ubiquitous in TIs, and we outline the effects it has on both the bulk and surface transport in TIs. We present self-consistent theories for Coulomb screening both in the bulk and at the surface, discuss the magnitude of the disorder potential in each case, and present results for the conductivity. In the bulk, where the band gap leads to thermally activated transport, we show how disorder leads to a smaller-than-expected activation energy that gives way to variable-range hopping at low temperatures. We confirm this enhanced conductivity with numerical simulations that also allow us to explore different degrees of impurity compensation. For the surface, where the TI has gapless Dirac modes, we present a theory of disorder and screening of deep impurities, and we calculate the corresponding zero-temperature conductivity. We also comment on the growth of the disorder potential in passing from the surface of the TI into the bulk. Finally, we discuss how the presence of a gap at the Dirac point, introduced by some source of time-reversal symmetry breaking, affects the disorder potential at the surface and the mid-gap density of states.

  8. Effects of bulk charged impurities on the bulk and surface transport in three-dimensional topological insulators

    International Nuclear Information System (INIS)

    Skinner, B.; Chen, T.; Shklovskii, B. I.

    2013-01-01

    In the three-dimensional topological insulator (TI), the physics of doped semiconductors exists literally side-by-side with the physics of ultrarelativistic Dirac fermions. This unusual pairing creates a novel playground for studying the interplay between disorder and electronic transport. In this mini-review, we focus on the disorder caused by the three-dimensionally distributed charged impurities that are ubiquitous in TIs, and we outline the effects it has on both the bulk and surface transport in TIs. We present self-consistent theories for Coulomb screening both in the bulk and at the surface, discuss the magnitude of the disorder potential in each case, and present results for the conductivity. In the bulk, where the band gap leads to thermally activated transport, we show how disorder leads to a smaller-than-expected activation energy that gives way to variable-range hopping at low temperatures. We confirm this enhanced conductivity with numerical simulations that also allow us to explore different degrees of impurity compensation. For the surface, where the TI has gapless Dirac modes, we present a theory of disorder and screening of deep impurities, and we calculate the corresponding zero-temperature conductivity. We also comment on the growth of the disorder potential in passing from the surface of the TI into the bulk. Finally, we discuss how the presence of a gap at the Dirac point, introduced by some source of time-reversal symmetry breaking, affects the disorder potential at the surface and the mid-gap density of states

  9. Multivariate Variance Targeting in the BEKK-GARCH Model

    DEFF Research Database (Denmark)

    Pedersen, Rasmus Søndergaard; Rahbek, Anders

    2014-01-01

    This paper considers asymptotic inference in the multivariate BEKK model based on (co-)variance targeting (VT). By definition the VT estimator is a two-step estimator and the theory presented is based on expansions of the modified likelihood function, or estimating function, corresponding...

  10. An entropy approach to size and variance heterogeneity

    NARCIS (Netherlands)

    Balasubramanyan, L.; Stefanou, S.E.; Stokes, J.R.

    2012-01-01

    In this paper, we investigate the effect of bank size differences on cost efficiency heterogeneity using a heteroskedastic stochastic frontier model. This model is implemented by using an information theoretic maximum entropy approach. We explicitly model both bank size and variance heterogeneity

  11. Understanding the Degrees of Freedom of Sample Variance by Using Microsoft Excel

    Science.gov (United States)

    Ding, Jian-Hua; Jin, Xian-Wen; Shuai, Ling-Ying

    2017-01-01

    In this article, the degrees of freedom of the sample variance are simulated by using the Visual Basic for Applications of Microsoft Excel 2010. The simulation file dynamically displays why the sample variance should be calculated by dividing the sum of squared deviations by n-1 rather than n, which is helpful for students to grasp the meaning of…
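
    The same simulation is easy to run outside Excel; the Python sketch below draws many small samples and shows that dividing the sum of squared deviations by n underestimates the true variance while dividing by n-1 does not.

        import numpy as np

        rng = np.random.default_rng(0)
        true_var, n, reps = 4.0, 5, 200_000
        samples = rng.normal(0.0, np.sqrt(true_var), (reps, n))
        dev2 = (samples - samples.mean(axis=1, keepdims=True)) ** 2
        ss = dev2.sum(axis=1)
        print("mean of SS/n     :", ss.mean() / n)        # biased low
        print("mean of SS/(n-1) :", ss.mean() / (n - 1))  # close to true_var
        print("true variance    :", true_var)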

  12. Numerical experiment on variance biases and Monte Carlo neutronics analysis with thermal hydraulic feedback

    International Nuclear Information System (INIS)

    Shim, Hyung Jin; Han, Beom Seok; Kim, Chang Hyo

    2003-01-01

    The Monte Carlo (MC) power method based on a fixed number of fission sites at the beginning of each cycle is known to cause biases in the variances of the k-eigenvalue (k_eff) and of the fission reaction rate estimates. Because of the biases, the apparent variances of the k_eff and fission reaction rate estimates from a single MC run tend to be smaller or larger than the real variances of the corresponding quantities, depending on the degree of the inter-generational correlation of the sample. We demonstrate this through a numerical experiment involving 100 independent MC runs for the neutronics analysis of a 17 x 17 fuel assembly of a pressurized water reactor (PWR). We also demonstrate through the numerical experiment that Gelbard and Prael's batch method and Ueki et al.'s covariance estimation method enable one to estimate the approximate real variances of the k_eff and fission reaction rate estimates from a single MC run. We then show that the use of the approximate real variances from the two bias-predicting methods, instead of the apparent variances, provides an efficient MC power iteration scheme that is required in the MC neutronics analysis of a real system to determine the pin power distribution consistent with the thermal hydraulic (TH) conditions of individual pins of the system. (authors)

  13. Response variance in functional maps: neural darwinism revisited.

    Directory of Open Access Journals (Sweden)

    Hirokazu Takahashi

    The mechanisms by which functional maps and map plasticity contribute to cortical computation remain controversial. Recent studies have revisited the theory of neural Darwinism to interpret the learning-induced map plasticity and neuronal heterogeneity observed in the cortex. Here, we hypothesize that the Darwinian principle provides a substrate to explain the relationship between neuron heterogeneity and cortical functional maps. We demonstrate in the rat auditory cortex that the degree of response variance is closely correlated with the size of its representational area. Further, we show that the response variance within a given population is altered through training. These results suggest that larger representational areas may help to accommodate heterogeneous populations of neurons. Thus, functional maps and map plasticity are likely to play essential roles in Darwinian computation, serving as effective, but not absolutely necessary, structures to generate diverse response properties within a neural population.

  14. Response variance in functional maps: neural darwinism revisited.

    Science.gov (United States)

    Takahashi, Hirokazu; Yokota, Ryo; Kanzaki, Ryohei

    2013-01-01

    The mechanisms by which functional maps and map plasticity contribute to cortical computation remain controversial. Recent studies have revisited the theory of neural Darwinism to interpret the learning-induced map plasticity and neuronal heterogeneity observed in the cortex. Here, we hypothesize that the Darwinian principle provides a substrate to explain the relationship between neuron heterogeneity and cortical functional maps. We demonstrate in the rat auditory cortex that the degree of response variance is closely correlated with the size of its representational area. Further, we show that the response variance within a given population is altered through training. These results suggest that larger representational areas may help to accommodate heterogeneous populations of neurons. Thus, functional maps and map plasticity are likely to play essential roles in Darwinian computation, serving as effective, but not absolutely necessary, structures to generate diverse response properties within a neural population.

  15. Bulk local states and crosscaps in holographic CFT

    Energy Technology Data Exchange (ETDEWEB)

    Nakayama, Yu [Department of Physics, Rikkyo University, Toshima, Tokyo 175-8501 (Japan); Kavli Institute for the Physics and Mathematics of the Universe (WPI), University of Tokyo, Kashiwa, Chiba 277-8583 (Japan); Ooguri, Hirosi [Kavli Institute for the Physics and Mathematics of the Universe (WPI), University of Tokyo, Kashiwa, Chiba 277-8583 (Japan); Walter Burke Institute for Theoretical Physics, California Institute of Technology, Pasadena, CA 91125 (United States); Center for Mathematical Sciences and Applications and Center for the Fundamental Laws of Nature, Harvard University, Cambridge, MA 02138 (United States)

    2016-10-17

    In a weakly coupled gravity theory in the anti-de Sitter space, local states in the bulk are linear superpositions of Ishibashi states for a crosscap in the dual conformal field theory. The superposition structure can be constrained either by the microscopic causality in the bulk gravity or by the bootstrap condition in the boundary conformal field theory. We show, contrary to some expectation, that these two conditions are not compatible with each other in the weak gravity regime. We also present evidence that bulk local states in three dimensions are not organized by the Virasoro symmetry.
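
    Schematically (this display is a generic form supplied here for orientation, not an equation quoted from the paper), a bulk local state is expanded in crosscap Ishibashi states, one per primary O, with the coefficients c_O being the data constrained by bulk microcausality or by the crosscap bootstrap:

      \[
        |\Phi(x)\rangle \;=\; \sum_{\mathcal{O}} c_{\mathcal{O}}\,
        |\mathcal{O}\rangle\!\rangle_{\mathrm{cc}},
        \qquad
        \bigl( L_n - (-1)^{n}\, \bar{L}_{-n} \bigr)\,
        |\mathcal{O}\rangle\!\rangle_{\mathrm{cc}} \;=\; 0,
      \]

    where the second relation is the standard crosscap (Ishibashi) condition; normalization and phase conventions vary across the literature.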

  16. Carbon nanotubes grown on bulk materials and methods for fabrication

    Science.gov (United States)

    Menchhofer, Paul A [Clinton, TN; Montgomery, Frederick C [Oak Ridge, TN; Baker, Frederick S [Oak Ridge, TN

    2011-11-08

    Disclosed are structures formed as bulk support media having carbon nanotubes formed therewith. The bulk support media may comprise fibers or particles and the fibers or particles may be formed from such materials as quartz, carbon, or activated carbon. Metal catalyst species are formed adjacent the surfaces of the bulk support material, and carbon nanotubes are grown adjacent the surfaces of the metal catalyst species. Methods employ metal salt solutions that may comprise iron salts such as iron chloride, aluminum salts such as aluminum chloride, or nickel salts such as nickel chloride. Carbon nanotubes may be separated from the carbon-based bulk support media and the metal catalyst species by using concentrated acids to oxidize the carbon-based bulk support media and the metal catalyst species.

  17. Minimum Price Guarantees In a Consumer Search Model

    NARCIS (Netherlands)

    M.C.W. Janssen (Maarten); A. Parakhonyak (Alexei)

    2009-01-01

    This paper is the first to examine the effect of minimum price guarantees in a sequential search model. Minimum price guarantees are not advertised and only known to consumers when they come to the shop. We show that in such an environment, minimum price guarantees increase the value of

  18. Diffusion or bulk flow

    DEFF Research Database (Denmark)

    Schulz, Alexander

    2015-01-01

    is currently a matter of discussion, called passive symplasmic loading. Based on the limited material available, this review compares the different loading modes and suggests that diffusion is the driving force in apoplasmic loaders, while bulk flow plays an increasing role in plants having a continuous...

  19. Variability of indoor and outdoor VOC measurements: An analysis using variance components

    International Nuclear Information System (INIS)

    Jia, Chunrong; Batterman, Stuart A.; Relyea, George E.

    2012-01-01

    This study examines concentrations of volatile organic compounds (VOCs) measured inside and outside of 162 residences in southeast Michigan, U.S.A. Nested analyses apportioned four sources of variation: city, residence, season, and measurement uncertainty. Indoor measurements were dominated by seasonal and residence effects, accounting for 50 and 31%, respectively, of the total variance. Contributions from measurement uncertainty (<20%) and city effects (<10%) were small. For outdoor measurements, season, city and measurement variation accounted for 43, 29 and 27% of variance, respectively, while residence location had negligible impact (<2%). These results show that, to obtain representative estimates of indoor concentrations, measurements in multiple seasons are required. In contrast, outdoor VOC concentrations can be characterized by multi-seasonal measurements at centralized locations. Error models showed that uncertainties at low concentrations might obscure effects of other factors. Variance component analyses can be used to interpret existing measurements, design effective exposure studies, and determine whether the instrumentation and protocols are satisfactory.
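
    As an illustration of the variance-components approach described above, the sketch below simulates log-concentrations with known city, residence, and season components and recovers them by method-of-moments (nested ANOVA). The fully nested layout (season within residence), the design sizes, and the true component values are simplifying assumptions for the demonstration, not the study's actual design.

      import numpy as np

      rng = np.random.default_rng(1)

      # Fully nested layout: city > residence > season > replicate.
      a, b, c, n = 5, 8, 4, 2   # cities, residences/city, seasons, replicates
      s2_city, s2_res, s2_season, s2_err = 0.05, 0.20, 0.35, 0.10  # true values

      city = rng.normal(0, np.sqrt(s2_city), (a, 1, 1, 1))
      res = rng.normal(0, np.sqrt(s2_res), (a, b, 1, 1))
      season = rng.normal(0, np.sqrt(s2_season), (a, b, c, 1))
      err = rng.normal(0, np.sqrt(s2_err), (a, b, c, n))
      y = city + res + season + err          # simulated log-concentrations

      grand = y.mean()
      m_city = y.mean(axis=(1, 2, 3))
      m_res = y.mean(axis=(2, 3))
      m_season = y.mean(axis=3)

      # Mean squares for the nested ANOVA.
      ms_city = b*c*n * ((m_city - grand)**2).sum() / (a - 1)
      ms_res = c*n * ((m_res - m_city[:, None])**2).sum() / (a*(b - 1))
      ms_season = n * ((m_season - m_res[:, :, None])**2).sum() / (a*b*(c - 1))
      ms_err = ((y - m_season[..., None])**2).sum() / (a*b*c*(n - 1))

      # Method-of-moments estimates from the expected mean squares.
      v_err = ms_err
      v_season = (ms_season - ms_err) / n
      v_res = (ms_res - ms_season) / (c*n)
      v_city = (ms_city - ms_res) / (b*c*n)

      total = v_err + v_season + v_res + v_city
      for name, v in [("city", v_city), ("residence", v_res),
                      ("season", v_season), ("measurement", v_err)]:
          print(f"{name:11s} {v:6.3f}  ({100*v/total:4.1f}% of total)")

    Negative component estimates, which the method of moments can produce when a source contributes little, are conventionally truncated to zero before reporting percentage shares.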

  20. The impact of compaction, moisture content, particle size and type of bulking agent on initial physical properties of sludge-bulking agent mixtures before composting.

    Science.gov (United States)

    Huet, J; Druilhe, C; Trémier, A; Benoist, J C; Debenest, G

    2012-06-01

    This study experimentally characterized how bulk density, Free Air Space (FAS), air permeability and thermal conductivity vary with depth in initial composting materials. The impact of two moisture contents, two particle sizes and two types of bulking agent on these four parameters was also evaluated. Bulk density and thermal conductivity both increased with depth, while FAS and air permeability both decreased with it. Moreover, depth and moisture content had a significant impact on almost all four physical parameters, whereas particle size and the type of bulking agent did not. Copyright © 2012 Elsevier Ltd. All rights reserved.
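
    For reference, a commonly used approximation (not necessarily the relation used in this study) estimates FAS from wet bulk density, wet-basis moisture content, and the density of the dry solids; the helper below and its default particle and water densities are illustrative assumptions.

      def free_air_space(bulk_density, moisture, particle_density=1600.0,
                         water_density=1000.0):
          """Approximate free air space (FAS) of a compost mixture.

          bulk_density     -- wet bulk density of the mixture [kg/m^3]
          moisture         -- wet-basis moisture content, 0..1 [-]
          particle_density -- density of the dry solids [kg/m^3]

          FAS = 1 - water volume fraction - dry-solids volume fraction.
          """
          water_fraction = bulk_density * moisture / water_density
          solids_fraction = bulk_density * (1.0 - moisture) / particle_density
          return 1.0 - water_fraction - solids_fraction

      # Example: a wet mixture at 450 kg/m^3 and 60 % moisture.
      print(f"FAS = {free_air_space(450.0, 0.60):.2f}")  # -> roughly 0.62

    Under this approximation, compaction with depth raises the bulk density and therefore lowers FAS, consistent with the depth profiles reported in the abstract.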