WorldWideScience

Sample records for minimum variance bulk

  1. The Distribution of the Sample Minimum-Variance Frontier

    OpenAIRE

    Raymond Kan; Daniel R. Smith

    2008-01-01

    In this paper, we present a finite sample analysis of the sample minimum-variance frontier under the assumption that the returns are independent and multivariate normally distributed. We show that the sample minimum-variance frontier is a highly biased estimator of the population frontier, and we propose an improved estimator of the population frontier. In addition, we provide the exact distribution of the out-of-sample mean and variance of sample minimum-variance portfolios. This allows us t...

  2. Minimum Variance Portfolios in the Brazilian Equity Market

    Directory of Open Access Journals (Sweden)

    Alexandre Rubesam

    2013-03-01

    We investigate minimum variance portfolios in the Brazilian equity market using different methods to estimate the covariance matrix, from the simple model of using the sample covariance to multivariate GARCH models. We compare the performance of the minimum variance portfolios to those of the following benchmarks: (i) the IBOVESPA equity index, (ii) an equally-weighted portfolio, (iii) the maximum Sharpe ratio portfolio and (iv) the maximum growth portfolio. Our results show that the minimum variance portfolio has higher returns with lower risk compared to the benchmarks. We also consider long-short 130/30 minimum variance portfolios and obtain similar results. The minimum variance portfolio invests in relatively few stocks with low βs measured with respect to the IBOVESPA index, being easily replicable by individual and institutional investors alike.
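
    The baseline weights in such studies follow in closed form from the estimated covariance matrix. Below is a minimal numpy sketch of the global minimum variance portfolio, w = Σ⁻¹1 / (1'Σ⁻¹1); the simulated returns are a stand-in for the Brazilian market data, and no short-sale or 130/30 constraints are imposed.

      import numpy as np

      rng = np.random.default_rng(0)
      # Simulated daily returns for 10 assets (a stand-in for real market data).
      returns = rng.normal(0.0005, 0.01, size=(500, 10))

      sigma = np.cov(returns, rowvar=False)        # sample covariance matrix
      ones = np.ones(sigma.shape[0])
      siginv_ones = np.linalg.solve(sigma, ones)   # Sigma^{-1} 1, no explicit inverse
      w = siginv_ones / (ones @ siginv_ones)       # global minimum variance weights
      print("weights:", np.round(w, 4), "sum:", round(w.sum(), 6))
      print("portfolio variance:", w @ sigma @ w)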

  3. Towards a mathematical foundation of minimum-variance theory

    Energy Technology Data Exchange (ETDEWEB)

    Feng Jianfeng [COGS, Sussex University, Brighton (United Kingdom)]; Zhang Kewei [SMS, Sussex University, Brighton (United Kingdom)]; Wei Gang [Mathematical Department, Baptist University, Hong Kong (China)]

    2002-08-30

    The minimum-variance theory which accounts for arm and eye movements with noise signal inputs was proposed by Harris and Wolpert (1998 Nature 394 780-4). Here we present a detailed theoretical analysis of the theory and obtain analytical solutions. Furthermore, we propose a new version of the minimum-variance theory, which is more realistic for a biological system. For the new version we show numerically that the variance is considerably reduced. (author)

  4. A note on minimum-variance theory and beyond

    Energy Technology Data Exchange (ETDEWEB)

    Feng Jianfeng [Department of Informatics, Sussex University, Brighton, BN1 9QH (United Kingdom)]; Tartaglia, Giangaetano [Physics Department, Rome University 'La Sapienza', Rome 00185 (Italy)]; Tirozzi, Brunello [Physics Department, Rome University 'La Sapienza', Rome 00185 (Italy)]

    2004-04-30

    We revisit the minimum-variance theory proposed by Harris and Wolpert (1998 Nature 394 780-4), discuss the implications of the theory for modelling the firing patterns of single neurons and analytically find the optimal control signals, trajectories and velocities. Under the rate coding assumption, input control signals employed in the minimum-variance theory should be Fitts processes rather than Poisson processes. Only if information is coded by interspike intervals are Poisson processes in agreement with the inputs employed in the minimum-variance theory. For the integrate-and-fire model with Fitts process inputs, interspike intervals of efferent spike trains are very irregular. We introduce diffusion approximations to approximate neural models with renewal process inputs and present theoretical results on calculating moments of interspike intervals of the integrate-and-fire model. Results in Feng et al (2002 J. Phys. A: Math. Gen. 35 7287-304) are generalized. In conclusion, we present a complete picture of the minimum-variance theory, ranging from input control signals to model outputs and to its implications for modelling firing patterns of single neurons.

  5. A note on minimum-variance theory and beyond

    International Nuclear Information System (INIS)

    Feng Jianfeng; Tartaglia, Giangaetano; Tirozzi, Brunello

    2004-01-01

    We revisit the minimum-variance theory proposed by Harris and Wolpert (1998 Nature 394 780-4), discuss the implications of the theory for modelling the firing patterns of single neurons and analytically find the optimal control signals, trajectories and velocities. Under the rate coding assumption, input control signals employed in the minimum-variance theory should be Fitts processes rather than Poisson processes. Only if information is coded by interspike intervals are Poisson processes in agreement with the inputs employed in the minimum-variance theory. For the integrate-and-fire model with Fitts process inputs, interspike intervals of efferent spike trains are very irregular. We introduce diffusion approximations to approximate neural models with renewal process inputs and present theoretical results on calculating moments of interspike intervals of the integrate-and-fire model. Results in Feng et al (2002 J. Phys. A: Math. Gen. 35 7287-304) are generalized. In conclusion, we present a complete picture of the minimum-variance theory, ranging from input control signals to model outputs and to its implications for modelling firing patterns of single neurons.

  6. Minimum variance and variance of outgoing quality limit MDS-1(c1, c2) plans

    Science.gov (United States)

    Raju, C.; Vidya, R.

    2016-06-01

    In this article, the outgoing quality (OQ) and total inspection (TI) of multiple deferred state sampling plans MDS-1(c1, c2) are studied. It is assumed that the inspection is of the rejection-rectification type. Procedures for designing MDS-1(c1, c2) sampling plans with minimum variance of the OQ and TI are developed. A procedure for obtaining a plan with a designated upper limit on the variance of the OQ (VOQL) is outlined.

  7. Minimum variance Monte Carlo importance sampling with parametric dependence

    International Nuclear Information System (INIS)

    Ragheb, M.M.H.; Halton, J.; Maynard, C.W.

    1981-01-01

    An approach for Monte Carlo importance sampling with parametric dependence is proposed. It depends upon obtaining, by proper weighting over a single stage, the overall functional dependence of the variance on the importance function parameter over a broad range of its values. Results corresponding to minimum variance are adopted and other results rejected. Numerical calculations for the estimation of integrals are compared to crude Monte Carlo. The results explain the occurrence of effective biases (even though the theoretical bias is zero) and infinite variances which arise in calculations involving severe biasing and a moderate number of histories. Extension to particle transport applications is briefly discussed. The approach constitutes an extension of a theory on the application of Monte Carlo for the calculation of functional dependences, introduced by Frolov and Chentsov, to biasing, or importance sampling calculations; and is a generalization which avoids nonconvergence to the optimal values in some cases of a multistage method for variance reduction introduced by Spanier. (orig.)
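
    The core idea, scanning an importance-sampling parameter and adopting the minimum variance result, can be illustrated on a toy problem. The sketch below estimates a Gaussian tail probability with a shifted-mean proposal and reports the estimated variance for several values of the tilting parameter; the transport application in the paper is far more involved.

      import numpy as np

      rng = np.random.default_rng(1)
      z = rng.normal(size=100_000)        # one set of draws reused for every parameter

      def tail_estimate(mu):
          """Estimate P(X > 4) for X ~ N(0,1) by sampling from the N(mu, 1) proposal."""
          y = z + mu                                     # proposal draws
          lr = np.exp(-y * y / 2 + (y - mu) ** 2 / 2)    # N(0,1)/N(mu,1) likelihood ratio
          f = (y > 4.0) * lr
          return f.mean(), f.var(ddof=1) / len(f)

      # Scan the importance parameter over a broad range; adopt the minimum variance result.
      for mu in [0.0, 2.0, 4.0, 6.0]:
          est, var = tail_estimate(mu)
          print(f"mu = {mu:3.1f}   estimate = {est:.3e}   variance = {var:.3e}")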

  8. PORTFOLIO COMPOSITION WITH MINIMUM VARIANCE: COMPARISON WITH MARKET BENCHMARKS

    Directory of Open Access Journals (Sweden)

    Daniel Menezes Cavalcante

    2016-07-01

    Portfolio optimization strategies are advocated as being able to allow the composition of stock portfolios that provide returns above market benchmarks. This study aims to determine whether, in fact, portfolios based on the minimum variance strategy, optimized by Modern Portfolio Theory, are able to achieve earnings above market benchmarks in Brazil. Time series of 36 securities traded on the BM&FBOVESPA were analyzed over a long period of time (1999-2012), with sample windows of 12, 36, 60 and 120 monthly observations. The results indicated that the minimum variance portfolio performance is superior to market benchmarks (CDI and IBOVESPA) in terms of return and risk-adjusted return, especially in medium and long-term investment horizons.

  9. Double Minimum Variance Beamforming Method to Enhance Photoacoustic Imaging

    OpenAIRE

    Paridar, Roya; Mozaffarzadeh, Moein; Nasiriavanaki, Mohammadreza; Orooji, Mahdi

    2018-01-01

    One of the common algorithms used to reconstruct photoacoustic (PA) images is the non-adaptive Delay-and-Sum (DAS) beamformer. However, the quality of the reconstructed PA images obtained by DAS is not satisfying due to its high level of sidelobes and wide mainlobe. In contrast, adaptive beamformers, such as minimum variance (MV), result in an improved image compared to DAS. In this paper, a novel beamforming method, called Double MV (D-MV) is proposed to enhance the image quality compared to...

  10. Interdependence of NAFTA capital markets: A minimum variance portfolio approach

    Directory of Open Access Journals (Sweden)

    López-Herrera Francisco

    2014-01-01

    We estimate the long-run relationships among NAFTA capital market returns and then calculate the weights of a “time-varying minimum variance portfolio” that includes the Canadian, Mexican, and USA capital markets between March 2007 and March 2009, a period of intense turbulence in international markets. Our results suggest that the behavior of NAFTA market investors is not consistent with that of a theoretical “risk-averse” agent during periods of high uncertainty and may be either considered as irrational or attributed to a possible “home country bias”. This finding represents valuable information for portfolio managers and contributes to a better understanding of the nature of the markets in which they invest. It also has practical implications in the design of international portfolio investment policies.

  11. Minimum variance linear unbiased estimators of loss and inventory

    International Nuclear Information System (INIS)

    Stewart, K.B.

    1977-01-01

    The article illustrates a number of approaches for estimating the material balance inventory and a constant loss amount from the accountability data from a sequence of accountability periods. The approaches all lead to linear estimates that have minimum variance. Techniques are shown whereby ordinary least squares, weighted least squares and generalized least squares computer programs can be used. Two approaches are recursive in nature and lend themselves to small specialized computer programs. Another approach is developed that is easy to program; could be used with a desk calculator and can be used in a recursive way from accountability period to accountability period. Some previous results are also reviewed that are very similar in approach to the present ones and vary only in the way net throughput measurements are statistically modeled. 5 refs
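
    The common core of these approaches is the generalized least squares estimator, which is the minimum variance linear unbiased estimator when the error covariance is known. A minimal sketch follows; the design matrix and covariance are hypothetical toy values, not a real accountability model.

      import numpy as np

      # Hypothetical toy model y = X beta + e with correlated errors of known
      # covariance V (a shared systematic component plus random error).
      X = np.array([[1.0, 1.0],
                    [1.0, 2.0],
                    [1.0, 3.0],
                    [1.0, 4.0]])
      V = 0.10 * np.eye(4) + 0.05          # constant off-diagonal: common bias term
      y = np.array([1.1, 2.1, 2.9, 4.2])

      # Generalized least squares, the minimum variance linear unbiased estimator:
      #   beta_hat = (X' V^-1 X)^-1 X' V^-1 y
      Vi_X = np.linalg.solve(V, X)
      Vi_y = np.linalg.solve(V, y)
      beta_hat = np.linalg.solve(X.T @ Vi_X, X.T @ Vi_y)
      cov_beta = np.linalg.inv(X.T @ Vi_X)  # covariance of the estimates
      print("estimates:", beta_hat)
      print("standard errors:", np.sqrt(np.diag(cov_beta)))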

  12. A comparison between temporal and subband minimum variance adaptive beamforming

    Science.gov (United States)

    Diamantis, Konstantinos; Voxen, Iben H.; Greenaway, Alan H.; Anderson, Tom; Jensen, Jørgen A.; Sboros, Vassilis

    2014-03-01

    This paper compares the performance between temporal and subband Minimum Variance (MV) beamformers for medical ultrasound imaging. Both adaptive methods provide an optimized set of apodization weights but are implemented in the time and frequency domains respectively. Their performance is evaluated with simulated synthetic aperture data obtained from Field II and is quantified by the Full-Width-Half-Maximum (FWHM), the Peak-Side-Lobe level (PSL) and the contrast level. From a point phantom, a full sequence of 128 emissions with one transducer element transmitting and all 128 elements receiving each time, provides a FWHM of 0.03 mm (0.14λ) for both implementations at a depth of 40 mm. This value is more than 20 times lower than the one achieved by conventional beamforming. The corresponding values of PSL are -58 dB and -63 dB for time and frequency domain MV beamformers, while a value no lower than -50 dB can be obtained from either Boxcar or Hanning weights. Interestingly, a single emission with central element #64 as the transmitting aperture provides results comparable to the full sequence. The values of FWHM are 0.04 mm and 0.03 mm and those of PSL are -42 dB and -46 dB for temporal and subband approaches. From a cyst phantom and for 128 emissions, the contrast level is calculated at -54 dB and -63 dB respectively at the same depth, with the initial shape of the cyst being preserved in contrast to conventional beamforming. The difference between the two adaptive beamformers is less significant in the case of a single emission, with the contrast level being estimated at -42 dB for the time domain and -43 dB for the frequency domain implementation. For the estimation of a single MV weight of a low resolution image formed by a single emission, 0.44 × 10⁹ calculations per second are required for the temporal approach. The same numbers for the subband approach are 0.62 × 10⁹ for the point and 1.33 × 10⁹ for the cyst phantom. The comparison demonstrates similar
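
    Both implementations ultimately evaluate the same narrow-band minimum variance (Capon) weights, w = R⁻¹a / (aᴴR⁻¹a). A minimal sketch follows, with a random sample covariance standing in for focused channel data, a unit steering vector, and diagonal loading as a common regularization (the loading is an assumption, not a detail taken from the paper).

      import numpy as np

      def mv_weights(R, a, loading=1e-2):
          """Capon weights: minimize w^H R w subject to w^H a = 1."""
          m = len(a)
          R = R + loading * np.trace(R).real / m * np.eye(m)  # diagonal loading
          Ri_a = np.linalg.solve(R, a)
          return Ri_a / (a.conj() @ Ri_a)

      rng = np.random.default_rng(2)
      m, k = 16, 200                          # sub-aperture size, number of snapshots
      s = rng.normal(size=(m, k)) + 1j * rng.normal(size=(m, k))
      R = s @ s.conj().T / k                  # sample spatial covariance
      a = np.ones(m, dtype=complex)           # steering vector after focusing delays
      w = mv_weights(R, a)
      print("distortionless response:", np.round(w.conj() @ a, 6))  # ~1
      print("output power:", (w.conj() @ R @ w).real)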

  13. Multidimensional adaptive testing with a minimum error-variance criterion

    NARCIS (Netherlands)

    van der Linden, Willem J.

    1997-01-01

    The case of adaptive testing under a multidimensional logistic response model is addressed. An adaptive algorithm is proposed that minimizes the (asymptotic) variance of the maximum-likelihood (ML) estimator of a linear combination of abilities of interest. The item selection criterion is a simple

  14. A Robust Statistics Approach to Minimum Variance Portfolio Optimization

    Science.gov (United States)

    Yang, Liusha; Couillet, Romain; McKay, Matthew R.

    2015-12-01

    We study the design of portfolios under a minimum risk criterion. The performance of the optimized portfolio relies on the accuracy of the estimated covariance matrix of the portfolio asset returns. For large portfolios, the number of available market returns is often of similar order to the number of assets, so that the sample covariance matrix performs poorly as a covariance estimator. Additionally, financial market data often contain outliers which, if not correctly handled, may further corrupt the covariance estimation. We address these shortcomings by studying the performance of a hybrid covariance matrix estimator based on Tyler's robust M-estimator and on Ledoit-Wolf's shrinkage estimator while assuming samples with heavy-tailed distribution. Employing recent results from random matrix theory, we develop a consistent estimator of (a scaled version of) the realized portfolio risk, which is minimized by optimizing online the shrinkage intensity. Our portfolio optimization method is shown via simulations to outperform existing methods both for synthetic and real market data.
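
    A sketch of the fixed-point iteration for Tyler's M-estimator with shrinkage toward the identity, one ingredient of the hybrid estimator studied in the paper; the fixed shrinkage intensity rho here is a hypothetical choice, whereas the paper tunes it online to minimize the realized portfolio risk.

      import numpy as np

      def tyler_shrinkage(X, rho=0.1, iters=50):
          """Tyler's M-estimator of scatter, shrunk toward the identity.

          rho is a hypothetical fixed intensity; the paper optimizes it online."""
          n, p = X.shape
          C = np.eye(p)
          for _ in range(iters):
              d = np.einsum('ij,ji->i', X, np.linalg.solve(C, X.T))  # x_i' C^-1 x_i
              M = (p / n) * (X.T * (1.0 / d)) @ X                    # reweighted scatter
              C = (1.0 - rho) * M + rho * np.eye(p)
              C *= p / np.trace(C)                                   # fix the arbitrary scale
          return C

      rng = np.random.default_rng(3)
      X = rng.standard_t(df=3, size=(200, 10))   # heavy-tailed synthetic returns
      C = tyler_shrinkage(X)
      w = np.linalg.solve(C, np.ones(10))
      w /= w.sum()                               # minimum variance weights from the robust scatter
      print(np.round(w, 3))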

  15. Deviation of the Variances of Classical Estimators and Negative Integer Moment Estimator from Minimum Variance Bound with Reference to Maxwell Distribution

    Directory of Open Access Journals (Sweden)

    G. R. Pasha

    2006-07-01

    In this paper, we show how much the variances of the classical estimators, namely the maximum likelihood estimator and the moment estimator, deviate from the minimum variance bound when estimating the Maxwell distribution. We also sketch this difference for the negative integer moment estimator. We note the poor performance of the negative integer moment estimator in this respect, while the maximum likelihood estimator attains the minimum variance bound and becomes an attractive choice.
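
    The headline claim is easy to check numerically. For the Maxwell distribution with scale a (density f(x) ∝ x² exp(−x²/2a²)), the ML estimator is â = √(Σx²/3n) and the minimum variance bound is a²/(6n), from Fisher information I(a) = 6/a². A short simulation sketch:

      import numpy as np

      rng = np.random.default_rng(4)
      a, n, reps = 2.0, 200, 4000

      # Maxwell(a) variates: the norm of a 3-D isotropic normal with per-axis std a.
      x = np.linalg.norm(rng.normal(0.0, a, size=(reps, n, 3)), axis=2)

      a_hat = np.sqrt((x ** 2).sum(axis=1) / (3 * n))   # ML estimator of the scale
      print("empirical Var(a_hat):   ", a_hat.var(ddof=1))
      print("minimum variance bound: ", a ** 2 / (6 * n))   # nearly coincide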

  16. An improved minimum variance beamforming applied to plane-wave imaging in medical ultrasound

    DEFF Research Database (Denmark)

    Deylami, Ali Mohades; Asl, Babak Mohammadzadeh; Jensen, Jørgen Arendt

    2016-01-01

    The minimum variance beamformer (MVB) is an adaptive beamformer which provides images with higher resolution and contrast in comparison with non-adaptive beamformers like delay and sum (DAS). It finds the weight vector of the beamformer by minimizing the output power while keeping the desired signal unchanged. We...

  17. Investigating the minimum achievable variance in a Monte Carlo criticality calculation

    Energy Technology Data Exchange (ETDEWEB)

    Christoforou, Stavros; Hoogenboom, J. Eduard [Delft University of Technology, Mekelweg 15, 2629 JB Delft (Netherlands)]

    2008-07-01

    The sources of variance in a Monte Carlo criticality calculation are identified and their contributions analyzed. A zero-variance configuration is initially simulated using analytically calculated adjoint functions for biasing. From there, the various sources are analyzed. It is shown that the minimum threshold comes from the fact that the fission source is approximated. In addition, the merits of a simple variance reduction method, such as implicit capture, are shown when compared to an analog simulation. Finally, it is shown that when non-exact adjoint functions are used for biasing, the variance reduction is rather insensitive to the quality of the adjoints, suggesting that the generation of the adjoints should have as low a CPU cost as possible, in order to offset the CPU cost of implementing the biasing in a simulation. (authors)
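
    The merit of implicit capture over analog absorption can be demonstrated on a toy problem. The sketch below tallies transmission through a 1-D absorbing slab with forward scattering, once killing particles at absorptions (analog) and once carrying survival weights (implicit capture); the geometry and cross sections are hypothetical, unrelated to the paper's criticality setup.

      import numpy as np

      rng = np.random.default_rng(5)
      sig_t, sig_a, thickness, n = 1.0, 0.3, 3.0, 50_000

      def transmission(implicit):
          score = np.zeros(n)
          for i in range(n):
              x, wgt = 0.0, 1.0
              while True:
                  x += rng.exponential(1.0 / sig_t)     # distance to the next collision
                  if x >= thickness:
                      score[i] = wgt                    # escaped: tally the carried weight
                      break
                  if implicit:
                      wgt *= 1.0 - sig_a / sig_t        # survive every collision, reduce weight
                  elif rng.random() < sig_a / sig_t:
                      break                             # analog: absorbed, history scores zero
                  # otherwise the particle forward-scatters and continues
          return score

      for flag, name in [(False, "analog"), (True, "implicit capture")]:
          s = transmission(flag)
          print(f"{name:16s}  mean = {s.mean():.4f}  variance = {s.var(ddof=1):.2e}")
      # both estimate exp(-sig_a * thickness) ~ 0.4066; implicit capture has lower variance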

  18. Eigenspace-Based Minimum Variance Adaptive Beamformer Combined with Delay Multiply and Sum: Experimental Study

    OpenAIRE

    Mozaffarzadeh, Moein; Mahloojifar, Ali; Nasiriavanaki, Mohammadreza; Orooji, Mahdi

    2017-01-01

    Delay and sum (DAS) is the most common beamforming algorithm in linear-array photoacoustic imaging (PAI) as a result of its simple implementation. However, it leads to a low resolution and high sidelobes. Delay multiply and sum (DMAS) was used to address the shortcomings of DAS, providing a higher image quality. However, the resolution improvement is not sufficient compared to eigenspace-based minimum variance (EIBMV). In this paper, the EIBMV beamformer has been combined with DMAS algebra...

  19. Portfolios Dominating Indices: Optimization with Second-Order Stochastic Dominance Constraints vs. Minimum and Mean Variance Portfolios

    Directory of Open Access Journals (Sweden)

    Neslihan Fidan Keçeci

    2016-10-01

    The paper compares portfolio optimization with Second-Order Stochastic Dominance (SSD) constraints with mean-variance and minimum variance portfolio optimization. As a distribution-free decision rule, stochastic dominance takes into account the entire distribution of return rather than some specific characteristic, such as variance. The paper is focused on practical applications of portfolio optimization and uses the Portfolio Safeguard (PSG) package, which has precoded modules for optimization with SSD constraints, mean-variance and minimum variance portfolio optimization. We have done in-sample and out-of-sample simulations for portfolios of stocks from the Dow Jones, S&P 100 and DAX indices. The considered portfolios SSD-dominate the Dow Jones, S&P 100 and DAX indices. Simulation demonstrated a superior performance of portfolios with SSD constraints, versus mean-variance and minimum variance portfolios.

  20. A Minimum Variance Algorithm for Overdetermined TOA Equations with an Altitude Constraint.

    Energy Technology Data Exchange (ETDEWEB)

    Romero, Louis A; Mason, John J.

    2018-04-01

    We present a direct (non-iterative) method for solving for the location of a radio frequency (RF) emitter, or an RF navigation receiver, using four or more time of arrival (TOA) measurements and an assumed altitude above an ellipsoidal earth. Both the emitter tracking problem and the navigation application are governed by the same equations, but with slightly different interpretations of several variables. We treat the assumed altitude as a soft constraint, with a specified noise level, just as the TOA measurements are handled, with their respective noise levels. With 4 or more TOA measurements and the assumed altitude, the problem is overdetermined and is solved in the weighted least squares sense for the 4 unknowns, the 3-dimensional position and time. We call the new technique the TAQMV (TOA Altitude Quartic Minimum Variance) algorithm, and it achieves the minimum possible error variance for given levels of TOA and altitude estimate noise. The method algebraically produces four solutions: the least-squares solution, and potentially three other low residual solutions, if they exist. In the lightly overdetermined cases where multiple local minima in the residual error surface are more likely to occur, this algebraic approach can produce all of the minima even when an iterative approach fails to converge. Algorithm performance in terms of solution error variance and divergence rate for baseline (iterative) and proposed approaches are given in tables.
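
    The weighted least-squares objective is easy to state in code. The sketch below solves it with an iterative Gauss-Newton loop on a flat-earth toy geometry, treating the altitude as a soft pseudo-measurement; note this is not the authors' direct quartic (TAQMV) solver, and every coordinate and noise level is a made-up example value.

      import numpy as np

      c = 299_792_458.0                          # speed of light, m/s

      # Hypothetical flat-earth geometry: sensor positions (m) and true emitter state.
      sensors = np.array([[0e3, 0e3, 0e3], [20e3, 0e3, 0e3], [0e3, 20e3, 0e3],
                          [20e3, 20e3, 5e3], [10e3, 5e3, 2e3]])
      p_true, t_true = np.array([7e3, 12e3, 1e3]), 0.0

      rng = np.random.default_rng(6)
      sig_toa, sig_alt = 30e-9, 50.0             # TOA noise (s), altitude noise (m)
      toa = (np.linalg.norm(sensors - p_true, axis=1) / c + t_true
             + rng.normal(0.0, sig_toa, len(sensors)))
      alt = p_true[2] + rng.normal(0.0, sig_alt)

      x = np.array([10e3, 10e3, 0.0, 0.0])       # guess: position and emission time
      for _ in range(10):                        # Gauss-Newton on the whitened residuals
          d = np.linalg.norm(sensors - x[:3], axis=1)
          r = np.concatenate([(toa - d / c - x[3]) / sig_toa, [(alt - x[2]) / sig_alt]])
          J = np.zeros((len(sensors) + 1, 4))
          J[:-1, :3] = -(x[:3] - sensors) / d[:, None] / (c * sig_toa)
          J[:-1, 3] = -1.0 / sig_toa
          J[-1, 2] = -1.0 / sig_alt
          x += np.linalg.lstsq(J, -r, rcond=None)[0]

      print("estimate (m, s):", np.round(x, 2))
      print("truth    (m, s):", np.append(p_true, t_true))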

  1. Experimental performance assessment of the sub-band minimum variance beamformer for ultrasound imaging

    DEFF Research Database (Denmark)

    Diamantis, Konstantinos; Greenaway, Alan H.; Anderson, Tom

    2017-01-01

    Recent progress in adaptive beamforming techniques for medical ultrasound has shown that current resolution limits can be surpassed. One method of obtaining improved lateral resolution is the Minimum Variance (MV) beamformer. The frequency domain implementation of this method effectively divides the broadband ultrasound signals into sub-bands (MVS) to conform with the narrow-band assumption of the original MV theory. This approach is investigated here using experimental Synthetic Aperture (SA) data from wire and cyst phantoms. A 7 MHz linear array transducer is used with the SARUS experimental

  2. Linear-Array Photoacoustic Imaging Using Minimum Variance-Based Delay Multiply and Sum Adaptive Beamforming Algorithm

    OpenAIRE

    Mozaffarzadeh, Moein; Mahloojifar, Ali; Orooji, Mahdi; Kratkiewicz, Karl; Adabi, Saba; Nasiriavanaki, Mohammadreza

    2017-01-01

    In Photoacoustic imaging (PA), Delay-and-Sum (DAS) beamformer is a common beamforming algorithm having a simple implementation. However, it results in a poor resolution and high sidelobes. To address these challenges, a new algorithm namely Delay-Multiply-and-Sum (DMAS) was introduced having lower sidelobes compared to DAS. To improve the resolution of DMAS, a novel beamformer is introduced using Minimum Variance (MV) adaptive beamforming combined with DMAS, so-called Minimum Variance-Based D...

  3. The Achilles Heel of Normal Determinations via Minimum Variance Techniques: Worldline Dependencies

    Science.gov (United States)

    Ma, Z.; Scudder, J. D.; Omidi, N.

    2002-12-01

    Time series of data collected across current layers are usually organized by divining coordinate transformations (as from minimum variance) that permit a geometrical interpretation of the data collected. Almost without exception the current layer geometry is inferred by supposing that the current carrying layer is locally planar. Only after this geometry is 'determined' can the various quantities predicted by theory be calculated, the precision of reconnection rates be 'measured', and the quantitative support for or against component reconnection be evaluated. This paper defines worldline traversals across fully resolved Hall two-fluid models of reconnecting current sheets (with varying sizes of guide fields) and across a 2-D hybrid solution of a supercritical shock layer. Along each worldline various variance techniques are used to infer current sheet normals based on the data observed along that worldline alone. We then contrast these inferred normals with those known from the overview of the fully resolved spatial pictures of the layer. Absolute errors of 20 degrees in the normal are quite commonplace, but errors of 40-90 deg are also implied, especially for worldlines that make more and more oblique angles to the true current sheet normal. These mistaken 'inferences' are traceable to the degree to which the data collected sample 2-D variations within these layers. While it is not surprising that these variance techniques give incorrect results in the presence of layers that possess 2-D variations, it is illuminating that such large errors need not be signalled by the traditional error formulae for the error cones on normals that have been previously used to estimate the errors of normal choices. Frequently the absolute errors that depend on the worldline path can be 10 times the random error that formulae would predict based on eigenvalues of the covariance matrix. A given time series cannot be associated in any a priori way with a specific worldline

  4. Portfolios dominating indices: Optimization with second-order stochastic dominance constraints vs. minimum and mean variance portfolios

    OpenAIRE

    Keçeci, Neslihan Fidan; Kuzmenko, Viktor; Uryasev, Stan

    2016-01-01

    The paper compares portfolio optimization with the Second-Order Stochastic Dominance (SSD) constraints with mean-variance and minimum variance portfolio optimization. As a distribution-free decision rule, stochastic dominance takes into account the entire distribution of return rather than some specific characteristic, such as variance. The paper is focused on practical applications of the portfolio optimization and uses the Portfolio Safeguard (PSG) package, which has precoded modules for op...

  5. Portfolios Dominating Indices: Optimization with Second-Order Stochastic Dominance Constraints vs. Minimum and Mean Variance Portfolios

    OpenAIRE

    Neslihan Fidan Keçeci; Viktor Kuzmenko; Stan Uryasev

    2016-01-01

    The paper compares portfolio optimization with the Second-Order Stochastic Dominance (SSD) constraints with mean-variance and minimum variance portfolio optimization. As a distribution-free decision rule, stochastic dominance takes into account the entire distribution of return rather than some specific characteristic, such as variance. The paper is focused on practical applications of the portfolio optimization and uses the Portfolio Safeguard (PSG) package, which has precoded modules for op...

  6. Experimental performance assessment of the sub-band minimum variance beamformer for ultrasound imaging

    DEFF Research Database (Denmark)

    Diamantis, Konstantinos; Greenaway, Alan H.; Anderson, Tom

    2017-01-01

    Recent progress in adaptive beamforming techniques for medical ultrasound has shown that current resolution limits can be surpassed. One method of obtaining improved lateral resolution is the Minimum Variance (MV) beamformer. The frequency domain implementation of this method effectively divides the broadband ultrasound signals into sub-bands (MVS) to conform with the narrow-band assumption of the original MV theory. This approach is investigated here using experimental Synthetic Aperture (SA) data from wire and cyst phantoms. A 7 MHz linear array transducer is used with the SARUS experimental ultrasound scanner for the data acquisition. The lateral resolution and the contrast obtained are evaluated and compared with those from the conventional Delay-and-Sum (DAS) beamformer and the MV temporal implementation (MVT). From the wire phantom the Full-Width-at-Half-Maximum (FWHM) measured at a depth

  7. A phantom study on temporal and subband Minimum Variance adaptive beamforming

    DEFF Research Database (Denmark)

    Diamantis, Konstantinos; Voxen, Iben Holfort; Greenaway, Alan H.

    2014-01-01

    This paper compares experimentally the temporal and subband implementations of the Minimum Variance (MV) adaptive beamformer for medical ultrasound imaging. The performance of the two approaches is tested by comparing wire phantom measurements, obtained by the research ultrasound scanner SARUS. A 7 MHz BK8804 linear transducer was used to scan a wire phantom in which wires are separated by 10 mm. Performance is then evaluated by the lateral Full-Width-Half-Maximum (FWHM), the Peak Sidelobe Level (PSL), and the computational load. Beamformed single emission responses are also compared with those from the conventional Delay-and-Sum (DAS) beamformer. FWHM, measured at the depth of 46.6 mm, is 0.02 mm (0.09λ) for both adaptive methods while the corresponding values for Hanning and Boxcar weights are 0.64 and 0.44 mm respectively. Between the MV beamformers a -2 dB difference in PSL is noticed in favor

  8. Output Power Control of Wind Turbine Generator by Pitch Angle Control using Minimum Variance Control

    Science.gov (United States)

    Senjyu, Tomonobu; Sakamoto, Ryosei; Urasaki, Naomitsu; Higa, Hiroki; Uezato, Katsumi; Funabashi, Toshihisa

    In recent years, there have been problems such as the exhaustion of fossil fuels, e.g., coal and oil, and the environmental pollution resulting from their consumption. Effective utilization of renewable energies such as wind energy is expected in place of fossil fuels. Wind energy is not constant and windmill output is proportional to the cube of wind speed, which causes the generated power of wind turbine generators (WTGs) to fluctuate. In order to reduce the fluctuating components, there is a method to control the pitch angle of the blades of the windmill. In this paper, output power leveling of a wind turbine generator by pitch angle control using adaptive control is proposed. A self-tuning regulator is used in the adaptive control, and the control input is determined by minimum variance control. With the proposed controller it is possible to compensate the control input to alleviate fluctuations in the generated power. Simulation results using an actual detailed model of a wind power system show the effectiveness of the proposed controller.
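
    A minimal sketch of the self-tuning regulator idea on a hypothetical first-order ARX plant: recursive least squares tracks the plant parameters, and the minimum variance input cancels the predicted deviation from the reference. The real controller acts on a detailed WTG pitch model; everything below is an illustrative stand-in.

      import numpy as np

      rng = np.random.default_rng(7)
      a_true, b_true, noise = 0.9, 0.5, 0.05   # hypothetical plant y[k+1] = a y[k] + b u[k] + e
      theta = np.array([0.5, 1.0])             # RLS estimates of (a, b)
      P = 100.0 * np.eye(2)                    # RLS covariance
      y, ref = 0.0, 1.0                        # plant output and leveled-power reference

      for k in range(200):
          a_hat, b_hat = theta
          u = (ref - a_hat * y) / b_hat        # minimum variance input: prediction hits ref
          y_next = a_true * y + b_true * u + rng.normal(0.0, noise)
          phi = np.array([y, u])               # regressor for the self-tuning step
          gain = P @ phi / (1.0 + phi @ P @ phi)
          theta = theta + gain * (y_next - phi @ theta)   # recursive least squares
          P = P - np.outer(gain, phi @ P)
          y = y_next

      print("estimated (a, b):", np.round(theta, 3), " true:", (a_true, b_true))
      print("final output vs reference:", round(y, 3), ref)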

  9. Eigenspace-based minimum variance adaptive beamformer combined with delay multiply and sum: experimental study

    Science.gov (United States)

    Mozaffarzadeh, Moein; Mahloojifar, Ali; Nasiriavanaki, Mohammadreza; Orooji, Mahdi

    2018-02-01

    Delay and sum (DAS) is the most common beamforming algorithm in linear-array photoacoustic imaging (PAI) as a result of its simple implementation. However, it leads to a low resolution and high sidelobes. Delay multiply and sum (DMAS) was used to address the shortcomings of DAS, providing a higher image quality. However, the resolution improvement is not sufficient compared to eigenspace-based minimum variance (EIBMV). In this paper, the EIBMV beamformer has been combined with DMAS algebra, called EIBMV-DMAS, using the expansion of the DMAS algorithm. The proposed method is used as the reconstruction algorithm in linear-array PAI. EIBMV-DMAS is experimentally evaluated, and the quantitative and qualitative results show that it outperforms DAS, DMAS and EIBMV. The proposed method reduces the sidelobes by about 365%, 221% and 40% compared to DAS, DMAS and EIBMV, respectively. Moreover, EIBMV-DMAS improves the SNR by about 158%, 63% and 20%, respectively.

  10. Iterative Minimum Variance Beamformer with Low Complexity for Medical Ultrasound Imaging.

    Science.gov (United States)

    Deylami, Ali Mohades; Asl, Babak Mohammadzadeh

    2018-06-04

    Minimum variance beamformer (MVB) improves the resolution and contrast of medical ultrasound images compared with the delay and sum (DAS) beamformer. The weight vector of this beamformer should be calculated for each imaging point independently, at the cost of increased computational complexity. The large number of necessary calculations limits this beamformer to application in real-time systems. A beamformer is proposed based on the MVB with lower computational complexity while preserving its advantages. This beamformer avoids matrix inversion, which is the most complex part of the MVB, by solving the optimization problem iteratively. The received signals from two imaging points close together do not vary much in medical ultrasound imaging. Therefore, using the previously optimized weight vector for one point as the initial weight vector for the new neighboring point can improve the convergence speed and decrease the computational complexity. The proposed method was applied on several data sets, and it has been shown that the method can regenerate the results obtained by the MVB while the order of complexity is decreased from O(L³) to O(L²).
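
    A sketch of the inversion-free idea: a few projected-gradient iterations on w^H R w under the distortionless constraint, warm-started from the previous imaging point's weights. This is an illustrative iteration under the same objective, not necessarily the authors' exact update rule.

      import numpy as np

      def mv_gradient(R, a, w0=None, iters=50):
          """Approximate Capon weights by projected gradient descent on w^H R w."""
          m = len(a)
          w = np.ones(m, dtype=complex) / m if w0 is None else w0.copy()
          step = 1.0 / np.linalg.norm(R, 2)                  # safe step from the spectral norm
          aa = (a.conj() @ a).real
          for _ in range(iters):
              w = w - step * (R @ w)                         # gradient step (constant absorbed)
              w = w + np.conj(1.0 - w.conj() @ a) / aa * a   # project back onto w^H a = 1
          return w

      rng = np.random.default_rng(8)
      m = 32
      s = rng.normal(size=(m, 300)) + 1j * rng.normal(size=(m, 300))
      R1 = s @ s.conj().T / 300 + 0.01 * np.eye(m)      # covariance at one imaging point
      R2 = R1 + 0.005 * np.eye(m)                       # neighboring point: nearly identical
      a = np.ones(m, dtype=complex)

      exact2 = np.linalg.solve(R2, a)
      exact2 /= a.conj() @ exact2
      cold = mv_gradient(R2, a, iters=5)                         # few steps from scratch
      warm = mv_gradient(R2, a, w0=mv_gradient(R1, a), iters=5)  # few steps, warm start
      print("cold-start error:", np.linalg.norm(cold - exact2))
      print("warm-start error:", np.linalg.norm(warm - exact2))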

  11. Unbiased minimum variance estimator of a matrix exponential function. Application to Boltzmann/Bateman coupled equations solving

    International Nuclear Information System (INIS)

    Dumonteil, E.; Diop, C. M.

    2009-01-01

    This paper derives an unbiased minimum variance estimator (UMVE) of a matrix exponential function of a normal mean. The result is then used to propose a reference scheme to solve Boltzmann/Bateman coupled equations with Monte Carlo transport codes. The last section presents numerical results for a simple example. (authors)

  12. Effects of Important Parameters Variations on Computing Eigenspace-Based Minimum Variance Weights for Ultrasound Tissue Harmonic Imaging

    OpenAIRE

    Heidari, Mehdi Haji; Mozaffarzadeh, Moein; Manwar, Rayyan; Nasiriavanaki, Mohammadreza

    2018-01-01

    In recent years, minimum variance (MV) beamforming has been widely studied due to its high resolution and contrast in B-mode ultrasound imaging (USI). However, the performance of the MV beamformer is degraded in the presence of noise, as a result of the inaccurate covariance matrix estimation, which leads to a low quality image. Second harmonic imaging (SHI) provides many advantages over conventional pulse-echo USI, such as enhanced axial and lateral resolutions. However, the low signa...

  13. Minimum variance optimal rate allocation for multiplexed H.264/AVC bitstreams.

    Science.gov (United States)

    Tagliasacchi, Marco; Valenzise, Giuseppe; Tubaro, Stefano

    2008-07-01

    Consider the problem of transmitting multiple video streams to fulfill a constant bandwidth constraint. The available bit budget needs to be distributed across the sequences in order to meet some optimality criteria. For example, one might want to minimize the average distortion or, alternatively, minimize the distortion variance, in order to keep almost constant quality among the encoded sequences. By working in the ρ-domain, we propose a low-delay rate allocation scheme that, at each time instant, provides a closed form solution for either of the aforementioned problems. We show that minimizing the distortion variance instead of the average distortion leads, for each of the multiplexed sequences, to a coding penalty of less than 0.5 dB in terms of average PSNR. In addition, our analysis provides an explicit relationship between model parameters and this loss. In order to smooth the distortion also along time, we accommodate a shared encoder buffer to compensate for rate fluctuations. Although the proposed scheme is general, and it can be adopted for any video and image coding standard, we provide experimental evidence by transcoding bitstreams encoded using the state-of-the-art H.264/AVC standard. The results of our simulations reveal that it is possible to achieve distortion smoothing both in time and across the sequences, without sacrificing coding efficiency.

  14. Minimum variance rooting of phylogenetic trees and implications for species tree reconstruction.

    Science.gov (United States)

    Mai, Uyen; Sayyari, Erfan; Mirarab, Siavash

    2017-01-01

    Phylogenetic trees inferred using commonly-used models of sequence evolution are unrooted, but the root position matters both for interpretation and downstream applications. This issue has been long recognized; however, whether the potential for discordance between the species tree and gene trees impacts methods of rooting a phylogenetic tree has not been extensively studied. In this paper, we introduce a new method of rooting a tree based on its branch length distribution; our method, which minimizes the variance of root to tip distances, is inspired by the traditional midpoint rerooting and is justified when deviations from the strict molecular clock are random. Like midpoint rerooting, the method can be implemented in a linear time algorithm. In extensive simulations that consider discordance between gene trees and the species tree, we show that the new method is more accurate than midpoint rerooting, but its relative accuracy compared to using outgroups to root gene trees depends on the size of the dataset and levels of deviations from the strict clock. We show high levels of error for all methods of rooting estimated gene trees due to factors that include effects of gene tree discordance, deviations from the clock, and gene tree estimation error. Our simulations, however, did not reveal significant differences between two equivalent methods for species tree estimation that use rooted and unrooted input, namely, STAR and NJst. Nevertheless, our results point to limitations of existing scalable rooting methods.
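
    A toy version of the criterion is shown below: score candidate roots by the variance of root-to-tip distances and keep the minimizer. For simplicity the candidates are restricted to nodes of a small hand-coded tree; the paper's method also optimizes the root position along edges and runs in linear time.

      import numpy as np
      from collections import defaultdict

      # Hypothetical unrooted tree: (node, node, branch length).
      edges = [("A", "u", 1.0), ("B", "u", 1.2), ("u", "v", 0.5),
               ("C", "v", 0.9), ("D", "v", 2.0)]
      leaves = {"A", "B", "C", "D"}

      adj = defaultdict(list)
      for p, q, b in edges:
          adj[p].append((q, b))
          adj[q].append((p, b))

      def tip_distances(root):
          """Root-to-tip distances via depth-first traversal."""
          out, stack, seen = [], [(root, 0.0)], {root}
          while stack:
              node, d = stack.pop()
              if node in leaves:
                  out.append(d)
              for nxt, b in adj[node]:
                  if nxt not in seen:
                      seen.add(nxt)
                      stack.append((nxt, d + b))
          return np.array(out)

      # Score each candidate by the variance of root-to-tip distances; keep the minimizer.
      scores = {node: tip_distances(node).var() for node in adj}
      best = min(scores, key=scores.get)
      print({node: round(v, 3) for node, v in scores.items()})
      print("node-restricted minimum variance root:", best)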

  15. Linear-array photoacoustic imaging using minimum variance-based delay multiply and sum adaptive beamforming algorithm.

    Science.gov (United States)

    Mozaffarzadeh, Moein; Mahloojifar, Ali; Orooji, Mahdi; Kratkiewicz, Karl; Adabi, Saba; Nasiriavanaki, Mohammadreza

    2018-02-01

    In photoacoustic imaging, the delay-and-sum (DAS) beamformer is a common beamforming algorithm with a simple implementation. However, it results in a poor resolution and high sidelobes. To address these challenges, a new algorithm, namely delay-multiply-and-sum (DMAS), was introduced, having lower sidelobes compared to DAS. To improve the resolution of DMAS, a beamformer is introduced using minimum variance (MV) adaptive beamforming combined with DMAS, so-called minimum variance-based DMAS (MVB-DMAS). It is shown that expanding the DMAS equation results in multiple terms representing a DAS algebra. It is proposed to use the MV adaptive beamformer instead of the existing DAS. MVB-DMAS is evaluated numerically and experimentally. In particular, at the depth of 45 mm MVB-DMAS results in about 31, 18, and 8 dB sidelobes reduction compared to DAS, MV, and DMAS, respectively. The quantitative results of the simulations show that MVB-DMAS leads to improvement in full-width-half-maximum of about 96%, 94%, and 45% and in signal-to-noise ratio of about 89%, 15%, and 35% compared to DAS, DMAS, and MV, respectively. In particular, at the depth of 33 mm of the experimental images, MVB-DMAS results in about 20 dB sidelobes reduction in comparison with other beamformers.

  16. Linear-array photoacoustic imaging using minimum variance-based delay multiply and sum adaptive beamforming algorithm

    Science.gov (United States)

    Mozaffarzadeh, Moein; Mahloojifar, Ali; Orooji, Mahdi; Kratkiewicz, Karl; Adabi, Saba; Nasiriavanaki, Mohammadreza

    2018-02-01

    In photoacoustic imaging, the delay-and-sum (DAS) beamformer is a common beamforming algorithm with a simple implementation. However, it results in a poor resolution and high sidelobes. To address these challenges, a new algorithm, namely delay-multiply-and-sum (DMAS), was introduced, having lower sidelobes compared to DAS. To improve the resolution of DMAS, a beamformer is introduced using minimum variance (MV) adaptive beamforming combined with DMAS, so-called minimum variance-based DMAS (MVB-DMAS). It is shown that expanding the DMAS equation results in multiple terms representing a DAS algebra. It is proposed to use the MV adaptive beamformer instead of the existing DAS. MVB-DMAS is evaluated numerically and experimentally. In particular, at the depth of 45 mm MVB-DMAS results in about 31, 18, and 8 dB sidelobes reduction compared to DAS, MV, and DMAS, respectively. The quantitative results of the simulations show that MVB-DMAS leads to improvement in full-width-half-maximum of about 96%, 94%, and 45% and in signal-to-noise ratio of about 89%, 15%, and 35% compared to DAS, DMAS, and MV, respectively. In particular, at the depth of 33 mm of the experimental images, MVB-DMAS results in about 20 dB sidelobes reduction in comparison with other beamformers.

  17. A MAD Explanation for the Correlation between Bulk Lorentz Factor and Minimum Variability Timescale

    Science.gov (United States)

    Lloyd-Ronning, Nicole; Lei, Wei-hua; Xie, Wei

    2018-04-01

    We offer an explanation for the anti-correlation between the minimum variability timescale (MTS) in the prompt emission light curve of gamma-ray bursts (GRBs) and the estimated bulk Lorentz factor of these GRBs, in the context of a magnetically arrested disk (MAD) model. In particular, we show that previously derived limits on the maximum available energy per baryon in a Blandford-Znajek jet lead to a relationship between the characteristic MAD timescale in GRBs and the maximum bulk Lorentz factor: t_MAD ∝ Γ⁻⁶, somewhat steeper than (although within the error bars of) the fitted relationship found in the GRB data. Similarly, the MAD model also naturally accounts for the observed anti-correlation between MTS and gamma-ray luminosity L in the GRB data, and we estimate the accretion rates of the GRB disk (given these luminosities) in the context of this model. Both of these correlations (MTS-Γ and MTS-L) are also observed in the AGN data, and we discuss the implications of our results in the context of both GRB and blazar systems.

  18. Nonlinear unbiased minimum-variance filter for Mars entry autonomous navigation under large uncertainties and unknown measurement bias.

    Science.gov (United States)

    Xiao, Mengli; Zhang, Yongbo; Fu, Huimin; Wang, Zhihua

    2018-05-01

    A high-precision navigation algorithm is essential for the future Mars pinpoint landing mission. The unknown inputs caused by large uncertainties of atmospheric density and aerodynamic coefficients, as well as unknown measurement biases, may cause large estimation errors in conventional Kalman filters. This paper proposes a derivative-free version of the nonlinear unbiased minimum variance filter for Mars entry navigation. The filter solves this problem by estimating the state and the unknown measurement biases simultaneously, in a derivative-free manner, leading to a high-precision algorithm for Mars entry navigation. IMU/radio-beacon integrated navigation is introduced in the simulation, and the results show that, with or without radio blackout, the proposed filter achieves an accurate state estimate, much better than the conventional unscented Kalman filter, demonstrating its suitability as a high-precision Mars entry navigation algorithm.

  19. Multi-period fuzzy mean-semi variance portfolio selection problem with transaction cost and minimum transaction lots using genetic algorithm

    Directory of Open Access Journals (Sweden)

    Mohammad Ali Barati

    2016-04-01

    Multi-period models of portfolio selection have been developed in the literature under various assumptions. In this study, for the first time, the portfolio selection problem is modeled based on mean-semivariance with transaction costs and minimum transaction lots, considering functional constraints and fuzzy parameters. Functional constraints such as transaction costs and minimum transaction lots were included. In addition, the return-on-asset parameters were considered as trapezoidal fuzzy numbers. An efficient genetic algorithm (GA) was designed, results were analyzed using numerical instances, and sensitivity analyses were executed. In the numerical study, the problem was solved with and without each type of constraint, including transaction costs and minimum transaction lots. In addition, using sensitivity analysis, the results of the model were presented under variations of the minimum expected rate of return over the planning periods.

  20. The influence of SO4 and NO3 to the acidity (pH) of rainwater using minimum variance quadratic unbiased estimation (MIVQUE) and maximum likelihood methods

    Science.gov (United States)

    Dilla, Shintia Ulfa; Andriyana, Yudhie; Sudartianto

    2017-03-01

    Acid rain causes many harmful effects. It is formed by two strong acids, sulfuric acid (H2SO4) and nitric acid (HNO3), where sulfuric acid is derived from SO2 and nitric acid from NOx (x = 1, 2). The purpose of the research is to find out the influence of the SO4 and NO3 levels contained in the rain on the acidity (pH) of rainwater. The data are incomplete panel data with a two-way error components model. Panel data are a collection of observations on individuals made over time; the panel is said to be incomplete if individuals have different numbers of observations. The model used in this research is a random effects model (REM). Minimum variance quadratic unbiased estimation (MIVQUE) is used to estimate the variance of the error components, while maximum likelihood estimation is used to estimate the parameters. As a result, we obtain the following model: Ŷ* = 0.41276446 - 0.00107302 X1 + 0.00215470 X2.

  1. STUDY LINKS SOLVING THE MAXIMUM TASK OF LINEAR CONVOLUTION «EXPECTED RETURNS-VARIANCE» AND THE MINIMUM VARIANCE WITH RESTRICTIONS ON RETURNS

    Directory of Open Access Journals (Sweden)

    Maria S. Prokhorova

    2014-01-01

    The article deals with the problem of finding the optimal portfolio of securities using convolutions of the expected portfolio return and the portfolio variance. The value of the coefficient of risk at which the problem of minimizing the variance under a return constraint is equivalent to maximizing a linear convolution of the criteria «expected returns-variance» is obtained. An automated method for finding the optimal portfolio is proposed, on the basis of which the results of the study are demonstrated.

  2. Acetabular Reconstruction with the Burch-Schneider Antiprotrusio Cage and Bulk Allografts: Minimum 10-Year Follow-Up Results

    Directory of Open Access Journals (Sweden)

    Dario Regis

    2014-01-01

    Reconstruction of severe pelvic bone loss is a challenging problem in hip revision surgery. Between January 1992 and December 2000, 97 hips with periprosthetic osteolysis underwent acetabular revision using bulk allografts and the Burch-Schneider antiprotrusio cage (APC). Twenty-nine patients (32 implants) died of unrelated causes without additional surgery. Sixty-five hips were available for clinical and radiographic assessment at an average follow-up of 14.6 years (range, 10.0 to 18.9 years). There were 16 male and 49 female patients, aged from 29 to 83 years (median, 60 years), with Paprosky IIIA (27 cases) and IIIB (38 cases) acetabular bone defects. Nine cages required re-revision because of infection (3), aseptic loosening (5), and flange breakage (1). The average Harris hip score improved from 33.1 points preoperatively to 75.6 points at follow-up (P<0.001). Radiographically, graft incorporation and cage stability were detected in 48 and 52 hips, respectively. The cumulative survival rates at 18.9 years, with removal for any reason or X-ray migration of the cage and aseptic or radiographic loosening as the end points, were 80.0% and 84.6%, respectively. The use of the Burch-Schneider APC and massive allografts is an effective technique for the reconstructive treatment of extensive acetabular bone loss with long-lasting survival.

  3. Controlled levitation of Y-Ba-Cu-O bulk superconductors and energy minimum analysis; Y-Ba-Cu-O baruku chodendotai no fujo to enerugi kaiseki

    Energy Technology Data Exchange (ETDEWEB)

    Nagashima, K. [Railway Technical Research Institute, Tokyo (Japan)]; Iwasa, Y. [Francis Bitter Magnet Laboratory, Cambridge (United States)]; Sawa, K. [Keio University, Tokyo (Japan)]; Murakami, M. [Superconductivity Research Laboratory, Tokyo (Japan)]

    1999-11-25

    The levitation of bulk Y-Ba-Cu-O superconductors can be controlled using a Bi-Sr-Ca-Cu-O (Bi2223) superconducting electromagnet. It was found that stable levitation without tilting could be obtained only when the sample trapped a certain amount of field, the minimum of which depended on the external field and sample dimensions. We employed a novel analysis method for levitation based on the total energy balance, which is much simpler than the force method and can be applied to understanding general levitation behavior. The numerical analyses thus developed showed that stable levitation of superconductors with large dimensions can only be achieved when the induced currents can flow with three-dimensional freedom. (author)

  4. The solar and interplanetary causes of the recent minimum in geomagnetic activity (MGA23): a combination of midlatitude small coronal holes, low IMF BZ variances, low solar wind speeds and low solar magnetic fields

    Directory of Open Access Journals (Sweden)

    B. T. Tsurutani

    2011-05-01

    Minima in geomagnetic activity (MGA) at Earth at the ends of SC23 and SC22 have been identified. The two MGAs (called MGA23 and MGA22, respectively) were present in 2009 and 1997, delayed from the sunspot number minima in 2008 and 1996 by ~1/2-1 years. Part of the solar and interplanetary causes of the MGAs were exceptionally low solar (and thus low interplanetary) magnetic fields. Another important factor in MGA23 was the disappearance of equatorial and low latitude coronal holes and the appearance of midlatitude coronal holes. The location of the holes relative to the ecliptic plane led to low solar wind speeds and low IMF Bz variances (σBz²) and normalized variances (σBz²/B0²) at Earth, with concomitant reduced solar wind-magnetospheric energy coupling. One result was the lowest ap indices in the history of ap recording. The results presented here are used to comment on the possible solar and interplanetary causes of the low geomagnetic activity that occurred during the Maunder Minimum.

  5. MINIMUM VARIANCE BETA ESTIMATION WITH DYNAMIC CONSTRAINTS

    Science.gov (United States)

    developed (at AFETR) and is being used to isolate the primary error sources in the beta estimation task. This computer program is additionally used to...determine what success in beta estimation can be achieved with foreseeable instrumentation accuracies. Results are included that illustrate the effects on

  6. The measurement of moisture content and dry bulk-density of the top layer of agricultural soils, with minimum calibration, using a gamma-ray attenuation method

    International Nuclear Information System (INIS)

    Van der Westhuizen, M.; Van der Bank, D.J.; Meulke, M.

    1978-06-01

    Various methods of measuring moisture content and dry bulk-density of soil by means of gamma-ray attenuation are discussed. A new method is described in which the same parameters can be measured in consecutive determinations, but for which only one sample of unknown volume is needed for calibration. This method employs a radioactive source in a lead container in an aluminium tube in the soil. From the container the gamma rays follow a path at an angle upwards through the soil towards the detector. The method was tested in a number of experiments and the results are given in tables and graphs. The conclusion is that this method, which is fairly easy and quick to use, is accurate enough for most applications.

  7. Downside Variance Risk Premium

    OpenAIRE

    Feunou, Bruno; Jahan-Parvar, Mohammad; Okou, Cedric

    2015-01-01

    We propose a new decomposition of the variance risk premium in terms of upside and downside variance risk premia. The difference between upside and downside variance risk premia is a measure of skewness risk premium. We establish that the downside variance risk premium is the main component of the variance risk premium, and that the skewness risk premium is a priced factor with significant prediction power for aggregate excess returns. Our empirical investigation highlights the positive and s...

  8. Portfolio optimization with mean-variance model

    Science.gov (United States)

    Hoe, Lam Weng; Siew, Lam Weng

    2016-06-01

    Investors wish to achieve the target rate of return at the minimum level of risk in their investment. Portfolio optimization is an investment strategy that can be used to minimize the portfolio risk while achieving the target rate of return. The mean-variance model has been proposed for portfolio optimization; it is an optimization model that aims to minimize the portfolio risk, measured by the portfolio variance. The objective of this study is to construct the optimal portfolio using the mean-variance model. The data of this study consist of weekly returns of 20 component stocks of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI). The results of this study show that the optimal composition differs across the stocks. Moreover, investors can obtain the target return at the minimum level of risk with the constructed optimal mean-variance portfolio.
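
    A minimal sketch of the mean-variance model with scipy: minimize the portfolio variance subject to full investment and a target expected return. The simulated weekly returns and the long-only bounds are assumptions standing in for the FBMKLCI data.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(9)
      n_assets, target = 10, 0.002
      returns = rng.normal(0.002, 0.02, size=(260, n_assets))  # simulated weekly returns
      mu = returns.mean(axis=0)
      sigma = np.cov(returns, rowvar=False)

      res = minimize(
          fun=lambda w: w @ sigma @ w,                 # portfolio variance (risk)
          x0=np.ones(n_assets) / n_assets,
          method="SLSQP",
          bounds=[(0.0, 1.0)] * n_assets,              # long-only (an assumption)
          constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0},
                       {"type": "ineq", "fun": lambda w: w @ mu - target}],
      )
      print("weights:", np.round(res.x, 3))
      print("variance:", res.fun, " expected return:", res.x @ mu)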

  9. MCNP variance reduction overview

    International Nuclear Information System (INIS)

    Hendricks, J.S.; Booth, T.E.

    1985-01-01

    The MCNP code is rich in variance reduction features. Standard variance reduction methods found in most Monte Carlo codes are available as well as a number of methods unique to MCNP. We discuss the variance reduction features presently in MCNP as well as new ones under study for possible inclusion in future versions of the code

  10. Estimation of measurement variances

    International Nuclear Information System (INIS)

    Anon.

    1981-01-01

    In the previous two sessions, it was assumed that the measurement error variances were known quantities when the variances of the safeguards indices were calculated. These known quantities are actually estimates based on historical data and on data generated by the measurement program. Session 34 discusses how measurement error parameters are estimated for different situations. The various error types are considered. The purpose of the session is to enable participants to: (1) estimate systematic error variances from standards data; (2) estimate random error variances from data such as replicate measurement data; (3) perform a simple analysis of variance to characterize the measurement error structure when biases vary over time
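
    Two of the session's tasks have one-line estimators. A sketch with simulated data: the random error variance from duplicate measurements (each within-pair difference has variance 2σ²), and a systematic bias from repeated measurements of a known standard; all numbers are hypothetical.

      import numpy as np

      rng = np.random.default_rng(10)

      # (2) Random error variance from duplicate measurements of 50 items:
      # the within-pair difference d has Var(d) = 2 sigma^2, so sigma^2 ~ mean(d^2)/2.
      item = rng.uniform(90.0, 110.0, size=50)
      m1 = item + rng.normal(0.0, 1.5, size=50)
      m2 = item + rng.normal(0.0, 1.5, size=50)
      var_random = ((m1 - m2) ** 2).mean() / 2.0
      print("random error variance estimate:", round(var_random, 3), " (true 2.25)")

      # (1) Systematic error from repeated measurements of a known standard:
      # the mean deviation from the reference value estimates the bias.
      standard = 100.0
      meas = standard + 0.8 + rng.normal(0.0, 1.5, size=30)   # 0.8 is the hidden bias
      bias = meas.mean() - standard
      print("bias estimate:", round(bias, 3), "+/-",
            round(meas.std(ddof=1) / np.sqrt(len(meas)), 3))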

  11. Estimation of measurement variances

    International Nuclear Information System (INIS)

    Jaech, J.L.

    1984-01-01

    The estimation of measurement error parameters in safeguards systems is discussed. Both systematic and random errors are considered. A simple analysis of variances to characterize the measurement error structure with biases varying over time is presented

  12. A COSMIC VARIANCE COOKBOOK

    International Nuclear Information System (INIS)

    Moster, Benjamin P.; Rix, Hans-Walter; Somerville, Rachel S.; Newman, Jeffrey A.

    2011-01-01

    Deep pencil beam surveys (<1 deg²) are of fundamental importance for studying the high-redshift universe. However, inferences about galaxy population properties (e.g., the abundance of objects) are in practice limited by 'cosmic variance'. This is the uncertainty in observational estimates of the number density of galaxies arising from the underlying large-scale density fluctuations. This source of uncertainty can be significant, especially for surveys which cover only small areas and for massive high-redshift galaxies. Cosmic variance for a given galaxy population can be determined using predictions from cold dark matter theory and the galaxy bias. In this paper, we provide tools for experiment design and interpretation. For a given survey geometry, we present the cosmic variance of dark matter as a function of mean redshift z̄ and redshift bin size Δz. Using a halo occupation model to predict galaxy clustering, we derive the galaxy bias as a function of mean redshift for galaxy samples of a given stellar mass range. In the linear regime, the cosmic variance of these galaxy samples is the product of the galaxy bias and the dark matter cosmic variance. We present a simple recipe using a fitting function to compute cosmic variance as a function of the angular dimensions of the field, z̄, Δz, and stellar mass m*. We also provide tabulated values and a software tool. The accuracy of the resulting cosmic variance estimates (δσ_v/σ_v) is shown to be better than 20%. We find that for GOODS at z̄ = 2 and with Δz = 0.5, the relative cosmic variance of galaxies with m* > 10¹¹ M_sun is ∼38%, while it is ∼27% for GEMS and ∼12% for COSMOS. For galaxies of m* ∼ 10¹⁰ M_sun, the relative cosmic variance is ∼19% for GOODS, ∼13% for GEMS, and ∼6% for COSMOS. This implies that cosmic variance is a significant source of uncertainty at z̄ = 2 for small fields and massive galaxies, while for larger fields and intermediate mass galaxies, cosmic

  13. Diagnosis of Bearing System using Minimum Variance Cepstrum

    International Nuclear Information System (INIS)

    Lee, Jeong Han; Choi, Young Chul; Park, Jin Ho; Lee, Won Hyung; Kim, Chan Joong

    2005-01-01

    Various bearings are commonly used in rotating machines. The noise and vibration signals that can be obtained from the machines often convey information about faults and their locations. Condition monitoring for bearings has received considerable attention for many years, because the majority of problems in rotating machines are caused by faulty bearings. Thus failure alarms for the bearing system are often based on the detection of the onset of localized faults. Many methods are available for detecting faults in the bearing system. The majority of these methods assume that faults in bearings produce impulses, so impulse events can be attributed to bearing faults in the system. McFadden and Smith used a bandpass filter to filter the noise signal and then obtained the envelope by using an envelope detector. D. Ho and R. B. Randall also tried the envelope spectrum to detect faults in the bearing system, but it is very difficult to find the resonant frequency in noisy environments. S.-K. Lee and P. R. White used improved ANC (adaptive noise cancellation) to find faults. The basic idea of this technique is to remove the noise from the measured vibration signal, but they were not able to show the theoretical foundation of the proposed algorithms. Y.-H. Kim et al. used a moving window. This algorithm is quite powerful for the early detection of faults in a ball bearing system, but it is difficult to decide the initial time and step size of the moving window. The early fault signal that is caused by microscopic cracks is commonly embedded in noise. Therefore, the success of detecting the fault signal is completely determined by a method's ability to distinguish signal from noise. In 1969, Capon introduced maximum likelihood (ML) spectra, which estimate a mixed spectrum consisting of a line spectrum, corresponding to a deterministic random process, plus an arbitrary unknown continuous spectrum. The unique feature of these spectra is that they can detect sinusoidal signals in noise. Our idea essentially comes from this method. In this paper, a technique that can detect impulses embedded in noise is introduced. The theory of this technique is derived and the improved ability to detect faults in a ball bearing system is demonstrated theoretically as well as experimentally.
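
    As a rough illustration of the Capon idea the paper builds on, here is a minimal minimum variance (ML) spectral estimator; the model order, regularization term, and test signal are hypothetical choices, not taken from the paper:

```python
import numpy as np
from scipy.linalg import toeplitz

def minimum_variance_spectrum(x, order=20, n_freqs=256):
    """Capon (minimum variance) spectral estimate of a 1-D numpy signal."""
    x = x - x.mean()
    # Biased autocorrelation estimates up to the chosen model order.
    r = np.array([np.dot(x[:len(x) - k], x[k:]) / len(x) for k in range(order)])
    # Small diagonal loading keeps the Toeplitz matrix invertible.
    R_inv = np.linalg.inv(toeplitz(r) + 1e-9 * np.eye(order))
    freqs = np.linspace(0.0, 0.5, n_freqs)
    p = np.empty(n_freqs)
    for i, f in enumerate(freqs):
        e = np.exp(2j * np.pi * f * np.arange(order))   # steering vector
        p[i] = order / np.real(e.conj() @ R_inv @ e)    # ~ 1 / (e^H R^-1 e)
    return freqs, p

# Usage: a sinusoid buried in noise shows up as a sharp peak near f = 0.1.
rng = np.random.default_rng(0)
sig = np.sin(2 * np.pi * 0.1 * np.arange(2048)) + rng.normal(0, 2.0, 2048)
f, p = minimum_variance_spectrum(sig)
```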

  14. Minimum Variance Beamforming for High Frame-Rate Ultrasound Imaging

    DEFF Research Database (Denmark)

    Holfort, Iben Kraglund; Gran, Fredrik; Jensen, Jørgen Arendt

    2007-01-01

    A minimum variance (MV) beamformer is presented, which provides a set of adapted apodization weights for each frequency sub-band. As opposed to the conventional Delay and Sum (DS) beamformer, this approach is dependent on the specific data. The performance of the proposed MV beamformer is tested on simulated synthetic aperture (SA) ultrasound data, obtained using Field II. For the simulations, a 7 MHz, 128-element, phased array transducer with lambda/2-spacing was used. Data is obtained using a single element as the transmitting aperture and all 128 elements as the receiving aperture. A full SA sequence consisting of 128 emissions was simulated by gliding the active transmitting element across the array. Data for 13 point targets and a circular cyst with a radius of 5 mm were simulated. The performance of the MV beamformer is compared to DS using boxcar weights and Hanning weights, and is quantified by the Full Width at Half Maximum (FWHM) and the peak side-lobe level (PSL).
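
    The per-sub-band weight computation referred to above is, in the standard minimum variance formulation (a sketch of the general technique, not the authors' exact implementation, with R the sample covariance of the aligned channel data and a the steering vector for the focus point):

```latex
% Minimum variance (Capon) apodization: minimize output power subject to
% unit gain on the steering vector a of the focus point.
\min_{\mathbf{w}} \ \mathbf{w}^{H}\mathbf{R}\,\mathbf{w}
\quad \text{s.t.} \quad \mathbf{w}^{H}\mathbf{a} = 1,
\qquad
\mathbf{w}_{\mathrm{MV}} \;=\; \frac{\mathbf{R}^{-1}\mathbf{a}}{\mathbf{a}^{H}\mathbf{R}^{-1}\mathbf{a}}
```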

  15. minimum variance estimation of yield parameters of rubber tree

    African Journals Online (AJOL)

    2013-03-01

    It is our opinion that the Kalman filter is a robust estimator of the yield parameters of the rubber tree. Keywords: Kalman filter, parameter estimation, rubber clones, Chow failure test, autocorrelation, STAMP.
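
    For illustration, a minimal scalar Kalman filter of the kind the abstract refers to, tracking a slowly varying parameter as a random walk; the noise levels and data below are hypothetical:

```python
import numpy as np

def kalman_1d(z, q=0.01, r=1.0, x0=0.0, p0=10.0):
    """Scalar Kalman filter: random-walk state observed with noise."""
    x, p, out = x0, p0, []
    for zk in z:
        p = p + q                      # predict: state variance grows by q
        k = p / (p + r)                # Kalman gain
        x = x + k * (zk - x)           # update with measurement zk
        p = (1.0 - k) * p              # posterior variance
        out.append(x)
    return np.array(out)

# Usage: noisy annual measurements of a yield parameter near 5.0.
estimates = kalman_1d(np.random.default_rng(1).normal(5.0, 1.0, 30))
```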

  16. Restricted Variance Interaction Effects

    DEFF Research Database (Denmark)

    Cortina, Jose M.; Köhler, Tine; Keeler, Kathleen R.

    2018-01-01

    Although interaction hypotheses are increasingly common in our field, many recent articles point out that authors often have difficulty justifying them. The purpose of this article is to describe a particular type of interaction: the restricted variance (RV) interaction. The essence of the RV interaction is that variance in one variable is restricted at certain levels of another variable, which in turn constrains the relationships that can be observed at those levels.

  17. Local variances in biomonitoring

    International Nuclear Information System (INIS)

    Wolterbeek, H.Th; Verburg, T.G.

    2001-01-01

    The present study was undertaken to explore possibilities to judge survey quality on the basis of a limited and restricted number of a priori observations. Here, quality is defined as the ratio between survey and local variance (signal-to-noise ratio). The results indicate that the presented surveys do not permit such judgement; the discussion also suggests that the 5-fold local sampling strategies do not merit any sound judgement. As it stands, uncertainties in local determinations may largely obscure possibilities to judge survey quality. The results further imply that surveys will benefit from procedures, controls and approaches in sampling and sample handling, to assess the average, the variance and the nature of the distribution of elemental concentrations in local sites. This reasoning is compatible with the idea of the site as a basic homogeneous survey unit, which is implicitly and conceptually underlying any survey performed. (author)

  18. Local variances in biomonitoring

    International Nuclear Information System (INIS)

    Wolterbeek, H.T.

    1999-01-01

    The present study deals with the (larger-scale) biomonitoring survey and specifically focuses on the sampling site. In most surveys, the sampling site is simply selected or defined as a spot of (geographical) dimensions which is small relative to the dimensions of the total survey area. Implicitly it is assumed that the sampling site is essentially homogeneous with respect to the investigated variation in survey parameters. As such, the sampling site is mostly regarded as 'the basic unit' of the survey. As a logical consequence, the local (sampling site) variance should also be seen as a basic and important characteristic of the survey. During the study, work is carried out to gain more knowledge of the local variance. Multiple sampling is carried out at a specific site (tree bark, mosses, soils), multi-elemental analyses are carried out by NAA, and local variances are investigated by conventional statistics, factor analytical techniques, and bootstrapping. Consequences of the outcomes are discussed in the context of sampling, sample handling and survey quality. (author)

  19. Spectral Ambiguity of Allan Variance

    Science.gov (United States)

    Greenhall, C. A.

    1996-01-01

    We study the extent to which knowledge of Allan variance and other finite-difference variances determines the spectrum of a random process. The variance of first differences is known to determine the spectrum. We show that, in general, the Allan variance does not. A complete description of the ambiguity is given.
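
    For reference, the two-sample (Allan) variance of fractional-frequency averages $\bar{y}_k$ over averaging time $\tau$ discussed here is

```latex
% Allan (two-sample) variance; the variance of first differences determines
% the spectrum, whereas the Allan variance in general does not.
\sigma_y^2(\tau) \;=\; \tfrac{1}{2}\left\langle \left(\bar{y}_{k+1} - \bar{y}_k\right)^2 \right\rangle
```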

  20. Introduction to variance estimation

    CERN Document Server

    Wolter, Kirk M

    2007-01-01

    We live in the information age. Statistical surveys are used every day to determine or evaluate public policy and to make important business decisions. Correct methods for computing the precision of the survey data and for making inferences to the target population are absolutely essential to sound decision making. Now in its second edition, Introduction to Variance Estimation has for more than twenty years provided the definitive account of the theory and methods for correct precision calculations and inference, including examples of modern, complex surveys in which the methods have been used successfully. The book provides instruction on the methods that are vital to data-driven decision making in business, government, and academe. It will appeal to survey statisticians and other scientists engaged in the planning and conduct of survey research, and to those analyzing survey data and charged with extracting compelling information from such data. It will appeal to graduate students and university faculty who teach or study the theory and methods of survey sampling.

  1. Approximation errors during variance propagation

    International Nuclear Information System (INIS)

    Dinsmore, Stephen

    1986-01-01

    Risk and reliability analyses are often performed by constructing and quantifying large fault trees. The inputs to these models are component failure events whose probabilities of occurrence are best represented as random variables. This paper examines the errors inherent in two approximation techniques used to calculate the top event's variance from the input variances. Two sample fault trees are evaluated and several three-dimensional plots illustrating the magnitude of the error over a wide range of input means and variances are given.
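
    The propagation being approximated is the usual first-order (delta-method) expansion; for a top-event function $g$ of independent inputs $X_1,\dots,X_n$ with means $\boldsymbol{\mu}$:

```latex
% First-order variance propagation: top-event variance from input variances.
\operatorname{Var}\!\left[g(\mathbf{X})\right] \;\approx\; \sum_{i=1}^{n}
\left(\frac{\partial g}{\partial x_i}\Big|_{\boldsymbol{\mu}}\right)^{\!2} \operatorname{Var}\!\left[X_i\right]
```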

  2. Computing the Expected Value and Variance of Geometric Measures

    DEFF Research Database (Denmark)

    Staals, Frank; Tsirogiannis, Constantinos

    2017-01-01

    Algorithms are presented for computing the expected value and variance of geometric measures over a point set, including the mean pairwise distance (MPD), the squared Euclidean distance from the centroid, and the diameter of the minimum enclosing disk. We also describe an efficient (1-ε)-approximation algorithm for computing the mean and variance of the mean pairwise distance. We implemented three of our algorithms and show that they perform well in practice.

  3. Hedging with stock index futures: downside risk versus the variance

    NARCIS (Netherlands)

    Brouwer, F.; Nat, van der M.

    1995-01-01

    In this paper we investigate hedging a stock portfolio with stock index futures. Instead of defining the hedge ratio as the minimum variance hedge ratio, we consider several measures of downside risk: the semivariance according to Markowitz [1959] and the various lower partial moments.
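
    The minimum variance hedge ratio that the paper takes as its point of departure is straightforward to compute; a sketch with simulated returns (all figures hypothetical):

```python
import numpy as np

# Minimum variance hedge ratio: h* = Cov(spot, futures) / Var(futures).
rng = np.random.default_rng(2)
futures = rng.normal(0.0, 0.02, 500)                  # futures returns
spot = 0.9 * futures + rng.normal(0.0, 0.01, 500)     # correlated spot returns

h_star = np.cov(spot, futures)[0, 1] / np.var(futures, ddof=1)
```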

  4. Means and Variances without Calculus

    Science.gov (United States)

    Kinney, John J.

    2005-01-01

    This article gives a method of finding discrete approximations to continuous probability density functions and shows examples of its use, allowing students without calculus access to the calculation of means and variances.

  5. Revision: Variance Inflation in Regression

    Directory of Open Access Journals (Sweden)

    D. R. Jensen

    2013-01-01

    ... the intercept; and (iv) variance deflation may occur, where ill-conditioned data yield smaller variances than their orthogonal surrogates. Conventional VIFs have all regressors linked, or none, often untenable in practice. Beyond these, our models enable the unlinking of regressors that can be unlinked, while preserving dependence among those intrinsically linked. Moreover, known collinearity indices are extended to encompass angles between subspaces of regressors. To reassess ill-conditioned data, we consider case studies ranging from elementary examples to data from the literature.
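
    For context, the conventional variance inflation factors that the paper generalizes can be computed directly; a minimal sketch assuming a plain numeric design matrix with no constant columns:

```python
import numpy as np

def vif(X):
    """VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing
    column j on the remaining columns (intercept included)."""
    n, p = X.shape
    out = np.empty(p)
    for j in range(p):
        y = X[:, j]
        Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
        out[j] = 1.0 / (1.0 - r2)
    return out

# Usage: collinear columns produce large VIFs.
rng = np.random.default_rng(3)
X = rng.normal(size=(200, 3))
X[:, 2] = X[:, 0] + 0.05 * rng.normal(size=200)   # near-duplicate regressor
print(vif(X))
```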

  6. Modelling volatility by variance decomposition

    DEFF Research Database (Denmark)

    Amado, Cristina; Teräsvirta, Timo

    In this paper, we propose two parametric alternatives to the standard GARCH model. They allow the variance of the model to have a smooth time-varying structure of either additive or multiplicative type. The suggested parameterisations describe both nonlinearity and structural change in the conditional variance.

  7. Gini estimation under infinite variance

    NARCIS (Netherlands)

    A. Fontanari (Andrea); N.N. Taleb (Nassim Nicholas); P. Cirillo (Pasquale)

    2018-01-01

    We study the problems related to the estimation of the Gini index in the presence of a fat-tailed data generating process, i.e. one in the stable distribution class with finite mean but infinite variance (i.e. with tail index α∈(1,2)). We show that, in such a case, the Gini coefficient cannot be reliably estimated with the usual nonparametric estimator.

  8. Variance and covariance calculations for nuclear materials accounting using ''MAVARIC''

    International Nuclear Information System (INIS)

    Nasseri, K.K.

    1987-07-01

    Determination of the detection sensitivity of a materials accounting system to the loss of special nuclear material (SNM) requires (1) obtaining a relation for the variance of the materials balance by propagation of the instrument errors for the measured quantities that appear in the materials balance equation and (2) substituting measured values and their error standard deviations into this relation and calculating the variance of the materials balance. MAVARIC (Materials Accounting VARIance Calculations) is a custom spreadsheet, designed using the second release of Lotus 1-2-3, that significantly reduces the effort required to make the necessary variance (and covariance) calculations needed to determine the detection sensitivity of a materials accounting system. Predefined macros within the spreadsheet allow the user to carry out long, tedious procedures with only a few keystrokes. MAVARIC requires that the user enter the following data into one of four data tables, depending on the type of the term in the materials balance equation: the SNM concentration, the bulk mass (or solution volume), the measurement error standard deviations, and the number of measurements made during an accounting period. The user can also specify if there are correlations between transfer terms. Based on these data entries, MAVARIC can calculate the variance of the materials balance and the square root of this variance, from which the detection sensitivity of the accounting system can be determined.

  9. Variance and covariance calculations for nuclear materials accounting using 'MAVARIC'

    International Nuclear Information System (INIS)

    Nasseri, K.K.

    1987-01-01

    Determination of the detection sensitivity of a materials accounting system to the loss of special nuclear material (SNM) requires (1) obtaining a relation for the variance of the materials balance by propagation of the instrument errors for the measured quantities that appear in the materials balance equation and (2) substituting measured values and their error standard deviations into this relation and calculating the variance of the materials balance. MAVARIC (Materials Accounting VARIance Calculations) is a custom spreadsheet, designed using the second release of Lotus 1-2-3, that significantly reduces the effort required to make the necessary variance (and covariance) calculations needed to determine the detection sensitivity of a materials accounting system. Predefined macros within the spreadsheet allow the user to carry out long, tedious procedures with only a few keystrokes. MAVARIC requires that the user enter the following data into one of four data tables, depending on the type of the term in the materials balance equation: the SNM concentration, the bulk mass (or solution volume), the measurement error standard deviations, and the number of measurements made during an accounting period. The user can also specify if there are correlations between transfer terms. Based on these data entries, MAVARIC can calculate the variance of the materials balance and the square root of this variance, from which the detection sensitivity of the accounting system can be determined.
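
    A minimal sketch of the propagation that MAVARIC automates, assuming four uncorrelated materials balance terms, each measured as concentration times bulk mass (all numbers hypothetical):

```python
import numpy as np

# Materials balance: MB = input + beginning inventory - output - ending
# inventory. With uncorrelated terms, the signs do not affect the variance.
conc = np.array([0.031, 0.030, 0.029, 0.031])    # SNM concentration per term
mass = np.array([1000.0, 400.0, 980.0, 410.0])   # bulk mass per term
sd_conc = 0.0005 * np.ones(4)                    # concentration error std devs
sd_mass = 2.0 * np.ones(4)                       # mass error std devs

# First-order variance of each product term c*m, then of the balance.
var_terms = (mass * sd_conc) ** 2 + (conc * sd_mass) ** 2
sigma_mb = np.sqrt(var_terms.sum())              # detection sensitivity scale
```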

  10. Variance based OFDM frame synchronization

    Directory of Open Access Journals (Sweden)

    Z. Fedra

    2012-04-01

    The paper deals with a new frame synchronization scheme for OFDM systems and calculates the complexity of this scheme. The scheme is based on computing the variance of the detection window. The variance is computed at two delayed time instants, so a modified Early-Late loop is used for the frame position detection. The proposed algorithm handles different variants of OFDM parameters, including the guard interval and cyclic prefix, and has good properties regarding the choice of the algorithm's parameters, since the parameters may be chosen within a wide range without a strong influence on system performance. The functionality of the proposed algorithm has been verified in a development environment using universal software radio peripheral (USRP) hardware.
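
    A rough sketch of the variance-plus-Early-Late idea described above; the window length, delay, and data handling are hypothetical, and the real algorithm's treatment of OFDM parameters is not reproduced:

```python
import numpy as np

def sliding_variance(rx, win=64):
    """Variance of the detection window slid over the received samples."""
    mag = np.abs(rx)
    return np.array([mag[i:i + win].var() for i in range(len(mag) - win)])

def early_late_detect(var_trace, delay=16):
    """Early-Late style comparison: the frame boundary is taken where the
    variance estimates at two delayed positions differ the most."""
    diff = np.abs(var_trace[delay:] - var_trace[:-delay])
    return int(np.argmax(diff))
```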

  11. Variance decomposition in stochastic simulators.

    Science.gov (United States)

    Le Maître, O P; Knio, O M; Moraes, A

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  12. Variance decomposition in stochastic simulators

    Science.gov (United States)

    Le Maître, O. P.; Knio, O. M.; Moraes, A.

    2015-06-01

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  13. Variance decomposition in stochastic simulators

    Energy Technology Data Exchange (ETDEWEB)

    Le Maître, O. P., E-mail: olm@limsi.fr [LIMSI-CNRS, UPR 3251, Orsay (France); Knio, O. M., E-mail: knio@duke.edu [Department of Mechanical Engineering and Materials Science, Duke University, Durham, North Carolina 27708 (United States); Moraes, A., E-mail: alvaro.moraesgutierrez@kaust.edu.sa [King Abdullah University of Science and Technology, Thuwal (Saudi Arabia)

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  14. Variance decomposition in stochastic simulators

    KAUST Repository

    Le Maître, O. P.; Knio, O. M.; Moraes, Alvaro

    2015-01-01

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
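
    The variance-based sensitivity indices that this orthogonal decomposition yields can be estimated by standard pick-freeze sampling; a generic sketch for a toy deterministic model with independent uniform inputs (not the chemical-kinetics setting of the paper):

```python
import numpy as np

def first_order_sobol(model, dim, n=20000, rng=np.random.default_rng(4)):
    """First-order Sobol indices via a pick-freeze (Saltelli-type) estimator.
    `model` maps an (n, dim) array of inputs to n scalar outputs."""
    a, b = rng.random((n, dim)), rng.random((n, dim))
    ya, yb = model(a), model(b)
    var_y = ya.var()
    s = np.empty(dim)
    for i in range(dim):
        b_ai = b.copy()
        b_ai[:, i] = a[:, i]          # freeze input i, resample the rest
        s[i] = np.mean(ya * (model(b_ai) - yb)) / var_y
    return s

# Usage: the interaction term x1*x2 shares its variance between S_1 and S_2.
s1 = first_order_sobol(lambda x: x[:, 0] + 2.0 * x[:, 1] * x[:, 2], dim=3)
```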

  15. On Mean-Variance Analysis

    OpenAIRE

    Li, Yang; Pirvu, Traian A

    2011-01-01

    This paper considers the mean-variance portfolio management problem. We examine portfolios which contain both primary and derivative securities. The challenge in this context is due to the portfolio's nonlinearities. The delta-gamma approximation is employed to overcome it. Thus, the optimization problem is reduced to a well-posed quadratic program. The methodology developed in this paper can also be applied to pricing and hedging in incomplete markets.
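
    The delta-gamma approximation mentioned above replaces the change in an option value by a second-order Taylor expansion in the underlying price $S$:

```latex
% Delta-gamma approximation of the option value change.
\Delta V \;\approx\; \delta\,\Delta S + \tfrac{1}{2}\gamma\,(\Delta S)^2,
\qquad
\delta = \frac{\partial V}{\partial S}, \quad
\gamma = \frac{\partial^2 V}{\partial S^2}
```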

  16. Bulk oil clauses

    International Nuclear Information System (INIS)

    Gough, N.

    1993-01-01

    The Institute Bulk Oil Clauses produced by the London market and the American SP-13c Clauses are examined in detail in this article. The duration and perils covered are discussed, and exclusions, adjustment clause 15 of the Institute Bulk Oil Clauses, Institute War Clauses (Cargo), and Institute Strikes Clauses (Bulk Oil) are outlined. (UK)

  17. Confidence Interval Approximation For Treatment Variance In ...

    African Journals Online (AJOL)

    In a random effects model with a single factor, variation is partitioned into two components: residual error variance and treatment variance. While a confidence interval can be imposed on the residual error variance, it is not possible to construct an exact confidence interval for the treatment variance. This is because the estimator of the treatment variance is a linear combination of mean squares whose exact distribution is not available in closed form.

  18. Hybrid biasing approaches for global variance reduction

    International Nuclear Information System (INIS)

    Wu, Zeyun; Abdel-Khalik, Hany S.

    2013-01-01

    A new variant of the Monte Carlo-deterministic hybrid variance reduction approach, based on Gaussian process theory, is presented for accelerating convergence of Monte Carlo simulation, and is compared with the Forward-Weighted Consistent Adjoint Driven Importance Sampling (FW-CADIS) approach implemented in the SCALE package from Oak Ridge National Laboratory. The new approach, denoted the Gaussian process approach, treats the responses of interest as normally distributed random processes. The Gaussian process approach improves the selection of the weight windows of simulated particles by identifying a subspace that captures the dominant sources of statistical response variations. Like the FW-CADIS approach, the Gaussian process approach utilizes particle importance maps obtained from deterministic adjoint models to derive weight window biasing. In contrast to the FW-CADIS approach, the Gaussian process approach identifies the response correlations (via a covariance matrix) and employs them to reduce the computational overhead required for global variance reduction (GVR) purposes. The effective rank of the covariance matrix identifies the minimum number of uncorrelated pseudo responses, which are employed to bias simulated particles. Numerical experiments, serving as a proof of principle, are presented to compare the Gaussian process and FW-CADIS approaches in terms of the global reduction in standard deviation of the estimated responses. Highlights: a hybrid Monte Carlo-deterministic method based on a Gaussian process model is introduced; the method employs a deterministic model to calculate response correlations; these correlations are used to bias Monte Carlo transport; the method is compared to the FW-CADIS methodology in the SCALE code; an order of magnitude speedup is achieved for a PWR core model.

  19. Bulk Superconductors in Mobile Application

    Science.gov (United States)

    Werfel, F. N.; Floegel-Delor, U.; Rothfeld, R.; Riedel, T.; Wippich, D.; Goebel, B.; Schirrmeister, P.

    We investigate and review concepts of multi-seeded REBCO bulk superconductors in mobile applications. ATZ's compact HTS bulk magnets can routinely trap 1 T at 77 K. Apart from magnetization, flux creep and hysteresis, industrial-grade properties such as compactness, power density, and robustness are of major device interest when mobility and light-weight construction are in focus. For mobile applications in levitated trains or demonstrator magnets we examine the performance of on-board cryogenics, either by LN2 or cryo-cooler operation. The mechanical, electric and thermodynamic requirements of compact vacuum cryostats for Maglev train operation were studied systematically. More than 30 units have been manufactured and tested. The attractive load-to-weight ratio of more than 10 favours group module device constructions up to 5 t load on a permanent magnet (PM) track. A transportable and compact YBCO bulk magnet cooled with an in-situ 4 W Stirling cryo-cooler for 50-80 K operation is investigated. Low cooling power and an effective HTS cold mass drive the system construction towards minimum thermal loss and light-weight design.

  20. Minimum Wages and Poverty

    OpenAIRE

    Fields, Gary S.; Kanbur, Ravi

    2005-01-01

    Textbook analysis tells us that in a competitive labor market, the introduction of a minimum wage above the competitive equilibrium wage will cause unemployment. This paper makes two contributions to the basic theory of the minimum wage. First, we analyze the effects of a higher minimum wage in terms of poverty rather than in terms of unemployment. Second, we extend the standard textbook model to allow for income sharing between the employed and the unemployed. We find that there are situations in which a higher minimum wage raises poverty and others in which it reduces poverty.

  1. Enhancement of high-energy distribution tail in Monte Carlo semiconductor simulations using a Variance Reduction Scheme

    Directory of Open Access Journals (Sweden)

    Vincenza Di Stefano

    2009-11-01

    The Multicomb variance reduction technique has been introduced into the direct Monte Carlo simulation of submicrometric semiconductor devices. The method has been implemented for bulk silicon. The simulations show that the statistical variance of hot electrons is reduced at some computational cost. The method is efficient and easy to implement in existing device simulators.
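
    Multicomb details aside, population-control variance reduction of this family rests on unbiased splitting and Russian roulette; a generic sketch in which the importance function and particle representation are placeholders, not taken from the paper:

```python
import numpy as np

def adjust_population(particles, importance, rng=np.random.default_rng(5)):
    """Split or roulette (weight, state) pairs according to an importance
    function, preserving the expected total weight (unbiasedness)."""
    out = []
    for w, state in particles:
        n = importance(state)                      # desired multiplicity, > 0
        copies = int(n) + (rng.random() < n - int(n))   # randomized rounding
        for _ in range(copies):
            out.append((w / n, state))             # expected weight stays w
    return out

# Usage: boost the high-energy tail (n > 1 splits, n < 1 plays roulette).
pop = [(1.0, {"energy": e}) for e in np.linspace(0.1, 2.0, 10)]
pop = adjust_population(pop, lambda s: 0.5 + s["energy"])
```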

  2. Speed Variance and Its Influence on Accidents.

    Science.gov (United States)

    Garber, Nicholas J.; Gadirau, Ravi

    A study was conducted to investigate the traffic engineering factors that influence speed variance and to determine to what extent speed variance affects accident rates. Detailed analyses were carried out to relate speed variance with posted speed limit, design speeds, and other traffic variables. The major factor identified was the difference between the design speed and the posted speed limit.

  3. Variance function estimation for immunoassays

    International Nuclear Information System (INIS)

    Raab, G.M.; Thompson, R.; McKenzie, I.

    1980-01-01

    A computer program is described which implements a recently described, modified likelihood method of determining an appropriate weighting function to use when fitting immunoassay dose-response curves. The relationship between the variance of the response and its mean value is assumed to have an exponential form, and the best fit to this model is determined from the within-set variability of many small sets of repeated measurements. The program estimates the parameter of the exponential function with its estimated standard error, and tests the fit of the experimental data to the proposed model. Output options include a list of the actual and fitted standard deviation of the set of responses, a plot of actual and fitted standard deviation against the mean response, and an ordered list of the 10 sets of data with the largest ratios of actual to fitted standard deviation. The program has been designed for a laboratory user without computing or statistical expertise. The test-of-fit has proved valuable for identifying outlying responses, which may be excluded from further analysis by being set to negative values in the input file. (Auth.)
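
    The exponential mean-variance model can be illustrated with a simple log-log least-squares fit over small replicate sets; note this is a stand-in for the paper's modified likelihood method, and all data below are synthetic:

```python
import numpy as np

def fit_variance_function(sets):
    """Fit Var = a * mean**b from within-set means and variances,
    via the least-squares line log Var = log a + b log mean."""
    means = np.array([np.mean(s) for s in sets])
    variances = np.array([np.var(s, ddof=1) for s in sets])
    b, log_a = np.polyfit(np.log(means), np.log(variances), 1)
    return np.exp(log_a), b

# Usage: sets of triplicate responses whose spread grows with the mean.
rng = np.random.default_rng(6)
sets = [rng.normal(m, 0.1 * m, 3) for m in np.linspace(1.0, 50.0, 40)]
a_hat, b_hat = fit_variance_function(sets)
```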

  4. Gravity interpretation of dipping faults using the variance analysis method

    International Nuclear Information System (INIS)

    Essa, Khalid S

    2013-01-01

    A new algorithm is developed to estimate simultaneously the depth and the dip angle of a buried fault from the normalized gravity gradient data. This algorithm utilizes numerical first horizontal derivatives computed from the observed gravity anomaly, using filters of successive window lengths to estimate the depth and the dip angle of a buried dipping fault structure. For a fixed window length, the depth is estimated using a least-squares sense for each dip angle. The method is based on computing the variance of the depths determined from all horizontal gradient anomaly profiles using the least-squares method for each dip angle. The minimum variance is used as a criterion for determining the correct dip angle and depth of the buried structure. When the correct dip angle is used, the variance of the depths is always less than the variances computed using wrong dip angles. The technique can be applied not only to the true residuals, but also to the measured Bouguer gravity data. The method is applied to synthetic data with and without random errors and two field examples from Egypt and Scotland. In all cases examined, the estimated depths and other model parameters are found to be in good agreement with the actual values. (paper)
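
    The minimum variance selection rule described above is easy to state in code; the depth inversion itself is problem-specific and is represented here only by a placeholder callable:

```python
import numpy as np

def best_dip(depth_from_window, windows, dips):
    """For each candidate dip angle, estimate the fault depth from every
    window length, then pick the dip whose depth estimates have minimum
    variance (the criterion described in the abstract)."""
    variances = []
    for dip in dips:
        depths = np.array([depth_from_window(w, dip) for w in windows])
        variances.append(depths.var())
    return dips[int(np.argmin(variances))]

# Usage: `depth_from_window(window_length, dip)` would wrap the
# least-squares depth inversion for the gradient profiles.
```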

  5. Large area bulk superconductors

    Science.gov (United States)

    Miller, Dean J.; Field, Michael B.

    2002-01-01

    A bulk superconductor having a thickness of not less than about 100 microns is carried by a polycrystalline textured substrate having misorientation angles at the surface thereof not greater than about 15°; the bulk superconductor may have a thickness of not less than about 100 microns and a surface area of not less than about 50 cm². The textured substrate may have a thickness not less than about 10 microns and misorientation angles at the surface thereof not greater than about 15°. Also disclosed is a process of manufacturing the bulk superconductor and the polycrystalline biaxially textured substrate material.

  6. Evolution of Genetic Variance during Adaptive Radiation.

    Science.gov (United States)

    Walter, Greg M; Aguirre, J David; Blows, Mark W; Ortiz-Barrientos, Daniel

    2018-04-01

    Genetic correlations between traits can concentrate genetic variance into fewer phenotypic dimensions that can bias evolutionary trajectories along the axis of greatest genetic variance and away from optimal phenotypes, constraining the rate of evolution. If genetic correlations limit adaptation, rapid adaptive divergence between multiple contrasting environments may be difficult. However, if natural selection increases the frequency of rare alleles after colonization of new environments, an increase in genetic variance in the direction of selection can accelerate adaptive divergence. Here, we explored adaptive divergence of an Australian native wildflower by examining the alignment between divergence in phenotype mean and divergence in genetic variance among four contrasting ecotypes. We found divergence in mean multivariate phenotype along two major axes represented by different combinations of plant architecture and leaf traits. Ecotypes also showed divergence in the level of genetic variance in individual traits and the multivariate distribution of genetic variance among traits. Divergence in multivariate phenotypic mean aligned with divergence in genetic variance, with much of the divergence in phenotype among ecotypes associated with changes in trait combinations containing substantial levels of genetic variance. Overall, our results suggest that natural selection can alter the distribution of genetic variance underlying phenotypic traits, increasing the amount of genetic variance in the direction of natural selection and potentially facilitating rapid adaptive divergence during an adaptive radiation.

  7. Minimum critical mass systems

    International Nuclear Information System (INIS)

    Dam, H. van; Leege, P.F.A. de

    1987-01-01

    An analysis is presented of thermal systems with minimum critical mass, based on the use of materials with optimum neutron moderating and reflecting properties. The optimum fissile material distributions in the systems are obtained by calculations with standard computer codes, extended with a routine for flat fuel importance search. It is shown that in the minimum critical mass configuration a considerable part of the fuel is positioned in the reflector region. For 239 Pu a minimum critical mass of 87 g is found, which is the lowest value reported hitherto. (author)

  8. DFT-based channel estimation and noise variance estimation techniques for single-carrier FDMA

    OpenAIRE

    Huang, G; Nix, AR; Armour, SMD

    2010-01-01

    Practical frequency domain equalization (FDE) systems generally require knowledge of the channel and the noise variance to equalize the received signal in a frequency-selective fading channel. Accurate channel and noise variance estimates are thus desirable to improve receiver performance. In this paper we investigate the performance of the denoise channel estimator and the approximate linear minimum mean square error (A-LMMSE) channel estimator with channel power delay profile (PDP) ...

  9. Minimum entropy production principle

    Czech Academy of Sciences Publication Activity Database

    Maes, C.; Netočný, Karel

    2013-01-01

    Roč. 8, č. 7 (2013), s. 9664-9677 ISSN 1941-6016 Institutional support: RVO:68378271 Keywords : MINEP Subject RIV: BE - Theoretical Physics http://www.scholarpedia.org/article/Minimum_entropy_production_principle

  10. Influence of Family Structure on Variance Decomposition

    DEFF Research Database (Denmark)

    Edwards, Stefan McKinnon; Sarup, Pernille Merete; Sørensen, Peter

    Partitioning genetic variance by sets of randomly sampled genes for complex traits in D. melanogaster and B. taurus has revealed that population structure can affect variance decomposition. In fruit flies, we found that a high likelihood ratio is correlated with a high proportion of explained genetic variance. However, a high likelihood ratio can also be obtained by models capturing pure noise. Therefore it is necessary to use both criteria, a high likelihood ratio in favor of a more complex genetic model and the proportion of genetic variance explained, to identify biologically important gene groups.

  11. Efficient Cardinality/Mean-Variance Portfolios

    OpenAIRE

    Brito, R. Pedro; Vicente, Luís Nunes

    2014-01-01

    We propose a novel approach to handle cardinality in portfolio selection, by means of a biobjective cardinality/mean-variance problem, allowing the investor to analyze the efficient tradeoff between return-risk and number of active positions. Recent progress in multiobjective optimization without derivatives allows us to robustly compute (in-sample) the whole cardinality/mean-variance efficient frontier, for a variety of data sets and mean-variance models. Our results show the practical value of this biobjective approach.

  12. The phenotypic variance gradient - a novel concept.

    Science.gov (United States)

    Pertoldi, Cino; Bundgaard, Jørgen; Loeschcke, Volker; Barker, James Stuart Flinton

    2014-11-01

    Evolutionary ecologists commonly use reaction norms, which show the range of phenotypes produced by a set of genotypes exposed to different environments, to quantify the degree of phenotypic variance and the magnitude of plasticity of morphometric and life-history traits. Significant differences among the values of the slopes of the reaction norms are interpreted as significant differences in phenotypic plasticity, whereas significant differences among phenotypic variances (variance or coefficient of variation) are interpreted as differences in the degree of developmental instability or canalization. We highlight some potential problems with this approach to quantifying phenotypic variance and suggest a novel and more informative way to plot reaction norms: namely "a plot of log (variance) on the y-axis versus log (mean) on the x-axis, with a reference line added". This approach gives an immediate impression of how the degree of phenotypic variance varies across an environmental gradient, taking into account the consequences of the scaling effect of the variance with the mean. The evolutionary implications of the variation in the degree of phenotypic variance, which we call a "phenotypic variance gradient", are discussed together with its potential interactions with variation in the degree of phenotypic plasticity and canalization.

  13. Analysis of conditional genetic effects and variance components in developmental genetics.

    Science.gov (United States)

    Zhu, J

    1995-12-01

    A genetic model with additive-dominance effects and genotype x environment interactions is presented for quantitative traits with time-dependent measures. The genetic model for phenotypic means at time t conditional on phenotypic means measured at previous time (t-1) is defined. Statistical methods are proposed for analyzing conditional genetic effects and conditional genetic variance components. Conditional variances can be estimated by minimum norm quadratic unbiased estimation (MINQUE) method. An adjusted unbiased prediction (AUP) procedure is suggested for predicting conditional genetic effects. A worked example from cotton fruiting data is given for comparison of unconditional and conditional genetic variances and additive effects.

  14. A method for minimum risk portfolio optimization under hybrid uncertainty

    Science.gov (United States)

    Egorova, Yu E.; Yazenin, A. V.

    2018-03-01

    In this paper, we investigate a minimum risk portfolio model under hybrid uncertainty when the profitability of financial assets is described by fuzzy random variables. According to Feng, the variance of a portfolio is defined as a crisp value. To aggregate fuzzy information the weakest (drastic) t-norm is used. We construct an equivalent stochastic problem of the minimum risk portfolio model and specify the stochastic penalty method for solving it.
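
    In the classical crisp setting that this fuzzy-random model generalizes, the minimum risk (global minimum variance) portfolio has a closed form; a small sketch with a hypothetical covariance matrix:

```python
import numpy as np

def min_variance_weights(cov):
    """Global minimum variance weights: w = C^{-1} 1 / (1' C^{-1} 1)."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])
w = min_variance_weights(cov)          # weights sum to 1 by construction
```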

  15. Superductile bulk metallic glass

    International Nuclear Information System (INIS)

    Yao, K.F.; Ruan, F.; Yang, Y.Q.; Chen, N.

    2006-01-01

    Usually, monolithic bulk metallic glasses undergo inhomogeneous plastic deformation and exhibit poor ductility (<2%) at room temperature. We report a newly developed Pd-Si binary bulk metallic glass, which exhibits a uniform plastic deformation and a large plastic engineering strain of 82% and a plastic true strain of 170%, together with initial strain hardening, slight strain softening and final strain hardening characteristics. The uniform shear deformation and the ultrahigh plasticity are mainly attributed to strain hardening, which results from the nanoscale inhomogeneity due to liquid phase separation. The formed nanoscale inhomogeneity will hinder, deflect, and bifurcate the propagation of shear bands

  16. Least-squares variance component estimation

    NARCIS (Netherlands)

    Teunissen, P.J.G.; Amiri-Simkooei, A.R.

    2007-01-01

    Least-squares variance component estimation (LS-VCE) is a simple, flexible and attractive method for the estimation of unknown variance and covariance components. LS-VCE is simple because it is based on the well-known principle of LS; it is flexible because it works with a user-defined weight matrix.

  17. Expected Stock Returns and Variance Risk Premia

    DEFF Research Database (Denmark)

    Bollerslev, Tim; Zhou, Hao

    Combining the variance risk premium with the P/E ratio results in an R² for the quarterly returns of more than twenty-five percent. The results depend crucially on the use of "model-free", as opposed to standard Black-Scholes, implied variances, and realized variances constructed from high-frequency intraday data, as opposed to daily data.

  18. Nonlinear Epigenetic Variance: Review and Simulations

    Science.gov (United States)

    Kan, Kees-Jan; Ploeger, Annemie; Raijmakers, Maartje E. J.; Dolan, Conor V.; van Der Maas, Han L. J.

    2010-01-01

    We present a review of empirical evidence that suggests that a substantial portion of phenotypic variance is due to nonlinear (epigenetic) processes during ontogenesis. The role of such processes as a source of phenotypic variance in human behaviour genetic studies is not fully appreciated. In addition to our review, we present simulation studies.

  19. Variance estimation for generalized Cavalieri estimators

    OpenAIRE

    Johanna Ziegel; Eva B. Vedel Jensen; Karl-Anton Dorph-Petersen

    2011-01-01

    The precision of stereological estimators based on systematic sampling is of great practical importance. This paper presents methods of data-based variance estimation for generalized Cavalieri estimators where errors in sampling positions may occur. Variance estimators are derived under perturbed systematic sampling, systematic sampling with cumulative errors and systematic sampling with random dropouts. Copyright 2011, Oxford University Press.

  20. Validation of consistency of Mendelian sampling variance.

    Science.gov (United States)

    Tyrisevä, A-M; Fikse, W F; Mäntysaari, E A; Jakobsen, J; Aamand, G P; Dürr, J; Lidauer, M H

    2018-03-01

    Experiences from international sire evaluation indicate that the multiple-trait across-country evaluation method is sensitive to changes in genetic variance over time. Top bulls from birth year classes with inflated genetic variance will benefit, hampering reliable ranking of bulls. However, none of the methods available today enable countries to validate their national evaluation models for heterogeneity of genetic variance. We describe a new validation method to fill this gap comprising the following steps: estimating within-year genetic variances using Mendelian sampling and its prediction error variance, fitting a weighted linear regression between the estimates and the years under study, identifying possible outliers, and defining a 95% empirical confidence interval for a possible trend in the estimates. We tested the specificity and sensitivity of the proposed validation method with simulated data using a real data structure. Moderate (M) and small (S) size populations were simulated under 3 scenarios: a control with homogeneous variance and 2 scenarios with yearly increases in phenotypic variance of 2 and 10%, respectively. Results showed that the new method was able to estimate genetic variance accurately enough to detect bias in genetic variance. Under the control scenario, the trend in genetic variance was practically zero in setting M. Testing cows with an average birth year class size of more than 43,000 in setting M showed that tolerance values are needed for both the trend and the outlier tests to detect only cases with a practical effect in larger data sets. Regardless of the magnitude (yearly increases in phenotypic variance of 2 or 10%) of the generated trend, it deviated statistically significantly from zero in all data replicates for both cows and bulls in setting M. In setting S with a mean of 27 bulls in a year class, the sampling error and thus the probability of a false-positive result clearly increased. Still, the overall trend in estimated genetic variance remained detectable.

  1. Auctioning Bulk Mobile Messages

    NARCIS (Netherlands)

    S. Meij (Simon); L-F. Pau (Louis-François); H.W.G.M. van Heck (Eric)

    2003-01-01

    The search for enablers of continued growth of SMS traffic, as well as the take-off of the more diversified MMS message contents, opens up for enterprises the potential of bulk use of mobile messaging, instead of essentially one-by-one use. In parallel, such enterprises or value-added service providers need mechanisms, such as auctions, for procuring messaging capacity in bulk.

  2. Diffusion or bulk flow

    DEFF Research Database (Denmark)

    Schulz, Alexander

    2015-01-01

    A third phloem loading mode, currently a matter of discussion, is called passive symplasmic loading. Based on the limited material available, this review compares the different loading modes and suggests that diffusion is the driving force in apoplasmic loaders, while bulk flow plays an increasing role in plants having a continuous symplasmic pathway from mesophyll to sieve elements.

  3. Ferromagnetic bulk glassy alloys

    International Nuclear Information System (INIS)

    Inoue, Akihisa; Makino, Akihiro; Mizushima, Takao

    2000-01-01

    This paper deals with the review on the formation, thermal stability and magnetic properties of the Fe-based bulk glassy alloys in as-cast bulk and melt-spun ribbon forms. A large supercooled liquid region over 50 K before crystallization was obtained in Fe-(Al, Ga)-(P, C, B, Si), Fe-(Cr, Mo, Nb)-(Al, Ga)-(P, C, B) and (Fe, Co, Ni)-Zr-M-B (M=Ti, Hf, V, Nb, Ta, Cr, Mo and W) systems and bulk glassy alloys were produced in a thickness range below 2 mm for the Fe-(Al, Ga)-(P, C, B, Si) system and 6 mm for the Fe-Co-(Zr, Nb, Ta)-(Mo, W)-B system by copper-mold casting. The ring-shaped glassy Fe-(Al, Ga)-(P, C, B, Si) alloys exhibit much better soft magnetic properties as compared with the ring-shaped alloy made from the melt-spun ribbon because of the formation of the unique domain structure. The good combination of high glass-forming ability and good soft magnetic properties indicates the possibility of future development as a new bulk glassy magnetic material

  4. Portfolio optimization using median-variance approach

    Science.gov (United States)

    Wan Mohd, Wan Rosanisah; Mohamad, Daud; Mohamed, Zulkifli

    2013-04-01

    Optimization models have been applied in many decision-making problems, particularly in portfolio selection. Since the introduction of Markowitz's theory of portfolio selection, various approaches based on mathematical programming have been introduced, such as mean-variance, mean-absolute deviation, mean-variance-skewness and conditional value-at-risk (CVaR), mainly to maximize return and minimize risk. However, most of the approaches assume that the distribution of data is normal, and this is not generally true. As an alternative, in this paper, we employ the median-variance approach to improve the portfolio optimization. This approach caters for both normal and non-normal distributions of data. With this representation, we analyze and compare the rate of return and risk between the mean-variance and the median-variance based portfolios, which consist of 30 stocks from Bursa Malaysia. The results in this study show that the median-variance approach is capable of producing a lower risk for each return earned as compared to the mean-variance approach.

  5. Grammatical and lexical variance in English

    CERN Document Server

    Quirk, Randolph

    2014-01-01

    Written by one of Britain's most distinguished linguists, this book is concerned with the phenomenon of variance in English grammar and vocabulary across regional, social, stylistic and temporal space.

  6. Characterisation of bulk solids

    Energy Technology Data Exchange (ETDEWEB)

    D. McGlinchey [Glasgow Caledonian University, Glasgow (United Kingdom). Centre for Industrial Bulk Solids Handling

    2005-07-01

    Handling of powders and bulk solids is a critical industrial technology across a broad spectrum of industries, including minerals processing. With contributions from leading authors in their respective fields, this book provides the reader with a sound understanding of the techniques, importance and application of particulate materials characterisation. It covers the fundamental characteristics of individual particles and bulk particulate materials, and includes discussion of a wide range of measurement techniques, and the use of material characteristics in design and industrial practice. Contents: Characterising particle properties; Powder mechanics and rheology; Characterisation for hopper and stockpile design; Fluidization behaviour; Characterisation for pneumatic conveyor design; Explosiblility; 'Designer' particle characteristics; Current industrial practice; and Future trends. 130 ills.

  7. Micromegas in a bulk

    International Nuclear Information System (INIS)

    Giomataris, I.; De Oliveira, R.; Andriamonje, S.; Aune, S.; Charpak, G.; Colas, P.; Fanourakis, G.; Ferrer, E.; Giganon, A.; Rebourgeard, Ph.; Salin, P.

    2006-01-01

    In this paper, we present a novel way to manufacture the bulk Micromegas detector. A simple process based on the Printed Circuit Board (PCB) technology is employed to produce the entire sensitive detector. Such a fabrication process could be extended to very large area detectors made by the industry. The low cost fabrication together with the robustness of the electrode materials will make it attractive for several applications ranging from particle physics and astrophysics to medicine

  8. A Mean variance analysis of arbitrage portfolios

    Science.gov (United States)

    Fang, Shuhong

    2007-03-01

    Based on a careful analysis of the definition of an arbitrage portfolio and its return, the author presents a mean-variance analysis of the return of arbitrage portfolios, which implies that Korkie and Turtle's results (B. Korkie, H.J. Turtle, A mean-variance analysis of self-financing portfolios, Manage. Sci. 48 (2002) 427-443) are misleading. A practical example is given to show the difference between the arbitrage portfolio frontier and the usual portfolio frontier.

  9. Dynamic Mean-Variance Asset Allocation

    OpenAIRE

    Basak, Suleyman; Chabakauri, Georgy

    2009-01-01

    Mean-variance criteria remain prevalent in multi-period problems, and yet not much is known about their dynamically optimal policies. We provide a fully analytical characterization of the optimal dynamic mean-variance portfolios within a general incomplete-market economy, and recover a simple structure that also inherits several conventional properties of static models. We also identify a probability measure that incorporates intertemporal hedging demands and facilitates much tractability in the computation of optimal portfolios.

  10. Genetic variants influencing phenotypic variance heterogeneity.

    Science.gov (United States)

    Ek, Weronica E; Rask-Andersen, Mathias; Karlsson, Torgny; Enroth, Stefan; Gyllensten, Ulf; Johansson, Åsa

    2018-03-01

    Most genetic studies identify genetic variants associated with disease risk or with the mean value of a quantitative trait. More rarely, genetic variants associated with variance heterogeneity are considered. In this study, we have identified such variance single-nucleotide polymorphisms (vSNPs) and examined if these represent biological gene × gene or gene × environment interactions or statistical artifacts caused by multiple linked genetic variants influencing the same phenotype. We have performed a genome-wide study to identify vSNPs associated with variance heterogeneity in DNA methylation levels. Genotype data from over 10 million single-nucleotide polymorphisms (SNPs), and DNA methylation levels at over 430 000 CpG sites, were analyzed in 729 individuals. We identified vSNPs for 7195 CpG sites (P below the genome-wide significance threshold), a subset of which were also associated with mean DNA methylation levels. We further showed that variance heterogeneity between genotypes mainly represents additional, often rare, SNPs in linkage disequilibrium (LD) with the respective vSNP and, for some vSNPs, multiple low frequency variants co-segregating with one of the vSNP alleles. Therefore, our results suggest that variance heterogeneity of DNA methylation mainly represents phenotypic effects by multiple SNPs, rather than biological interactions. Such effects may also be important for interpreting variance heterogeneity of more complex clinical phenotypes.

  11. The Variance Composition of Firm Growth Rates

    Directory of Open Access Journals (Sweden)

    Luiz Artur Ledur Brito

    2009-04-01

    Firms exhibit a wide variability in growth rates. This can be seen as another manifestation of the fact that firms are different from one another in several respects. This study investigated this variability using the variance components technique previously used to decompose the variance of financial performance. The main source of variation in growth rates, responsible for more than 40% of total variance, corresponds to individual, idiosyncratic firm aspects and not to industry, country, or macroeconomic conditions prevailing in specific years. Firm growth, similar to financial performance, is mostly unique to specific firms and not an industry or country related phenomenon. This finding also justifies using growth as an alternative outcome of superior firm resources and as a complementary dimension of competitive advantage. This also links this research with the resource-based view of strategy. Country was the second source of variation with around 10% of total variance. The analysis was done using the Compustat Global database with 80,320 observations, comprising 13,221 companies in 47 countries, covering the years of 1994 to 2002. It also compared the variance structure of growth to the variance structure of financial performance in the same sample.

  12. Rising above the Minimum Wage.

    Science.gov (United States)

    Even, William; Macpherson, David

    An in-depth analysis was made of how quickly most people move up the wage scale from minimum wage, what factors influence their progress, and how minimum wage increases affect wage growth above the minimum. Very few workers remain at the minimum wage over the long run, according to this study of data drawn from the 1977-78 May Current Population Surveys.

  13. Simulation study on heterogeneous variance adjustment for observations with different measurement error variance

    DEFF Research Database (Denmark)

    Pitkänen, Timo; Mäntysaari, Esa A; Nielsen, Ulrik Sander

    2013-01-01

    The Nordic Holstein yield evaluation model describes all available milk, protein and fat test-day yields from Denmark, Finland and Sweden. In its current form, all variance components are estimated from observations recorded under conventional milking systems, and the model for heterogeneity of variance correction is developed for the same observations. As automated milking systems are becoming more popular, the current evaluation model needs to be enhanced to account for the different measurement error variances of observations from automated milking systems. In this simulation study, different models and different approaches to account for heterogeneous variance when observations have different measurement error variances were investigated. Based on the results, we propose to upgrade the currently applied models and to calibrate the heterogeneous variance adjustment method to yield the same genetic evaluations across milking systems.

  14. Minimum Error Entropy Classification

    CERN Document Server

    Marques de Sá, Joaquim P; Santos, Jorge M F; Alexandre, Luís A

    2013-01-01

    This book explains the minimum error entropy (MEE) concept applied to data classification machines. Theoretical results on the inner workings of the MEE concept, in its application to solving a variety of classification problems, are presented in the wider realm of risk functionals. Researchers and practitioners also find in the book a detailed presentation of practical data classifiers using MEE. These include multi-layer perceptrons, recurrent neural networks, complex-valued neural networks, modular neural networks, and decision trees. A clustering algorithm using a MEE-like concept is also presented. Examples, tests, evaluation experiments and comparison with similar machines using classic approaches complement the descriptions.

  15. Thermospheric mass density model error variance as a function of time scale

    Science.gov (United States)

    Emmert, J. T.; Sutton, E. K.

    2017-12-01

    In the increasingly crowded low-Earth orbit environment, accurate estimation of orbit prediction uncertainties is essential for collision avoidance. Poor characterization of such uncertainty can result in unnecessary and costly avoidance maneuvers (false positives) or disregard of a collision risk (false negatives). Atmospheric drag is a major source of orbit prediction uncertainty, and is particularly challenging to account for because it exerts a cumulative influence on orbital trajectories and is therefore not amenable to representation by a single uncertainty parameter. To address this challenge, we examine the variance of measured accelerometer-derived and orbit-derived mass densities with respect to predictions by thermospheric empirical models, using the data-minus-model variance as a proxy for model uncertainty. Our analysis focuses mainly on the power spectrum of the residuals, and we construct an empirical model of the variance as a function of time scale (from 1 hour to 10 years), altitude, and solar activity. We find that the power spectral density approximately follows a power-law process but with an enhancement near the 27-day solar rotation period. The residual variance increases monotonically with altitude between 250 and 550 km. There are two components to the variance dependence on solar activity: one component is 180 degrees out of phase (largest variance at solar minimum), and the other component lags 2 years behind solar maximum (largest variance in the descending phase of the solar cycle).
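
    As a rough, non-authoritative illustration of the kind of analysis described above (the data, units, and band limits here are invented), one can estimate the residual variance as a function of time scale from the periodogram of a data-minus-model log-density series:

    ```python
    # Sketch: variance of data-minus-model density residuals by time scale.
    # Synthetic white-noise residuals stand in for ln(rho_obs) - ln(rho_model);
    # the real series would show power-law structure and a 27-day enhancement.
    import numpy as np

    rng = np.random.default_rng(0)
    n_hours = 5 * 365 * 24                      # five years of hourly residuals
    residual = rng.standard_normal(n_hours)

    x = residual - residual.mean()
    psd = np.abs(np.fft.rfft(x))**2 / x.size    # one-sided periodogram
    freq = np.fft.rfftfreq(x.size, d=1.0)       # cycles per hour

    # Contribution to the residual variance from time scales of 1 to 27 days
    lo, hi = 1.0 / (27 * 24), 1.0 / 24
    band = (freq >= lo) & (freq <= hi)
    band_var = 2.0 * psd[band].sum() / x.size   # rough Parseval-style integral
    print(band_var, x.var())
    ```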

  16. Do Minimum Wages Fight Poverty?

    OpenAIRE

    David Neumark; William Wascher

    1997-01-01

    The primary goal of a national minimum wage floor is to raise the incomes of poor or near-poor families with members in the work force. However, estimates of the employment effects of minimum wages tell us little about whether minimum wages can achieve this goal; even if the disemployment effects of minimum wages are modest, minimum wage increases could result in net income losses for poor families. We present evidence on the effects of minimum wages on family incomes from matched March CPS s...

  17. Integrating Variances into an Analytical Database

    Science.gov (United States)

    Sanchez, Carlos

    2010-01-01

    For this project, I enrolled in numerous SATERN courses that taught the basics of database programming. These include: Basic Access 2007 Forms, Introduction to Database Systems, Overview of Database Design, and others. My main job was to create an analytical database that can handle many stored forms and make it easy to interpret and organize. Additionally, I helped improve an existing database and populate it with information. These databases were designed to be used with data from Safety Variances and DCR forms. The research consisted of analyzing the database and comparing the data to find out which entries were repeated the most. If an entry was repeated several times in the database, that would mean that the rule or requirement targeted by that variance had been bypassed many times already, suggesting that the requirement may not really be needed and should instead be changed to allow the variance's conditions permanently. The project was not restricted to the design and development of the database system; it also involved exporting the data from the database to a different format (e.g., Excel or Word) so it could be analyzed in a simpler fashion. Thanks to the change in format, the data was organized in a spreadsheet that made it possible to sort the data by categories or types and helped speed up searches. Once my work with the database was done, the records of variances could be arranged so that they were displayed in numerical order, or one could search for a specific document targeted by the variances and restrict the search to only include variances that modified a specific requirement. A great part of what contributed to my learning was SATERN, NASA's resource for education. Thanks to the SATERN online courses I took over the summer, I was able to learn many new things about computers and databases and also go more in depth into topics I already knew about.

  18. Decomposition of Variance for Spatial Cox Processes.

    Science.gov (United States)

    Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus

    2013-03-01

    Spatial Cox point processes provide a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models with additive or log linear random intensity functions. We moreover consider a new and flexible class of pair correlation function models given in terms of normal variance mixture covariance functions. The proposed methodology is applied to point pattern data sets of locations of tropical rain forest trees.
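
    For intuition (this is a gloss, not the paper's general criterion), the simplest such decomposition is the law of total variance applied to the number of points N(B) in a region B, using the fact that a Cox process is Poisson conditional on its random intensity:

    ```latex
    \operatorname{Var} N(B)
      = \mathbb{E}\bigl[\operatorname{Var}(N(B)\mid\Lambda)\bigr]
      + \operatorname{Var}\bigl[\mathbb{E}(N(B)\mid\Lambda)\bigr]
      = \underbrace{\mathbb{E}\,\Lambda(B)}_{\text{Poisson variation}}
      + \underbrace{\operatorname{Var}\,\Lambda(B)}_{\text{intensity variation}},
      \qquad \Lambda(B) = \int_B \lambda(u)\,\mathrm{d}u .
    ```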

  19. Variance in binary stellar population synthesis

    Science.gov (United States)

    Breivik, Katelyn; Larson, Shane L.

    2016-03-01

    In the years preceding LISA, Milky Way compact binary population simulations can be used to inform the science capabilities of the mission. Galactic population simulation efforts generally focus on high fidelity models that require extensive computational power to produce a single simulated population for each model. Each simulated population represents an incomplete sample of the functions governing compact binary evolution, thus introducing variance from one simulation to another. We present a rapid Monte Carlo population simulation technique that can simulate thousands of populations in less than a week, thus allowing a full exploration of the variance associated with a binary stellar evolution model.

  20. Estimating quadratic variation using realized variance

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Shephard, N.

    2002-01-01

    …with a rather general SV model - which is a special case of the semimartingale model. Then QV is integrated variance and we can derive the asymptotic distribution of the RV and its rate of convergence. These results do not require us to specify a model for either the drift or volatility functions, although we have to impose some weak regularity assumptions. We illustrate the use of the limit theory on some exchange rate data and some stock data. We show that even with large values of M the RV is sometimes a quite noisy estimator of integrated variance. Copyright © 2002 John Wiley & Sons, Ltd.
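
    A minimal sketch of the estimator itself (simulated returns; the sampling frequency and volatility level are arbitrary):

    ```python
    # Realized variance (RV): the sum of squared intraday log-returns,
    # which estimates the integrated (quadratic) variation of the day.
    import numpy as np

    rng = np.random.default_rng(1)
    M = 288                                   # e.g. 5-minute returns over 24 hours
    daily_var = 0.0001                        # integrated variance (1% daily vol)
    returns = rng.normal(0.0, np.sqrt(daily_var / M), size=M)

    rv = np.sum(returns**2)                   # realized variance
    print(rv, daily_var)                      # RV fluctuates around the true IV
    ```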

  1. Bulk-Fill Resin Composites

    DEFF Research Database (Denmark)

    Benetti, Ana Raquel; Havndrup-Pedersen, Cæcilie; Honoré, Daniel

    2015-01-01

    …the restorative procedure. The aim of this study, therefore, was to compare the depth of cure, polymerization contraction, and gap formation in bulk-fill resin composites with those of a conventional resin composite. To achieve this, the depth of cure was assessed in accordance with the International Organization for Standardization 4049 standard, and the polymerization contraction was determined using the bonded-disc method. The gap formation was measured at the dentin margin of Class II cavities. Five bulk-fill resin composites were investigated: two high-viscosity (Tetric EvoCeram Bulk Fill, SonicFill) and three low-viscosity (x-tra base, Venus Bulk Fill, SDR) materials. Compared with the conventional resin composite, the high-viscosity bulk-fill materials exhibited only a small increase (but significant for Tetric EvoCeram Bulk Fill) in depth of cure and polymerization contraction, whereas the low-viscosity bulk…

  2. 29 CFR 1920.2 - Variances.

    Science.gov (United States)

    2010-07-01

    ...) PROCEDURE FOR VARIATIONS FROM SAFETY AND HEALTH REGULATIONS UNDER THE LONGSHOREMEN'S AND HARBOR WORKERS...) or 6(d) of the Williams-Steiger Occupational Safety and Health Act of 1970 (29 U.S.C. 655). The... under the Williams-Steiger Occupational Safety and Health Act of 1970, and any variance from §§ 1910.13...

  3. 78 FR 14122 - Revocation of Permanent Variances

    Science.gov (United States)

    2013-03-04

    ... Douglas Fir planking had to have at least a 1,900 fiber stress and 1,900,000 modulus of elasticity, while the Yellow Pine planking had to have at least 2,500 fiber stress and 2,000,000 modulus of elasticity... the permanent variances, and affected employees, to submit written data, views, and arguments...

  4. Variance Risk Premia on Stocks and Bonds

    DEFF Research Database (Denmark)

    Mueller, Philippe; Sabtchevsky, Petar; Vedolin, Andrea

    Investors in fixed income markets are willing to pay a very large premium to be hedged against shocks in expected volatility and the size of this premium can be studied through variance swaps. Using thirty years of option and high-frequency data, we document the following novel stylized facts...

  5. Biological Variance in Agricultural Products. Theoretical Considerations

    NARCIS (Netherlands)

    Tijskens, L.M.M.; Konopacki, P.

    2003-01-01

    The food that we eat is uniform neither in shape and appearance nor in internal composition and content. Since technology became increasingly important, the presence of biological variance in our food became more and more of a nuisance. Techniques and procedures (statistical, technical) were

  9. Variance Swap Replication: Discrete or Continuous?

    Directory of Open Access Journals (Sweden)

    Fabien Le Floc’h

    2018-02-01

    Full Text Available The popular replication formula to price variance swaps assumes continuity of traded option strikes. In practice, however, there is only a discrete set of option strikes traded on the market. We present here different discrete replication strategies and explain why the continuous replication price is more relevant.
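
    As a hedged sketch of what a discrete replication looks like in practice (strikes, quotes, and rates below are invented; the weighting follows the standard log-contract discretization used, e.g., in the CBOE VIX methodology, not necessarily the specific strategies of this paper):

    ```python
    # Discrete variance swap strike from a finite strip of OTM options.
    import numpy as np

    T, r = 0.25, 0.01                          # maturity (years), riskless rate
    F = 100.0                                  # forward price
    strikes = np.array([70., 80., 90., 100., 110., 120., 130.])
    quotes = np.array([0.15, 0.55, 1.9, 4.8, 1.7, 0.45, 0.12])  # OTM prices

    K0 = strikes[strikes <= F].max()           # first strike at or below F
    dK = np.gradient(strikes)                  # strike spacings
    var_strike = (2.0 / T) * np.exp(r * T) * np.sum(dK / strikes**2 * quotes) \
                 - (1.0 / T) * (F / K0 - 1.0)**2
    print(np.sqrt(var_strike))                 # swap strike in volatility terms
    ```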

  10. Zero-intelligence realized variance estimation

    NARCIS (Netherlands)

    Gatheral, J.; Oomen, R.C.A.

    2010-01-01

    Given a time series of intra-day tick-by-tick price data, how can realized variance be estimated? The obvious estimator—the sum of squared returns between trades—is biased by microstructure effects such as bid-ask bounce and so in the past, practitioners were advised to drop most of the data and

  11. Variance Reduction Techniques in Monte Carlo Methods

    NARCIS (Netherlands)

    Kleijnen, Jack P.C.; Ridder, A.A.N.; Rubinstein, R.Y.

    2010-01-01

    Monte Carlo methods are simulation algorithms to estimate a numerical quantity in a statistical model of a real system. These algorithms are executed by computer programs. Variance reduction techniques (VRT) are needed, even though computer speed has been increasing dramatically, ever since the
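
    One classic VRT, shown here as a self-contained toy (the target integral and sample size are arbitrary), is antithetic variates:

    ```python
    # Antithetic variates: pair each uniform draw u with 1 - u. Because
    # f(u) = exp(u) is monotone, the pair is negatively correlated and the
    # averaged estimator has lower variance than crude Monte Carlo.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 100_000
    u = rng.random(n)

    crude = np.exp(u)
    antithetic = 0.5 * (np.exp(u) + np.exp(1.0 - u))

    print(crude.mean(), crude.var(ddof=1) / n)            # estimate, its variance
    print(antithetic.mean(), antithetic.var(ddof=1) / n)  # same target, lower var
    ```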

  12. Employment effects of minimum wages

    OpenAIRE

    Neumark, David

    2014-01-01

    The potential benefits of higher minimum wages come from the higher wages for affected workers, some of whom are in low-income families. The potential downside is that a higher minimum wage may discourage employers from using the low-wage, low-skill workers that minimum wages are intended to help. Research findings are not unanimous, but evidence from many countries suggests that minimum wages reduce the jobs available to low-skill workers.

  13. Bulk muscles, loose cables.

    Science.gov (United States)

    Liyanage, Chamari R D G; Kodali, Venkata

    2014-10-17

    The accessibility and usage of bodybuilding supplements is on the rise, driven by stronger internet marketing strategies from the industry, and the dangers posed by their ingredients are underestimated. A healthy young man came to the emergency room with palpitations and feeling unwell. Initial history and clinical examination did not reveal the cause. ECG showed atrial fibrillation. A detailed history of any over-the-counter or herbal medicine use confirmed that he was taking supplements to bulk muscle. One of the components in these supplements is yohimbine; the onset of symptoms coincided with the ingestion of this product, and the patient has been symptom free since stopping it. This report highlights the dangers to the public of consuming over-the-counter products with unknown ingredients and the consequential detrimental impact on health. 2014 BMJ Publishing Group Ltd.

  14. Continuous-Time Mean-Variance Portfolio Selection with Random Horizon

    International Nuclear Information System (INIS)

    Yu, Zhiyong

    2013-01-01

    This paper examines the continuous-time mean-variance optimal portfolio selection problem with random market parameters and random time horizon. Treating this problem as a linearly constrained stochastic linear-quadratic optimal control problem, I explicitly derive the efficient portfolios and efficient frontier in closed forms based on the solutions of two backward stochastic differential equations. Some related issues such as a minimum variance portfolio and a mutual fund theorem are also addressed. All the results are markedly different from those in the problem with deterministic exit time. A key part of my analysis involves proving the global solvability of a stochastic Riccati equation, which is interesting in its own right.
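
    The paper's dynamic, random-horizon solution cannot be reproduced in a few lines, but the static building block it generalizes, the global minimum variance portfolio w = Σ⁻¹1 / (1ᵀΣ⁻¹1), can be sketched; the covariance matrix below is invented:

    ```python
    # Single-period global minimum variance portfolio weights.
    import numpy as np

    Sigma = np.array([[0.040, 0.006, 0.004],
                      [0.006, 0.090, 0.010],
                      [0.004, 0.010, 0.160]])   # illustrative covariance matrix

    ones = np.ones(Sigma.shape[0])
    w = np.linalg.solve(Sigma, ones)            # Sigma^{-1} 1
    w /= w.sum()                                # normalize weights to sum to one

    print(w, w @ Sigma @ w)                     # weights and portfolio variance
    ```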

  16. Reexamining financial and economic predictability with new estimators of realized variance and variance risk premium

    DEFF Research Database (Denmark)

    Casas, Isabel; Mao, Xiuping; Veiga, Helena

    This study explores the predictive power of new estimators of the equity variance risk premium and conditional variance for future excess stock market returns, economic activity, and financial instability, both during and after the last global financial crisis. These estimators are obtained from… The time-varying coefficient models are the ones showing considerably higher predictive power for stock market returns and financial instability during the financial crisis, suggesting that an extreme volatility period requires models that can adapt quickly to turmoil. Moreover, a comparison of the overall results reveals that the conditional variance gains predictive power during the global financial crisis period. Furthermore, both the variance risk premium and conditional variance are determined to be predictors of future financial instability, whereas conditional…

  17. 75 FR 6151 - Minimum Capital

    Science.gov (United States)

    2010-02-08

    ... capital and reserve requirements to be issued by order or regulation with respect to a product or activity... minimum capital requirements. Section 1362(a) establishes a minimum capital level for the Enterprises... entities required under this section.\\6\\ \\3\\ The Bank Act's current minimum capital requirements apply to...

  18. Microfabricated Bulk Piezoelectric Transformers

    Science.gov (United States)

    Barham, Oliver M.

    Piezoelectric voltage transformers (PTs) can be used to transform an input voltage into a different, required output voltage needed in electronic and electromechanical systems, among other varied uses. On the macro scale, they have been commercialized in electronics powering consumer laptop liquid crystal displays, and compete with an older, more prevalent technology, inductive electromagnetic voltage transformers (EMTs). The present work investigates PTs on smaller size scales that are currently in the academic research sphere, with an eye towards applications including micro-robotics and other small-scale electronic and electromechanical systems. PTs and EMTs are compared on the basis of power and energy density, with PTs trending towards higher values of power and energy density, comparatively, indicating their suitability for small-scale systems. Among PT topologies, bulk disc-type PTs, operating in their fundamental radial extension mode, and free-free beam PTs, operating in their fundamental length extensional mode, are good candidates for microfabrication and are considered here. Analytical modeling based on the Extended Hamilton Method is used to predict device performance and integrate mechanical tethering as a boundary condition. This model differs from previous PT models in that the electric enthalpy is used to derive constituent equations of motion with Hamilton's Method, and therefore this approach is also more generally applicable to other piezoelectric systems outside of the present work. Prototype devices are microfabricated using a two-mask process consisting of traditional photolithography combined with micropowder blasting, and are tested with various output electrical loads. 4 mm diameter tethered disc PTs on the order of 0.002 cm³, two orders of magnitude smaller than those in the bulk PT literature, had the following performance: a prototype with electrode area ratio (input area / output area) = 1 had peak gain of 2.3 (+/- 0.1), efficiency of 33 (+/- 0

  19. R package MVR for Joint Adaptive Mean-Variance Regularization and Variance Stabilization.

    Science.gov (United States)

    Dazard, Jean-Eudes; Xu, Hua; Rao, J Sunil

    2011-01-01

    We present an implementation in the R language for statistical computing of our recent non-parametric joint adaptive mean-variance regularization and variance stabilization procedure. The method is specifically suited for handling difficult problems posed by high-dimensional multivariate datasets (p ≫ n paradigm), such as in 'omics'-type data, among which are that the variance is often a function of the mean, variable-specific estimators of variances are not reliable, and test statistics have low power due to a lack of degrees of freedom. The implementation offers a complete set of features including: (i) normalization and/or variance stabilization function, (ii) computation of mean-variance-regularized t and F statistics, (iii) generation of diverse diagnostic plots, (iv) synthetic and real 'omics' test datasets, (v) computationally efficient implementation, using C interfacing, and an option for parallel computing, (vi) manual and documentation on how to set up a cluster. To make each feature as user-friendly as possible, only one subroutine per functionality is to be handled by the end-user. It is available as an R package, called MVR ('Mean-Variance Regularization'), downloadable from CRAN.

  20. Developing bulk exchange spring magnets

    Science.gov (United States)

    Mccall, Scott K.; Kuntz, Joshua D.

    2017-06-27

    A method of making a bulk exchange spring magnet by providing a magnetically soft material, providing a hard magnetic material, and producing a composite of said magnetically soft material and said hard magnetic material to make the bulk exchange spring magnet. The step of producing a composite of magnetically soft material and hard magnetic material is accomplished by electrophoretic deposition of the magnetically soft material and the hard magnetic material to make the bulk exchange spring magnet.

  1. Realized Variance and Market Microstructure Noise

    DEFF Research Database (Denmark)

    Hansen, Peter R.; Lunde, Asger

    2006-01-01

    We study market microstructure noise in high-frequency data and analyze its implications for the realized variance (RV) under a general specification for the noise. We show that kernel-based estimators can unearth important characteristics of market microstructure noise and that a simple kernel-based estimator dominates the RV for the estimation of integrated variance (IV). An empirical analysis of the Dow Jones Industrial Average stocks reveals that market microstructure noise is time-dependent and correlated with increments in the efficient price. This has important implications for volatility estimation based on high-frequency data. Finally, we apply cointegration techniques to decompose transaction prices and bid-ask quotes into an estimate of the efficient price and noise. This framework enables us to study the dynamic effects on transaction prices and quotes caused by changes in the efficient…
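
    A minimal sketch of a kernel-style estimator (simulated prices with additive i.i.d. noise; the Bartlett weights and bandwidth are illustrative choices, not the exact estimator of the paper):

    ```python
    # Bartlett-weighted realized kernel vs. plain realized variance under
    # microstructure noise: the kernel largely removes the noise-induced bias.
    import numpy as np

    rng = np.random.default_rng(3)
    n, iv, noise_sd = 23_400, 1e-4, 1e-4       # 1-sec returns, integrated variance
    efficient = np.cumsum(rng.normal(0.0, np.sqrt(iv / n), n))
    observed = efficient + rng.normal(0.0, noise_sd, n)   # add bid-ask-type noise
    r = np.diff(observed)

    def gamma(h):
        """Realized autocovariance of order h."""
        return np.sum(r * r) if h == 0 else np.sum(r[h:] * r[:-h])

    H = 30                                     # kernel bandwidth (hand-picked)
    rk = gamma(0) + 2.0 * sum((1.0 - h / (H + 1.0)) * gamma(h)
                              for h in range(1, H + 1))
    print(np.sum(r**2), rk, iv)   # plain RV (biased upward), kernel, truth
    ```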

  2. The Theory of Variances in Equilibrium Reconstruction

    International Nuclear Information System (INIS)

    Zakharov, Leonid E.; Lewandowski, Jerome; Foley, Elizabeth L.; Levinton, Fred M.; Yuh, Howard Y.; Drozdov, Vladimir; McDonald, Darren

    2008-01-01

    The theory of variances of equilibrium reconstruction is presented. It complements existing practices with information regarding what kind of plasma profiles can be reconstructed, how accurately, and what remains beyond the abilities of diagnostic systems. The σ-curves, introduced by the present theory, give a quantitative assessment of the effectiveness of diagnostic systems in constraining equilibrium reconstructions. The theory also suggests a method for aligning the accuracy of measurements of different physical nature.

  3. Fundamentals of exploratory analysis of variance

    CERN Document Server

    Hoaglin, David C; Tukey, John W

    2009-01-01

    The analysis of variance is presented as an exploratory component of data analysis, while retaining the customary least squares fitting methods. Balanced data layouts are used to reveal key ideas and techniques for exploration. The approach emphasizes both the individual observations and the separate parts that the analysis produces. Most chapters include exercises, and the appendices give selected percentage points of the Gaussian, t, F, chi-squared and studentized range distributions.

  4. Variance analysis refines overhead cost control.

    Science.gov (United States)

    Cooper, J C; Suver, J D

    1992-02-01

    Many healthcare organizations may not fully realize the benefits of standard cost accounting techniques because they fail to routinely report volume variances in their internal reports. If overhead allocation is routinely reported on internal reports, managers can determine whether billing remains current or lost charges occur. Healthcare organizations' use of standard costing techniques can lead to more realistic performance measurements and information system improvements that alert management to losses from unrecovered overhead in time for corrective action.

  5. The Genealogical Consequences of Fecundity Variance Polymorphism

    Science.gov (United States)

    Taylor, Jesse E.

    2009-01-01

    The genealogical consequences of within-generation fecundity variance polymorphism are studied using coalescent processes structured by genetic backgrounds. I show that these processes have three distinctive features. The first is that the coalescent rates within backgrounds are not jointly proportional to the infinitesimal variance, but instead depend only on the frequencies and traits of genotypes containing each allele. Second, the coalescent processes at unlinked loci are correlated with the genealogy at the selected locus; i.e., fecundity variance polymorphism has a genomewide impact on genealogies. Third, in diploid models, there are infinitely many combinations of fecundity distributions that have the same diffusion approximation but distinct coalescent processes; i.e., in this class of models, ancestral processes and allele frequency dynamics are not in one-to-one correspondence. Similar properties are expected to hold in models that allow for heritable variation in other traits that affect the coalescent effective population size, such as sex ratio or fecundity and survival schedules. PMID:19433628

  6. Discussion on variance reduction technique for shielding

    Energy Technology Data Exchange (ETDEWEB)

    Maekawa, Fujio [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1998-03-01

    As part of the engineering design activity of the International Thermonuclear Experimental Reactor (ITER), a shielding experiment on type 316 stainless steel (SS316) and on the compound system of SS316 and water was carried out using the D-T neutron source of FNS at the Japan Atomic Energy Research Institute. In these analyses, however, enormous working time and computing time were required for determining the Weight Window parameters, and variance reduction by the Weight Window method of the MCNP code proved limiting and complicated. To avoid this difficulty, the effectiveness of variance reduction by the cell importance method was investigated. The conditions of calculation in all cases are shown. As the result, the distribution of fractional standard deviation (FSD) of neutron and gamma-ray fluxes along the shield depth is reported. An optimal importance assignment exists: when the importance is increased at the same rate as the attenuation of the neutron or gamma-ray flux, optimal variance reduction is achieved. (K.I.)

  7. Visual SLAM Using Variance Grid Maps

    Science.gov (United States)

    Howard, Andrew B.; Marks, Tim K.

    2011-01-01

    An algorithm denoted Gamma-SLAM performs further processing, in real time, of preprocessed digitized images acquired by a stereoscopic pair of electronic cameras aboard an off-road robotic ground vehicle to build accurate maps of the terrain and determine the location of the vehicle with respect to the maps. Part of the name of the algorithm reflects the fact that the process of building the maps and determining the location with respect to them is denoted simultaneous localization and mapping (SLAM). Most prior real-time SLAM algorithms have been limited in applicability to (1) systems equipped with scanning laser range finders as the primary sensors in (2) indoor environments (or relatively simply structured outdoor environments). The few prior vision-based SLAM algorithms have been feature-based and not suitable for real-time applications and, hence, not suitable for autonomous navigation on irregularly structured terrain. The Gamma-SLAM algorithm incorporates two key innovations: Visual odometry (in contradistinction to wheel odometry) is used to estimate the motion of the vehicle. An elevation variance map (in contradistinction to an occupancy or an elevation map) is used to represent the terrain. The Gamma-SLAM algorithm makes use of a Rao-Blackwellized particle filter (RBPF) from Bayesian estimation theory for maintaining a distribution over poses and maps. The core idea of the RBPF approach is that the SLAM problem can be factored into two parts: (1) finding the distribution over robot trajectories, and (2) finding the map conditioned on any given trajectory. The factorization involves the use of a particle filter in which each particle encodes both a possible trajectory and a map conditioned on that trajectory. The base estimate of the trajectory is derived from visual odometry, and the map conditioned on that trajectory is a Cartesian grid of elevation variances. In comparison with traditional occupancy or elevation grid maps, the grid elevation variance
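
    A hedged sketch of the per-cell bookkeeping such an elevation variance map needs (grid size, cell indices, and measurements are invented; Welford-style streaming updates are one standard way to do this, not necessarily Gamma-SLAM's exact implementation):

    ```python
    # Streaming mean/variance of elevation measurements per grid cell.
    import numpy as np

    H, W = 64, 64
    count = np.zeros((H, W))
    mean = np.zeros((H, W))
    m2 = np.zeros((H, W))            # running sum of squared deviations

    def add_measurement(i, j, z):
        """Fold one elevation sample z into cell (i, j) (Welford update)."""
        count[i, j] += 1
        delta = z - mean[i, j]
        mean[i, j] += delta / count[i, j]
        m2[i, j] += delta * (z - mean[i, j])

    rng = np.random.default_rng(4)
    for z in rng.normal(1.5, 0.2, size=1000):   # samples falling in one cell
        add_measurement(10, 20, z)

    print(mean[10, 20], m2[10, 20] / (count[10, 20] - 1))  # cell mean, variance
    ```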

  8. The Effect of Bulk Tachyon Field on the Dynamics of Geometrical Tachyon

    International Nuclear Information System (INIS)

    Papantonopoulos, Eleftherios; Pappa, Ioanna; Zamarias, Vassilios

    2007-01-01

    We study the dynamics of the geometrical tachyon field on an unstable D3-brane in the background of a bulk tachyon field of a D3-brane solution of Type-0 string theory. We find that the geometrical tachyon potential is modified by a function of the bulk tachyon and inflation occurs at weak string coupling, where the bulk tachyon condenses, near the top of the geometrical tachyon potential. We also find a late accelerating phase when the bulk tachyon asymptotes to zero and the geometrical tachyon field reaches the minimum of the potential

  9. Markov bridges, bisection and variance reduction

    DEFF Research Database (Denmark)

    Asmussen, Søren; Hobolth, Asger

    Time-continuous Markov jump processes are a popular modelling tool in disciplines ranging from computational finance and operations research to human genetics and genomics. The data is often sampled at discrete points in time, and it can be useful to simulate sample paths between the datapoints. In this paper we firstly consider the problem of generating sample paths from a continuous-time Markov chain conditioned on the endpoints using a new algorithm based on the idea of bisection. Secondly we study the potential of the bisection algorithm for variance reduction. In particular, examples are presented…

  10. The value of travel time variance

    OpenAIRE

    Fosgerau, Mogens; Engelson, Leonid

    2010-01-01

    This paper considers the value of travel time variability under scheduling preferences that are defined in terms of linearly time-varying utility rates associated with being at the origin and at the destination. The main result is a simple expression for the value of travel time variability that does not depend on the shape of the travel time distribution. The related measure of travel time variability is the variance of travel time. These conclusions apply equally to travellers who can free...

  11. Variance-based Salt Body Reconstruction

    KAUST Repository

    Ovcharenko, Oleg

    2017-05-26

    Seismic inversions of salt bodies are challenging when updating velocity models based on Born approximation-inspired gradient methods. We propose a variance-based method for velocity model reconstruction in regions complicated by massive salt bodies. The novel idea lies in retrieving useful information from simultaneous updates corresponding to different single frequencies. Instead of the commonly used averaging of single-iteration monofrequency gradients, our algorithm iteratively reconstructs salt bodies in an outer loop based on updates from a set of multiple frequencies after a few iterations of full-waveform inversion. The variance among these updates is used to identify areas where considerable cycle-skipping occurs. In such areas, we update velocities by interpolating maximum velocities within a certain region. The result of several recursive interpolations is later used as a new starting model to improve results of conventional full-waveform inversion. An application on part of the BP 2004 model highlights the evolution of the proposed approach and demonstrates its effectiveness.

  12. Mining the bulk positron lifetime

    International Nuclear Information System (INIS)

    Aourag, H.; Guittom, A.

    2009-01-01

    We introduce a new approach to investigate the bulk positron lifetimes of new systems based on data-mining techniques. Through data mining of bulk positron lifetimes, we demonstrate the ability to predict the positron lifetimes of new semiconductors on the basis of available semiconductor data already studied. Informatics techniques have been applied to bulk positron lifetimes for different tetrahedrally bounded semiconductors in order to discover computational design rules. (copyright 2009 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  13. A zero-variance-based scheme for variance reduction in Monte Carlo criticality

    Energy Technology Data Exchange (ETDEWEB)

    Christoforou, S.; Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands)

    2006-07-01

    A zero-variance scheme is derived and proven theoretically for criticality cases, and a simplified transport model is used for numerical demonstration. It is shown in practice that by appropriate biasing of the transition and collision kernels, a significant reduction in variance can be achieved. This is done using the adjoint forms of the emission and collision densities, obtained from a deterministic calculation, according to the zero-variance scheme. By using an appropriate algorithm, the figure of merit of the simulation increases by up to a factor of 50, with the possibility of an even larger improvement. In addition, it is shown that the biasing speeds up the convergence of the initial source distribution. (authors)
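
    The underlying idea can be seen in a toy example (an illustration, far simpler than the criticality setting): if samples are drawn from a density proportional to the integrand, every weighted sample takes the same value and the estimator's variance is exactly zero.

    ```python
    # Zero-variance importance sampling for I = \int_0^1 3x^2 dx = 1.
    import numpy as np

    rng = np.random.default_rng(5)
    n = 10_000

    x = rng.random(n)                       # crude sampling from Uniform(0, 1)
    crude = 3.0 * x**2
    print(crude.mean(), crude.std())        # correct mean, nonzero spread

    y = rng.random(n) ** (1.0 / 3.0)        # sample density p(x) = 3x^2 (inverse CDF)
    weighted = (3.0 * y**2) / (3.0 * y**2)  # integrand / density = 1 for every draw
    print(weighted.mean(), weighted.std())  # exactly 1 with zero variance
    ```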

  15. Power Estimation in Multivariate Analysis of Variance

    Directory of Open Access Journals (Sweden)

    Jean François Allaire

    2007-09-01

    Full Text Available Power is often overlooked in designing multivariate studies for the simple reason that it is believed to be too complicated. In this paper, it is shown that power estimation in multivariate analysis of variance (MANOVA) can be approximated using an F distribution for the three popular statistics (Hotelling-Lawley trace, Pillai-Bartlett trace, Wilks' likelihood ratio). Consequently, the same procedure as in any statistical test can be used: computation of the critical F value, computation of the noncentrality parameter (as a function of the effect size), and finally estimation of power using a noncentral F distribution. Various numerical examples are provided which help to understand and to apply the method. Problems related to post hoc power estimation are discussed.
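
    The procedure translates almost line-for-line into code; in this hedged sketch the degrees of freedom, sample size, and effect size are invented, and the noncentrality is taken as lambda = f² · N, one common convention:

    ```python
    # Power of an F-approximated MANOVA statistic via the noncentral F.
    from scipy.stats import f, ncf

    alpha = 0.05
    df1, df2 = 6, 120              # numerator / denominator degrees of freedom
    n_total, f2 = 64, 0.15         # total sample size, assumed effect size

    f_crit = f.ppf(1.0 - alpha, df1, df2)        # critical value under H0
    ncp = f2 * n_total                           # noncentrality parameter
    power = 1.0 - ncf.cdf(f_crit, df1, df2, ncp) # noncentral F tail probability
    print(f_crit, power)
    ```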

  16. Analysis of Variance in Statistical Image Processing

    Science.gov (United States)

    Kurz, Ludwik; Hafed Benteftifa, M.

    1997-04-01

    A key problem in practical image processing is the detection of specific features in a noisy image. Analysis of variance (ANOVA) techniques can be very effective in such situations, and this book gives a detailed account of the use of ANOVA in statistical image processing. The book begins by describing the statistical representation of images in the various ANOVA models. The authors present a number of computationally efficient algorithms and techniques to deal with such problems as line, edge, and object detection, as well as image restoration and enhancement. By describing the basic principles of these techniques, and showing their use in specific situations, the book will facilitate the design of new algorithms for particular applications. It will be of great interest to graduate students and engineers in the field of image processing and pattern recognition.

  17. Variance Risk Premia on Stocks and Bonds

    DEFF Research Database (Denmark)

    Mueller, Philippe; Sabtchevsky, Petar; Vedolin, Andrea

    We study equity (EVRP) and Treasury variance risk premia (TVRP) jointly and document a number of findings: First, relative to their volatility, TVRP are comparable in magnitude to EVRP. Second, while there is mild positive co-movement between EVRP and TVRP unconditionally, time series estimates of correlation display distinct spikes in both directions and have been notably volatile since the financial crisis. Third, (i) short maturity TVRP predict excess returns on short maturity bonds; (ii) long maturity TVRP and EVRP predict excess returns on long maturity bonds; and (iii) while EVRP predict equity returns for horizons up to 6 months, long maturity TVRP contain robust information for long run equity returns. Finally, exploiting the dynamics of real and nominal Treasuries we document that short maturity break-even rates are a powerful determinant of the joint dynamics of EVRP, TVRP and their co-movement…

  18. The value of travel time variance

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; Engelson, Leonid

    2011-01-01

    This paper considers the value of travel time variability under scheduling preferences that are defined in terms of linearly time-varying utility rates associated with being at the origin and at the destination. The main result is a simple expression for the value of travel time variability that does not depend on the shape of the travel time distribution. The related measure of travel time variability is the variance of travel time. These conclusions apply equally to travellers who can freely choose departure time and to travellers who use a scheduled service with fixed headway. Depending on parameters, travellers may be risk averse or risk seeking and the value of travel time may increase or decrease in the mean travel time.

  19. Joint Adaptive Mean-Variance Regularization and Variance Stabilization of High Dimensional Data.

    Science.gov (United States)

    Dazard, Jean-Eudes; Rao, J Sunil

    2012-07-01

    The paper addresses a common problem in the analysis of high-dimensional high-throughput "omics" data, which is parameter estimation across multiple variables in a set of data where the number of variables is much larger than the sample size. Among the problems posed by this type of data are that variable-specific estimators of variances are not reliable and variable-wise test statistics have low power, both due to a lack of degrees of freedom. In addition, it has been observed in this type of data that the variance increases as a function of the mean. We introduce a non-parametric adaptive regularization procedure that is innovative in that: (i) it employs a novel "similarity statistic"-based clustering technique to generate local-pooled or regularized shrinkage estimators of population parameters, (ii) the regularization is done jointly on population moments, benefiting from C. Stein's result on inadmissibility, which implies that the usual sample variance estimator is improved by a shrinkage estimator using information contained in the sample mean. From these joint regularized shrinkage estimators, we derive regularized t-like statistics and show in simulation studies that they offer more statistical power in hypothesis testing than their standard sample counterparts, or regular common value-shrinkage estimators, or when the information contained in the sample mean is simply ignored. Finally, we show that these estimators feature interesting properties of variance stabilization and normalization that can be used for preprocessing high-dimensional multivariate data. The method is available as an R package, called 'MVR' ('Mean-Variance Regularization'), downloadable from the CRAN website.
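
    A deliberately simplified sketch of the shrinkage idea (a fixed shrinkage weight toward the pooled variance; the MVR procedure itself chooses its regularization adaptively and jointly over both moments):

    ```python
    # Variance shrinkage for p >> n data: pull unstable per-variable
    # variances toward the pooled variance.
    import numpy as np

    rng = np.random.default_rng(6)
    p, n = 2000, 5                                # many variables, few samples
    data = rng.normal(0.0, 1.0, size=(p, n))

    s2 = data.var(axis=1, ddof=1)                 # noisy per-variable variances
    pooled = s2.mean()
    w = 0.7                                       # assumed shrinkage intensity
    s2_shrunk = w * pooled + (1.0 - w) * s2

    print(s2.std(), s2_shrunk.std())              # shrunken estimates vary less
    ```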

  20. 76 FR 78698 - Proposed Revocation of Permanent Variances

    Science.gov (United States)

    2011-12-19

    ... Administration (``OSHA'' or ``the Agency'') granted permanent variances to 24 companies engaged in the... DEPARTMENT OF LABOR Occupational Safety and Health Administration [Docket No. OSHA-2011-0054] Proposed Revocation of Permanent Variances AGENCY: Occupational Safety and Health Administration (OSHA...

  1. variance components and genetic parameters for live weight

    African Journals Online (AJOL)

    admin

    Against this background the present study estimated the (co)variance .... Starting values for the (co)variance components of two-trait models were ..... Estimates of genetic parameters for weaning weight of beef accounting for direct-maternal.

  2. Modelling of bulk superconductor magnetization

    International Nuclear Information System (INIS)

    Ainslie, M D; Fujishiro, H

    2015-01-01

    This paper presents a topical review of the current state of the art in modelling the magnetization of bulk superconductors, including both (RE)BCO (where RE = rare earth or Y) and MgB2 materials. Such modelling is a powerful tool to understand the physical mechanisms of their magnetization, to assist in interpretation of experimental results, and to predict the performance of practical bulk superconductor-based devices, which is particularly important as many superconducting applications head towards the commercialization stage of their development in the coming years. In addition to the analytical and numerical techniques currently used by researchers for modelling such materials, the commonly used practical techniques to magnetize bulk superconductors are summarized with a particular focus on pulsed field magnetization (PFM), which is promising as a compact, mobile and relatively inexpensive magnetizing technique. A number of numerical models developed to analyse the issues related to PFM and optimise the technique are described in detail, including understanding the dynamics of the magnetic flux penetration and the influence of material inhomogeneities, thermal properties, pulse duration, magnitude and shape, and the shape of the magnetization coil(s). The effect of externally applied magnetic fields in different configurations on the attenuation of the trapped field is also discussed. A number of novel and hybrid bulk superconductor structures are described, including improved thermal conductivity structures and ferromagnet–superconductor structures, which have been designed to overcome some of the issues related to bulk superconductors and their magnetization and enhance the intrinsic properties of bulk superconductors acting as trapped field magnets. Finally, the use of hollow bulk cylinders/tubes for shielding is analysed. (topical review)

  3. Dynamics of Variance Risk Premia, Investors' Sentiment and Return Predictability

    DEFF Research Database (Denmark)

    Rombouts, Jerome V.K.; Stentoft, Lars; Violante, Francesco

    We develop a joint framework linking the physical variance and its risk neutral expectation implying variance risk premia that are persistent, appropriately reacting to changes in level and variability of the variance and naturally satisfying the sign constraint. Using option market data and real...... events and only marginally by the premium associated with normal price fluctuations....

  4. Bulk metallic glass matrix composites

    International Nuclear Information System (INIS)

    Choi-Yim, H.; Johnson, W.L.

    1997-01-01

    Composites with a bulk metallic glass matrix were synthesized and characterized. This was made possible by the recent development of bulk metallic glasses that exhibit high resistance to crystallization in the undercooled liquid state. In this letter, experimental methods for processing metallic glass composites are introduced. Three different bulk metallic glass forming alloys were used as the matrix materials. Both ceramics and metals were introduced as reinforcement into the metallic glass. The metallic glass matrix remained amorphous after adding up to a 30 vol% fraction of particles or short wires. X-ray diffraction patterns of the composites show only peaks from the second phase particles superimposed on the broad diffuse maxima from the amorphous phase. Optical micrographs reveal uniformly distributed particles in the matrix. The glass transition of the amorphous matrix and the crystallization behavior of the composites were studied by calorimetric methods. copyright 1997 American Institute of Physics

  5. Bulk viscosity and cosmological evolution

    International Nuclear Information System (INIS)

    Beesham, A.

    1996-01-01

    In a recent interesting paper, Pimentel and Diaz-Rivera (Nuovo Cimento B, 109(1994) 1317) have derived several solutions with bulk viscosity in homogeneous and isotropic cosmological models. They also discussed the properties of these solutions. In this paper the authors relate the solutions of Pimentel and Diaz-Rivera by simple transformations to previous solutions published in the literature, showing that all the solutions can be derived from the known existing ones. Drawbacks to these approaches of studying bulk viscosity are pointed out, and better approaches indicated

  6. Gene set analysis using variance component tests.

    Science.gov (United States)

    Huang, Yen-Tsung; Lin, Xihong

    2013-06-28

    Gene set analyses have become increasingly important in genomic research, as many complex diseases arise from the joint contribution of alterations in numerous genes. Genes often coordinate as a functional repertoire, e.g., a biological pathway/network, and are highly correlated. However, most of the existing gene set analysis methods do not fully account for the correlation among the genes. Here we propose to exploit this important feature of a gene set to improve statistical power in gene set analyses. We propose to model the effects of an independent variable, e.g., exposure/biological status (yes/no), on multiple gene expression values in a gene set using a multivariate linear regression model, where the correlation among the genes is explicitly modeled using a working covariance matrix. We develop TEGS (Test for the Effect of a Gene Set), a variance component test for the gene set effects by assuming a common distribution for regression coefficients in multivariate linear regression models, and calculate the p-values using permutation and a scaled chi-square approximation. We show using simulations that type I error is protected under different choices of working covariance matrices and power is improved as the working covariance approaches the true covariance. The global test is a special case of TEGS when correlation among genes in a gene set is ignored. Using both simulation data and a published diabetes dataset, we show that our test outperforms the commonly used approaches, the global test and gene set enrichment analysis (GSEA). We develop a gene set analysis method (TEGS) under the multivariate regression framework, which directly models the interdependence of the expression values in a gene set using a working covariance. TEGS outperforms two widely used methods, GSEA and the global test, in both simulations and a diabetes microarray dataset.
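
    To make the mechanics concrete, here is a hedged toy version: a quadratic-form set statistic with a permutation p-value. The statistic below is a simplification in the spirit of such score tests, not the exact TEGS statistic:

    ```python
    # Permutation test for a shared exposure effect across a gene set.
    import numpy as np

    rng = np.random.default_rng(7)
    n, g = 40, 10                                   # samples, genes in the set
    x = np.repeat([0.0, 1.0], n // 2)               # exposure status
    expr = rng.normal(0.0, 1.0, size=(n, g))
    expr[x == 1.0] += 0.4                           # shared shift under exposure

    def set_stat(labels):
        c = labels - labels.mean()
        return np.sum((c @ expr) ** 2)              # aggregates over the set

    observed = set_stat(x)
    perms = np.array([set_stat(rng.permutation(x)) for _ in range(2000)])
    print((perms >= observed).mean())               # permutation p-value
    ```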

  7. Minimum Q Electrically Small Antennas

    DEFF Research Database (Denmark)

    Kim, O. S.

    2012-01-01

    Theoretically, the minimum radiation quality factor Q of an isolated resonance can be achieved in a spherical electrically small antenna by combining TM1m and TE1m spherical modes, provided that the stored energy in the antenna spherical volume is totally suppressed. Using closed-form expressions…, numerical results for a multiarm spherical helix antenna confirm the theoretical predictions. For example, a 4-arm spherical helix antenna with a magnetic-coated perfectly electrically conducting core (ka=0.254) exhibits a Q of 0.66 times the Chu lower bound, or 1.25 times the minimum Q.

  8. Zirconium based bulk metallic glasses

    International Nuclear Information System (INIS)

    Dey, G.K.; Neogy, S.; Savalia, R.T.; Tewari, R.; Srivastava, D.; Banerjee, S.

    2006-01-01

    Metallic glasses have come into prominence in recent times because their noncrystalline atomic arrangement imparts many useful and unusual properties to these metallic solids. In this study, bulk glasses have been obtained in a Zr-based multicomponent alloy by induction melting the alloys in silica crucibles and casting them in the form of rods 3 and 6 mm in diameter in a copper mould.

  9. Longitudinal bulk acoustic mass sensor

    DEFF Research Database (Denmark)

    Hales, Jan Harry; Teva, Jordi; Boisen, Anja

    2009-01-01

    A polycrystalline silicon longitudinal bulk acoustic cantilever is fabricated and operated in air at 51 MHz. A mass sensitivity of 100 Hz/fg (1 fg = 10⁻¹⁵ g) is obtained from the preliminary experiments where a minute mass is deposited on the device by means of focused ion beam. The total noise...

  10. Bulk viscosity of molecular fluids

    Science.gov (United States)

    Jaeger, Frederike; Matar, Omar K.; Müller, Erich A.

    2018-05-01

    The bulk viscosity of molecular models of gases and liquids is determined by molecular simulations as a combination of a dilute gas contribution, arising due to the relaxation of internal degrees of freedom, and a configurational contribution, due to the presence of intermolecular interactions. The dilute gas contribution is evaluated using experimental data for the relaxation times of vibrational and rotational degrees of freedom. The configurational part is calculated using Green-Kubo relations for the fluctuations of the pressure tensor obtained from equilibrium microcanonical molecular dynamics simulations. As a benchmark, the Lennard-Jones fluid is studied. Both atomistic and coarse-grained force fields for water, CO2, and n-decane are considered and tested for their accuracy, and where possible, compared to experimental data. The dilute gas contribution to the bulk viscosity is seen to be significant only in the cases when intramolecular relaxation times are in the μs range, and for low vibrational wave numbers (<1000 cm⁻¹); this explains the abnormally high values of bulk viscosity reported for CO2. In all other cases studied, the dilute gas contribution is negligible and the configurational contribution dominates the overall behavior. In particular, the configurational term is responsible for the enhancement of the bulk viscosity near the critical point.

  11. Variance in elective surgery for chronic pancreatitis.

    Science.gov (United States)

    Shah, Nehal S; Siriwardena, Ajith K

    2009-01-08

    Evidence to guide selection of optimal surgical treatment for patients with painful chronic pancreatitis is limited. Baseline assessment data are limited and thus patients in different centres may be presenting at different stages of their illness. This study undertakes a systematic overview of reports of elective surgical intervention in chronic pancreatitis with particular reference to reporting of quality of life and baseline assessment and relation between disease and type of procedure. A computerised search of the PubMed, Embase and Cochrane databases was undertaken for the period January 1997 to March 2007 yielding 46 manuscripts providing data on 4,626 patients undergoing elective surgery for chronic pancreatitis. The median number of patients per study was 71 (range: 4-484). The median period for recruitment of patients was 10 years (range: 2-36 years). An externally validated quality of life questionnaire is reported in 8 (17.4%) of 46 manuscripts covering 441 (9.5%) of 4,626 patients. Formal comparison of pre-operative and post-operative pain scores was provided in 15 (32.6%) of manuscripts. Only seven (15.2%) reports provide a formal rationale or indication for selection of the type of elective surgical procedure for a stated disease variant and these papers cover 481 (10.4%) patients. In conclusion, this study demonstrates that there is a lack of standardization between units of the criteria for operative intervention in painful chronic pancreatitis. At a minimum, formal quality of life testing using a validated system should be undertaken in all patients prior to elective surgery for painful chronic pancreatitis.

  12. MMSE-based algorithm for joint signal detection, channel and noise variance estimation for OFDM systems

    CERN Document Server

    Savaux, Vincent

    2014-01-01

    This book presents an algorithm for the detection of an orthogonal frequency division multiplexing (OFDM) signal in a cognitive radio context by means of a joint and iterative channel and noise estimation technique. Based on the minimum mean square criterion, it performs an accurate detection of a user in a frequency band, by achieving a quasi-optimal channel and noise variance estimation if the signal is present, and by estimating the noise level in the band if the signal is absent. Organized into three chapters, the first chapter provides the background against which the system model is pr
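
    The book's joint, iterative estimator is beyond a few lines, but the LMMSE building block it rests on can be sketched; here the channel correlation matrix and the noise variance are assumed known, and all values are invented:

    ```python
    # LMMSE refinement of a least-squares OFDM channel estimate:
    # h_mmse = R_h (R_h + sigma^2 I)^{-1} h_ls.
    import numpy as np

    rng = np.random.default_rng(8)
    k = 64                                          # number of subcarriers
    idx = np.arange(k)
    R_h = np.exp(-0.1 * np.abs(idx[:, None] - idx[None, :]))  # toy correlation
    h = np.linalg.cholesky(R_h + 1e-9 * np.eye(k)) @ rng.standard_normal(k)

    sigma2 = 0.1                                    # noise variance (assumed known)
    h_ls = h + np.sqrt(sigma2) * rng.standard_normal(k)       # noisy LS estimate
    h_mmse = R_h @ np.linalg.solve(R_h + sigma2 * np.eye(k), h_ls)

    print(np.mean((h_ls - h)**2), np.mean((h_mmse - h)**2))   # MSE drops
    ```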

  13. Regional sensitivity analysis using revised mean and variance ratio functions

    International Nuclear Information System (INIS)

    Wei, Pengfei; Lu, Zhenzhou; Ruan, Wenbin; Song, Jingwen

    2014-01-01

    The variance ratio function, derived from the contribution to sample variance (CSV) plot, is a regional sensitivity index. It studies how much the output deviates from the original mean of the model output when the distribution range of one input is reduced, and it measures the contribution of different distribution ranges of each input to the variance of the model output. In this paper, revised mean and variance ratio functions are developed for quantifying the actual change of the model output mean and variance, respectively, when the range of one input is reduced. The connection between the revised variance ratio function and the original one is derived and discussed. It is shown that, compared with the classical variance ratio function, the revised one is more suitable for evaluating model output variance under reduced ranges of model inputs. A Monte Carlo procedure, which needs only a single set of samples, is developed for efficiently computing the revised mean and variance ratio functions. The revised mean and variance ratio functions are compared with the classical ones using the Ishigami function. Finally, they are applied to a planar 10-bar structure.
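
    The single-sample Monte Carlo idea can be sketched directly (toy model and ranges; this follows the general recipe of reusing one sample, not the paper's exact estimators):

    ```python
    # Revised mean and variance ratios from one Monte Carlo sample:
    # restrict the range of one input by filtering existing draws.
    import numpy as np

    rng = np.random.default_rng(9)
    n = 100_000
    x1 = rng.uniform(-1.0, 1.0, n)
    x2 = rng.uniform(-1.0, 1.0, n)
    y = x1**2 + 0.5 * x2                      # toy model

    keep = np.abs(x1) <= 0.5                  # reduced range for input x1
    print(y[keep].mean() / y.mean())          # revised mean ratio
    print(y[keep].var() / y.var())            # revised variance ratio
    ```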

  14. Fermat and the Minimum Principle

    Indian Academy of Sciences (India)

    Arguably, least action and minimum principles were offered or applied much earlier. This (or these) principle(s) is/are among the fundamental, basic, unifying or organizing ones used to describe a variety of natural phenomena. It considers the amount of energy expended in performing a given action to be the least required ...

  15. Coupling between minimum scattering antennas

    DEFF Research Database (Denmark)

    Andersen, J.; Lessow, H; Schjær-Jacobsen, Hans

    1974-01-01

    Coupling between minimum scattering antennas (MSA's) is investigated by the coupling theory developed by Wasylkiwskyj and Kahn. Only rotationally symmetric power patterns are considered, and graphs of relative mutual impedance are presented as a function of distance and pattern parameters. Crossed...

  16. Estimating the encounter rate variance in distance sampling

    Science.gov (United States)

    Fewster, R.M.; Buckland, S.T.; Burnham, K.P.; Borchers, D.L.; Jupp, P.E.; Laake, J.L.; Thomas, L.

    2009-01-01

    The dominant source of variance in line transect sampling is usually the encounter rate variance. Systematic survey designs are often used to reduce the true variability among different realizations of the design, but estimating the variance is difficult and estimators typically approximate the variance by treating the design as a simple random sample of lines. We explore the properties of different encounter rate variance estimators under random and systematic designs. We show that a design-based variance estimator improves upon the model-based estimator of Buckland et al. (2001, Introduction to Distance Sampling. Oxford: Oxford University Press, p. 79) when transects are positioned at random. However, if populations exhibit strong spatial trends, both estimators can have substantial positive bias under systematic designs. We show that poststratification is effective in reducing this bias. © 2008, The International Biometric Society.
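
    As a concrete illustration of the design-based idea, the sketch below implements one standard line-to-line form of the encounter rate variance estimator (the exact weighting used in the paper's comparisons may differ; the function name and data are hypothetical).

        import numpy as np

        def encounter_rate_var(counts, lengths):
            # Design-based estimate of var(n/L), treating the K transect lines
            # as a simple random sample; counts[k] animals are seen on a line
            # of length lengths[k], L = total effort, n = total count.
            counts = np.asarray(counts, float)
            lengths = np.asarray(lengths, float)
            K, L, n = len(counts), lengths.sum(), counts.sum()
            dev = counts / lengths - n / L
            return K / (L ** 2 * (K - 1)) * np.sum(lengths ** 2 * dev ** 2)

        print(encounter_rate_var([3, 0, 5, 2], [1.2, 0.8, 1.5, 1.0]))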

  17. Variance swap payoffs, risk premia and extreme market conditions

    DEFF Research Database (Denmark)

    Rombouts, Jeroen V.K.; Stentoft, Lars; Violante, Francesco

    This paper estimates the Variance Risk Premium (VRP) directly from synthetic variance swap payoffs. Since variance swap payoffs are highly volatile, we extract the VRP by using signal extraction techniques based on a state-space representation of our model in combination with a simple economic ... The latter variables and the VRP generate different return predictability on the major US indices. A factor model is proposed to extract a market VRP which turns out to be priced when considering Fama and French portfolios ...

  18. RR-Interval variance of electrocardiogram for atrial fibrillation detection

    Science.gov (United States)

    Nuryani, N.; Solikhah, M.; Nugoho, A. S.; Afdala, A.; Anzihory, E.

    2016-11-01

    Atrial fibrillation is a serious heart problem originating from the upper chambers of the heart. The common indication of atrial fibrillation is irregularity of the R-peak-to-R-peak time interval, which is called the RR interval for short. The irregularity can be represented by the variance, or spread, of the RR intervals. This article presents a system to detect atrial fibrillation using variances. Using clinical data of patients with atrial fibrillation attacks, it is shown that the variance of the electrocardiographic RR intervals is higher during atrial fibrillation than during normal rhythm. Utilizing a simple detection technique based on the variance of RR intervals, we find good atrial fibrillation detection performance.
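
    A minimal sketch of such a variance-based detector follows; the window length and variance threshold here are illustrative assumptions, not values from the paper.

        import numpy as np

        def af_suspected(rr, window=30, threshold=0.02):
            # Flag each sliding window of RR intervals (seconds) whose variance
            # exceeds `threshold` (s^2); irregular rhythms give large variance.
            rr = np.asarray(rr, float)
            return np.array([rr[i:i + window].var() > threshold
                             for i in range(len(rr) - window + 1)])

        rng = np.random.default_rng(1)
        regular = rng.normal(0.80, 0.02, 60)    # steady sinus rhythm
        irregular = rng.normal(0.70, 0.15, 60)  # AF-like RR irregularity
        print(af_suspected(regular).any(), af_suspected(irregular).any())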

  19. Multiperiod Mean-Variance Portfolio Optimization via Market Cloning

    Energy Technology Data Exchange (ETDEWEB)

    Ankirchner, Stefan, E-mail: ankirchner@hcm.uni-bonn.de [Rheinische Friedrich-Wilhelms-Universitaet Bonn, Institut fuer Angewandte Mathematik, Hausdorff Center for Mathematics (Germany); Dermoune, Azzouz, E-mail: Azzouz.Dermoune@math.univ-lille1.fr [Universite des Sciences et Technologies de Lille, Laboratoire Paul Painleve UMR CNRS 8524 (France)

    2011-08-15

    The problem of finding the mean-variance optimal portfolio in a multiperiod model cannot be solved directly by means of dynamic programming. In order to find a solution, we therefore first introduce independent market clones having the same distributional properties as the original market, and we replace the portfolio mean and variance by their empirical counterparts. We then use dynamic programming to derive portfolios maximizing a weighted sum of the empirical mean and variance. By letting the number of market clones converge to infinity, we are able to solve the original mean-variance problem.

  20. Network Structure and Biased Variance Estimation in Respondent Driven Sampling.

    Science.gov (United States)

    Verdery, Ashton M; Mouw, Ted; Bauldry, Shawn; Mucha, Peter J

    2015-01-01

    This paper explores bias in the estimation of sampling variance in Respondent Driven Sampling (RDS). Prior methodological work on RDS has focused on its problematic assumptions and the biases and inefficiencies of its estimators of the population mean. Nonetheless, researchers have given only slight attention to the topic of estimating sampling variance in RDS, despite the importance of variance estimation for the construction of confidence intervals and hypothesis tests. In this paper, we show that the estimators of RDS sampling variance rely on a critical assumption that the network is First Order Markov (FOM) with respect to the dependent variable of interest. We demonstrate, through intuitive examples, mathematical generalizations, and computational experiments that current RDS variance estimators will always underestimate the population sampling variance of RDS in empirical networks that do not conform to the FOM assumption. Analysis of 215 observed university and school networks from Facebook and Add Health indicates that the FOM assumption is violated in every empirical network we analyze, and that these violations lead to substantially biased RDS estimators of sampling variance. We propose and test two alternative variance estimators that show some promise for reducing biases, but which also illustrate the limits of estimating sampling variance with only partial information on the underlying population social network.

  2. Discrete and continuous time dynamic mean-variance analysis

    OpenAIRE

    Reiss, Ariane

    1999-01-01

    Contrary to static mean-variance analysis, very few papers have dealt with dynamic mean-variance analysis. Here, the mean-variance efficient self-financing portfolio strategy is derived for n risky assets in discrete and continuous time. In the discrete setting, the resulting portfolio is mean-variance efficient in a dynamic sense. It is shown that the optimal strategy for n risky assets may be dominated if the expected terminal wealth is constrained to exactly attain a certain goal instead o...

  4. Coulombic Fluids Bulk and Interfaces

    CERN Document Server

    Freyland, Werner

    2011-01-01

    Ionic liquids have attracted considerable interest in recent years. In this book the bulk and interfacial physico-chemical characteristics of various fluid systems dominated by Coulomb interactions are treated, including molten salts, ionic liquids, metal-molten salt mixtures and expanded fluid metals. Of particular interest is the comparison of the different systems. Topics in the bulk phase concern the microscopic structure, the phase behaviour and critical phenomena, and the metal-nonmetal transition. Interfacial phenomena include wetting transitions, electrowetting, surface freezing, and the electrified ionic liquid/electrode interface. With regard to the latter, 2D and 3D electrochemical phase formation of metals and semiconductors on the nanometer scale is described for a number of selected examples. The basic concepts and various experimental methods are introduced, making the book suitable for both graduate students and researchers interested in Coulombic fluids.

  5. Quantum mechanics the theoretical minimum

    CERN Document Server

    Susskind, Leonard

    2014-01-01

    From the bestselling author of The Theoretical Minimum, an accessible introduction to the math and science of quantum mechanics. Quantum Mechanics is a (second) book for anyone who wants to learn how to think like a physicist. In this follow-up to the bestselling The Theoretical Minimum, physicist Leonard Susskind and data engineer Art Friedman offer a first course in the theory and associated mathematics of the strange world of quantum mechanics. Quantum Mechanics presents Susskind and Friedman’s crystal-clear explanations of the principles of quantum states, uncertainty and time dependence, entanglement, and particle and wave states, among other topics. An accessible but rigorous introduction to a famously difficult topic, Quantum Mechanics provides a tool kit for amateur scientists to learn physics at their own pace.

  6. Minimum resolvable power contrast model

    Science.gov (United States)

    Qian, Shuai; Wang, Xia; Zhou, Jingjing

    2018-01-01

    Signal-to-noise ratio and MTF are important indices for evaluating the performance of optical systems. However, whether used alone or jointly, they cannot intuitively describe the overall performance of the system. Therefore, an index is proposed to reflect the comprehensive system performance: the Minimum Resolvable Radiation Performance Contrast (MRP) model. MRP is an evaluation model that does not involve the human eye. It starts from the radiance of the target and the background, transforms the target and background into equivalent strips, and accounts for attenuation by the atmosphere, the optical imaging system, and the detector. Combining the signal-to-noise ratio and the MTF yields the Minimum Resolvable Radiation Performance Contrast. Finally, the detection probability model of MRP is given.

  7. Understanding the Minimum Wage: Issues and Answers.

    Science.gov (United States)

    Employment Policies Inst. Foundation, Washington, DC.

    This booklet, which is designed to clarify facts regarding the minimum wage's impact on marketplace economics, contains a total of 31 questions and answers pertaining to the following topics: relationship between minimum wages and poverty; impacts of changes in the minimum wage on welfare reform; and possible effects of changes in the minimum wage…

  8. 5 CFR 551.301 - Minimum wage.

    Science.gov (United States)

    2010-01-01

    Title 5, Administrative Personnel (2010-01-01). Fair Labor Standards Act, Minimum Wage Provisions, Basic Provision, § 551.301 Minimum wage. (a)(1) Except... employees wages at rates not less than the minimum wage specified in section 6(a)(1) of the Act for all...

  9. Resampled Efficient Frontier Portfolio Analysis Based on Mean-Variance Optimization

    OpenAIRE

    Abdurakhman, Abdurakhman

    2008-01-01

    Appropriate asset allocation decisions in portfolio investment can maximize returns and/or minimize risk. The method most often used in portfolio optimization is the Markowitz Mean-Variance method. In practice, this method has the weakness of not being very stable: small changes in the estimated input parameters cause large changes in the portfolio composition. For this reason, a portfolio optimization method has been developed that can overcome the instability of the Mean-Variance ...

  10. Capturing option anomalies with a variance-dependent pricing kernel

    NARCIS (Netherlands)

    Christoffersen, P.; Heston, S.; Jacobs, K.

    2013-01-01

    We develop a GARCH option model with a variance premium by combining the Heston-Nandi (2000) dynamic with a new pricing kernel that nests Rubinstein (1976) and Brennan (1979). While the pricing kernel is monotonic in the stock return and in variance, its projection onto the stock return is

  11. Realized range-based estimation of integrated variance

    DEFF Research Database (Denmark)

    Christensen, Kim; Podolskij, Mark

    2007-01-01

    We provide a set of probabilistic laws for estimating the quadratic variation of continuous semimartingales with the realized range-based variance, a statistic that replaces every squared return of the realized variance with a normalized squared range. If the entire sample path of the process is a...
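
    For intuition, a bare-bones version of the estimator can be written directly from that description: sum the squared log ranges over the intraday sampling intervals and divide by E[range^2] = 4 log 2 for a standard Brownian motion. This sketch assumes continuous monitoring and no microstructure noise, both of which the paper treats carefully.

        import numpy as np

        def realized_range_variance(highs, lows):
            # Each squared return of the realized variance is replaced by a
            # normalized squared log range over the same interval.
            highs = np.asarray(highs, float)
            lows = np.asarray(lows, float)
            return np.sum(np.log(highs / lows) ** 2) / (4.0 * np.log(2.0))

        # intraday high/low pairs for three sampling intervals
        print(realized_range_variance([100.4, 100.9, 100.7], [99.8, 100.2, 100.1]))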

  12. Diagnostic checking in linear processes with infinite variance

    OpenAIRE

    Krämer, Walter; Runde, Ralf

    1998-01-01

    We consider empirical autocorrelations of residuals from infinite variance autoregressive processes. Unlike the finite-variance case, it emerges that the limiting distribution, after suitable normalization, is not always more concentrated around zero when residuals rather than true innovations are employed.

  13. Evaluation of Mean and Variance Integrals without Integration

    Science.gov (United States)

    Joarder, A. H.; Omar, M. H.

    2007-01-01

    The mean and variance of some continuous distributions, in particular the exponentially decreasing probability distribution and the normal distribution, are considered. Since the usual derivations involve integration by parts, many students do not feel comfortable with them. In this note, a technique is demonstrated for deriving the mean and variance through differential…
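
    One such differentiation technique, for the exponential case, uses the moment generating function: differentiating it at zero yields the moments with no integration by parts (this worked example is illustrative; the note's own derivation may be organized differently).

        % X ~ Exp(lambda), density lambda * exp(-lambda x) on x >= 0
        \[
          M_X(t) = \mathbb{E}\, e^{tX} = \frac{\lambda}{\lambda - t}, \qquad t < \lambda,
        \]
        \[
          \mathbb{E} X = M_X'(0) = \frac{1}{\lambda}, \qquad
          \mathbb{E} X^2 = M_X''(0) = \frac{2}{\lambda^2}, \qquad
          \operatorname{Var} X = \frac{2}{\lambda^2} - \frac{1}{\lambda^2} = \frac{1}{\lambda^2}.
        \]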

  14. Adjustment of heterogenous variances and a calving year effect in ...

    African Journals Online (AJOL)

    Data at the beginning and at the end of the lactation period have higher variances than tests in the middle of the lactation. Furthermore, first lactations have lower means and variances compared to second and third lactations. This is a deviation from the basic assumptions required for the application of repeatability models.

  15. Direct encoding of orientation variance in the visual system.

    Science.gov (United States)

    Norman, Liam J; Heywood, Charles A; Kentridge, Robert W

    2015-01-01

    Our perception of regional irregularity, an example of which is orientation variance, seems effortless when we view two patches of texture that differ in this attribute. Little is understood, however, of how the visual system encodes a regional statistic like orientation variance, but there is some evidence to suggest that it is directly encoded by populations of neurons tuned broadly to high or low levels. The present study shows that selective adaptation to low or high levels of variance results in a perceptual aftereffect that shifts the perceived level of variance of a subsequently viewed texture in the direction away from that of the adapting stimulus (Experiments 1 and 2). Importantly, the effect is durable across changes in mean orientation, suggesting that the encoding of orientation variance is independent of global first moment orientation statistics (i.e., mean orientation). In Experiment 3 it was shown that the variance-specific aftereffect did not show signs of being encoded in a spatiotopic reference frame, similar to the equivalent aftereffect of adaptation to the first moment orientation statistic (the tilt aftereffect), which is represented in the primary visual cortex and exists only in retinotopic coordinates. Experiment 4 shows that a neuropsychological patient with damage to ventral areas of the cortex but spared intact early areas retains sensitivity to orientation variance. Together these results suggest that orientation variance is encoded directly by the visual system and possibly at an early cortical stage.

  16. Beyond the Mean: Sensitivities of the Variance of Population Growth.

    Science.gov (United States)

    Trotter, Meredith V; Krishna-Kumar, Siddharth; Tuljapurkar, Shripad

    2013-03-01

    Populations in variable environments are described by both a mean growth rate and a variance of stochastic population growth. Increasing variance will increase the width of confidence bounds around estimates of population size, growth, probability of and time to quasi-extinction. However, traditional sensitivity analyses of stochastic matrix models only consider the sensitivity of the mean growth rate. We derive an exact method for calculating the sensitivity of the variance in population growth to changes in demographic parameters. Sensitivities of the variance also allow a new sensitivity calculation for the cumulative probability of quasi-extinction. We apply this new analysis tool to an empirical dataset on at-risk polar bears to demonstrate its utility in conservation biology. We find that in many cases a change in life history parameters will increase both the mean and variance of population growth of polar bears. This counterintuitive behaviour of the variance complicates predictions about overall population impacts of management interventions. Sensitivity calculations for cumulative extinction risk factor in changes to both mean and variance, providing a highly useful quantitative tool for conservation management. The mean stochastic growth rate and its sensitivities do not fully describe the dynamics of population growth. The use of variance sensitivities gives a more complete understanding of population dynamics and facilitates the calculation of new sensitivities for extinction processes.

  17. Genotypic-specific variance in Caenorhabditis elegans lifetime fecundity.

    Science.gov (United States)

    Diaz, S Anaid; Viney, Mark

    2014-06-01

    Organisms live in heterogeneous environments, so strategies that maximize fitness in such environments will evolve. Variation in traits is important because it is the raw material on which natural selection acts during evolution. Phenotypic variation is usually thought to be due to genetic variation and/or environmentally induced effects. Therefore, genetically identical individuals in a constant environment should have invariant traits. Clearly, genetically identical individuals do differ phenotypically, usually thought to be due to stochastic processes. It is now becoming clear, especially from studies of unicellular species, that phenotypic variance among genetically identical individuals in a constant environment can be genetically controlled and that therefore, in principle, this can be subject to selection. However, there has been little investigation of these phenomena in multicellular species. Here, we have studied the mean lifetime fecundity (thus a trait likely to be relevant to reproductive success), and variance in lifetime fecundity, in recently-wild isolates of the model nematode Caenorhabditis elegans. We found that these genotypes differed in their variance in lifetime fecundity: some had high variance in fecundity, others very low variance. We find that this variance in lifetime fecundity was negatively related to the mean lifetime fecundity of the lines, and that the variance of the lines was positively correlated between environments. We suggest that the variance in lifetime fecundity may be a bet-hedging strategy used by this species.

  18. On the Endogeneity of the Mean-Variance Efficient Frontier.

    Science.gov (United States)

    Somerville, R. A.; O'Connell, Paul G. J.

    2002-01-01

    Explains that the endogeneity of the efficient frontier in the mean-variance model of portfolio selection is commonly obscured in portfolio selection literature and in widely used textbooks. Demonstrates endogeneity and discusses the impact of parameter changes on the mean-variance efficient frontier and on the beta coefficients of individual…

  19. 42 CFR 456.522 - Content of request for variance.

    Science.gov (United States)

    2010-10-01

    Title 42, Public Health (2010-10-01). Centers for Medicare & Medicaid Services, Department of Health and Human... perform UR within the time requirements for which the variance is requested and its good faith efforts to...

  20. 29 CFR 1905.5 - Effect of variances.

    Science.gov (United States)

    2010-07-01

    Regulations Relating to Labor (Continued), Occupational Safety and Health Administration, Department of Labor, Williams-Steiger Occupational Safety and Health Act of 1970, General, § 1905.5 Effect of variances. All variances... concerning a proposed penalty or period of abatement is pending before the Occupational Safety and Health...

  1. 29 CFR 1904.38 - Variances from the recordkeeping rule.

    Science.gov (United States)

    2010-07-01

    ..., Department of Labor, Recording and Reporting Occupational Injuries and Illnesses, Other OSHA Injury and Illness... he or she finds appropriate. (iv) If the Assistant Secretary grants your variance petition, OSHA will... Secretary is reviewing your variance petition. (4) If I have already been cited by OSHA for not following...

  2. Gender Variance and Educational Psychology: Implications for Practice

    Science.gov (United States)

    Yavuz, Carrie

    2016-01-01

    The area of gender variance appears to be more visible in both the media and everyday life. Within educational psychology literature gender variance remains underrepresented. The positioning of educational psychologists working across the three levels of child and family, school or establishment and education authority/council, means that they are…

  3. Comparison of bulk Micromegas with different amplification gaps

    Energy Technology Data Exchange (ETDEWEB)

    Bhattacharya, Purba, E-mail: purba.bhattacharya@saha.ac.in [Applied Nuclear Physics Division, Saha Institute of Nuclear Physics, Kolkata 700064 (India); Bhattacharya, Sudeb [Emeritus Scientist (CSIR), Applied Nuclear Physics Division, Saha Institute of Nuclear Physics, Kolkata 700064 (India); Majumdar, Nayana; Mukhopadhyay, Supratik; Sarkar, Sandip [Applied Nuclear Physics Division, Saha Institute of Nuclear Physics, Kolkata 700064 (India); Colas, Paul; Attie, David [DSM/IRFU, CEA/Saclay, F-91191 Gif-sur-Yvette CEDEX (France)

    2013-12-21

    The bulk Micromegas detector is considered to be a promising candidate for building TPCs for several future experiments, including the projected linear collider. The standard bulk with a spacing of 128 μm has already established itself as a good choice for its performance in terms of gas gain uniformity, energy and space point resolution, and its capability to efficiently pave large readout surfaces with minimum dead zone. The present work involves the comparison of this standard bulk with a relatively less used bulk Micromegas detector having a larger amplification gap of 192 μm. Detector gain, energy resolution and electron transparency of these Micromegas have been measured under different conditions in various Argon-based gas mixtures to evaluate their performance. These measured characteristics have also been compared in detail to numerical simulations using the Garfield framework that combines packages such as neBEM, Magboltz and Heed. Further, we have carried out another numerical study to determine the effect of dielectric spacers on different detector features. A comprehensive comparison of the two detectors has been presented and analyzed in this work. Highlights: • We present a comparative study between bulk Micromegas having different amplification gaps. • Various detector characteristics such as gain, electron transparency and energy resolution have been measured experimentally. • Successful comparisons of these measured data with the simulation results indicate that the device physics is quite well understood. • A numerical study to determine the effect of dielectric spacers on different detector features has been carried out.

  4. Ordered bulk degradation via autophagy

    DEFF Research Database (Denmark)

    Dengjel, Jörn; Kristensen, Anders Riis; Andersen, Jens S

    2008-01-01

    During amino acid starvation, cells undergo macroautophagy which is regarded as an unspecific bulk degradation process. Lately, more and more organelle-specific autophagy subtypes such as reticulophagy, mitophagy and ribophagy have been described and it could be shown, depending on the experimental...... at proteasomal and lysosomal degradation ample cross-talk between the two degradation pathways became evident. Degradation via autophagy appeared to be ordered and regulated at the protein complex/organelle level. This raises several important questions such as: can macroautophagy itself be specific and what...

  5. Correlations Between Magnetic Flux and Levitation Force of HTS Bulk Above a Permanent Magnet Guideway

    Science.gov (United States)

    Huang, Huan; Zheng, Jun; Zheng, Botian; Qian, Nan; Li, Haitao; Li, Jipeng; Deng, Zigang

    2017-10-01

    In order to clarify the correlations between the magnetic flux and the levitation force of a high-temperature superconducting (HTS) bulk, we measured the magnetic flux density on the bottom and top surfaces of a bulk superconductor while it moved vertically above a permanent magnet guideway (PMG). The levitation force of the bulk superconductor was measured simultaneously. In this study, the HTS bulk was moved down and up three times between the field-cooling position and the working position above the PMG, followed by a relaxation measurement of 300 s at the minimum height position. During the whole process, the magnetic flux density and levitation force of the bulk superconductor were recorded and collected by a multipoint magnetic field measurement platform and a self-developed maglev measurement system, respectively. The magnetic flux density on the bottom surface reflected the induced field in the superconductor bulk, while that on the top revealed the penetrated magnetic flux. The results show that the magnetic flux density and levitation force of the bulk superconductor are in direct correlation from the viewpoint of the inner supercurrent. In general, this work is instructive for understanding the connection between the magnetic flux density, the inner current density and the levitation behavior of HTS bulks employed in a maglev system. Meanwhile, this magnetic flux density measurement method has enriched present experimental evaluation methods for maglev systems.

  6. Integrating mean and variance heterogeneities to identify differentially expressed genes.

    Science.gov (United States)

    Ouyang, Weiwei; An, Qiang; Zhao, Jinying; Qin, Huaizhen

    2016-12-06

    In functional genomics studies, tests on mean heterogeneity have been widely employed to identify differentially expressed genes with distinct mean expression levels under different experimental conditions. Variance heterogeneity (i.e., the difference between condition-specific variances) of gene expression levels is simply neglected or calibrated for as an impediment. The mean heterogeneity in the expression level of a gene reflects one aspect of its distribution alteration, and variance heterogeneity induced by condition change may reflect another aspect. Change in condition may alter both the mean and some higher-order characteristics of the distributions of expression levels of susceptible genes. In this report, we put forth a conception of mean-variance differentially expressed (MVDE) genes, whose expression means and variances are sensitive to the change in experimental condition. We mathematically proved the null independence of existent mean heterogeneity tests and variance heterogeneity tests. Based on this independence, we proposed an integrative mean-variance test (IMVT) to combine gene-wise mean heterogeneity and variance heterogeneity induced by condition change. The IMVT outperformed its competitors under comprehensive simulations of normality and Laplace settings. For moderate samples, the IMVT well controlled type I error rates, as did the existent mean heterogeneity tests (the Welch t test (WT) and the moderated Welch t test (MWT)) and the procedure of separate tests on mean and variance heterogeneities (SMVT), but the likelihood ratio test (LRT) severely inflated type I error rates. In the presence of variance heterogeneity, the IMVT appeared noticeably more powerful than all the valid mean heterogeneity tests. Application to the gene profiles of peripheral circulating B raised solid evidence of informative variance heterogeneity. After adjusting for background data structure, the IMVT replicated previous discoveries and identified novel experiment
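
    The null independence of the two test families suggests a simple combination scheme. The sketch below pairs a Welch t test (mean heterogeneity) with a Brown-Forsythe test (variance heterogeneity) and merges the p-values by Fisher's method; this is only an illustration of the combining idea, not the authors' IMVT statistic.

        import numpy as np
        from scipy import stats

        def mean_variance_test(a, b):
            # Means: Welch t test; variances: Brown-Forsythe (median-centered
            # Levene). Under the null the two p-values are independent, so
            # Fisher's method gives a valid combined test.
            _, p_mean = stats.ttest_ind(a, b, equal_var=False)
            _, p_var = stats.levene(a, b, center='median')
            _, p_comb = stats.combine_pvalues([p_mean, p_var], method='fisher')
            return p_mean, p_var, p_comb

        rng = np.random.default_rng(2)
        a = rng.normal(0.0, 1.0, 50)   # condition 1
        b = rng.normal(0.3, 2.0, 50)   # condition 2: shifted mean, inflated variance
        print(mean_variance_test(a, b))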

  7. Comparing estimates of genetic variance across different relationship models.

    Science.gov (United States)

    Legarra, Andres

    2016-02-01

    Use of relationships between individuals to estimate genetic variances and heritabilities via mixed models is standard practice in human, plant and livestock genetics. Different models or information for relationships may give different estimates of genetic variances. However, comparing these estimates across different relationship models is not straightforward as the implied base populations differ between relationship models. In this work, I present a method to compare estimates of variance components across different relationship models. I suggest referring genetic variances obtained using different relationship models to the same reference population, usually a set of individuals in the population. Expected genetic variance of this population is the estimated variance component from the mixed model times a statistic, Dk, which is the average self-relationship minus the average (self- and across-) relationship. For most typical models of relationships, Dk is close to 1. However, this is not true for very deep pedigrees, for identity-by-state relationships, or for non-parametric kernels, which tend to overestimate the genetic variance and the heritability. Using mice data, I show that heritabilities from identity-by-state and kernel-based relationships are overestimated. Weighting these estimates by Dk scales them to a base comparable to genomic or pedigree relationships, avoiding wrong comparisons, for instance, "missing heritabilities". Copyright © 2015 Elsevier Inc. All rights reserved.
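
    The rescaling statistic itself is one line of code once a relationship matrix is available; the sketch below (matrix values made up for illustration) follows the verbal definition given in the abstract.

        import numpy as np

        def dk_statistic(K):
            # Dk = average self-relationship minus the average of all (self-
            # and across-) relationships; multiplying an estimated variance
            # component by Dk refers it to the chosen reference population.
            K = np.asarray(K, float)
            return np.mean(np.diag(K)) - np.mean(K)

        K = np.array([[1.02, 0.10, 0.05],    # toy genomic relationship matrix
                      [0.10, 0.98, 0.20],
                      [0.05, 0.20, 1.00]])
        print(dk_statistic(K))  # close to 1 for typical relationship models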

  8. The minimum yield in channeling

    International Nuclear Information System (INIS)

    Uguzzoni, A.; Gaertner, K.; Lulli, G.; Andersen, J.U.

    2000-01-01

    A first estimate of the minimum yield was obtained from Lindhard's theory, with the assumption of a statistical equilibrium in the transverse phase-space of channeled particles guided by a continuum axial potential. However, computer simulations have shown that this estimate should be corrected by a fairly large factor, C (approximately equal to 2.5), called the Barrett factor. We have shown earlier that the concept of a statistical equilibrium can be applied to understand this result, with the introduction of a constraint in phase-space due to planar channeling of axially channeled particles. Here we present an extended test of these ideas on the basis of computer simulation of the trajectories of 2 MeV α particles in Si. In particular, the gradual trend towards a full statistical equilibrium is studied. We also discuss the introduction of this modification of standard channeling theory into descriptions of the multiple scattering of channeled particles (dechanneling) by a master equation and show that the calculated minimum yields are in very good agreement with the results of a full computer simulation

  9. Minimum Bias Trigger in ATLAS

    International Nuclear Information System (INIS)

    Kwee, Regina

    2010-01-01

    Since the restart of the LHC in November 2009, ATLAS has collected inelastic pp collisions to perform first measurements of charged particle densities. These measurements will help to constrain various models describing soft parton interactions phenomenologically. Understanding the trigger efficiencies for different event types is therefore crucial to minimize any possible bias in the event selection. ATLAS uses two main minimum bias triggers, featuring complementary detector components and trigger levels. While a hardware-based first trigger level situated in the forward regions with 2.2 < |η| < 3.8 has been proven to select pp collisions very efficiently, the Inner Detector based minimum bias trigger uses a random seed on filled bunches and central tracking detectors for the event selection. Both triggers were essential for the analysis of kinematic spectra of charged particles. Their performance and trigger efficiency measurements as well as studies on possible bias sources will be presented. We also highlight the advantage of these triggers for particle correlation analyses. (author)

  10. Microhardness of bulk-fill composite materials

    OpenAIRE

    Kelić, Katarina; Matić, Sanja; Marović, Danijela; Klarić, Eva; Tarle, Zrinka

    2016-01-01

    The aim of the study was to determine the microhardness of high- and low-viscosity bulk-fill composite resins and compare it with conventional composite materials. Four materials of high viscosity were tested, including three bulk-fills: QuiXfil (QF), x-tra fil (XTF) and Tetric EvoCeram Bulk Fill (TEBCF), while the nanohybrid composite GrandioSO (GSO) served as control. The other four were low-viscosity composites, three bulk-fill materials: Smart Dentin Replacement (SDR), Venus Bulk Fill (VBF) and ...

  11. Handling of bulk solids theory and practice

    CERN Document Server

    Shamlou, P A

    1990-01-01

    Handling of Bulk Solids provides a comprehensive discussion of the field of solids flow and handling in the process industries. Presentation of the subject follows classical lines of separate discussions for each topic, so each chapter is self-contained and can be read on its own. Topics discussed include bulk solids flow and handling properties; pressure profiles in bulk solids storage vessels; the design of storage silos for reliable discharge of bulk materials; gravity flow of particulate materials from storage vessels; pneumatic transportation of bulk solids; and the hazards of solid-mater

  12. Variance computations for functional of absolute risk estimates.

    Science.gov (United States)

    Pfeiffer, R M; Petracci, E

    2011-07-01

    We present a simple influence function based approach to compute the variances of estimates of absolute risk and functions of absolute risk. We apply this approach to criteria that assess the impact of changes in the risk factor distribution on absolute risk for an individual and at the population level. As an illustration we use an absolute risk prediction model for breast cancer that includes modifiable risk factors in addition to standard breast cancer risk factors. Influence function based variance estimates for absolute risk and the criteria are compared to bootstrap variance estimates.
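
    The bootstrap comparator mentioned above is straightforward to set up and makes a useful baseline when checking an influence-function formula; this generic sketch (not the paper's specific risk-model code) resamples the data and reads the variance off the resampled estimates.

        import numpy as np

        def bootstrap_variance(data, estimator, n_boot=2000, rng=None):
            # Nonparametric bootstrap variance of `estimator` applied to `data`.
            rng = rng or np.random.default_rng(0)
            n = len(data)
            reps = [estimator(data[rng.integers(0, n, n)]) for _ in range(n_boot)]
            return np.var(reps, ddof=1)

        x = np.random.default_rng(7).exponential(2.0, 500)
        print(bootstrap_variance(x, np.mean))  # should be near var(x)/n = 4/500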

  13. Estimating High-Frequency Based (Co-) Variances: A Unified Approach

    DEFF Research Database (Denmark)

    Voev, Valeri; Nolte, Ingmar

    We propose a unified framework for estimating integrated variances and covariances based on simple OLS regressions, allowing for a general market microstructure noise specification. We show that our estimators can outperform, in terms of the root mean squared error criterion, the most recent...... and commonly applied estimators, such as the realized kernels of Barndorff-Nielsen, Hansen, Lunde & Shephard (2006), the two-scales realized variance of Zhang, Mykland & Aït-Sahalia (2005), the Hayashi & Yoshida (2005) covariance estimator, and the realized variance and covariance with the optimal sampling...

  14. Meta-analysis of SNPs involved in variance heterogeneity using Levene's test for equal variances

    Science.gov (United States)

    Deng, Wei Q; Asma, Senay; Paré, Guillaume

    2014-01-01

    Meta-analysis is a commonly used approach to increase the sample size for genome-wide association searches when individual studies are otherwise underpowered. Here, we present a meta-analysis procedure to estimate the heterogeneity of the quantitative trait variance attributable to genetic variants using Levene's test without needing to exchange individual-level data. The meta-analysis of Levene's test offers the opportunity to combine the considerable sample size of a genome-wide meta-analysis to identify the genetic basis of phenotypic variability and to prioritize single-nucleotide polymorphisms (SNPs) for gene–gene and gene–environment interactions. The use of Levene's test has several advantages, including robustness to departure from the normality assumption, freedom from the influence of the main effects of SNPs, and no assumption of an additive genetic model. We conducted a meta-analysis of the log-transformed body mass index of 5892 individuals and identified a variant with a highly suggestive Levene's test P-value of 4.28E-06 near the NEGR1 locus known to be associated with extreme obesity. PMID:23921533
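
    A sketch of the procedure: each cohort computes its own median-centered Levene p-value, and only those summary p-values are pooled, here with sample-size-weighted Stouffer combination (the weighting scheme is an assumption; the paper's exact recipe may differ).

        import numpy as np
        from scipy import stats

        def meta_levene(groups_by_study):
            # groups_by_study[s] = tuple of per-genotype arrays for study s;
            # individual-level data never leave their study.
            pvals = [stats.levene(*g, center='median')[1] for g in groups_by_study]
            weights = [np.sqrt(sum(len(a) for a in g)) for g in groups_by_study]
            return stats.combine_pvalues(pvals, method='stouffer', weights=weights)

        rng = np.random.default_rng(3)
        study1 = (rng.normal(0, 1.0, 200), rng.normal(0, 1.5, 180))
        study2 = (rng.normal(0, 1.0, 300), rng.normal(0, 1.4, 260))
        print(meta_levene([study1, study2]))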

  15. The Effect of Bulk Depth and Irradiation Time on the Surface Hardness and Degree of Cure of Bulk-Fill Composites

    Directory of Open Access Journals (Sweden)

    Farahat F

    2016-09-01

    Statement of Problem: For many years, application of composite restorations with a thickness of less than 2 mm, to achieve the minimum polymerization contraction and stress, has been accepted as a principle. But through recent developments in dental materials, a group of resin-based composites (RBCs) called bulk fill has been introduced, whose producers claim the possibility of achieving a good restoration in bulks with depths of 4 or even 5 mm. Objectives: To evaluate the effect of irradiation times and bulk depths on the degree of cure (DC) of a bulk fill composite and compare it with the universal type. Materials and Methods: This study was conducted on two groups of dental RBCs, Tetric N Ceram Bulk Fill and Tetric N Ceram Universal. The composite samples were prepared in Teflon moulds with a diameter of 5 mm and heights of 2, 4 and 6 mm. Then, half of the samples at each depth were cured from the upper side of the mould for 20 s by an LED light curing unit. The irradiation time for the other specimens was 40 s. After 24 hours of storage in distilled water, the microhardness of the top and bottom of the samples was measured using a Future-Tech (Japan) Model FM 700 Vickers hardness testing machine. Data were analyzed statistically using one- and multi-way ANOVA and Tukey's test (p = 0.050). Results: The DC of Tetric N Ceram Bulk Fill at a given irradiation time and bulk depth was significantly higher than that of the universal type (p < 0.001). Also, the DC of both composites studied was significantly (p < 0.001) reduced by increasing the bulk depth. Increasing the curing time from 20 to 40 seconds had a marginally significant effect (p ≤ 0.040) on the DC of both the bulk fill and universal RBC samples studied. Conclusions: The DC of the investigated bulk fill composite was better than that of the universal type at all irradiation times and bulk depths. The studied universal and bulk fill RBCs had an appropriate DC at the 2 and 4 mm bulk depths respectively and

  16. Comparison of variance estimators for metaanalysis of instrumental variable estimates

    NARCIS (Netherlands)

    Schmidt, A. F.; Hingorani, A. D.; Jefferis, B. J.; White, J.; Groenwold, R. H H; Dudbridge, F.; Ben-Shlomo, Y.; Chaturvedi, N.; Engmann, J.; Hughes, A.; Humphries, S.; Hypponen, E.; Kivimaki, M.; Kuh, D.; Kumari, M.; Menon, U.; Morris, R.; Power, C.; Price, J.; Wannamethee, G.; Whincup, P.

    2016-01-01

    Background: Mendelian randomization studies perform instrumental variable (IV) analysis using genetic IVs. Results of individual Mendelian randomization studies can be pooled through meta-analysis. We explored how different variance estimators influence the meta-analysed IV estimate. Methods: Two

  17. Capturing Option Anomalies with a Variance-Dependent Pricing Kernel

    DEFF Research Database (Denmark)

    Christoffersen, Peter; Heston, Steven; Jacobs, Kris

    2013-01-01

    We develop a GARCH option model with a new pricing kernel allowing for a variance premium. While the pricing kernel is monotonic in the stock return and in variance, its projection onto the stock return is nonmonotonic. A negative variance premium makes it U shaped. We present new semiparametric...... evidence to confirm this U-shaped relationship between the risk-neutral and physical probability densities. The new pricing kernel substantially improves our ability to reconcile the time-series properties of stock returns with the cross-section of option prices. It provides a unified explanation...... for the implied volatility puzzle, the overreaction of long-term options to changes in short-term variance, and the fat tails of the risk-neutral return distribution relative to the physical distribution....

  18. Phenotypic variance explained by local ancestry in admixed African Americans.

    Science.gov (United States)

    Shriner, Daniel; Bentley, Amy R; Doumatey, Ayo P; Chen, Guanjie; Zhou, Jie; Adeyemo, Adebowale; Rotimi, Charles N

    2015-01-01

    We surveyed 26 quantitative traits and disease outcomes to understand the proportion of phenotypic variance explained by local ancestry in admixed African Americans. After inferring local ancestry as the number of African-ancestry chromosomes at hundreds of thousands of genotyped loci across all autosomes, we used a linear mixed effects model to estimate the variance explained by local ancestry in two large independent samples of unrelated African Americans. We found that local ancestry at major and polygenic effect genes can explain up to 20 and 8% of phenotypic variance, respectively. These findings provide evidence that most but not all additive genetic variance is explained by genetic markers undifferentiated by ancestry. These results also inform the proportion of health disparities due to genetic risk factors and the magnitude of error in association studies not controlling for local ancestry.

  19. Allowable variance set on left ventricular function parameter

    International Nuclear Information System (INIS)

    Zhou Li'na; Qi Zhongzhi; Zeng Yu; Ou Xiaohong; Li Lin

    2010-01-01

    Purpose: To evaluate the influence of allowable variance settings on left ventricular function parameters of arrhythmia patients during gated myocardial perfusion imaging. Method: 42 patients with evident arrhythmia underwent myocardial perfusion SPECT; three different allowable variance settings (20%, 60% and 100%) were set before acquisition for every patient, and the acquisitions were performed simultaneously. After reconstruction by Astonish, end-diastolic volume (EDV), end-systolic volume (ESV) and left ventricular ejection fraction (LVEF) were computed with Quantitative Gated SPECT (QGS). The EDV, ESV and LVEF values were compared by analysis of variance using SPSS software. Result: there was no statistical difference between the three groups. Conclusion: for arrhythmia patients undergoing gated myocardial perfusion imaging, the allowable variance setting has no statistically significant effect on the EDV, ESV and LVEF values. (authors)

  20. Host nutrition alters the variance in parasite transmission potential.

    Science.gov (United States)

    Vale, Pedro F; Choisy, Marc; Little, Tom J

    2013-04-23

    The environmental conditions experienced by hosts are known to affect their mean parasite transmission potential. How different conditions may affect the variance of transmission potential has received less attention, but is an important question for disease management, especially if specific ecological contexts are more likely to foster a few extremely infectious hosts. Using the obligate-killing bacterium Pasteuria ramosa and its crustacean host Daphnia magna, we analysed how host nutrition affected the variance of individual parasite loads, and, therefore, transmission potential. Under low food, individual parasite loads showed similar mean and variance, following a Poisson distribution. By contrast, among well-nourished hosts, parasite loads were right-skewed and overdispersed, following a negative binomial distribution. Abundant food may, therefore, yield individuals causing potentially more transmission than the population average. Measuring both the mean and variance of individual parasite loads in controlled experimental infections may offer a useful way of revealing risk factors for potential highly infectious hosts.
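
    The Poisson-versus-negative-binomial distinction boils down to the variance-to-mean ratio of the individual loads, which is easy to check; the toy data below are simulated, not the Daphnia measurements.

        import numpy as np

        def dispersion_summary(loads):
            # Variance-to-mean ratio near 1 suggests Poisson; well above 1
            # indicates the overdispersion a negative binomial captures.
            loads = np.asarray(loads, float)
            m, v = loads.mean(), loads.var(ddof=1)
            return m, v, v / m

        rng = np.random.default_rng(4)
        low_food = rng.poisson(5, 100)                      # mean ~ variance
        high_food = rng.negative_binomial(2, 2 / 7.0, 100)  # same mean 5, overdispersed
        print(dispersion_summary(low_food), dispersion_summary(high_food))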

  1. Advanced methods of analysis variance on scenarios of nuclear prospective

    International Nuclear Information System (INIS)

    Blazquez, J.; Montalvo, C.; Balbas, M.; Garcia-Berrocal, A.

    2011-01-01

    Traditional techniques of propagation of variance are not very reliable when the uncertainties reach relative values of around 100%; in such cases less conventional methods are used instead, such as the Beta distribution, fuzzy logic and the Monte Carlo method.
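
    As a contrast with the linear propagation formulas, the Monte Carlo route simply pushes sampled inputs through the model and reads the spread off the output sample. This generic sketch (model and uncertainty levels invented for illustration) works even at the 100%-relative-uncertainty level where first-order formulas break down.

        import numpy as np

        def mc_propagate(f, means, rel_unc, n=100_000, rng=None):
            # Sample each input as normal(mean, |mean| * relative_uncertainty),
            # evaluate the model, and summarize the output distribution.
            rng = rng or np.random.default_rng(0)
            samples = [rng.normal(m, abs(m) * r, n) for m, r in zip(means, rel_unc)]
            out = f(*samples)
            return out.mean(), out.std(ddof=1)

        f = lambda a, b: a * np.exp(b)           # toy nonlinear scenario model
        print(mc_propagate(f, [2.0, 0.5], [1.0, 1.0]))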

  2. Some variance reduction methods for numerical stochastic homogenization.

    Science.gov (United States)

    Blanc, X; Le Bris, C; Legoll, F

    2016-04-28

    We give an overview of a series of recent studies devoted to variance reduction techniques for numerical stochastic homogenization. Numerical homogenization requires that a set of problems is solved at the microscale, the so-called corrector problems. In a random environment, these problems are stochastic and therefore need to be repeatedly solved, for several configurations of the medium considered. An empirical average over all configurations is then performed using the Monte Carlo approach, so as to approximate the effective coefficients necessary to determine the macroscopic behaviour. Variance severely affects the accuracy and the cost of such computations. Variance reduction approaches, borrowed from other contexts in the engineering sciences, can be useful. Some of these variance reduction techniques are presented, studied and tested here. © 2016 The Author(s).
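
    One of the classical devices in this family is antithetic sampling, shown below in its generic Monte Carlo form (a textbook illustration, not the homogenization-specific constructions studied in the paper): pairing each Gaussian draw with its negation makes the two halves' errors partially cancel.

        import numpy as np

        def antithetic_mean(f, n_pairs=50_000, rng=None):
            # Estimate E[f(Z)], Z ~ N(0,1), averaging over coupled pairs (Z, -Z).
            rng = rng or np.random.default_rng(0)
            z = rng.standard_normal(n_pairs)
            pair_means = 0.5 * (f(z) + f(-z))
            return pair_means.mean(), pair_means.std(ddof=1) / np.sqrt(n_pairs)

        print(antithetic_mean(np.exp))  # exact value: exp(0.5) ~ 1.6487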

  3. Heritability, variance components and genetic advance of some ...

    African Journals Online (AJOL)

    Heritability, variance components and genetic advance of some yield and yield related traits in Ethiopian ... African Journal of Biotechnology ... randomized complete block design at Adet Agricultural Research Station in 2008 cropping season.

  4. Variance Function Partially Linear Single-Index Models.

    Science.gov (United States)

    Lian, Heng; Liang, Hua; Carroll, Raymond J

    2015-01-01

    We consider heteroscedastic regression models where the mean function is a partially linear single index model and the variance function depends upon a generalized partially linear single index model. We do not insist that the variance function depend only upon the mean function, as happens in the classical generalized partially linear single index model. We develop efficient and practical estimation methods for the variance function and for the mean function. Asymptotic theory for the parametric and nonparametric parts of the model is developed. Simulations illustrate the results. An empirical example involving ozone levels is used to further illustrate the results, and is shown to be a case where the variance function does not depend upon the mean function.

  5. Variance estimation in the analysis of microarray data

    KAUST Repository

    Wang, Yuedong; Ma, Yanyuan; Carroll, Raymond J.

    2009-01-01

    Microarrays are one of the most widely used high throughput technologies. One of the main problems in the area is that conventional estimates of the variances that are required in the t-statistic and other statistics are unreliable owing

  6. Bulk handling benefits from ICT

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2007-11-15

    The efficiency and accuracy of bulk handling is being improved by the range of management information systems and services available today. As part of the program to extend Richards Bay Coal Terminal, Siemens is installing a manufacturing execution system which coordinates and monitors all movements of raw materials. The article also reports recent developments by AXSMarine, SunGuard Energy, Fuelworx and Railworx in providing integrated tools for tracking, managing and optimising solid/liquid fuels and rail car maintenance activities. QMASTOR Ltd. has secured a contract with Anglo Coal Australia to provide its Pit to Port.net® and iFuse® software systems across all their Australian sites, to include pit-to-product stockpile management. 2 figs.

  7. Bulk analysis using nuclear techniques

    International Nuclear Information System (INIS)

    Borsaru, M.; Holmes, R.J.; Mathew, P.J.

    1983-01-01

    Bulk analysis techniques developed for the mining industry are reviewed. Using penetrating neutron and β-radiations, measurements are obtained directly from a large volume of sample (3-30 kg). β-techniques were used to determine the grade of iron ore and to detect shale on conveyor belts. Thermal neutron irradiation was developed for the simultaneous determination of iron and aluminium in iron ore on a conveyor belt. Thermal-neutron activation analysis includes the determination of alumina in bauxite, and manganese and alumina in manganese ore. Fast neutron activation analysis is used to determine silicon in iron ores, and alumina and silica in bauxite. Fast and thermal neutron activation has been used to determine the soil in shredded sugar cane. (U.K.)

  8. Minimum Delay Moving Object Detection

    KAUST Repository

    Lao, Dong

    2017-11-09

    We present a general framework and method for detection of an object in a video based on apparent motion. The object moves relative to the background motion at some unknown time in the video, and the goal is to detect and segment the object as soon as it moves, in an online manner. Due to the unreliability of motion between frames, more than two frames are needed to reliably detect the object. Our method is designed to detect the object(s) with minimum delay, i.e., frames after the object moves, constraining the false alarms. Experiments on a new extensive dataset for moving object detection show that our method achieves less delay for all false alarm constraints than existing state-of-the-art.

  9. Approximating the minimum cycle mean

    Directory of Open Access Journals (Sweden)

    Krishnendu Chatterjee

    2013-07-01

    We consider directed graphs where each edge is labeled with an integer weight and study the fundamental algorithmic question of computing the value of a cycle with minimum mean weight. Our contributions are twofold: (1) First we show that the algorithmic question is reducible in O(n^2) time to the problem of a logarithmic number of min-plus matrix multiplications of n-by-n matrices, where n is the number of vertices of the graph. (2) Second, when the weights are nonnegative, we present the first (1 + ε)-approximation algorithm for the problem, and the running time of our algorithm is Õ(n^ω log^3(nW/ε)/ε), where O(n^ω) is the time required for the classic n-by-n matrix multiplication and W is the maximum value of the weights.
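
    For reference, the problem itself has a classical exact O(nm) solution, Karp's algorithm, which makes a natural baseline for the matrix-multiplication and approximation routes described above (the implementation below is a standard textbook version, not the paper's algorithm).

        import math

        def min_cycle_mean(n, edges):
            # Karp's algorithm: d[k][v] = minimum weight of a walk with exactly
            # k edges ending at v, starting anywhere (d[0][v] = 0 acts as a
            # zero-weight super-source). The minimum cycle mean is
            # min over v of max over k of (d[n][v] - d[k][v]) / (n - k).
            INF = math.inf
            d = [[INF] * n for _ in range(n + 1)]
            d[0] = [0.0] * n
            for k in range(1, n + 1):
                for u, v, w in edges:
                    if d[k - 1][u] + w < d[k][v]:
                        d[k][v] = d[k - 1][u] + w
            best = INF
            for v in range(n):
                if d[n][v] < INF:
                    worst = max((d[n][v] - d[k][v]) / (n - k)
                                for k in range(n) if d[k][v] < INF)
                    best = min(best, worst)
            return best  # INF when the graph has no cycle

        # cycle 1 -> 2 -> 1 has mean (2 + 1) / 2 = 1.5, the minimum here
        print(min_cycle_mean(3, [(0, 1, 1), (1, 2, 2), (2, 0, 3), (2, 1, 1)]))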

  12. Volatility and variance swaps : A comparison of quantitative models to calculate the fair volatility and variance strike

    OpenAIRE

    Röring, Johan

    2017-01-01

    Volatility is a common risk measure in the field of finance that describes the magnitude of an asset’s up and down movement. From only being a risk measure, volatility has become an asset class of its own and volatility derivatives enable traders to get an isolated exposure to an asset’s volatility. Two kinds of volatility derivatives are volatility swaps and variance swaps. The problem with volatility swaps and variance swaps is that they require estimations of the future variance and volati...
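
    The payoff mechanics themselves are simple once a realized-variance convention is fixed; the sketch below uses zero-mean log returns annualized over 252 periods and the common vega-notional quoting convention (all conventions here are assumptions that vary across contracts).

        import numpy as np

        def variance_swap_payoff(prices, strike_vol, vega_notional, ppy=252):
            # Long variance swap pays (realized variance - strike variance)
            # times the variance notional at expiry.
            r = np.diff(np.log(np.asarray(prices, float)))
            realized_var = ppy * np.mean(r ** 2)          # annualized, zero-mean
            var_notional = vega_notional / (2.0 * strike_vol)
            return var_notional * (realized_var - strike_vol ** 2)

        rng = np.random.default_rng(5)
        prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.02, 252)))
        print(variance_swap_payoff(prices, strike_vol=0.25, vega_notional=100_000))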

  13. ASYMMETRY OF MARKET RETURNS AND THE MEAN VARIANCE FRONTIER

    OpenAIRE

    SENGUPTA, Jati K.; PARK, Hyung S.

    1994-01-01

    The hypothesis that skewness and asymmetry have no significant impact on the mean-variance frontier is found to be strongly violated by monthly U.S. data over the period January 1965 through December 1974. This result raises serious doubts whether the common market portfolios such as the S&P 500, value-weighted and equal-weighted returns can serve as suitable proxies for mean-variance efficient portfolios in the CAPM framework. A new test for assessing the impact of skewness on the variance fr...

  14. Towards the ultimate variance-conserving convection scheme

    International Nuclear Information System (INIS)

    Os, J.J.A.M. van; Uittenbogaard, R.E.

    2004-01-01

    In the past various arguments have been used for applying kinetic energy-conserving advection schemes in numerical simulations of incompressible fluid flows. One argument is obeying the programmed dissipation by viscous stresses or by sub-grid stresses in Direct Numerical Simulation and Large Eddy Simulation, see e.g. [Phys. Fluids A 3 (7) (1991) 1766]. Another argument is that, according to e.g. [J. Comput. Phys. 6 (1970) 392; 1 (1966) 119], energy-conserving convection schemes are more stable i.e. by prohibiting a spurious blow-up of volume-integrated energy in a closed volume without external energy sources. In the above-mentioned references it is stated that nonlinear instability is due to spatial truncation rather than to time truncation and therefore these papers are mainly concerned with the spatial integration. In this paper we demonstrate that discretized temporal integration of a spatially variance-conserving convection scheme can induce non-energy conserving solutions. In this paper the conservation of the variance of a scalar property is taken as a simple model for the conservation of kinetic energy. In addition, the derivation and testing of a variance-conserving scheme allows for a clear definition of kinetic energy-conserving advection schemes for solving the Navier-Stokes equations. Consequently, we first derive and test a strictly variance-conserving space-time discretization for the convection term in the convection-diffusion equation. Our starting point is the variance-conserving spatial discretization of the convection operator presented by Piacsek and Williams [J. Comput. Phys. 6 (1970) 392]. In terms of its conservation properties, our variance-conserving scheme is compared to other spatially variance-conserving schemes as well as with the non-variance-conserving schemes applied in our shallow-water solver, see e.g. [Direct and Large-eddy Simulation Workshop IV, ERCOFTAC Series, Kluwer Academic Publishers, 2001, pp. 409-287

  15. Problems of variance reduction in the simulation of random variables

    International Nuclear Information System (INIS)

    Lessi, O.

    1987-01-01

    The definition of the uniform linear generator is given and some of the most commonly used tests to evaluate the uniformity and the independence of the values obtained are listed. The problem of calculating, through simulation, a moment W of a function of a random variable is taken into account. The Monte Carlo method enables the moment W to be estimated and the estimator variance to be obtained. Some techniques for the construction of other estimators of W with a reduced variance are introduced.
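
    As an illustration of the kind of reduced-variance estimator the record introduces (the specific techniques are not named above, so antithetic variates is used here purely as a representative example), a minimal sketch for W = E[f(U)] with U uniform on (0,1):

      import numpy as np

      rng = np.random.default_rng(0)
      f = lambda u: np.exp(u)              # toy integrand; E[f(U)] = e - 1

      n = 100_000
      u = rng.random(n)
      plain = f(u)                         # crude Monte Carlo estimator
      anti = 0.5 * (f(u) + f(1.0 - u))     # antithetic pairing

      print(plain.mean(), plain.var() / n)   # estimate and estimator variance
      print(anti.mean(), anti.var() / n)     # same target, smaller variance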

  16. Cumulative prospect theory and mean variance analysis. A rigorous comparison

    OpenAIRE

    Hens, Thorsten; Mayer, Janos

    2012-01-01

    We compare asset allocations derived for cumulative prospect theory(CPT) based on two different methods: Maximizing CPT along the mean–variance efficient frontier and maximizing it without that restriction. We find that with normally distributed returns the difference is negligible. However, using standard asset allocation data of pension funds the difference is considerable. Moreover, with derivatives like call options the restriction to the mean-variance efficient frontier results in a siza...

  17. Global Variance Risk Premium and Forex Return Predictability

    OpenAIRE

    Aloosh, Arash

    2014-01-01

    In a long-run risk model with stochastic volatility and frictionless markets, I express expected forex returns as a function of consumption growth variances and stock variance risk premiums (VRPs)—the difference between the risk-neutral and statistical expectations of market return variation. This provides a motivation for using the forward-looking information available in stock market volatility indices to predict forex returns. Empirically, I find that stock VRPs predict forex returns at a ...

  18. Global Gravity Wave Variances from Aura MLS: Characteristics and Interpretation

    Science.gov (United States)

    2008-12-01

    slight longitudinal variations, with secondary high-latitude peaks occurring over Greenland and Europe. As the QBO changes to the westerly phase, the ... equatorial GW temperature variances from suborbital data (e.g., Eckermann et al. 1995). The extratropical wave variances are generally larger in the ... emanating from tropopause altitudes, presumably radiated from tropospheric jet stream instabilities associated with baroclinic storm systems that

  19. Temperature variance study in Monte-Carlo photon transport theory

    International Nuclear Information System (INIS)

    Giorla, J.

    1985-10-01

    We study different Monte-Carlo methods for solving radiative transfer problems, and particularly Fleck's Monte-Carlo method. We first give the different time-discretization schemes and the corresponding stability criteria. Then we write the temperature variance as a function of the variances of temperature and absorbed energy at the previous time step. Finally we obtain some stability criteria for the Monte-Carlo method in the stationary case. [fr]

  20. Mean-Variance Optimization in Markov Decision Processes

    OpenAIRE

    Mannor, Shie; Tsitsiklis, John N.

    2011-01-01

    We consider finite horizon Markov decision processes under performance measures that involve both the mean and the variance of the cumulative reward. We show that either randomized or history-based policies can improve performance. We prove that the complexity of computing a policy that maximizes the mean reward under a variance constraint is NP-hard for some cases, and strongly NP-hard for others. We finally offer pseudo-polynomial exact and approximation algorithms.
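
    The optimization problems above are hard, but the policy-evaluation building block is simple: for a fixed policy, the mean and variance of the cumulative reward follow from a backward recursion on the first two moments. A minimal sketch (not the paper's algorithms), with a made-up two-state chain:

      import numpy as np

      P = np.array([[0.9, 0.1],            # transition matrix under the policy
                    [0.2, 0.8]])
      r = np.array([1.0, 0.0])             # per-step reward by state
      T = 10                               # horizon

      m1 = np.zeros(2)                     # E[remaining reward | state]
      m2 = np.zeros(2)                     # E[(remaining reward)^2 | state]
      for _ in range(T):
          # G = r(s) + G', so E[G^2] = r^2 + 2 r E[G'] + E[G'^2]
          m2 = r**2 + 2 * r * (P @ m1) + P @ m2
          m1 = r + P @ m1
      print("mean:", m1, "variance:", m2 - m1**2)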

  1. The asymptotic variance of departures in critically loaded queues

    NARCIS (Netherlands)

    Al Hanbali, Ahmad; Mandjes, M.R.H.; Nazarathy, Y.; Whitt, W.

    2011-01-01

    We consider the asymptotic variance of the departure counting process D(t) of the GI/G/1 queue; D(t) denotes the number of departures up to time t. We focus on the case where the system load ϱ equals 1, and prove that the asymptotic variance rate satisfies lim_{t→∞} var D(t)/t = λ(1 − 2/π)(c_a² +

  2. Youth minimum wages and youth employment

    NARCIS (Netherlands)

    Marimpi, Maria; Koning, Pierre

    2018-01-01

    This paper performs a cross-country level analysis on the impact of the level of specific youth minimum wages on the labor market performance of young individuals. We use information on the use and level of youth minimum wages, as compared to the level of adult minimum wages as well as to the median

  3. Do Some Workers Have Minimum Wage Careers?

    Science.gov (United States)

    Carrington, William J.; Fallick, Bruce C.

    2001-01-01

    Most workers who begin their careers in minimum-wage jobs eventually gain more experience and move on to higher paying jobs. However, more than 8% of workers spend at least half of their first 10 working years in minimum wage jobs. Those more likely to have minimum wage careers are less educated, minorities, women with young children, and those…

  4. Does the Minimum Wage Affect Welfare Caseloads?

    Science.gov (United States)

    Page, Marianne E.; Spetz, Joanne; Millar, Jane

    2005-01-01

    Although minimum wages are advocated as a policy that will help the poor, few studies have examined their effect on poor families. This paper uses variation in minimum wages across states and over time to estimate the impact of minimum wage legislation on welfare caseloads. We find that the elasticity of the welfare caseload with respect to the…

  5. Minimum income protection in the Netherlands

    NARCIS (Netherlands)

    van Peijpe, T.

    2009-01-01

    This article offers an overview of the Dutch legal system of minimum income protection through collective bargaining, social security, and statutory minimum wages. In addition to collective agreements, the Dutch statutory minimum wage offers income protection to a small number of workers. Its

  6. Variance estimation in the analysis of microarray data

    KAUST Repository

    Wang, Yuedong

    2009-04-01

    Microarrays are one of the most widely used high throughput technologies. One of the main problems in the area is that conventional estimates of the variances that are required in the t-statistic and other statistics are unreliable owing to the small number of replications. Various methods have been proposed in the literature to overcome this lack of degrees of freedom problem. In this context, it is commonly observed that the variance increases proportionally with the intensity level, which has led many researchers to assume that the variance is a function of the mean. Here we concentrate on estimation of the variance as a function of an unknown mean in two models: the constant coefficient of variation model and the quadratic variance-mean model. Because the means are unknown and estimated with few degrees of freedom, naive methods that use the sample mean in place of the true mean are generally biased because of the errors-in-variables phenomenon. We propose three methods for overcoming this bias. The first two are variations on the theme of the so-called heteroscedastic simulation-extrapolation estimator, modified to estimate the variance function consistently. The third class of estimators is entirely different, being based on semiparametric information calculations. Simulations show the power of our methods and their lack of bias compared with the naive method that ignores the measurement error. The methodology is illustrated by using microarray data from leukaemia patients.

  7. Why risk is not variance: an expository note.

    Science.gov (United States)

    Cox, Louis Anthony Tony

    2008-08-01

    Variance (or standard deviation) of return is widely used as a measure of risk in financial investment risk analysis applications, where mean-variance analysis is applied to calculate efficient frontiers and undominated portfolios. Why, then, do health, safety, and environmental (HS&E) and reliability engineering risk analysts insist on defining risk more flexibly, as being determined by probabilities and consequences, rather than simply by variances? This note suggests an answer by providing a simple proof that mean-variance decision making violates the principle that a rational decisionmaker should prefer higher to lower probabilities of receiving a fixed gain, all else being equal. Indeed, simply hypothesizing a continuous increasing indifference curve for mean-variance combinations at the origin is enough to imply that a decisionmaker must find unacceptable some prospects that offer a positive probability of gain and zero probability of loss. Unlike some previous analyses of limitations of variance as a risk metric, this expository note uses only simple mathematics and does not require the additional framework of von Neumann Morgenstern utility theory.
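
    A quick numeric companion to the note's argument (an illustrative check, not the note's own proof): for a prospect paying g > 0 with probability p and nothing otherwise, the mean/sd ratio equals sqrt(p/(1-p)) regardless of g, so any rule demanding a minimum mean/sd ratio rejects every such zero-loss prospect once p is small enough.

      import numpy as np

      for p in (0.5, 0.1, 0.01):
          for g in (1.0, 100.0, 1e6):
              mean = p * g                       # expected gain
              sd = g * np.sqrt(p * (1 - p))      # standard deviation of the gain
              print(f"p={p:<5} g={g:<10} mean/sd={mean / sd:.4f}")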

  8. Approximate zero-variance Monte Carlo estimation of Markovian unreliability

    International Nuclear Information System (INIS)

    Delcoux, J.L.; Labeau, P.E.; Devooght, J.

    1997-01-01

    Monte Carlo simulation has become an important tool for the estimation of reliability characteristics, since conventional numerical methods are no longer efficient when the size of the system to solve increases. However, evaluating by a simulation the probability of occurrence of very rare events means playing a very large number of histories of the system, which leads to unacceptable computation times. Acceleration and variance reduction techniques have to be worked out. We show in this paper how to write the equations of Markovian reliability as a transport problem, and how the well-known zero-variance scheme can be adapted to this application. But such a method is always specific to the estimation of one quantity, while a Monte Carlo simulation allows one to perform simultaneously estimations of diverse quantities. Therefore, the estimation of one of them could be made more accurate while degrading at the same time the variance of other estimations. We propose here a method to reduce simultaneously the variance for several quantities, by using probability laws that would lead to zero variance in the estimation of a mean of these quantities. Just like the zero-variance scheme, the method we propose is impossible to perform exactly. However, we show that simple approximations of it may be very efficient. (author)

  9. A versatile omnibus test for detecting mean and variance heterogeneity.

    Science.gov (United States)

    Cao, Ying; Wei, Peng; Bailey, Matthew; Kauwe, John S K; Maxwell, Taylor J

    2014-01-01

    Recent research has revealed loci that display variance heterogeneity through various means such as biological disruption, linkage disequilibrium (LD), gene-by-gene (G × G), or gene-by-environment interaction. We propose a versatile likelihood ratio test that allows joint testing for mean and variance heterogeneity (LRT(MV)) or either effect alone (LRT(M) or LRT(V)) in the presence of covariates. Using extensive simulations for our method and others, we found that all parametric tests were sensitive to nonnormality regardless of any trait transformations. Coupling our test with the parametric bootstrap solves this issue. Using simulations and empirical data from a known mean-only functional variant, we demonstrate how LD can produce variance-heterogeneity loci (vQTL) in a predictable fashion based on differential allele frequencies, high D', and relatively low r² values. We propose that a joint test for mean and variance heterogeneity is more powerful than a variance-only test for detecting vQTL. This takes advantage of loci that also have mean effects without sacrificing much power to detect variance only effects. We discuss using vQTL as an approach to detect G × G interactions and also how vQTL are related to relationship loci, and how both can create prior hypothesis for each other and reveal the relationships between traits and possibly between components of a composite trait.
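
    A minimal sketch in the spirit of the joint mean-variance test (simplified assumptions: normal likelihoods, a biallelic genotype coded 0/1/2, no covariates, and none of the paper's parametric-bootstrap correction):

      import numpy as np
      from scipy import stats

      def lrt_mv(y, g):
          """2*(loglik with per-genotype mean+variance - pooled loglik)."""
          groups = np.unique(g)
          ll_alt = sum(stats.norm.logpdf(y[g == k], y[g == k].mean(),
                                         y[g == k].std()).sum() for k in groups)
          ll_null = stats.norm.logpdf(y, y.mean(), y.std()).sum()
          stat = 2 * (ll_alt - ll_null)
          df = 2 * (len(groups) - 1)   # one mean + one variance per extra group
          return stat, stats.chi2.sf(stat, df)

      rng = np.random.default_rng(3)
      g = rng.integers(0, 3, size=1000)
      y = rng.normal(0.2 * g, 1.0 + 0.3 * g)   # both mean and variance effects
      print(lrt_mv(y, g))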

  10. Variance-based sensitivity indices for models with dependent inputs

    International Nuclear Information System (INIS)

    Mara, Thierry A.; Tarantola, Stefano

    2012-01-01

    Computational models are intensively used in engineering for risk analysis or prediction of future outcomes. Uncertainty and sensitivity analyses are of great help for these purposes. Although several methods exist to perform variance-based sensitivity analysis of model output with independent inputs, only a few are proposed in the literature in the case of dependent inputs. This is explained by the fact that the theoretical framework for the independent case is well established and a unique set of variance-based sensitivity indices is defined. In the present work, we propose a set of variance-based sensitivity indices to perform sensitivity analysis of models with dependent inputs. These measures allow us to distinguish between the mutually dependent contribution and the independent contribution of an input to the model response variance. Their definition relies on a specific orthogonalisation of the inputs and ANOVA representations of the model output. In the applications, we show the interest of the new sensitivity indices in a model-simplification setting. - Highlights: ► Uncertainty and sensitivity analyses are of great help in engineering. ► Several methods exist to perform variance-based sensitivity analysis of model output with independent inputs. ► We define a set of variance-based sensitivity indices for models with dependent inputs. ► Inputs' mutual contributions are distinguished from their independent contributions. ► Analytical and computational tests are performed and discussed.
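
    For orientation, the standard independent-input case that the paper generalizes can be sketched with the classic pick-freeze estimator of a first-order index (the toy model and sample size are illustrative; the paper's dependent-input indices are not reproduced here):

      import numpy as np

      rng = np.random.default_rng(1)

      def model(x):                       # toy additive model
          return x[:, 0] + 0.5 * x[:, 1] ** 2

      n = 200_000
      a, b = rng.random((n, 2)), rng.random((n, 2))
      ya, yb = model(a), model(b)
      total_var = ya.var()
      for i in range(2):
          c = b.copy()
          c[:, i] = a[:, i]               # freeze input i, resample the rest
          s_i = np.mean(ya * (model(c) - yb)) / total_var
          print(f"S_{i + 1} ≈ {s_i:.3f}")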

  11. Cuspal Flexure and Extent of Cure of a Bulk-fill Flowable Base Composite.

    Science.gov (United States)

    Francis, A V; Braxton, A D; Ahmad, W; Tantbirojn, D; Simon, J F; Versluis, A

    2015-01-01

    To investigate a bulk-fill flowable base composite (Surefil SDR Flow) in terms of cuspal flexure and cure when used in incremental or bulk techniques. Mesio-occluso-distal cavities (4 mm deep, 4 mm wide) were prepared in 24 extracted molars. The slot-shaped cavities were etched, bonded, and restored in 1) two 2-mm increments Esthet-X HD (control), 2) two 2-mm increments Surefil SDR Flow, or 3) 4-mm bulk Surefil SDR Flow (N=8). The teeth were digitized after preparation (baseline) and restoration and were precisely aligned to calculate cuspal flexure. Restored teeth were placed in fuchsin dye for 16 hours to determine occlusal bond integrity from dye penetration. Extent of cure was assessed by hardness at 0.5-mm increments through the restoration depth. Results were analyzed with analysis of variance and Student-Newman-Keuls post hoc tests (α=0.05). Surefil SDR Flow, either incrementally or bulk filled, demonstrated significantly less cuspal flexure than Esthet-X HD. Dye penetration was less than 3% of cavity wall height and was not statistically different among groups. The hardness of Surefil SDR Flow did not change throughout the depth for both incrementally and bulk filled restorations; the hardness of Esthet-X HD was statistically significantly lower at the bottom of each increment than at the top. Filling in bulk or increments made no significant difference in marginal bond quality or cuspal flexure for the bulk-fill composite. However, the bulk-fill composite caused less cuspal flexure than the incrementally placed conventional composite. The bulk-fill composite cured all the way through (4 mm), whereas the conventional composite had lower cure at the bottom of each increment.

  12. Minimum wage development in the Russian Federation

    OpenAIRE

    Bolsheva, Anna

    2012-01-01

    The aim of this paper is to analyze the effectiveness of the minimum wage policy at the national level in Russia and its impact on living standards in the country. The analysis showed that the national minimum wage in Russia does not serve its original purpose of protecting the lowest wage earners and has no substantial effect on poverty reduction. The national subsistence minimum is too low and cannot be considered an adequate criterion for the setting of the minimum wage. The minimum wage d...

  13. CMB-S4 and the hemispherical variance anomaly

    Science.gov (United States)

    O'Dwyer, Márcio; Copi, Craig J.; Knox, Lloyd; Starkman, Glenn D.

    2017-09-01

    Cosmic microwave background (CMB) full-sky temperature data show a hemispherical asymmetry in power nearly aligned with the Ecliptic. In real space, this anomaly can be quantified by the temperature variance in the Northern and Southern Ecliptic hemispheres, with the Northern hemisphere displaying an anomalously low variance while the Southern hemisphere appears unremarkable [consistent with expectations from the best-fitting theory, Lambda Cold Dark Matter (ΛCDM)]. While this is a well-established result in temperature, the low signal-to-noise ratio in current polarization data prevents a similar comparison. This will change with a proposed ground-based CMB experiment, CMB-S4. With that in mind, we generate realizations of polarization maps constrained by the temperature data and predict the distribution of the hemispherical variance in polarization considering two different sky coverage scenarios possible in CMB-S4: full Ecliptic north coverage and just the portion of the North that can be observed from a ground-based telescope at the high Chilean Atacama plateau. We find that even in the set of realizations constrained by the temperature data, the low Northern hemisphere variance observed in temperature is not expected in polarization. Therefore, observing an anomalously low variance in polarization would make the hypothesis that the temperature anomaly is simply a statistical fluke more unlikely and thus increase the motivation for physical explanations. We show, within ΛCDM, how variance measurements in both sky coverage scenarios are related. We find that the variance makes for a good statistic in cases where the sky coverage is limited, however, full northern coverage is still preferable.

  14. Real-time speckle variance swept-source optical coherence tomography using a graphics processing unit.

    Science.gov (United States)

    Lee, Kenneth K C; Mariampillai, Adrian; Yu, Joe X Z; Cadotte, David W; Wilson, Brian C; Standish, Beau A; Yang, Victor X D

    2012-07-01

    Advances in swept-source laser technology continue to increase the imaging speed of swept-source optical coherence tomography (SS-OCT) systems. These fast imaging speeds are ideal for microvascular detection schemes, such as speckle variance (SV), where interframe motion can cause severe imaging artifacts and loss of vascular contrast. However, full utilization of the laser scan speed has been hindered by the computationally intensive signal processing required by SS-OCT and SV calculations. Using a commercial graphics processing unit that has been optimized for parallel data processing, we report a complete high-speed SS-OCT platform capable of real-time data acquisition, processing, display, and saving at 108,000 lines per second. Subpixel image registration of structural images was performed in real-time prior to SV calculations in order to reduce decorrelation from stationary structures induced by the bulk tissue motion. The viability of the system was successfully demonstrated in a high bulk tissue motion scenario of human fingernail root imaging where SV images (512 × 512 pixels, n = 4) were displayed at 54 frames per second.
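
    The speckle-variance computation at the core of the pipeline (as opposed to the GPU plumbing) is just an interframe intensity variance over a short gate of registered B-scans; a minimal sketch with stand-in data:

      import numpy as np

      rng = np.random.default_rng(2)
      N, H, W = 4, 512, 512                        # gate length, B-scan size
      frames = rng.random((N, H, W)).astype(np.float32)  # stand-in registered B-scans

      # SV_ij = (1/N) * sum_k (I_kij - mean_ij)^2; flowing blood decorrelates
      # the speckle, so vessels show high variance relative to static tissue.
      sv = frames.var(axis=0)
      print(sv.shape)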

  15. Coupling brane fields to bulk supergravity

    Energy Technology Data Exchange (ETDEWEB)

    Parameswaran, Susha L. [Uppsala Univ. (Sweden). Theoretical Physics; Schmidt, Jonas [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)

    2010-12-15

    In this note we present a simple, general prescription for coupling brane localized fields to bulk supergravity. We illustrate the procedure by considering 6D N=2 bulk supergravity on a 2D orbifold, with brane fields localized at the fixed points. The resulting action enjoys the full 6D N=2 symmetries in the bulk, and those of 4D N=1 supergravity at the brane positions. (orig.)

  16. Longitudinal and bulk viscosities of expanded rubidium

    International Nuclear Information System (INIS)

    Zaheri, Ali Hossein Mohammad; Srivastava, Sunita; Tankeshwar, K

    2003-01-01

    First three non-vanishing sum rules for the bulk and longitudinal stress auto-correlation functions have been evaluated for liquid Rb at six thermodynamic states along the liquid-vapour coexistence curve. The Mori memory function formalism and the frequency sum rules have been used to calculate bulk and longitudinal viscosities. The results thus obtained for the ratio of bulk viscosity to shear viscosity have been compared with experimental and other theoretical predictions wherever available. The values of the bulk viscosity have been found to be more than the corresponding values of the shear viscosity for all six thermodynamic states investigated here

  17. Minimum Delay Moving Object Detection

    KAUST Repository

    Lao, Dong

    2017-05-14

    This thesis presents a general framework and method for the detection of an object in a video based on apparent motion. The object moves, at some unknown time, differently than the “background” motion, which can be induced by camera motion. The goal of the proposed method is to detect and segment the object as soon as it moves, in an online manner. Since motion estimation can be unreliable between frames, more than two frames are needed to reliably detect the object. Observing more frames before declaring a detection may lead to a more accurate detection and segmentation, since more motion may be observed, leading to a stronger motion cue. However, this leads to greater delay. The proposed method is designed to detect the object(s) with minimum delay, i.e., frames after the object moves, constraining the false alarms, defined as declarations of detection before the object moves or incorrect or inaccurate segmentation at the detection time. Experiments on a new extensive dataset for moving object detection show that our method achieves less delay for all false alarm constraints than existing state-of-the-art methods.

  18. Nanopatterned Bulk Metallic Glass Biosensors.

    Science.gov (United States)

    Kinser, Emily R; Padmanabhan, Jagannath; Yu, Roy; Corona, Sydney L; Li, Jinyang; Vaddiraju, Sagar; Legassey, Allen; Loye, Ayomiposi; Balestrini, Jenna; Solly, Dawson A; Schroers, Jan; Taylor, André D; Papadimitrakopoulos, Fotios; Herzog, Raimund I; Kyriakides, Themis R

    2017-12-22

    Nanopatterning as a surface area enhancement method has the potential to increase signal and sensitivity of biosensors. Platinum-based bulk metallic glass (Pt-BMG) is a biocompatible material with electrical properties conducive for biosensor electrode applications, which can be processed in air at comparably low temperatures to produce nonrandom topography at the nanoscale. Work presented here employs nanopatterned Pt-BMG electrodes functionalized with glucose oxidase enzyme to explore the impact of nonrandom and highly reproducible nanoscale surface area enhancement on glucose biosensor performance. Electrochemical measurements including cyclic voltammetry (CV) and amperometric voltammetry (AV) were completed to compare the performance of 200 nm Pt-BMG electrodes vs Flat Pt-BMG control electrodes. Glucose dosing response was studied in a range of 2 mM to 10 mM. Effective current density dynamic range for the 200 nm Pt-BMG was 10-12 times greater than that of the Flat BMG control. Nanopatterned electrode sensitivity was measured to be 3.28 μA/cm²/mM, which was also an order of magnitude greater than the flat electrode. These results suggest that nonrandom nanotopography is a scalable and customizable engineering tool which can be integrated with Pt-BMGs to produce biocompatible biosensors with enhanced signal and sensitivity.

  19. Aspects of silicon bulk lifetimes

    Science.gov (United States)

    Landsberg, P. T.

    1985-01-01

    The best lifetimes attained for bulk crystalline silicon as a function of doping concentration are analyzed. It is assumed that the dopants which set the Fermi level do not contribute to the recombination traffic, which is due to an unknown defect. This defect is assumed to have two charge states, neutral and negative; the neutral defect concentration is frozen-in at some temperature T_f. At higher doping concentrations the band-band Auger effect should be included, by using a generalization of the Shockley-Read-Hall (SRH) mechanism. The generalization of the SRH mechanism is discussed. This formulation gives a straightforward procedure for incorporating both band-band and band-trap Auger effects in the SRH procedure. Two related questions arise in this context: (1) it may sometimes be useful to write the steady-state occupation probability of the traps implied by the SRH procedure in a form which approximates the Fermi-Dirac distribution; and (2) the effect on the SRH mechanism of spreading N_t levels at one energy uniformly over a range of energies is discussed.
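
    For reference, the standard (pre-generalization) SRH rate that the record's formulation extends has a compact closed form; a sketch with placeholder parameter values, not the paper's:

      def srh_rate(n, p, ni=1.0e10, n1=1.0e10, p1=1.0e10, tau_n=1e-6, tau_p=1e-6):
          """Shockley-Read-Hall recombination rate [cm^-3 s^-1]
          for electron/hole densities n, p [cm^-3]."""
          return (n * p - ni**2) / (tau_p * (n + n1) + tau_n * (p + p1))

      print(srh_rate(1e16, 1e14))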

  20. Genetic Variance in Homophobia: Evidence from Self- and Peer Reports.

    Science.gov (United States)

    Zapko-Willmes, Alexandra; Kandler, Christian

    2018-01-01

    The present twin study combined self- and peer assessments of twins' general homophobia targeting gay men in order to replicate previous behavior genetic findings across different rater perspectives and to disentangle self-rater-specific variance from common variance in self- and peer-reported homophobia (i.e., rater-consistent variance). We hypothesized rater-consistent variance in homophobia to be attributable to genetic and nonshared environmental effects, and self-rater-specific variance to be partially accounted for by genetic influences. A sample of 869 twins and 1329 peer raters completed a seven item scale containing cognitive, affective, and discriminatory homophobic tendencies. After correction for age and sex differences, we found most of the genetic contributions (62%) and significant nonshared environmental contributions (16%) to individual differences in self-reports on homophobia to be also reflected in peer-reported homophobia. A significant genetic component, however, was self-report-specific (38%), suggesting that self-assessments alone produce inflated heritability estimates to some degree. Different explanations are discussed.

  1. How does variance in fertility change over the demographic transition?

    Science.gov (United States)

    Hruschka, Daniel J; Burger, Oskar

    2016-04-19

    Most work on the human fertility transition has focused on declines in mean fertility. However, understanding changes in the variance of reproductive outcomes can be equally important for evolutionary questions about the heritability of fertility, individual determinants of fertility and changing patterns of reproductive skew. Here, we document how variance in completed fertility among women (45-49 years) differs across 200 surveys in 72 low- to middle-income countries where fertility transitions are currently in progress at various stages. Nearly all (91%) of samples exhibit variance consistent with a Poisson process of fertility, which places systematic, and often severe, theoretical upper bounds on the proportion of variance that can be attributed to individual differences. In contrast to the pattern of total variance, these upper bounds increase from high- to mid-fertility samples, then decline again as samples move from mid to low fertility. Notably, the lowest fertility samples often deviate from a Poisson process. This suggests that as populations move to low fertility their reproduction shifts from a rate-based process to a focus on an ideal number of children. We discuss the implications of these findings for predicting completed fertility from individual-level variables. © 2016 The Author(s).
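
    The Poisson bound described above is easy to state: if completed fertility is mixed Poisson with individual rates λ_i, then total variance = E[λ] + Var(λ), so the share of variance attributable to individual differences is at most 1 − mean/variance. A toy check with hypothetical moments:

      mean, var = 5.0, 6.2              # hypothetical completed-fertility moments
      individual_share_bound = max(0.0, 1.0 - mean / var)
      print(individual_share_bound)     # upper bound on individual-level share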

  2. Impact of Damping Uncertainty on SEA Model Response Variance

    Science.gov (United States)

    Schiller, Noah; Cabell, Randolph; Grosveld, Ferdinand

    2010-01-01

    Statistical Energy Analysis (SEA) is commonly used to predict high-frequency vibroacoustic levels. This statistical approach provides the mean response over an ensemble of random subsystems that share the same gross system properties such as density, size, and damping. Recently, techniques have been developed to predict the ensemble variance as well as the mean response. However these techniques do not account for uncertainties in the system properties. In the present paper uncertainty in the damping loss factor is propagated through SEA to obtain more realistic prediction bounds that account for both ensemble and damping variance. The analysis is performed on a floor-equipped cylindrical test article that resembles an aircraft fuselage. Realistic bounds on the damping loss factor are determined from measurements acquired on the sidewall of the test article. The analysis demonstrates that uncertainties in damping have the potential to significantly impact the mean and variance of the predicted response.

  3. Genetic and environmental variance in content dimensions of the MMPI.

    Science.gov (United States)

    Rose, R J

    1988-08-01

    To evaluate genetic and environmental variance in the Minnesota Multiphasic Personality Inventory (MMPI), I studied nine factor scales identified in the first item factor analysis of normal adult MMPIs in a sample of 820 adolescent and young adult co-twins. Conventional twin comparisons documented heritable variance in six of the nine MMPI factors (Neuroticism, Psychoticism, Extraversion, Somatic Complaints, Inadequacy, and Cynicism), whereas significant influence from shared environmental experience was found for four factors (Masculinity versus Femininity, Extraversion, Religious Orthodoxy, and Intellectual Interests). Genetic variance in the nine factors was more evident in results from twin sisters than those of twin brothers, and a developmental-genetic analysis, using hierarchical multiple regressions of double-entry matrixes of the twins' raw data, revealed that in four MMPI factor scales, genetic effects were significantly modulated by age or gender or their interaction during the developmental period from early adolescence to early adulthood.

  4. A new variance stabilizing transformation for gene expression data analysis.

    Science.gov (United States)

    Kelmansky, Diana M; Martínez, Elena J; Leiva, Víctor

    2013-12-01

    In this paper, we introduce a new family of power transformations, which has the generalized logarithm as one of its members, in the same manner as the usual logarithm belongs to the family of Box-Cox power transformations. Although the new family has been developed for analyzing gene expression data, it allows a wider scope of mean-variance related data to be reached. We study the analytical properties of the new family of transformations, as well as the mean-variance relationships that are stabilized by using its members. We propose a methodology based on this new family, which includes a simple strategy for selecting the family member adequate for a data set. We evaluate the finite sample behavior of different classical and robust estimators based on this strategy by Monte Carlo simulations. We analyze real genomic data by using the proposed transformation to empirically show how the new methodology allows the variance of these data to be stabilized.
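
    The family member the abstract singles out, the generalized logarithm, has a compact closed form in one common parameterization (the tuning constant lam below is a placeholder; the paper proposes a data-driven selection strategy not reproduced here):

      import numpy as np

      def glog(x, lam=1.0):
          """Generalized log: ~log(lam/2) near 0, ~log(x) for large x."""
          return np.log((x + np.sqrt(x**2 + lam**2)) / 2.0)

      print(glog(np.array([0.0, 1.0, 10.0, 100.0])))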

  5. Pricing perpetual American options under multiscale stochastic elasticity of variance

    International Nuclear Information System (INIS)

    Yoon, Ji-Hun

    2015-01-01

    Highlights: • We study the effects of the stochastic elasticity of variance on perpetual American option. • Our SEV model consists of a fast mean-reverting factor and a slow mean-revering factor. • A slow scale factor has a very significant impact on the option price. • We analyze option price structures through the market prices of elasticity risk. - Abstract: This paper studies pricing the perpetual American options under a constant elasticity of variance type of underlying asset price model where the constant elasticity is replaced by a fast mean-reverting Ornstein–Uhlenbeck process and a slowly varying diffusion process. By using a multiscale asymptotic analysis, we find the impact of the stochastic elasticity of variance on the option prices and the optimal exercise prices with respect to model parameters. Our results enhance the existing option price structures in view of flexibility and applicability through the market prices of elasticity risk

  6. Monte Carlo variance reduction approaches for non-Boltzmann tallies

    International Nuclear Information System (INIS)

    Booth, T.E.

    1992-12-01

    Quantities that depend on the collective effects of groups of particles cannot be obtained from the standard Boltzmann transport equation. Monte Carlo estimates of these quantities are called non-Boltzmann tallies and have become increasingly important recently. Standard Monte Carlo variance reduction techniques were designed for tallies based on individual particles rather than groups of particles. Experience with non-Boltzmann tallies and analog Monte Carlo has demonstrated the severe limitations of analog Monte Carlo for many non-Boltzmann tallies. In fact, many calculations absolutely require variance reduction methods to achieve practical computation times. Three different approaches to variance reduction for non-Boltzmann tallies are described and shown to be unbiased. The advantages and disadvantages of each of the approaches are discussed

  7. Minimum Additive Waste Stabilization (MAWS)

    International Nuclear Information System (INIS)

    1994-02-01

    In the Minimum Additive Waste Stabilization (MAWS) concept, actual waste streams are utilized as additive resources for vitrification, which may contain the basic components (glass formers and fluxes) for making a suitable glass or glassy slag. If too much glass former is present, then the melt viscosity or temperature will be too high for processing; while if there is too much flux, then the durability may suffer. Therefore, there are optimum combinations of these two important classes of constituents depending on the criteria required. The challenge is to combine these resources in such a way that minimizes the use of non-waste additives yet yields a processable and durable final waste form for disposal. The benefit to this approach is that the volume of the final waste form is minimized (waste loading maximized) since little or no additives are used and vitrification itself results in volume reduction through evaporation of water, combustion of organics, and compaction of the solids into a non-porous glass. This implies a significant reduction in disposal costs due to volume reduction alone, and minimizes future risks/costs due to the long term durability and leach resistance of glass. This is accomplished by using integrated systems that are both cost-effective and produce an environmentally sound waste form for disposal. Individual component technologies may include: vitrification; thermal destruction; soil washing; gas scrubbing/filtration; and ion-exchange wastewater treatment. The particular combination of technologies will depend on the waste streams to be treated. At the heart of MAWS is vitrification technology, which incorporates all primary and secondary waste streams into a final, long-term, stabilized glass wasteform.

  8. The mean and variance of phylogenetic diversity under rarefaction.

    Science.gov (United States)

    Nipperess, David A; Matsen, Frederick A

    2013-06-01

    Phylogenetic diversity (PD) depends on sampling depth, which complicates the comparison of PD between samples of different depth. One approach to dealing with differing sample depth for a given diversity statistic is to rarefy, which means to take a random subset of a given size of the original sample. Exact analytical formulae for the mean and variance of species richness under rarefaction have existed for some time but no such solution exists for PD. We have derived exact formulae for the mean and variance of PD under rarefaction. We confirm that these formulae are correct by comparing exact solution mean and variance to that calculated by repeated random (Monte Carlo) subsampling of a dataset of stem counts of woody shrubs of Toohey Forest, Queensland, Australia. We also demonstrate the application of the method using two examples: identifying hotspots of mammalian diversity in Australasian ecoregions, and characterising the human vaginal microbiome. There is a very high degree of correspondence between the analytical and random subsampling methods for calculating mean and variance of PD under rarefaction, although the Monte Carlo method requires a large number of random draws to converge on the exact solution for the variance. Rarefaction of mammalian PD of ecoregions in Australasia to a common standard of 25 species reveals very different rank orderings of ecoregions, indicating quite different hotspots of diversity than those obtained for unrarefied PD. The application of these methods to the vaginal microbiome shows that a classical score used to quantify bacterial vaginosis is correlated with the shape of the rarefaction curve. The analytical formulae for the mean and variance of PD under rarefaction are both exact and more efficient than repeated subsampling. Rarefaction of PD allows for many applications where comparisons of samples of different depth is required.
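
    The exact PD formulae are lengthy, but the species-richness precedent the abstract mentions is short enough to sketch: the exact expected richness (Hurlbert-style rarefaction) in a subsample of size n drawn without replacement; the abundance counts below are made up:

      from math import comb

      def expected_richness(counts, n):
          """E[# species observed] in a random subsample of size n."""
          N = sum(counts)
          return sum(1 - comb(N - Ni, n) / comb(N, n) for Ni in counts)

      print(expected_richness([50, 30, 15, 4, 1], n=10))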

  9. Solution of the problem of the identified minimum for the tri-variate ...

    Indian Academy of Sciences (India)

    ... identified minimum is considered below has zero means and distinct variances. The solution ... and a non-singular covariance matrix Σ, where σ_ij = ρ_ij σ_i σ_j for i ≠ j ... (i) through (iv) above, we can use (4.29) to identify a_21², a_31², a_12², a_32² uniquely. Now we consider (4.28). In this case, there are two possibilities: (A_1², B_...

  10. Variance estimation for sensitivity analysis of poverty and inequality measures

    Directory of Open Access Journals (Sweden)

    Christian Dudel

    2017-04-01

    Estimates of poverty and inequality are often based on application of a single equivalence scale, despite the fact that a large number of different equivalence scales can be found in the literature. This paper describes a framework for sensitivity analysis which can be used to account for the variability of equivalence scales and allows one to derive variance estimates of results of sensitivity analysis. Simulations show that this method yields reliable estimates. An empirical application reveals that accounting for both variability of equivalence scales and sampling variance leads to confidence intervals which are wide.

  11. Studying Variance in the Galactic Ultra-compact Binary Population

    Science.gov (United States)

    Larson, Shane; Breivik, Katelyn

    2017-01-01

    In the years preceding LISA, Milky Way compact binary population simulations can be used to inform the science capabilities of the mission. Galactic population simulation efforts generally focus on high fidelity models that require extensive computational power to produce a single simulated population for each model. Each simulated population represents an incomplete sample of the functions governing compact binary evolution, thus introducing variance from one simulation to another. We present a rapid Monte Carlo population simulation technique that can simulate thousands of populations on week-long timescales, thus allowing a full exploration of the variance associated with a binary stellar evolution model.

  12. Variance of a product with application to uranium estimation

    International Nuclear Information System (INIS)

    Lowe, V.W.; Waterman, M.S.

    1976-01-01

    The U in a container can either be determined directly by NDA or by estimating the weight of material in the container and the concentration of U in this material. It is important to examine the statistical properties of estimating the amount of U by multiplying the estimates of weight and concentration. The variance of the product determines the accuracy of the estimate of the amount of uranium. This paper examines the properties of estimates of the variance of the product of two random variables
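
    For the special case of independent weight W and concentration C (an illustrative assumption; the paper also examines estimation issues beyond this identity), the variance of the product U = W·C has an exact closed form:

      def var_product(mu_w, var_w, mu_c, var_c):
          """Var(W*C) = mu_w^2 var_c + mu_c^2 var_w + var_w var_c (independence)."""
          return mu_w**2 * var_c + mu_c**2 * var_w + var_w * var_c

      # hypothetical container: ~100 kg of material at ~5% U concentration
      print(var_product(mu_w=100.0, var_w=4.0, mu_c=0.05, var_c=1e-6))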

  13. Variance components for body weight in Japanese quails (Coturnix japonica

    Directory of Open Access Journals (Sweden)

    RO Resende

    2005-03-01

    The objective of this study was to estimate the variance components for body weight in Japanese quails by Bayesian procedures. The body weight at hatch (BWH) and at 7 (BW07), 14 (BW14), 21 (BW21) and 28 days of age (BW28) of 3,520 quails was recorded from August 2001 to June 2002. A multiple-trait animal model with additive genetic, maternal environment and residual effects was implemented by Gibbs sampling methodology. A single Gibbs sampling chain with 80,000 rounds was generated by the program MTGSAM (Multiple Trait Gibbs Sampling in Animal Model). Normal and inverted Wishart distributions were used as prior distributions for the random effects and the variance components, respectively. Variance components were estimated based on the 500 samples that were left after elimination of 30,000 rounds in the burn-in period and 100 rounds of each thinning interval. The posterior means of additive genetic variance components were 0.15; 4.18; 14.62; 27.18 and 32.68; the posterior means of maternal environment variance components were 0.23; 1.29; 2.76; 4.12 and 5.16; and the posterior means of residual variance components were 0.084; 6.43; 22.66; 31.21 and 30.85, at hatch, 7, 14, 21 and 28 days old, respectively. The posterior means of heritability were 0.33; 0.35; 0.36; 0.43 and 0.47 at hatch, 7, 14, 21 and 28 days old, respectively. These results indicate that heritability increased with age. On the other hand, after hatch there was a marked reduction in the maternal environment variance proportion of the phenotypic variance, whose estimates were 0.50; 0.11; 0.07; 0.07 and 0.08 for BWH, BW07, BW14, BW21 and BW28, respectively. The genetic correlation between weights at different ages was high, except for those estimates between BWH and weight at other ages. Changes in body weight of quails can be efficiently achieved by selection.
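
    The reported heritabilities follow directly from the posterior variance components via h² = additive / (additive + maternal + residual); a quick check with the day-28 values quoted above:

      va, vm, ve = 32.68, 5.16, 30.85   # day-28 posterior means from the abstract
      h2 = va / (va + vm + ve)
      print(h2)                         # ≈ 0.476, consistent with the reported 0.47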

  14. Levine's guide to SPSS for analysis of variance

    CERN Document Server

    Braver, Sanford L; Page, Melanie

    2003-01-01

    A greatly expanded and heavily revised second edition, this popular guide provides instructions and clear examples for running analyses of variance (ANOVA) and several other related statistical tests of significance with SPSS. No other guide offers the program statements required for the more advanced tests in analysis of variance. All of the programs in the book can be run using any version of SPSS, including versions 11 and 11.5. A table at the end of the preface indicates where each type of analysis (e.g., simple comparisons) can be found for each type of design (e.g., mixed two-factor desi

  15. Variance squeezing and entanglement of the XX central spin model

    International Nuclear Information System (INIS)

    El-Orany, Faisal A A; Abdalla, M Sebawe

    2011-01-01

    In this paper, we study the quantum properties for a system that consists of a central atom interacting with surrounding spins through the Heisenberg XX couplings of equal strength. Employing the Heisenberg equations of motion we manage to derive an exact solution for the dynamical operators. We consider that the central atom and its surroundings are initially prepared in the excited state and in the coherent spin state, respectively. For this system, we investigate the evolution of variance squeezing and entanglement. The nonclassical effects have been remarked in the behavior of all components of the system. The atomic variance can exhibit revival-collapse phenomenon based on the value of the detuning parameter.

  16. Asymptotic variance of grey-scale surface area estimators

    DEFF Research Database (Denmark)

    Svane, Anne Marie

    Grey-scale local algorithms have been suggested as a fast way of estimating surface area from grey-scale digital images. Their asymptotic mean has already been described. In this paper, the asymptotic behaviour of the variance is studied in isotropic and sufficiently smooth settings, resulting in a general asymptotic bound. For compact convex sets with nowhere vanishing Gaussian curvature, the asymptotics can be described more explicitly. As in the case of volume estimators, the variance is decomposed into a lattice sum and an oscillating term of at most the same magnitude.

  17. Variance squeezing and entanglement of the XX central spin model

    Energy Technology Data Exchange (ETDEWEB)

    El-Orany, Faisal A A [Department of Mathematics and Computer Science, Faculty of Science, Suez Canal University, Ismailia (Egypt); Abdalla, M Sebawe, E-mail: m.sebaweh@physics.org [Mathematics Department, College of Science, King Saud University PO Box 2455, Riyadh 11451 (Saudi Arabia)

    2011-01-21

    In this paper, we study the quantum properties for a system that consists of a central atom interacting with surrounding spins through the Heisenberg XX couplings of equal strength. Employing the Heisenberg equations of motion we manage to derive an exact solution for the dynamical operators. We consider that the central atom and its surroundings are initially prepared in the excited state and in the coherent spin state, respectively. For this system, we investigate the evolution of variance squeezing and entanglement. The nonclassical effects have been remarked in the behavior of all components of the system. The atomic variance can exhibit revival-collapse phenomenon based on the value of the detuning parameter.

  18. 27 CFR 20.191 - Bulk articles.

    Science.gov (United States)

    2010-04-01

    27 CFR 20.191 (Users of Specially Denatured Spirits, Operations by Users): Bulk articles. Users who convey articles in containers exceeding one gallon may provide the recipient with a photocopy of subpart G of this...

  19. On the bulk viscosity of relativistic matter

    International Nuclear Information System (INIS)

    Canuto, V.; Hsieh, S.-H.

    1978-01-01

    An expression for the bulk viscosity coefficient in terms of the trace of the hydrodynamic energy-stress tensor is derived from the Kubo formula. This, along with a field-theoretic model of an interacting system of scalar particles, suggests that at high temperatures the bulk viscosity tends to zero, contrary to the often quoted results of Iso, Mori and Namiki. (author)

  20. Bulk-viscosity-driven asymmetric inflationary universe

    International Nuclear Information System (INIS)

    Waga, I.; Lima, J.A.S.; Portugal, R.

    1987-01-01

    A primordial net bosonic charge is introduced in the context of the bulk-viscosity-driven inflationary models. The analysis is carried out from a macroscopic point of view in the framework of the causal thermodynamic theory. The conditions for having exponential and generalized inflation are obtained. A phenomenological expression for the bulk viscosity coefficient is also derived. (author) [pt]

  1. Robust Sequential Covariance Intersection Fusion Kalman Filtering over Multi-agent Sensor Networks with Measurement Delays and Uncertain Noise Variances

    Institute of Scientific and Technical Information of China (English)

    QI Wen-Juan; ZHANG Peng; DENG Zi-Li

    2014-01-01

    This paper deals with the problem of designing a robust sequential covariance intersection (SCI) fusion Kalman filter for a clustering multi-agent sensor network system with measurement delays and uncertain noise variances. The sensor network is partitioned into clusters by the nearest neighbor rule. Using the minimax robust estimation principle, based on the worst-case conservative sensor network system with conservative upper bounds of noise variances, and applying the unbiased linear minimum variance (ULMV) optimal estimation rule, we present the two-layer SCI fusion robust steady-state Kalman filter which can reduce communication and computation burdens and save energy resources, and guarantee that the actual filtering error variances have a less-conservative upper bound. A Lyapunov equation method for robustness analysis is proposed, by which the robustness of the local and fused Kalman filters is proved. The concept of robust accuracy is presented and the robust accuracy relations of the local and fused robust Kalman filters are proved. It is proved that the robust accuracy of the global SCI fuser is higher than those of the local SCI fusers, and the robust accuracies of all SCI fusers are higher than that of each local robust Kalman filter. A simulation example for a tracking system verifies the robustness and robust accuracy relations.
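
    The basic covariance-intersection step that the sequential scheme chains together can be sketched compactly (a minimal two-estimate fusion under assumptions: consistent estimates, unknown cross-covariance, scalar weight chosen to minimize the fused trace; this is the textbook CI rule, not the paper's full SCI filter):

      import numpy as np
      from scipy.optimize import minimize_scalar

      def ci_fuse(x1, P1, x2, P2):
          """Covariance intersection of two estimates (x1, P1), (x2, P2)."""
          I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
          trace = lambda w: np.trace(np.linalg.inv(w * I1 + (1 - w) * I2))
          w = minimize_scalar(trace, bounds=(0.0, 1.0), method="bounded").x
          P = np.linalg.inv(w * I1 + (1 - w) * I2)
          x = P @ (w * I1 @ x1 + (1 - w) * I2 @ x2)
          return x, P

      x, P = ci_fuse(np.array([1.0, 0.0]), np.diag([2.0, 1.0]),
                     np.array([1.2, -0.1]), np.diag([1.0, 3.0]))
      print(x, P)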

  2. Minimum emittance of three-bend achromats

    International Nuclear Information System (INIS)

    Li Xiaoyu; Xu Gang

    2012-01-01

    When the minimum emittance of three-bend achromats (TBAs) is calculated with mathematical software, the actual magnet lattice can be ignored in the matching condition for the dispersion function in phase space. The minimum scaling factors of two kinds of widely used TBA lattices are obtained. Then the relationship between the lengths and the radii of the three dipoles in a TBA is obtained, and so is the minimum scaling factor when the TBA lattice achieves its minimum emittance. The procedure of analysis and the results can be widely used for achromat lattices, because the calculation is not restricted by the actual lattice. (authors)

  3. A Pareto-Improving Minimum Wage

    OpenAIRE

    Eliav Danziger; Leif Danziger

    2014-01-01

    This paper shows that a graduated minimum wage, in contrast to a constant minimum wage, can provide a strict Pareto improvement over what can be achieved with an optimal income tax. The reason is that a graduated minimum wage requires high-productivity workers to work more to earn the same income as low-productivity workers, which makes it more difficult for the former to mimic the latter. In effect, a graduated minimum wage allows the low-productivity workers to benefit from second-degree pr...

  4. The minimum wage in the Czech enterprises

    OpenAIRE

    Eva Lajtkepová

    2010-01-01

    Although the statutory minimum wage is not a new category, in the Czech Republic we encounter the definition and regulation of a minimum wage for the first time in the 1990 amendment to Act No. 65/1965 Coll., the Labour Code. The specific amount of the minimum wage and the conditions of its operation were then subsequently determined by government regulation in February 1991. Since that time, the value of the minimum wage has been adjusted fifteen times (the last increase was in January 2007). ...

  5. A contribution to problems of clean transport of bulk materials

    Directory of Open Access Journals (Sweden)

    Fedora Jaroslav

    1996-03-01

    The lecture analyses the problem of the development of the pipe conveyor with a rubber belt, the facilities of its application in practice and the environmental aspects resulting from its application. The pipe conveyor is a new, perspective transport system. It enables transporting bulk materials (coal, crushed rock, coke, plant ash, fertilisers, limestone, lime in specific operations (power plants, heating plants), cellulose, salt, sugar, wheat and other materials) with a minimum effect on the environment. The transported material is enclosed in the pipeline so that there is no escape of dust, smell or of the transported material itself. The lecture is aimed at: - the short description of the operating principle and design of the pipe conveyor which was developed in the firm Matador Púchov in cooperation with the firm TEDO, - the analysis of experience in working some pipe conveyors which were under operation for a certain

  6. Demonstration of a zero-variance based scheme for variance reduction to a mini-core Monte Carlo calculation

    Energy Technology Data Exchange (ETDEWEB)

    Christoforou, Stavros, E-mail: stavros.christoforou@gmail.com [Kirinthou 17, 34100, Chalkida (Greece); Hoogenboom, J. Eduard, E-mail: j.e.hoogenboom@tudelft.nl [Department of Applied Sciences, Delft University of Technology (Netherlands)

    2011-07-01

    A zero-variance based scheme is implemented and tested in the MCNP5 Monte Carlo code. The scheme is applied to a mini-core reactor using the adjoint function obtained from a deterministic calculation for biasing the transport kernels. It is demonstrated that the variance of the k_eff estimate is halved compared to a standard criticality calculation. In addition, the biasing does not affect source distribution convergence of the system. However, since the code lacked optimisations for speed, we were not able to demonstrate an appropriate increase in the efficiency of the calculation, because of the higher CPU time cost. (author)

  7. Demonstration of a zero-variance based scheme for variance reduction to a mini-core Monte Carlo calculation

    International Nuclear Information System (INIS)

    Christoforou, Stavros; Hoogenboom, J. Eduard

    2011-01-01

    A zero-variance based scheme is implemented and tested in the MCNP5 Monte Carlo code. The scheme is applied to a mini-core reactor using the adjoint function obtained from a deterministic calculation for biasing the transport kernels. It is demonstrated that the variance of the k_eff estimate is halved compared to a standard criticality calculation. In addition, the biasing does not affect source distribution convergence of the system. However, since the code lacked optimisations for speed, we were not able to demonstrate an appropriate increase in the efficiency of the calculation, because of the higher CPU time cost. (author)

  8. Multivariate Variance Targeting in the BEKK-GARCH Model

    DEFF Research Database (Denmark)

    Pedersen, Rasmus Søndergaard; Rahbek, Anders

    This paper considers asymptotic inference in the multivariate BEKK model based on (co-)variance targeting (VT). By definition the VT estimator is a two-step estimator and the theory presented is based on expansions of the modified likelihood function, or estimating function, corresponding...

  9. Multivariate Variance Targeting in the BEKK-GARCH Model

    DEFF Research Database (Denmark)

    Pedersen, Rasmus Søndergaard; Rahbek, Anders

    2014-01-01

    This paper considers asymptotic inference in the multivariate BEKK model based on (co-)variance targeting (VT). By definition the VT estimator is a two-step estimator and the theory presented is based on expansions of the modified likelihood function, or estimating function, corresponding...

  10. Analysis of Variance: What Is Your Statistical Software Actually Doing?

    Science.gov (United States)

    Li, Jian; Lomax, Richard G.

    2011-01-01

    Users assume statistical software packages produce accurate results. In this article, the authors systematically examined Statistical Package for the Social Sciences (SPSS) and Statistical Analysis System (SAS) for 3 analysis of variance (ANOVA) designs, mixed-effects ANOVA, fixed-effects analysis of covariance (ANCOVA), and nested ANOVA. For each…

  11. Genetic variance components for residual feed intake and feed ...

    African Journals Online (AJOL)

    Feeding costs of animals is a major determinant of profitability in livestock production enterprises. Genetic selection to improve feed efficiency aims to reduce feeding cost in beef cattle and thereby improve profitability. This study estimated genetic (co)variances between weaning weight and other production, reproduction ...

  12. Cumulative Prospect Theory, Option Returns, and the Variance Premium

    NARCIS (Netherlands)

    Baele, Lieven; Driessen, Joost; Ebert, Sebastian; Londono Yarce, J.M.; Spalt, Oliver

    The variance premium and the pricing of out-of-the-money (OTM) equity index options are major challenges to standard asset pricing models. We develop a tractable equilibrium model with Cumulative Prospect Theory (CPT) preferences that can overcome both challenges. The key insight is that the

  13. Hydrograph variances over different timescales in hydropower production networks

    Science.gov (United States)

    Zmijewski, Nicholas; Wörman, Anders

    2016-08-01

    The operation of water reservoirs involves a spectrum of timescales based on the distribution of stream flow travel times between reservoirs, as well as the technical, environmental, and social constraints imposed on the operation. In this research, a hydrodynamically based description of the flow between hydropower stations was implemented to study the relative importance of wave diffusion on the spectrum of hydrograph variance in a regulated watershed. Using spectral decomposition of the effluence hydrograph of a watershed, an exact expression of the variance in the outflow response was derived, as a function of the trends of hydraulic and geomorphologic dispersion and management of production and reservoirs. We show that the power spectra of the involved time series follow nearly fractal patterns, which facilitates examination of the relative importance of wave diffusion and possible changes in production demand on the outflow spectrum. The exact spectral solution can also identify statistical bounds of future demand patterns due to limitations in storage capacity. The impact of the hydraulic description of the stream flow on the reservoir discharge was examined for a given power demand in the River Dalälven, Sweden, as a function of a stream flow Peclet number. The regulation of hydropower production on the River Dalälven generally increased the short-term variance in the effluence hydrograph, whereas wave diffusion decreased the short-term variance (over periods of white noise) as a result of current production objectives.

  14. Bounds for Tail Probabilities of the Sample Variance

    Directory of Open Access Journals (Sweden)

    Van Zuijlen M

    2009-01-01

    We provide bounds for tail probabilities of the sample variance. The bounds are expressed in terms of Hoeffding functions and are the sharpest known. They are designed with applications in mind, both in auditing and in the processing of environmental data.

  15. Robust estimation of the noise variance from background MR data

    NARCIS (Netherlands)

    Sijbers, J.; Den Dekker, A.J.; Poot, D.; Bos, R.; Verhoye, M.; Van Camp, N.; Van der Linden, A.

    2006-01-01

    In the literature, many methods are available for estimation of the variance of the noise in magnetic resonance (MR) images. A commonly used method, based on the maximum of the background mode of the histogram, is revisited and a new, robust, and easy to use method is presented based on maximum

  16. Stable limits for sums of dependent infinite variance random variables

    DEFF Research Database (Denmark)

    Bartkiewicz, Katarzyna; Jakubowski, Adam; Mikosch, Thomas

    2011-01-01

    The aim of this paper is to provide conditions which ensure that the affinely transformed partial sums of a strictly stationary process converge in distribution to an infinite variance stable distribution. Conditions for this convergence to hold are known in the literature. However, most of these...

  17. Estimation of the additive and dominance variances in South African ...

    African Journals Online (AJOL)

    The objective of this study was to estimate dominance variance for number born alive (NBA), 21-day litter weight (LWT21) and interval between parities (FI) in South African Landrace pigs. A total of 26223 NBA, 21335 LWT21 and 16370 FI records were analysed. Bayesian analysis via Gibbs sampling was used to estimate ...

  18. A Visual Model for the Variance and Standard Deviation

    Science.gov (United States)

    Orris, J. B.

    2011-01-01

    This paper shows how the variance and standard deviation can be represented graphically by looking at each squared deviation as a graphical object--in particular, as a square. A series of displays show how the standard deviation is the size of the average square.
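
    The graphical idea translates directly into the defining computation. A minimal numerical illustration (the data values below are made up for the example):

        import numpy as np

        data = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
        deviations = data - data.mean()
        squares = deviations ** 2       # each squared deviation is one "square"
        variance = squares.mean()       # the variance is the average square
        std_dev = np.sqrt(variance)     # the standard deviation is its side length
        print(variance, std_dev)        # 4.0 2.0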

  19. Asymptotics of variance of the lattice point count

    Czech Academy of Sciences Publication Activity Database

    Janáček, Jiří

    2008-01-01

    Roč. 58, č. 3 (2008), s. 751-758 ISSN 0011-4642 R&D Projects: GA AV ČR(CZ) IAA100110502 Institutional research plan: CEZ:AV0Z50110509 Keywords : point lattice * variance Subject RIV: BA - General Mathematics Impact factor: 0.210, year: 2008

  1. Vertical velocity variances and Reynold stresses at Brookhaven

    DEFF Research Database (Denmark)

    Busch, Niels E.; Brown, R.M.; Frizzola, J.A.

    1970-01-01

    Results of wind tunnel tests of the Brookhaven annular bivane are presented. The energy transfer functions describing the instrument response and the numerical filter employed in the data reduction process have been used to obtain corrected values of the normalized variance of the vertical wind v...

  2. Estimates of variance components for postweaning feed intake and ...

    African Journals Online (AJOL)

    Mike

    2013-03-09

    Mar 9, 2013 ... transformation of RFIp and RDGp to z-scores (mean = 0.0, variance = 1.0) and then ... generation pedigree (n = 9 653) used for this analysis. ... Nkrumah, J.D., Basarab, J.A., Wang, Z., Li, C., Price, M.A., Okine, E.K., Crews Jr., ...

  3. An observation on the variance of a predicted response in ...

    African Journals Online (AJOL)

    ... these properties and computational simplicity. To avoid over fitting, along with the obvious advantage of having a simpler equation, it is shown that the addition of a variable to a regression equation does not reduce the variance of a predicted response. Key words: Linear regression; Partitioned matrix; Predicted response ...

  4. An entropy approach to size and variance heterogeneity

    NARCIS (Netherlands)

    Balasubramanyan, L.; Stefanou, S.E.; Stokes, J.R.

    2012-01-01

    In this paper, we investigate the effect of bank size differences on cost efficiency heterogeneity using a heteroskedastic stochastic frontier model. This model is implemented by using an information theoretic maximum entropy approach. We explicitly model both bank size and variance heterogeneity

  5. The Threat of Common Method Variance Bias to Theory Building

    Science.gov (United States)

    Reio, Thomas G., Jr.

    2010-01-01

    The need for more theory building scholarship remains one of the pressing issues in the field of HRD. Researchers can employ quantitative, qualitative, and/or mixed methods to support vital theory-building efforts, understanding however that each approach has its limitations. The purpose of this article is to explore common method variance bias as…

  6. Variance in parametric images: direct estimation from parametric projections

    International Nuclear Information System (INIS)

    Maguire, R.P.; Leenders, K.L.; Spyrou, N.M.

    2000-01-01

    Recent work has shown that it is possible to apply linear kinetic models to dynamic projection data in PET in order to calculate parameter projections. These can subsequently be back-projected to form parametric images - maps of parameters of physiological interest. Critical to the application of these maps, to test for significant changes between normal and pathophysiology, is an assessment of the statistical uncertainty. In this context, parametric images also include simple integral images from, e.g., [O-15]-water used to calculate statistical parametric maps (SPMs). This paper revisits the concept of parameter projections and presents a more general formulation of the parameter projection derivation as well as a method to estimate parameter variance in projection space, showing which analysis methods (models) can be used. Using simulated pharmacokinetic image data we show that a method based on an analysis in projection space inherently calculates the mathematically rigorous pixel variance. This results in an estimation which is as accurate as either estimating variance in image space during model fitting, or estimation by comparison across sets of parametric images - as might be done between individuals in a group pharmacokinetic PET study. The method based on projections has, however, a higher computational efficiency, and is also shown to be more precise, as reflected in smooth variance distribution images when compared to the other methods. (author)

  7. 40 CFR 268.44 - Variance from a treatment standard.

    Science.gov (United States)

    2010-07-01

    ... complete petition may be requested as needed to send to affected states and Regional Offices. (e) The... provide an opportunity for public comment. The final decision on a variance from a treatment standard will... than) the concentrations necessary to minimize short- and long-term threats to human health and the...

  8. Application of effective variance method for contamination monitor calibration

    International Nuclear Information System (INIS)

    Goncalez, O.L.; Freitas, I.S.M. de.

    1990-01-01

    In this report, the calibration of a thin window Geiger-Muller type monitor for alpha superficial contamination is presented. The calibration curve is obtained by the method of least-squares fitting with effective variance. The method and the approach for the calculation are briefly discussed. (author)
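
    For orientation, the effective-variance method in its standard form (this is the textbook formulation, not necessarily the exact notation of the report) replaces the y-variance in the chi-square by an effective variance that also carries the x-uncertainty through the local slope of the fitted curve:

        \chi^{2}(\theta) = \sum_i \frac{\left[y_i - f(x_i;\theta)\right]^{2}}{\sigma_{y,i}^{2} + \left(\partial f/\partial x\right)^{2}_{x_i}\,\sigma_{x,i}^{2}},

    which is minimized iteratively, because the effective variance itself depends on the fitted parameters.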

  9. The VIX, the Variance Premium, and Expected Returns

    DEFF Research Database (Denmark)

    Osterrieder, Daniela Maria; Ventosa-Santaulària, Daniel; Vera-Valdés, Eduardo

    2018-01-01

    These problems are eliminated if risk is captured by the variance premium (VP) instead; it is unobservable, however. We propose a 2SLS estimator that produces consistent estimates without observing the VP. Using this method, we find a positive risk–return trade-off and long-run return predictability. Our...

  10. Some asymptotic theory for variance function smoothing | Kibua ...

    African Journals Online (AJOL)

    Simple selection of the smoothing parameter is suggested. Both homoscedastic and heteroscedastic regression models are considered. Keywords: Asymptotic, Smoothing, Kernel, Bandwidth, Bias, Variance, Mean squared error, Homoscedastic, Heteroscedastic. > East African Journal of Statistics Vol. 1 (1) 2005: pp. 9-22 ...

  11. Variance-optimal hedging for processes with stationary independent increments

    DEFF Research Database (Denmark)

    Hubalek, Friedrich; Kallsen, J.; Krawczyk, L.

    We determine the variance-optimal hedge when the logarithm of the underlying price follows a process with stationary independent increments in discrete or continuous time. Although the general solution to this problem is known as backward recursion or backward stochastic differential equation, we...

  12. Adaptive Nonparametric Variance Estimation for a Ratio Estimator ...

    African Journals Online (AJOL)

    Kernel estimators for smooth curves require modifications when estimating near end points of the support, both for practical and asymptotic reasons. The construction of such boundary kernels as solutions of a variational problem is a difficult exercise. For estimating the error variance of a ratio estimator, we suggest an ...

  13. Handling nonnormality and variance heterogeneity for quantitative sublethal toxicity tests.

    Science.gov (United States)

    Ritz, Christian; Van der Vliet, Leana

    2009-09-01

    The advantages of using regression-based techniques to derive endpoints from environmental toxicity data are clear, and slowly, this superior analytical technique is gaining acceptance. As use of regression-based analysis becomes more widespread, some of the associated nuances and potential problems come into sharper focus. Looking at data sets that cover a broad spectrum of standard test species, we noticed that some model fits to data failed to meet two key assumptions, variance homogeneity and normality, that are necessary for correct statistical analysis via regression-based techniques. Failure to meet these assumptions often is caused by reduced variance at the concentrations showing severe adverse effects. Although commonly used with linear regression analysis, transformation of the response variable only is not appropriate when fitting data using nonlinear regression techniques. Through analysis of sample data sets, including Lemna minor, Eisenia andrei (terrestrial earthworm), and algae, we show that both the so-called Box-Cox transformation and use of the Poisson distribution can help to correct variance heterogeneity and nonnormality and so allow nonlinear regression analysis to be implemented. Both the Box-Cox transformation and the Poisson distribution can be readily implemented into existing protocols for statistical analysis. By correcting for nonnormality and variance heterogeneity, these two statistical tools can be used to encourage the transition to regression-based analysis and the depreciation of less-desirable and less-flexible analytical techniques, such as linear interpolation.
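
    The Box-Cox correction mentioned above is available in standard libraries. A minimal sketch with fabricated toxicity-like data (the numbers are illustrative only, not from the study):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        # toy responses whose variance shrinks at the concentrations with severe effects
        y = np.concatenate([rng.normal(100.0, 15.0, 20), rng.normal(20.0, 2.0, 20)])
        y_bc, lam = stats.boxcox(y)   # lambda chosen by maximum likelihood
        print(f"estimated lambda = {lam:.2f}")

    The transformed response y_bc can then be used in the nonlinear regression fit, restoring approximate variance homogeneity.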

  14. Molecular variance of the Tunisian almond germplasm assessed by ...

    African Journals Online (AJOL)

    The genetic variance analysis of 82 almond (Prunus dulcis Mill.) genotypes was performed using ten genomic simple sequence repeats (SSRs). A total of 50 genotypes from Tunisia including local landraces identified while prospecting the different sites of Bizerte and Sidi Bouzid (Northern and central parts) which are the ...

  15. Starting design for use in variance exchange algorithms | Iwundu ...

    African Journals Online (AJOL)

    A new method of constructing the initial design for use in variance exchange algorithms is presented. The method chooses support points to go into the design as measures of distances of the support points from the centre of the geometric region and of permutation-invariant sets. The initial design is as close as possible to ...

  16. Decomposition of variance in terms of conditional means

    Directory of Open Access Journals (Sweden)

    Alessandro Figà Talamanca

    2013-05-01

    Two different sets of data are used to test an apparently new approach to the analysis of the variance of a numerical variable which depends on qualitative variables. We suggest that this approach be used to complement other existing techniques to study the interdependence of the variables involved. According to our method, the variance is expressed as a sum of orthogonal components, obtained as differences of conditional means, with respect to the qualitative characters. The resulting expression for the variance depends on the ordering in which the characters are considered. We suggest an algorithm which leads to an ordering which is deemed natural. The first set of data concerns the score achieved by a population of students on an entrance examination based on a multiple choice test with 30 questions. In this case the qualitative characters are dyadic and correspond to a correct or incorrect answer to each question. The second set of data concerns the delay to obtain the degree for a population of graduates of Italian universities. The variance in this case is analyzed with respect to a set of seven specific qualitative characters of the population studied (gender, previous education, working condition, parents' educational level, field of study, etc.).
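
    The decomposition described is a sequential application of the law of total variance. For a response Y and qualitative characters X_1, X_2, ..., X_k taken in a fixed order (a standard identity, not specific to the paper):

        \operatorname{Var}(Y) = \operatorname{Var}\big(E[Y \mid X_1]\big) + E\big[\operatorname{Var}\big(E[Y \mid X_1, X_2] \mid X_1\big)\big] + \cdots + E\big[\operatorname{Var}(Y \mid X_1, \ldots, X_k)\big],

    where each term is built from differences of conditional means and the terms are mutually orthogonal. Permuting the characters changes the individual components, which is why the ordering algorithm matters.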

  17. A Hold-out method to correct PCA variance inflation

    DEFF Research Database (Denmark)

    Garcia-Moreno, Pablo; Artes-Rodriguez, Antonio; Hansen, Lars Kai

    2012-01-01

    In this paper we analyze the problem of variance inflation experienced by the PCA algorithm when working in an ill-posed scenario where the dimensionality of the training set is larger than its sample size. In an earlier article a correction method based on a Leave-One-Out (LOO) procedure...

  18. Heterogeneity of variance and its implications on dairy cattle breeding

    African Journals Online (AJOL)

    Milk yield data (n = 12307) from 116 Holstein-Friesian herds were grouped into three production environments based on mean and standard deviation of herd 305-day milk yield and evaluated for within herd variation using univariate animal model procedures. Variance components were estimated by derivative free REML ...

  19. Effects of Diversification of Assets on Mean and Variance | Jayeola ...

    African Journals Online (AJOL)

    Diversification is a means of minimizing risk and maximizing returns by investing in a variety of assets in the portfolio. This paper is written to determine the effects of diversification of three types of assets: uncorrelated, perfectly correlated and perfectly negatively correlated assets, on mean and variance. To go about this, ...

  20. Perspective projection for variance pose face recognition from camera calibration

    Science.gov (United States)

    Fakhir, M. M.; Woo, W. L.; Chambers, J. A.; Dlay, S. S.

    2016-04-01

    Variance pose is an important research topic in face recognition. The alteration of distance parameters across variance pose face features is challenging. We provide a solution for this problem using perspective projection for variance pose face recognition. Our method infers intrinsic camera parameters of the image which enable the projection of the image plane into 3D. After this, face box tracking and centre of eyes detection can be identified using our novel technique to verify the virtual face feature measurements. The coordinate system of the perspective projection for face tracking allows the holistic dimensions of the face to be fixed in different orientations. The training of frontal images and the rest of the poses on the FERET database determines the distance from the centre of the eyes to the corner of the face box. The recognition system compares the gallery of images against different poses. The system initially utilises information on the position of both eyes, then focuses principally on the closest eye in order to gather data with greater reliability. Differentiation between the distances and positions of the right and left eyes is a unique feature of our work, with our algorithm outperforming other state-of-the-art algorithms, thus enabling stable measurement in variance pose for each individual.

  1. On zero variance Monte Carlo path-stretching schemes

    International Nuclear Information System (INIS)

    Lux, I.

    1983-01-01

    A zero variance path-stretching biasing scheme proposed for a special case by Dwivedi is derived in full generality. The procedure turns out to be the generalization of the exponential transform. It is shown that the biased game can be interpreted as an analog simulation procedure, thus saving some computational effort in comparison with the corresponding nonanalog game.
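
    For reference, the exponential transform that the scheme generalizes stretches the free-flight sampling in a preferred direction (standard form; the parameter names here are generic):

        \hat{f}(s) = \Sigma^{*} e^{-\Sigma^{*} s}, \qquad \Sigma^{*} = \Sigma_t (1 - p\,\mu), \qquad 0 < p < 1,

    where \mu is the direction cosine towards the region of interest, and each flight of length s carries the weight factor

        w = \frac{\Sigma_t e^{-\Sigma_t s}}{\Sigma^{*} e^{-\Sigma^{*} s}} = \frac{\Sigma_t}{\Sigma^{*}}\, e^{-p\,\mu\,\Sigma_t s}.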

  2. A mean-variance frontier in discrete and continuous time

    NARCIS (Netherlands)

    Bekker, Paul A.

    2004-01-01

    The paper presents a mean-variance frontier based on dynamic frictionless investment strategies in continuous time. The result applies to a finite number of risky assets whose price process is given by multivariate geometric Brownian motion with deterministically varying coefficients. The derivation

  3. The variance quadtree algorithm: use for spatial sampling design

    NARCIS (Netherlands)

    Minasny, B.; McBratney, A.B.; Walvoort, D.J.J.

    2007-01-01

    Spatial sampling schemes are mainly developed to determine sampling locations that can cover the variation of environmental properties in the area of interest. Here we proposed the variance quadtree algorithm for sampling in an area with prior information represented as ancillary or secondary

  4. Properties of realized variance under alternative sampling schemes

    NARCIS (Netherlands)

    Oomen, R.C.A.

    2006-01-01

    This paper investigates the statistical properties of the realized variance estimator in the presence of market microstructure noise. Different from the existing literature, the analysis relies on a pure jump process for high frequency security prices and explicitly distinguishes among alternative
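
    The realized variance estimator analyzed here is, in its basic form, the sum of squared log returns over the chosen sampling grid. A minimal sketch (the prices are made up; the paper's contribution concerns how microstructure noise distorts this quantity under alternative sampling schemes):

        import numpy as np

        def realized_variance(prices):
            """Sum of squared log returns over the sampling grid."""
            r = np.diff(np.log(np.asarray(prices, dtype=float)))
            return float(np.sum(r ** 2))

        prices = [100.0, 100.5, 99.8, 100.2, 100.9, 100.4]
        print(realized_variance(prices))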

  5. Variance component and heritability estimates of early growth traits ...

    African Journals Online (AJOL)

    as selection criteria for meat production in sheep (Anon, 1970; Olson et al., 1976; Lasslo et al., 1985; Badenhorst et al., 1991). If these traits are to be included in a breeding programme, accurate estimates of breeding values will be needed to optimize selection programmes. This requires a knowledge of variance and co-.

  6. Variances in consumers prices of selected food Items among ...

    African Journals Online (AJOL)

    The study focused on the determination of variances among consumer prices of rice (local white), beans (white) and garri (yellow) in the Watts, Okurikang and 8 Miles markets in the southern zone of Cross River State. A completely randomized design was used to test the research hypothesis. Comparing the consumer prices of rice, ...

  7. Age Differences in the Variance of Personality Characteristics

    Czech Academy of Sciences Publication Activity Database

    Mottus, R.; Allik, J.; Hřebíčková, Martina; Kööts-Ausmees, L.; Realo, A.

    2016-01-01

    Roč. 30, č. 1 (2016), s. 4-11 ISSN 0890-2070 R&D Projects: GA ČR GA13-25656S Institutional support: RVO:68081740 Keywords : variance * individual differences * personality * five-factor model Subject RIV: AN - Psychology Impact factor: 3.707, year: 2016

  8. Variance in exposed perturbations impairs retention of visuomotor adaptation.

    Science.gov (United States)

    Canaveral, Cesar Augusto; Danion, Frédéric; Berrigan, Félix; Bernier, Pierre-Michel

    2017-11-01

    Sensorimotor control requires an accurate estimate of the state of the body. The brain optimizes state estimation by combining sensory signals with predictions of the sensory consequences of motor commands using a forward model. Given that both sensory signals and predictions are uncertain (i.e., noisy), the brain optimally weights the relative reliance on each source of information during adaptation. In support, it is known that uncertainty in the sensory predictions influences the rate and generalization of visuomotor adaptation. We investigated whether uncertainty in the sensory predictions affects the retention of a new visuomotor relationship. This was done by exposing three separate groups to a visuomotor rotation whose mean was common at 15° counterclockwise but whose variance around the mean differed (i.e., SD of 0°, 3.2°, or 4.5°). Retention was assessed by measuring the persistence of the adapted behavior in a no-vision phase. Results revealed that mean reach direction late in adaptation was similar across groups, suggesting it depended mainly on the mean of exposed rotations and was robust to differences in variance. However, retention differed across groups, with higher levels of variance being associated with a more rapid reversion toward nonadapted behavior. A control experiment ruled out the possibility that differences in retention were accounted for by differences in success rates. Exposure to variable rotations may have increased the uncertainty in sensory predictions, making the adapted forward model more labile and susceptible to change or decay. NEW & NOTEWORTHY The brain predicts the sensory consequences of motor commands through a forward model. These predictions are subject to uncertainty. We use visuomotor adaptation and modulate uncertainty in the sensory predictions by manipulating the variance in exposed rotations. Results reveal that variance does not influence the final extent of adaptation but selectively impairs the retention of

  9. Variance risk premia in CO_2 markets: A political perspective

    International Nuclear Information System (INIS)

    Reckling, Dennis

    2016-01-01

    The European Commission discusses the change of free allocation plans to guarantee a stable market equilibrium. Selling over-allocated contracts effectively depreciates prices and negates the effect intended by the regulator to establish a stable price mechanism for CO_2 assets. Our paper investigates mispricing and allocation issues by quantitatively analyzing variance risk premia of CO_2 markets over the course of changing regimes (Phase I-III) for three different assets (European Union Allowances, Certified Emissions Reductions and European Reduction Units). The research paper gives recommendations to regulatory bodies in order to most effectively cap the overall carbon dioxide emissions. The analysis of an enriched dataset, comprising not only of additional CO_2 assets, but also containing data from the European Energy Exchange, shows that variance risk premia are equal to a sample average of 0.69 for European Union Allowances (EUA), 0.17 for Certified Emissions Reductions (CER) and 0.81 for European Reduction Units (ERU). We identify the existence of a common risk factor across different assets that justifies the presence of risk premia. Various policy implications with regards to gaining investors’ confidence in the market are being reviewed. Consequently, we recommend the implementation of a price collar approach to support stable prices for emission allowances. - Highlights: •Enriched dataset covering all three political phases of the CO_2 markets. •Clear policy implications for regulators to most effectively cap the overall CO_2 emissions pool. •Applying a cross-asset benchmark index for variance beta estimation. •CER contracts have been analyzed with respect to variance risk premia for the first time. •Increased forecasting accuracy for CO_2 asset returns by using variance risk premia.

  10. Electrochemical corrosion behavior of carbon steel with bulk coating holidays

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    With epoxy coal tar as the coating material, the electrochemical corrosion behavior of Q235 with different kinds of bulk coating holidays has been investigated with EIS (Electrochemical Impedance Spectroscopy) in a 3.5 vol% NaCl aqueous solution. The area ratio of bulk coating holiday to total coating area of the steel is 4.91%. The experimental results showed that, at the free corrosion potential, the corrosion of carbon steel with a disbonded coating holiday is heavier over time than that with a broken holiday or a disbonded & broken holiday. Moreover, the effectiveness of Cathodic Protection (CP) of carbon steel with a broken holiday is better than that with a disbonded holiday or a disbonded & broken holiday at a CP potential of -850 mV (vs CSE). Further analysis indicated that the two main reasons for corrosion are electrolyte solution slowly penetrating the coating, and crevice corrosion at the steel/coating interface near holidays. The ratio of the impedance amplitude (Z) at a given frequency to that at the minimum frequency is defined as the K value. The change rate of K with frequency is related to the type of coating holiday.

  11. Stochastic variational approach to minimum uncertainty states

    Energy Technology Data Exchange (ETDEWEB)

    Illuminati, F.; Viola, L. [Dipartimento di Fisica, Padova Univ. (Italy)

    1995-05-21

    We introduce a new variational characterization of Gaussian diffusion processes as minimum uncertainty states. We then define a variational method constrained by kinematics of diffusions and Schroedinger dynamics to seek states of local minimum uncertainty for general non-harmonic potentials. (author)

  12. Zero forcing parameters and minimum rank problems

    NARCIS (Netherlands)

    Barioli, F.; Barrett, W.; Fallat, S.M.; Hall, H.T.; Hogben, L.; Shader, B.L.; Driessche, van den P.; Holst, van der H.

    2010-01-01

    The zero forcing number Z(G), which is the minimum number of vertices in a zero forcing set of a graph G, is used to study the maximum nullity/minimum rank of the family of symmetric matrices described by G. It is shown that for a connected graph of order at least two, no vertex is in every zero

  13. 30 CFR 281.30 - Minimum royalty.

    Science.gov (United States)

    2010-07-01

    ... 30 Mineral Resources 2 2010-07-01 2010-07-01 false Minimum royalty. 281.30 Section 281.30 Mineral Resources MINERALS MANAGEMENT SERVICE, DEPARTMENT OF THE INTERIOR OFFSHORE LEASING OF MINERALS OTHER THAN OIL, GAS, AND SULPHUR IN THE OUTER CONTINENTAL SHELF Financial Considerations § 281.30 Minimum royalty...

  14. New Minimum Wage Research: A Symposium.

    Science.gov (United States)

    Ehrenberg, Ronald G.; And Others

    1992-01-01

    Includes "Introduction" (Ehrenberg); "Effect of the Minimum Wage [MW] on the Fast-Food Industry" (Katz, Krueger); "Using Regional Variation in Wages to Measure Effects of the Federal MW" (Card); "Do MWs Reduce Employment?" (Card); "Employment Effects of Minimum and Subminimum Wages" (Neumark,…

  15. Minimum Wage Effects in the Longer Run

    Science.gov (United States)

    Neumark, David; Nizalova, Olena

    2007-01-01

    Exposure to minimum wages at young ages could lead to adverse longer-run effects via decreased labor market experience and tenure, and diminished education and training, while beneficial longer-run effects could arise if minimum wages increase skill acquisition. Evidence suggests that as individuals reach their late 20s, they earn less the longer…

  16. Bulk viscosity in holographic Lifshitz hydrodynamics

    International Nuclear Information System (INIS)

    Hoyos, Carlos; Kim, Bom Soo; Oz, Yaron

    2014-01-01

    We compute the bulk viscosity in holographic models dual to theories with Lifshitz scaling and/or hyperscaling violation, using a generalization of the bulk viscosity formula derived in arXiv:1103.1657 from the null focusing equation. We find that only a class of models with massive vector fields are truly Lifshitz scale invariant, and have a vanishing bulk viscosity. For other holographic models with scalars and/or massless vector fields we find a universal formula in terms of the dynamical exponent and the hyperscaling violation exponent

  17. Bulk and shear viscosities of the gluon plasma in a quasiparticle description

    CERN Document Server

    Bluhm, M; Redlich, K

    2011-01-01

    Bulk and shear viscosities of deconfined gluonic matter are investigated within an effective kinetic theory by describing the strongly interacting medium phenomenologically in terms of quasiparticle excitations with medium-dependent self-energies. In this approach, local conservation of energy and momentum follows from a Boltzmann-Vlasov type kinetic equation and guarantees thermodynamic self-consistency. We show that the resulting transport coefficients reproduce the parametric dependencies on temperature and coupling obtained in perturbative QCD at large temperatures and small running coupling. The extrapolation into the non-perturbative regime results in a decreasing specific shear viscosity with decreasing temperature, exhibiting a minimum in the vicinity of the deconfinement transition temperature, while the specific bulk viscosity is sizeable in this region falling off rapidly with increasing temperature. The temperature dependence of specific bulk and shear viscosities found within this quasiparticle d...

  18. Comparison of wet-only and bulk deposition at Chiang Mai (Thailand) based on rainwater chemical composition

    Science.gov (United States)

    Chantara, Somporn; Chunsuk, Nawarut

    The chemical composition of 122 rainwater samples collected daily from bulk and wet-only collectors in a sub-urban area of Chiang Mai (Thailand) during August 2005-July 2006 has been analyzed and compared to assess the usability of a cheaper and less complex bulk collector over a sophisticated wet-only collector. Statistical analysis was performed on log-transformed daily rain amount and depositions of major ions for each collector type. The analysis of variance (ANOVA) test revealed that the amount of rainfall collected from a rain gauge, bulk collector and wet-only collector showed no significant difference (α = 0.05). The volume-weighted mean electro-conductivity (EC) values of bulk and wet-only samples were 0.69 and 0.65 mS/m, respectively. The average pH of the samples from both types of collectors was 5.5. Scatter plots between log-transformed depositions of specific ions obtained from bulk and wet-only samples showed high correlation (r > 0.91). Means of log-transformed bulk deposition were 14% (Na+ and K+), 13% (Mg2+), 7% (Ca2+), 4% (NO3-), 3% (SO42- and Cl-) and 2% (NH4+) higher than those of wet-only deposition. However, multivariate analysis of variance (MANOVA) revealed that ion depositions obtained from bulk and wet-only collectors were not significantly different (α = 0.05). Therefore, it was concluded that a bulk collector can be used instead of a wet-only collector in a sub-urban area.
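
    The statistical comparison described (ANOVA on log-transformed depositions) can be reproduced in outline as follows; the deposition values below are simulated stand-ins, not the study's data:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        # hypothetical daily ion depositions for the two collector types
        bulk_dep = rng.lognormal(mean=1.00, sigma=0.6, size=122)
        wet_dep = rng.lognormal(mean=0.97, sigma=0.6, size=122)

        # one-way ANOVA on log-transformed depositions, as in the study design
        f_stat, p_value = stats.f_oneway(np.log10(bulk_dep), np.log10(wet_dep))
        print(f"F = {f_stat:.2f}, p = {p_value:.3f}")  # significant only if p < 0.05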

  19. Bulk Leisure--Problem or Blessing?

    Science.gov (United States)

    Beland, Robert M.

    1983-01-01

    With an increasing number of the nation's work force experiencing "bulk leisure" time because of new work scheduling procedures, parks and recreation offices are encouraged to examine their program scheduling and content. (JM)

  20. Technical specifications for the bulk shielding reactor

    International Nuclear Information System (INIS)

    1986-05-01

    This report provides information concerning the technical specifications for the Bulk Shielding Reactor. Areas covered include: safety limits and limiting safety settings; limiting conditions for operation; surveillance requirements; design features; administrative controls; and monitoring of airborne effluents. 10 refs

  1. Force measurements for levitated bulk superconductors

    International Nuclear Information System (INIS)

    Tachi, Y.; Sawa, K.; Iwasa, Y.; Nagashima, K.; Otani, T.; Miyamoto, T.; Tomita, M.; Murakami, M.

    2000-01-01

    We have developed a force measurement system which enables us to directly measure the levitation force of levitated bulk superconductors. Experimental data of the levitation forces were compared with the results of numerical simulation based on the levitation model that we deduced in our previous paper. They were in fairly good agreement, which confirms that our levitation model can be applied to the force analyses for levitated bulk superconductors. (author)

  2. Force measurements for levitated bulk superconductors

    Energy Technology Data Exchange (ETDEWEB)

    Tachi, Y. [Department of Electrical Engineering, Keio University, 3-14-1 Hiyoshi, Kohoku-ku, Yokohama (Japan); ISTEC, Superconductivity Research Laboratory, 1-16-25 Shibaura, Minato-ku, Tokyo (Japan). E-mail: tachi at istec.or.jp; Uemura, N. [Department of Electrical Engineering, Keio University, 3-14-1 Hiyoshi, Kohoku-ku, Yokohama (Japan); ISTEC, Superconductivity Research Laboratory, 1-16-25 Shibaura, Minato-ku, Tokyo (Japan); Sawa, K. [Department of Electrical Engineering, Keio University, 3-14-1 Hiyoshi, Kohoku-ku, Yokohama (Japan); Iwasa, Y. [Francis Bitter Magnet Laboratory, Massachusetts Institute of Technology, Cambridge, MA (United States); Nagashima, K. [Railway Technical Research Institute, Hikari-cho, Kokubunji-shi, Tokyo (Japan); Otani, T.; Miyamoto, T.; Tomita, M.; Murakami, M. [ISTEC, Superconductivity Research Laboratory, 1-16-25 Shibaura, Minato-ku, Tokyo (Japan)

    2000-06-01

    We have developed a force measurement system which enables us to directly measure the levitation force of levitated bulk superconductors. Experimental data of the levitation forces were compared with the results of numerical simulation based on the levitation model that we deduced in our previous paper. They were in fairly good agreement, which confirms that our levitation model can be applied to the force analyses for levitated bulk superconductors. (author)

  3. THERMAL HYDRAULIC SAFETY ANALYSIS OF BULK SHIELDING KARTINI REACTOR

    Directory of Open Access Journals (Sweden)

    Azizul Khakim

    2015-10-01

    Bulk shielding is an integrated facility of the Kartini reactor which is used for temporary storage of spent fuel. The facility is one of the structures, systems and components (SSCs) important to safety. Among the safety functions of fuel handling and storage are to prevent uncontrollable criticality accidents and to limit any increase in fuel temperature. Safety analyses should, at least, cover neutronic and thermal hydraulic calculations of the bulk shielding. The thermal hydraulic analyses are intended to ensure that heat removal and the cooling of the spent fuel take place adequately, with no heat accumulation that would challenge fuel integrity. The validated code PARET/ANL was used to analyse cooling by natural convection. The calculation results show that the natural convection cooling mode is adequate for removing the residual heat without causing a significant increase in fuel temperature. Keywords: bulk shielding, spent fuel, natural convection, PARET.

  4. Adaptation to Variance of Stimuli in Drosophila Larva Navigation

    Science.gov (United States)

    Wolk, Jason; Gepner, Ruben; Gershow, Marc

    In order to respond to stimuli that vary over orders of magnitude while also being capable of sensing very small changes, neural systems must be capable of rapidly adapting to the variance of stimuli. We study this adaptation in Drosophila larvae responding to varying visual signals and optogenetically induced fictitious odors, using an infrared-illuminated arena and custom computer vision software. Larval navigational decisions (when to turn) are modeled as the output of a linear-nonlinear Poisson process. The development of the nonlinear turn rate in response to changes in variance is tracked using an adaptive point process filter, determining the rate of adaptation to different stimulus profiles. Supported by NIH Grant 1DP2EB022359 and NSF Grant PHY-1455015.

  5. Compounding approach for univariate time series with nonstationary variances

    Science.gov (United States)

    Schäfer, Rudi; Barkhofen, Sonja; Guhr, Thomas; Stöckmann, Hans-Jürgen; Kuhl, Ulrich

    2015-12-01

    A defining feature of nonstationary systems is the time dependence of their statistical parameters. Measured time series may exhibit Gaussian statistics on short time horizons, due to the central limit theorem. The sample statistics for long time horizons, however, averages over the time-dependent variances. To model the long-term statistical behavior, we compound the local distribution with the distribution of its parameters. Here, we consider two concrete, but diverse, examples of such nonstationary systems: the turbulent air flow of a fan and a time series of foreign exchange rates. Our main focus is to empirically determine the appropriate parameter distribution for the compounding approach. To this end, we extract the relevant time scales by decomposing the time signals into windows and determine the distribution function of the thus obtained local variances.
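
    The key empirical step, extracting the distribution of local variances from windowed segments of the signal, can be sketched as follows (the synthetic series below merely mimics a slowly varying variance; it is not the fan or foreign exchange data):

        import numpy as np

        def local_variances(x, window):
            """Variances of non-overlapping windows of a 1-D time series."""
            x = np.asarray(x, dtype=float)
            n = len(x) // window
            return x[: n * window].reshape(n, window).var(axis=1)

        rng = np.random.default_rng(0)
        sigma = 1.0 + 0.5 * np.sin(np.linspace(0.0, 10.0, 10_000))
        x = rng.normal(0.0, sigma)            # nonstationary Gaussian series
        v = local_variances(x, window=100)    # sample of the local-variance distribution

    Fitting a parametric distribution to v and integrating it against the local Gaussian then yields the compounded long-horizon statistics.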

  6. Variance inflation in high dimensional Support Vector Machines

    DEFF Research Database (Denmark)

    Abrahamsen, Trine Julie; Hansen, Lars Kai

    2013-01-01

    Many important machine learning models, supervised and unsupervised, are based on simple Euclidean distance or orthogonal projection in a high dimensional feature space. When estimating such models from small training sets we face the problem that the span of the training data set input vectors ... follow a different probability law with less variance. While the problem and basic means to reconstruct and deflate are well understood in unsupervised learning, the case of supervised learning is less well understood. We here investigate the effect of variance inflation in supervised learning including the case of Support Vector Machines (SVMs) and we propose a non-parametric scheme to restore proper generalizability. We illustrate the algorithm and its ability to restore performance on a wide range of benchmark data sets.

  7. Robust LOD scores for variance component-based linkage analysis.

    Science.gov (United States)

    Blangero, J; Williams, J T; Almasy, L

    2000-01-01

    The variance component method is now widely used for linkage analysis of quantitative traits. Although this approach offers many advantages, the importance of the underlying assumption of multivariate normality of the trait distribution within pedigrees has not been studied extensively. Simulation studies have shown that traits with leptokurtic distributions yield linkage test statistics that exhibit excessive Type I error when analyzed naively. We derive analytical formulae relating the deviation from the expected asymptotic distribution of the lod score to the kurtosis and total heritability of the quantitative trait. A simple correction constant yields a robust lod score for any deviation from normality and for any pedigree structure, and effectively eliminates the problem of inflated Type I error due to misspecification of the underlying probability model in variance component-based linkage analysis.

  8. Response variance in functional maps: neural darwinism revisited.

    Directory of Open Access Journals (Sweden)

    Hirokazu Takahashi

    The mechanisms by which functional maps and map plasticity contribute to cortical computation remain controversial. Recent studies have revisited the theory of neural Darwinism to interpret the learning-induced map plasticity and neuronal heterogeneity observed in the cortex. Here, we hypothesize that the Darwinian principle provides a substrate to explain the relationship between neuron heterogeneity and cortical functional maps. We demonstrate in the rat auditory cortex that the degree of response variance is closely correlated with the size of its representational area. Further, we show that the response variance within a given population is altered through training. These results suggest that larger representational areas may help to accommodate heterogeneous populations of neurons. Thus, functional maps and map plasticity are likely to play essential roles in Darwinian computation, serving as effective, but not absolutely necessary, structures to generate diverse response properties within a neural population.

  9. Response variance in functional maps: neural darwinism revisited.

    Science.gov (United States)

    Takahashi, Hirokazu; Yokota, Ryo; Kanzaki, Ryohei

    2013-01-01

    The mechanisms by which functional maps and map plasticity contribute to cortical computation remain controversial. Recent studies have revisited the theory of neural Darwinism to interpret the learning-induced map plasticity and neuronal heterogeneity observed in the cortex. Here, we hypothesize that the Darwinian principle provides a substrate to explain the relationship between neuron heterogeneity and cortical functional maps. We demonstrate in the rat auditory cortex that the degree of response variance is closely correlated with the size of its representational area. Further, we show that the response variance within a given population is altered through training. These results suggest that larger representational areas may help to accommodate heterogeneous populations of neurons. Thus, functional maps and map plasticity are likely to play essential roles in Darwinian computation, serving as effective, but not absolutely necessary, structures to generate diverse response properties within a neural population.

  10. Replica approach to mean-variance portfolio optimization

    Science.gov (United States)

    Varga-Haszonits, Istvan; Caccioli, Fabio; Kondor, Imre

    2016-12-01

    We consider the problem of mean-variance portfolio optimization for a generic covariance matrix subject to the budget constraint and the constraint for the expected return, with the application of the replica method borrowed from the statistical physics of disordered systems. We find that the replica symmetry of the solution does not need to be assumed, but emerges as the unique solution of the optimization problem. We also check the stability of this solution and find that the eigenvalues of the Hessian are positive for r = N/T < 1, where N is the number of assets and T the length of the time series used to estimate the covariance matrix. The optimal in-sample variance is found to vanish at the critical point, inversely proportionally to the divergent estimation error.
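
    The object being optimized is the classical minimum-variance portfolio; for a given invertible covariance matrix its closed form under the budget constraint is standard and can be computed directly. The toy covariance below is invented for the example; the paper's point is how such in-sample solutions degrade as r = N/T grows:

        import numpy as np

        def gmv_weights(cov):
            """Global minimum-variance weights: w = C^{-1} 1 / (1' C^{-1} 1)."""
            ones = np.ones(cov.shape[0])
            w = np.linalg.solve(cov, ones)   # avoids forming the explicit inverse
            return w / w.sum()

        cov = np.array([[0.04, 0.01, 0.00],
                        [0.01, 0.09, 0.02],
                        [0.00, 0.02, 0.16]])
        w = gmv_weights(cov)
        print(w, w @ cov @ w)   # weights and the attained in-sample variance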

  11. Variance reduction methods applied to deep-penetration problems

    International Nuclear Information System (INIS)

    Cramer, S.N.

    1984-01-01

    All deep-penetration Monte Carlo calculations require variance reduction methods. Before beginning with a detailed approach to these methods, several general comments concerning deep-penetration calculations by Monte Carlo, the associated variance reduction, and the similarities and differences of these with regard to non-deep-penetration problems will be addressed. The experienced practitioner of Monte Carlo methods will easily find exceptions to any of these generalities, but it is felt that these comments will aid the novice in understanding some of the basic ideas and nomenclature. Also, from a practical point of view, the discussions and developments presented are oriented toward use of the computer codes which are presented in segments of this Monte Carlo course

  12. Spatial analysis based on variance of moving window averages

    OpenAIRE

    Wu, B M; Subbarao, K V; Ferrandino, F J; Hao, J J

    2006-01-01

    A new method for analysing spatial patterns was designed based on the variance of moving window averages (VMWA), which can be directly calculated in geographical information systems or a spreadsheet program (e.g. MS Excel). Different types of artificial data were generated to test the method. Regardless of data types, the VMWA method correctly determined the mean cluster sizes. This method was also employed to assess spatial patterns in historical plant disease survey data encompassing both a...
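
    The statistic itself is straightforward to compute in a spreadsheet or, as below, in a few lines of code; the transect data are fabricated for illustration:

        import numpy as np

        def vmwa(field, window):
            """Variance of moving-window averages along a 1-D transect."""
            kernel = np.ones(window) / window
            mwa = np.convolve(np.asarray(field, dtype=float), kernel, mode="valid")
            return float(mwa.var())

        rng = np.random.default_rng(2)
        transect = rng.poisson(lam=2.0, size=500)     # e.g. disease counts per quadrat
        print([vmwa(transect, w) for w in (2, 5, 10, 20)])

    Scanning the window size and observing where the statistic changes behavior is what allows the mean cluster size to be read off, per the description above.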

  13. A mean-variance frontier in discrete and continuous time

    OpenAIRE

    Bekker, Paul A.

    2004-01-01

    The paper presents a mean-variance frontier based on dynamic frictionless investment strategies in continuous time. The result applies to a finite number of risky assets whose price process is given by multivariate geometric Brownian motion with deterministically varying coefficients. The derivation is based on the solution for the frontier in discrete time. Using the same multiperiod framework as Li and Ng (2000), I provide an alternative derivation and an alternative formulation of the solu...

  14. Efficient Scores, Variance Decompositions and Monte Carlo Swindles.

    Science.gov (United States)

    1984-08-28

    ... Then a version of Pythagoras' theorem gives the variance decomposition (6.1): var_P0(T) = var_P0(S) + var_P0(T - S). One way to see this is to note ... complete sufficient statistics for (β, σ), and that the standardized residuals σ^{-1}(y - Xβ) are ancillary. Basu's sufficiency-ancillarity theorem ...

  15. Variance-based sensitivity analysis for wastewater treatment plant modelling.

    Science.gov (United States)

    Cosenza, Alida; Mannina, Giorgio; Vanrolleghem, Peter A; Neumann, Marc B

    2014-02-01

    Global sensitivity analysis (GSA) is a valuable tool to support the use of mathematical models that characterise technical or natural systems. In the field of wastewater modelling, most of the recent applications of GSA use either regression-based methods, which require close to linear relationships between the model outputs and model factors, or screening methods, which only yield qualitative results. However, due to the characteristics of membrane bioreactors (MBR) (non-linear kinetics, complexity, etc.) there is an interest to adequately quantify the effects of non-linearity and interactions. This can be achieved with variance-based sensitivity analysis methods. In this paper, the Extended Fourier Amplitude Sensitivity Testing (Extended-FAST) method is applied to an integrated activated sludge model (ASM2d) for an MBR system including microbial product formation and physical separation processes. Twenty-one model outputs located throughout the different sections of the bioreactor and 79 model factors are considered. Significant interactions among the model factors are found. Contrary to previous GSA studies for ASM models, we find the relationship between variables and factors to be non-linear and non-additive. By analysing the pattern of the variance decomposition along the plant, the model factors having the highest variance contributions were identified. This study demonstrates the usefulness of variance-based methods in membrane bioreactor modelling where, due to the presence of membranes and different operating conditions than those typically found in conventional activated sludge systems, several highly non-linear effects are present. Further, the obtained results highlight the relevant role played by the modelling approach for MBR taking into account simultaneously biological and physical processes.
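
    The quantities estimated by Extended-FAST are the variance-based sensitivity indices; in standard notation (not specific to this paper), the first-order and total-order indices of factor X_i are

        S_i = \frac{\operatorname{Var}\big(E[Y \mid X_i]\big)}{\operatorname{Var}(Y)}, \qquad
        S_{Ti} = 1 - \frac{\operatorname{Var}\big(E[Y \mid X_{\sim i}]\big)}{\operatorname{Var}(Y)},

    where X_{\sim i} denotes all factors except X_i; a gap between S_{Ti} and S_i signals the interactions and non-additivity reported for the MBR model.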

  16. The mean and variance of phylogenetic diversity under rarefaction

    OpenAIRE

    Nipperess, David A.; Matsen, Frederick A.

    2013-01-01

    Phylogenetic diversity (PD) depends on sampling intensity, which complicates the comparison of PD between samples of different depth. One approach to dealing with differing sample depth for a given diversity statistic is to rarefy, which means to take a random subset of a given size of the original sample. Exact analytical formulae for the mean and variance of species richness under rarefaction have existed for some time but no such solution exists for PD. We have derived exact formulae for t...

  17. On mean reward variance in semi-Markov processes

    Czech Academy of Sciences Publication Activity Database

    Sladký, Karel

    2005-01-01

    Roč. 62, č. 3 (2005), s. 387-397 ISSN 1432-2994 R&D Projects: GA ČR(CZ) GA402/05/0115; GA ČR(CZ) GA402/04/1294 Institutional research plan: CEZ:AV0Z10750506 Keywords : Markov and semi-Markov processes with rewards * variance of cumulative reward * asymptotic behaviour Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.259, year: 2005

  18. Mean-Variance Analysis in a Multiperiod Setting

    OpenAIRE

    Frauendorfer, Karl; Siede, Heiko

    1997-01-01

    Similar to the classical Markowitz approach it is possible to apply a mean-variance criterion to a multiperiod setting to obtain efficient portfolios. To represent the stochastic dynamic characteristics necessary for modelling returns a process of asset returns is discretized with respect to time and space and summarized in a scenario tree. The resulting optimization problem is solved by means of stochastic multistage programming. The optimal solutions show equivalent structural properties as...

  19. Analytic solution to variance optimization with no short positions

    Science.gov (United States)

    Kondor, Imre; Papp, Gábor; Caccioli, Fabio

    2017-12-01

    We consider the variance portfolio optimization problem with a ban on short selling. We provide an analytical solution by means of the replica method for the case of a portfolio of independent, but not identically distributed, assets. We study the behavior of the solution as a function of the ratio r between the number N of assets and the length T of the time series of returns used to estimate risk. The no-short-selling constraint acts as an asymmetric ...

  20. Estimating Predictive Variance for Statistical Gas Distribution Modelling

    International Nuclear Information System (INIS)

    Lilienthal, Achim J.; Asadi, Sahar; Reggente, Matteo

    2009-01-01

    Recent publications in statistical gas distribution modelling have proposed algorithms that model mean and variance of a distribution. This paper argues that estimating the predictive concentration variance entails not only a gradual improvement but is rather a significant step to advance the field. This is, first, since the models much better fit the particular structure of gas distributions, which exhibit strong fluctuations with considerable spatial variations as a result of the intermittent character of gas dispersal. Second, because estimating the predictive variance allows to evaluate the model quality in terms of the data likelihood. This offers a solution to the problem of ground truth evaluation, which has always been a critical issue for gas distribution modelling. It also enables solid comparisons of different modelling approaches, and provides the means to learn meta parameters of the model, to determine when the model should be updated or re-initialised, or to suggest new measurement locations based on the current model. We also point out directions of related ongoing or potential future research work.

  1. Improved estimation of the variance in Monte Carlo criticality calculations

    International Nuclear Information System (INIS)

    Hoogenboom, J. Eduard

    2008-01-01

    Results for the effective multiplication factor in Monte Carlo criticality calculations are often obtained from averages over a number of cycles or batches after convergence of the fission source distribution to the fundamental mode. Then the standard deviation of the effective multiplication factor is also obtained from the k_eff results over these cycles. As the number of cycles will be rather small, the estimate of the variance or standard deviation in k_eff will not be very reliable, certainly not for the first few cycles after source convergence. In this paper the statistics for k_eff are based on the generation of new fission neutron weights during each history in a cycle. It is shown that this gives much more reliable results for the standard deviation even after a small number of cycles. Also attention is paid to the variance of the variance (VoV) and the standard deviation of the standard deviation. A derivation is given of how to obtain an unbiased estimate for the VoV, even for a small number of samples. (authors)
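
    For orientation, in the classical i.i.d. setting the variance of the unbiased sample variance s^2 of n observations is

        \operatorname{Var}(s^{2}) = \frac{1}{n}\left(\mu_{4} - \frac{n-3}{n-1}\,\sigma^{4}\right),

    where \mu_4 is the fourth central moment. Estimating it therefore requires a fourth-moment estimate, which is exactly why a VoV computed from only a handful of cycles is untrustworthy; the correlated-cycle k_eff setting treated in the paper makes this harder still.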

  2. Improved estimation of the variance in Monte Carlo criticality calculations

    Energy Technology Data Exchange (ETDEWEB)

    Hoogenboom, J. Eduard [Delft University of Technology, Delft (Netherlands)

    2008-07-01

    Results for the effective multiplication factor in Monte Carlo criticality calculations are often obtained from averages over a number of cycles or batches after convergence of the fission source distribution to the fundamental mode. Then the standard deviation of the effective multiplication factor is also obtained from the k{sub eff} results over these cycles. As the number of cycles will be rather small, the estimate of the variance or standard deviation in k{sub eff} will not be very reliable, certainly not for the first few cycles after source convergence. In this paper the statistics for k{sub eff} are based on the generation of new fission neutron weights during each history in a cycle. It is shown that this gives much more reliable results for the standard deviation even after a small number of cycles. Also attention is paid to the variance of the variance (VoV) and the standard deviation of the standard deviation. A derivation is given of how to obtain an unbiased estimate for the VoV, even for a small number of samples. (authors)

  3. A general transform for variance reduction in Monte Carlo simulations

    International Nuclear Information System (INIS)

    Becker, T.L.; Larsen, E.W.

    2011-01-01

    This paper describes a general transform to reduce the variance of the Monte Carlo estimate of some desired solution, such as flux or biological dose. This transform implicitly includes many standard variance reduction techniques, including source biasing, collision biasing, the exponential transform for path-length stretching, and weight windows. Rather than optimizing each of these techniques separately or choosing semi-empirical biasing parameters based on the experience of a seasoned Monte Carlo practitioner, this General Transform unites all these variance reduction techniques to achieve one objective: a distribution of Monte Carlo particles that attempts to optimize the desired solution. Specifically, this transform allows Monte Carlo particles to be distributed according to the user's specification by using information obtained from a computationally inexpensive deterministic simulation of the problem. For this reason, we consider the General Transform to be a hybrid Monte Carlo/deterministic method. The numerical results confirm that the General Transform distributes particles according to the user-specified distribution and generally provides reasonable results for shielding applications. (author)
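
    The abstract does not give the transform itself, but its goal, redistributing particles so that more of them score, can be illustrated with plain importance sampling on a toy rare-event problem. Everything below (the target probability, the shifted sampling density) is a hypothetical stand-in, not the authors' General Transform.

      import numpy as np

      rng = np.random.default_rng(1)
      n = 100_000

      # Toy deep-penetration analogue: estimate p = P(X > 4), X ~ N(0, 1).
      # Analog sampling: almost no particles score, so the variance is huge.
      x = rng.normal(0.0, 1.0, n)
      analog = (x > 4.0).astype(float)

      # Biased sampling (in the spirit of source biasing / the exponential
      # transform): sample from N(4, 1) and carry the likelihood-ratio weight
      # pdf_N(0,1)(y) / pdf_N(4,1)(y) = exp(8 - 4y).
      y = rng.normal(4.0, 1.0, n)
      biased = (y > 4.0) * np.exp(8.0 - 4.0 * y)

      for name, est in (("analog", analog), ("biased", biased)):
          se = est.std(ddof=1) / np.sqrt(n)
          print(f"{name}: mean = {est.mean():.3e}, std err = {se:.3e}")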

  4. Modality-Driven Classification and Visualization of Ensemble Variance

    Energy Technology Data Exchange (ETDEWEB)

    Bensema, Kevin; Gosink, Luke; Obermaier, Harald; Joy, Kenneth I.

    2016-10-01

    Advances in computational power now enable domain scientists to address conceptual and parametric uncertainty by running simulations multiple times in order to sufficiently sample the uncertain input space. While this approach helps address conceptual and parametric uncertainties, the ensemble datasets produced by this technique present a special challenge to visualization researchers as the ensemble dataset records a distribution of possible values for each location in the domain. Contemporary visualization approaches that rely solely on summary statistics (e.g., mean and variance) cannot convey the detailed information encoded in ensemble distributions that are paramount to ensemble analysis; summary statistics provide no information about modality classification and modality persistence. To address this problem, we propose a novel technique that classifies high-variance locations based on the modality of the distribution of ensemble predictions. Additionally, we develop a set of confidence metrics to inform the end-user of the quality of fit between the distribution at a given location and its assigned class. We apply a similar method to time-varying ensembles to illustrate the relationship between peak variance and bimodal or multimodal behavior. These classification schemes enable a deeper understanding of the behavior of the ensemble members by distinguishing between distributions that can be described by a single tendency and distributions which reflect divergent trends in the ensemble.
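
    As a rough illustration of modality classification, the sketch below counts local maxima of a kernel density estimate for a unimodal and a bimodal sample; it is a minimal stand-in, not the authors' classifier or their confidence metrics.

      import numpy as np
      from scipy.stats import gaussian_kde

      def count_modes(samples, grid_size=256):
          """Count local maxima of a Gaussian kernel density estimate."""
          grid = np.linspace(samples.min(), samples.max(), grid_size)
          density = gaussian_kde(samples)(grid)
          peaks = (density[1:-1] > density[:-2]) & (density[1:-1] > density[2:])
          return int(peaks.sum())

      rng = np.random.default_rng(2)
      unimodal = rng.normal(0.0, 1.0, 500)
      bimodal = np.concatenate([rng.normal(-3, 1, 250), rng.normal(3, 1, 250)])
      print(count_modes(unimodal), count_modes(bimodal))  # typically 1 and 2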

  5. Minimum emittance in TBA and MBA lattices

    Science.gov (United States)

    Xu, Gang; Peng, Yue-Mei

    2015-03-01

    For reaching a small emittance in a modern light source, triple bend achromats (TBA), theoretical minimum emittance (TME) and even multiple bend achromats (MBA) have been considered. This paper derives theoretically the necessary condition for achieving minimum emittance in TBA and MBA lattices, in which the bending angle of the inner dipoles is a factor of 3^(1/3) larger than that of the outer dipoles. We also calculate, with a purely mathematical method, the conditions attaining the minimum emittance of TBA related to the phase advance in some special cases. These results may give some direction to lattice design.

  6. Minimum emittance in TBA and MBA lattices

    International Nuclear Information System (INIS)

    Xu Gang; Peng Yuemei

    2015-01-01

    For reaching a small emittance in a modern light source, triple bend achromats (TBA), theoretical minimum emittance (TME) and even multiple bend achromats (MBA) have been considered. This paper derives theoretically the necessary condition for achieving minimum emittance in TBA and MBA lattices, in which the bending angle of the inner dipoles is a factor of 3^(1/3) larger than that of the outer dipoles. We also calculate, with a purely mathematical method, the conditions attaining the minimum emittance of TBA related to the phase advance in some special cases. These results may give some direction to lattice design. (authors)

  7. Who Benefits from a Minimum Wage Increase?

    OpenAIRE

    John W. Lopresti; Kevin J. Mumford

    2015-01-01

    This paper addresses the question of how a minimum wage increase affects the wages of low-wage workers. Most studies assume that there is a simple mechanical increase in the wage for workers earning a wage between the old and the new minimum wage, with some studies allowing for spillovers to workers with wages just above this range. Rather than assume that the wages of these workers would have remained constant, this paper estimates how a minimum wage increase impacts a low-wage worker's wage...

  8. Wage inequality, minimum wage effects and spillovers

    OpenAIRE

    Stewart, Mark B.

    2011-01-01

    This paper investigates possible spillover effects of the UK minimum wage. The halt in the growth in inequality in the lower half of the wage distribution (as measured by the 50:10 percentile ratio) since the mid-1990s, in contrast to the continued inequality growth in the upper half of the distribution, suggests the possibility of a minimum wage effect and spillover effects on wages above the minimum. This paper analyses individual wage changes, using both a difference-in-differences estimat...

  9. A proxy for variance in dense matching over homogeneous terrain

    Science.gov (United States)

    Altena, Bas; Cockx, Liesbet; Goedemé, Toon

    2014-05-01

    Automation in photogrammetry and avionics has brought highly autonomous UAV mapping solutions to the market. These systems have great potential for geophysical research, owing to their mobility and ease of use. Flight planning can be done on site, and orientation parameters are estimated automatically. However, one major drawback remains: if contrast is lacking, stereoscopy fails. Consequently, topographic information cannot be obtained precisely through photogrammetry for areas with low contrast. Even though multi-view geometry adds robustness to the estimation, a precise product is still lacking. For the greater part, interpolation is applied over these regions, where the estimation is constrained by uniqueness, the epipolar line and smoothness. Consequently, digital surface models are generated with an estimate of the topography, without holes but also without an indication of its variance. Every dense matching algorithm is based on a similarity measure. Our methodology uses this property, building on the idea that if only noise is present, no correspondence can be detected. Therefore, the noise level is estimated with respect to the intensity signal of the topography (SNR), and this ratio serves as a quality indicator for the automatically generated product. To demonstrate this variance indicator, two case studies were elaborated. The first is situated at an open sand mine near the village of Kiezegem, Belgium. Two different UAV systems flew over the site. One system had automatic intensity regulation, which resulted in low contrast over the sandy interior of the mine. That dataset was used to identify the weak estimates of the topography and was compared with the data from the other UAV flight. In the second study a flight campaign with the X100 system was conducted along the coast near Wenduine, Belgium. The obtained images were processed through structure-from-motion software. Although the beach had a very low

  10. Estimation of noise-free variance to measure heterogeneity.

    Directory of Open Access Journals (Sweden)

    Tilo Winkler

    Full Text Available Variance is a statistical parameter used to characterize heterogeneity or variability in data sets. However, measurements commonly include noise, in the form of random errors superimposed on the actual value, which may substantially increase the variance compared to a noise-free data set. Our aim was to develop and validate a method to estimate noise-free spatial heterogeneity of pulmonary perfusion using dynamic positron emission tomography (PET) scans. On theoretical grounds, we demonstrate a linear relationship between the total variance of a data set derived from averages of n multiple measurements, and the reciprocal of n. Using multiple measurements with varying n yields estimates of this linear relationship, including the noise-free variance as the constant parameter. In PET images, n is proportional to the number of registered decay events, and the variance of the image is typically normalized by the square of its mean value, yielding a squared coefficient of variation (CV^2). The method was evaluated with a Jaszczak phantom as reference spatial heterogeneity (CV_r^2) for comparison with our estimate of noise-free or 'true' heterogeneity (CV_t^2). We found that CV_t^2 was only 5.4% higher than CV_r^2. Additional evaluations were conducted on 38 PET scans of pulmonary perfusion using 13NN-saline injection. The mean CV_t^2 was 0.10 (range: 0.03-0.30), while the mean CV^2 including noise was 0.24 (range: 0.10-0.59). CV_t^2 was on average 41.5% of the CV^2 measured including noise (range: 17.8-71.2%). The reproducibility of CV_t^2 was evaluated using three repeated PET scans from five subjects. Individual CV_t^2 values were within 16% of each subject's mean, and paired t-tests revealed no difference among the results from the three consecutive PET scans. In conclusion, our method provides reliable noise-free estimates of CV_t^2 in PET scans, and may be useful for similar statistical problems in experimental data.
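
    The core of the method, a linear relationship between total variance and 1/n whose intercept is the noise-free variance, can be sketched as follows; the signal and noise levels here are invented for illustration.

      import numpy as np

      rng = np.random.default_rng(3)
      true_signal = rng.normal(10.0, 2.0, size=1000)  # noise-free heterogeneity
      ns = np.array([1, 2, 4, 8, 16])                 # measurements averaged

      total_var = []
      for n in ns:
          noise = rng.normal(0.0, 5.0, (n, true_signal.size)).mean(axis=0)
          total_var.append((true_signal + noise).var(ddof=1))

      # Fit total variance against 1/n: the intercept is the noise-free variance.
      slope, intercept = np.polyfit(1.0 / ns, total_var, 1)
      print(f"estimate: {intercept:.2f}, true: {true_signal.var(ddof=1):.2f}")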

  11. On the noise variance of a digital mammography system

    International Nuclear Information System (INIS)

    Burgess, Arthur

    2004-01-01

    A recent paper by Cooper et al. [Med. Phys. 30, 2614-2621 (2003)] contains some apparently anomalous results concerning the relationship between pixel variance and x-ray exposure for a digital mammography system. They found an unexpected peak in a display domain pixel variance plot as a function of 1/mAs (their Fig. 5), with a decrease in the range of high display data values, which correspond to low x-ray exposures. As they pointed out, if the detector response is linear in exposure and the transformation from raw to display data scales is logarithmic, then pixel variance should be a monotonically increasing function in the figure. They concluded that the total system transfer curve, between input exposure and display image data values, is not logarithmic over the full exposure range. They separated the data analysis into two regions and plotted the logarithm of display image pixel variance as a function of the logarithm of the mAs used to produce the phantom images. They found a slope of minus one for high mAs values and concluded that the transfer function is logarithmic in this region. They found a slope of 0.6 for the low mAs region and concluded that the transfer curve was neither linear nor logarithmic for low exposure values. It is known that the digital mammography system investigated by Cooper et al. has a linear relationship between exposure and raw data values [Vedantham et al., Med. Phys. 27, 558-567 (2000)]. The purpose of this paper is to show that the variance effect found by Cooper et al. (their Fig. 5) arises because the transformation from the raw data scale (14 bits) to the display scale (12 bits), for the digital mammography system they investigated, is not logarithmic for raw data values less than about 300 (display data values greater than about 3300). At low raw data values the transformation is linear and prevents over-ranging of the display data scale. Parametric models for the two transformations will be presented. Results of pixel

  12. How unprecedented a solar minimum was it?

    Science.gov (United States)

    Russell, C T; Jian, L K; Luhmann, J G

    2013-05-01

    The end of the last solar cycle was at least 3 years late, and to date, the new solar cycle has seen mainly weaker activity since the onset of the rising phase toward the new solar maximum. The newspapers now even report when auroras are seen in Norway. This paper is an update of our review paper written during the deepest part of the last solar minimum [1]. We update the records of solar activity and its consequent effects on the interplanetary fields and solar wind density. The arrival of solar minimum allows us to use two techniques that predict sunspot maximum from readings obtained at solar minimum. It is clear that the Sun is still behaving strangely compared to the last few solar minima even though we are well beyond the minimum phase of the cycle 23-24 transition.

  13. Impact of the Minimum Wage on Compression.

    Science.gov (United States)

    Wolfe, Michael N.; Candland, Charles W.

    1979-01-01

    Assesses the impact of increases in the minimum wage on salary schedules, provides guidelines for creating a philosophy to deal with the impact, and outlines options and presents recommendations. (IRT)

  14. Quantitative Research on the Minimum Wage

    Science.gov (United States)

    Goldfarb, Robert S.

    1975-01-01

    The article reviews recent research examining the impact of minimum wage requirements on the size and distribution of teenage employment and earnings. The studies measure income distribution, employment levels and effect on unemployment. (MW)

  15. Determining minimum lubrication film for machine parts

    Science.gov (United States)

    Hamrock, B. J.; Dowson, D.

    1978-01-01

    Formula predicts minimum film thickness required for fully-flooded ball bearings, gears, and cams. Formula is result of study to determine complete theoretical solution of isothermal elasto-hydrodynamic lubrication of fully-flooded elliptical contacts.

  16. Long Term Care Minimum Data Set (MDS)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Long-Term Care Minimum Data Set (MDS) is a standardized, primary screening and assessment tool of health status that forms the foundation of the comprehensive...

  17. The SME gauge sector with minimum length

    Energy Technology Data Exchange (ETDEWEB)

    Belich, H.; Louzada, H.L.C. [Universidade Federal do Espirito Santo, Departamento de Fisica e Quimica, Vitoria, ES (Brazil)

    2017-12-15

    We study the gauge sector of the Standard Model Extension (SME) with the Lorentz covariant deformed Heisenberg algebra associated to the minimum length. In order to find and estimate corrections, we clarify whether the violation of Lorentz symmetry and the existence of a minimum length are independent phenomena or are, in some way, related. With this goal, we analyze the dispersion relations of this theory. (orig.)

  18. The SME gauge sector with minimum length

    Science.gov (United States)

    Belich, H.; Louzada, H. L. C.

    2017-12-01

    We study the gauge sector of the Standard Model Extension (SME) with the Lorentz covariant deformed Heisenberg algebra associated to the minimum length. In order to find and estimate corrections, we clarify whether the violation of Lorentz symmetry and the existence of a minimum length are independent phenomena or are, in some way, related. With this goal, we analyze the dispersion relations of this theory.

  19. Feed chute geometry for minimum belt wear

    Energy Technology Data Exchange (ETDEWEB)

    Roberts, A W; Wiche, S J [University of Newcastle, Newcastle, NSW (Australia). Centre for Bulk Solids and Particulate Technologies

    1998-09-01

    The paper is concerned with the feeding and transfer of bulk solids in conveyor belt operation. It focuses on chute design where the objective is to prevent spillage and minimise both chute and belt wear. It is shown that these objectives may be met through correct dynamic design of the chute and by directing the flow of bulk solids onto the belt at an acceptable incidence angle. The aim is to match the tangential component of the feed velocity as closely as possible to the belt velocity. At the same time, it is necessary to limit the impact pressure due to the change in momentum of the bulk solid as it feeds onto the belt. 2 refs., 8 figs.

  20. Development of superconductor bulk for superconductor bearing

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Chan Joong; Jun, Byung Hyuk; Park, Soon Dong (and others)

    2008-08-15

    Current carrying capacity is one of the most important issues in the consideration of superconductor bulk materials for engineering applications. There are numerous applications of Y-Ba-Cu-O (YBCO) bulk superconductors, e.g. magnetic levitation trains, flywheel energy storage systems, levitation transportation, lunar telescopes, centrifugal devices, magnetic shielding materials, bulk magnets, etc. Accordingly, it is necessary to obtain YBCO materials in the form of large single crystals without the weak-link problem. A top seeded melt growth (TSMG) process was used to fabricate single crystal YBCO bulk superconductors. The seeded infiltration growth (IG) technique is also a very promising method for the synthesis of large, single-grain YBCO bulk superconductors with good superconducting properties. 5 wt.% Ag doped Y211 green compacts were sintered at 900 °C ~ 1200 °C, and then single crystal YBCO was fabricated by an infiltration method. A refinement and uniform distribution of the Y211 particles in the Y123 matrix were achieved by sintering the Ag-doped samples. The enhancement of the critical current density was ascribable to a fine dispersion of the Y211 particles, a low porosity and the presence of Ag particles. In addition, we have designed and manufactured large YBCO single domains with a levitation force of 10-13 kg/cm² using the TSMG processing technique.

  1. Module 13: Bulk Packaging Shipments by Highway

    International Nuclear Information System (INIS)

    Przybylski, J.L.

    1994-07-01

    The Hazardous Materials Modular Training Program provides participating United States Department of Energy (DOE) sites with a basic, yet comprehensive, hazardous materials transportation training program for use onsite. This program may be used to assist individual program entities to satisfy the general awareness, safety training, and function specific training requirements addressed in Code of Federal Regulation (CFR), Title 49, Part 172, Subpart H -- "Training." Module 13 -- Bulk Packaging Shipments by Highway is a supplement to the Basic Hazardous Materials Workshop. It focuses on bulk shipments of hazardous materials by the highway mode, which have additional or unique requirements beyond those addressed in the ten-module core program. Attendance in this course of instruction should be limited to individuals with work experience in transporting hazardous materials in bulk packagings who have completed the Basic Hazardous Materials Workshop or an equivalent. Participants will become familiar with the rules and regulations governing the transportation by highway of hazardous materials in bulk packagings and will demonstrate the application of these requirements through work projects and examination.

  2. Fringe biasing: A variance reduction technique for optically thick meshes

    Energy Technology Data Exchange (ETDEWEB)

    Smedley-Stevenson, R. P. [AWE PLC, Aldermaston Reading, Berkshire, RG7 4PR (United Kingdom)

    2013-07-01

    Fringe biasing is a stratified sampling scheme applicable to Monte Carlo thermal radiation transport codes. The thermal emission source in optically thick cells is partitioned into separate contributions from the cell interiors (where the likelihood of the particles escaping the cells is virtually zero) and the 'fringe' regions close to the cell boundaries. Thermal emission in the cell interiors can now be modelled with fewer particles, the remaining particles being concentrated in the fringes so that they are more likely to contribute to the energy exchange between cells. Unlike other techniques for improving the efficiency in optically thick regions (such as random walk and discrete diffusion treatments), fringe biasing has the benefit of simplicity, as the associated changes are restricted to the sourcing routines with the particle tracking routines being unaffected. This paper presents an analysis of the potential for variance reduction achieved from employing the fringe biasing technique. The aim of this analysis is to guide the implementation of this technique in Monte Carlo thermal radiation codes, specifically in order to aid the choice of the fringe width and the proportion of particles allocated to the fringe (which are interrelated) in multi-dimensional simulations, and to confirm that the significant levels of variance reduction achieved in simulations can be understood by studying the behaviour for simple test cases. The variance reduction properties are studied for a single cell in a slab geometry purely absorbing medium, investigating the accuracy of the scalar flux and current tallies on one of the interfaces with the surrounding medium. (authors)

  3. Fringe biasing: A variance reduction technique for optically thick meshes

    International Nuclear Information System (INIS)

    Smedley-Stevenson, R. P.

    2013-01-01

    Fringe biasing is a stratified sampling scheme applicable to Monte Carlo thermal radiation transport codes. The thermal emission source in optically thick cells is partitioned into separate contributions from the cell interiors (where the likelihood of the particles escaping the cells is virtually zero) and the 'fringe' regions close to the cell boundaries. Thermal emission in the cell interiors can now be modelled with fewer particles, the remaining particles being concentrated in the fringes so that they are more likely to contribute to the energy exchange between cells. Unlike other techniques for improving the efficiency in optically thick regions (such as random walk and discrete diffusion treatments), fringe biasing has the benefit of simplicity, as the associated changes are restricted to the sourcing routines with the particle tracking routines being unaffected. This paper presents an analysis of the potential for variance reduction achieved from employing the fringe biasing technique. The aim of this analysis is to guide the implementation of this technique in Monte Carlo thermal radiation codes, specifically in order to aid the choice of the fringe width and the proportion of particles allocated to the fringe (which are interrelated) in multi-dimensional simulations, and to confirm that the significant levels of variance reduction achieved in simulations can be understood by studying the behaviour for simple test cases. The variance reduction properties are studied for a single cell in a slab geometry purely absorbing medium, investigating the accuracy of the scalar flux and current tallies on one of the interfaces with the surrounding medium. (authors)
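
    A toy stratified-sampling calculation conveys the idea behind the records above: concentrate samples in the "fringe" stratum, where particles actually score, and weight the stratum means by the emission fractions. The escape probabilities and fractions below are hypothetical, and the sketch is not a transport-code implementation of fringe biasing.

      import numpy as np

      rng = np.random.default_rng(4)
      frac_f, frac_i = 0.1, 0.9        # emission fractions: fringe, interior
      p_esc_f, p_esc_i = 0.5, 1e-3     # escape probabilities
      n = 20_000

      # Analog sampling: birth region chosen proportionally to emission.
      in_fringe = rng.random(n) < frac_f
      p = np.where(in_fringe, p_esc_f, p_esc_i)
      analog = (rng.random(n) < p).astype(float)
      se_analog = analog.std(ddof=1) / np.sqrt(n)

      # Stratified: spend 90% of the particles on the fringe, then combine
      # the stratum means with the emission fractions as weights.
      n_f = int(0.9 * n)
      n_i = n - n_f
      esc_f = (rng.random(n_f) < p_esc_f).astype(float)
      esc_i = (rng.random(n_i) < p_esc_i).astype(float)
      mean_strat = frac_f * esc_f.mean() + frac_i * esc_i.mean()
      se_strat = np.sqrt(frac_f**2 * esc_f.var(ddof=1) / n_f
                         + frac_i**2 * esc_i.var(ddof=1) / n_i)

      print(f"analog:     {analog.mean():.5f} +/- {se_analog:.5f}")
      print(f"stratified: {mean_strat:.5f} +/- {se_strat:.5f}")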

  4. An Empirical Temperature Variance Source Model in Heated Jets

    Science.gov (United States)

    Khavaran, Abbas; Bridges, James

    2012-01-01

    An acoustic analogy approach is implemented that models the sources of jet noise in heated jets. The equivalent sources of turbulent mixing noise are recognized as the differences between the fluctuating and Favre-averaged Reynolds stresses and enthalpy fluxes. While in a conventional acoustic analogy only the Reynolds stress components are scrutinized for their noise generation properties, it is now accepted that a comprehensive source model should include the additional entropy source term. Following Goldstein's generalized acoustic analogy, the set of Euler equations is divided into two sets of equations that govern a non-radiating base flow plus its residual components. When the base flow is considered as a locally parallel mean flow, the residual equations may be rearranged to form an inhomogeneous third-order wave equation. A general solution is then written using a Green's function method, while all non-linear terms are treated as the equivalent sources of aerodynamic sound and are modeled accordingly. In a previous study, a specialized Reynolds-averaged Navier-Stokes (RANS) solver was implemented to compute the variance of the thermal fluctuations that determines the enthalpy flux source strength. The main objective here is to present an empirical model capable of providing a reasonable estimate of the stagnation temperature variance in a jet. Such a model is parameterized as a function of the mean stagnation temperature gradient in the jet, and is evaluated using commonly available RANS solvers. The resulting thermal source distribution is compared with measurements as well as computational results from a dedicated RANS solver that employs an enthalpy variance and dissipation rate model. Turbulent mixing noise predictions are presented for a wide range of jet temperature ratios from 1.0 to 3.20.

  5. Complex differential variance angiography with noise-bias correction for optical coherence tomography of the retina.

    Science.gov (United States)

    Braaf, Boy; Donner, Sabine; Nam, Ahhyun S; Bouma, Brett E; Vakoc, Benjamin J

    2018-02-01

    Complex differential variance (CDV) provides phase-sensitive angiographic imaging for optical coherence tomography (OCT) with immunity to phase instabilities of the imaging system and small-scale axial bulk motion. However, as with all angiographic methods, measurement noise can result in erroneous indications of blood flow that confuse the interpretation of angiographic images. In this paper, a modified CDV algorithm that corrects for this noise bias is presented. This is achieved by normalizing the CDV signal by analytically derived upper and lower limits. The noise-bias-corrected CDV algorithm was implemented in an experimental 1 μm wavelength OCT system for retinal imaging that used an eye-tracking scanning laser ophthalmoscope at 815 nm to compensate for lateral eye motion. The noise-bias correction improved the CDV imaging of blood flow in tissue layers with a low signal-to-noise ratio and suppressed false indications of blood flow outside the tissue. In addition, the CDV signal normalization suppressed noise induced by galvanometer scanning errors and small-scale lateral motion. High quality cross-sectional and motion-corrected en face angiograms of the retina and choroid are presented.

  6. A Note on the Kinks at the Mean Variance Frontier

    OpenAIRE

    Vörös, J.; Kriens, J.; Strijbosch, L.W.G.

    1997-01-01

    In this paper the standard portfolio case with short sales restrictions is analyzed. Dybvig pointed out that if there is a kink at a risky portfolio on the efficient frontier, then the securities in this portfolio have equal expected return, and that the converse of this statement is false. A sufficient condition for the existence of kinks at the efficient frontier is given here, and a new procedure is used to derive the efficient frontier, i.e. the characteristics of the mean variance frontier.

  7. Variance reduction techniques in the simulation of Markov processes

    International Nuclear Information System (INIS)

    Lessi, O.

    1987-01-01

    We study a functional r of the stationary distribution of a homogeneous Markov chain. It is often difficult or impossible to perform the analytical calculation of r, and so it is reasonable to estimate r by a simulation process. A consistent estimator r(n) of r is obtained with respect to a chain with a countable state space. By suitably modifying the estimator r(n) of r, one obtains a new consistent estimator which has a smaller variance than r(n). The same is obtained in the case of a finite state space

  8. A guide to SPSS for analysis of variance

    CERN Document Server

    Levine, Gustav

    2013-01-01

    This book offers examples of programs designed for analysis of variance and related statistical tests of significance that can be run with SPSS. The reader may copy these programs directly, changing only the names or numbers of levels of factors according to individual needs. Ways of altering command specifications to fit situations with larger numbers of factors are discussed and illustrated, as are ways of combining program statements to request a variety of analyses in the same program. The first two chapters provide an introduction to the use of SPSS, Versions 3 and 4. General rules conce

  9. Diffusion-Based Trajectory Observers with Variance Constraints

    DEFF Research Database (Denmark)

    Alcocer, Alex; Jouffroy, Jerome; Oliveira, Paulo

    Diffusion-based trajectory observers have been recently proposed as a simple and efficient framework to solve diverse smoothing problems in underwater navigation. For instance, to obtain estimates of the trajectories of an underwater vehicle given position fixes from an acoustic positioning system...... of smoothing and is determined by resorting to trial and error. This paper presents a methodology to choose the observer gain by taking into account a priori information on the variance of the position measurement errors. Experimental results with data from an acoustic positioning system are presented...

  10. A Fay-Herriot Model with Different Random Effect Variances

    Czech Academy of Sciences Publication Activity Database

    Hobza, Tomáš; Morales, D.; Herrador, M.; Esteban, M.D.

    2011-01-01

    Vol. 40, No. 5 (2011), pp. 785-797 ISSN 0361-0926 R&D Projects: GA MŠk 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords: small area estimation * Fay-Herriot model * linear mixed model * Labor Force Survey Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.274, year: 2011 http://library.utia.cas.cz/separaty/2011/SI/hobza-a%20fay-herriot%20model%20with%20different%20random%20effect%20variances.pdf

  11. Variational Variance Reduction for Monte Carlo Criticality Calculations

    International Nuclear Information System (INIS)

    Densmore, Jeffery D.; Larsen, Edward W.

    2001-01-01

    A new variational variance reduction (VVR) method for Monte Carlo criticality calculations was developed. This method employs (a) a variational functional that is more accurate than the standard direct functional, (b) a representation of the deterministically obtained adjoint flux that is especially accurate for optically thick problems with high scattering ratios, and (c) estimates of the forward flux obtained by Monte Carlo. The VVR method requires no nonanalog Monte Carlo biasing, but it may be used in conjunction with Monte Carlo biasing schemes. Some results are presented from a class of criticality calculations involving alternating arrays of fuel and moderator regions

  12. 77 FR 32444 - Minimum Internal Control Standards

    Science.gov (United States)

    2012-06-01

    ... definitions, add and amend existing definitions; amend the term "variance" as it applies to establishing an... and audit and accounting procedures into their respective sections. DATES: Submit comments on or... Commission agrees that it does not intend to limit the definition of charitable organizations to those with a...

  13. Bulk-memory processor for data acquisition

    International Nuclear Information System (INIS)

    Nelson, R.O.; McMillan, D.E.; Sunier, J.W.; Meier, M.; Poore, R.V.

    1981-01-01

    To meet the diverse needs and data rate requirements at the Van de Graaff and Weapons Neutron Research (WNR) facilities, a bulk memory system has been implemented which includes a fast and flexible processor. This bulk memory processor (BMP) utilizes bit slice and microcode techniques and features a 24 bit wide internal architecture allowing direct addressing of up to 16 megawords of memory and histogramming up to 16 million counts per channel without overflow. The BMP is interfaced to the MOSTEK MK 8000 bulk memory system and to the standard MODCOMP computer I/O bus. Coding for the BMP both at the microcode level and with macro instructions is supported. The generalized data acquisition system has been extended to support the BMP in a manner transparent to the user

  14. Micro benchtop optics by bulk silicon micromachining

    Science.gov (United States)

    Lee, Abraham P.; Pocha, Michael D.; McConaghy, Charles F.; Deri, Robert J.

    2000-01-01

    Micromachining of bulk silicon utilizing the parallel etching characteristics of bulk silicon and integrating the parallel etch planes of silicon with silicon wafer bonding and impurity doping, enables the fabrication of on-chip optics with in situ aligned etched grooves for optical fibers, micro-lenses, photodiodes, and laser diodes. Other optical components that can be microfabricated and integrated include semi-transparent beam splitters, micro-optical scanners, pinholes, optical gratings, micro-optical filters, etc. Micromachining of bulk silicon utilizing the parallel etching characteristics thereof can be utilized to develop miniaturization of bio-instrumentation such as wavelength monitoring by fluorescence spectrometers, and other miniaturized optical systems such as Fabry-Perot interferometry for filtering of wavelengths, tunable cavity lasers, micro-holography modules, and wavelength splitters for optical communication systems.

  15. Holographic bulk reconstruction with α' corrections

    Science.gov (United States)

    Roy, Shubho R.; Sarkar, Debajyoti

    2017-10-01

    We outline a holographic recipe to reconstruct α' corrections to anti-de Sitter (AdS) (quantum) gravity from an underlying CFT in the strictly planar limit (N → ∞). Assuming that the boundary CFT can be solved in principle to all orders of the 't Hooft coupling λ, for scalar primary operators the λ^(-1) expansion of the conformal dimensions can be mapped to higher curvature corrections of the dual bulk scalar field action. Furthermore, for the metric perturbations in the bulk, the AdS/CFT operator-field isomorphism forces these corrections to be of the Lovelock type. We demonstrate this by reconstructing the coefficient of the leading Lovelock correction, also known as the Gauss-Bonnet term, in a bulk AdS gravity action using the expression of the stress-tensor two-point function up to subleading order in λ^(-1).

  16. Solving portfolio selection problems with minimum transaction lots based on conditional-value-at-risk

    Science.gov (United States)

    Setiawan, E. P.; Rosadi, D.

    2017-01-01

    Portfolio selection conventionally means 'minimizing the risk, given a certain level of return' from some financial assets. This problem is frequently solved with quadratic or linear programming methods, depending on the risk measure used in the objective function. However, the solutions obtained by these methods are real numbers, which may cause problems in real applications because each asset usually has a minimum transaction lot. Classical approaches considering minimum transaction lots were developed based on the linear Mean Absolute Deviation (MAD), variance (as in Markowitz's model), and semi-variance as risk measures. In this paper we investigate portfolio selection with minimum transaction lots using conditional value at risk (CVaR) as the risk measure. The mean-CVaR methodology involves only the part of the tail of the distribution that contributes to high losses. This approach works better when the return probability distribution is non-symmetric. Solutions of this method can be found with Genetic Algorithm (GA) methods. We provide real examples using stocks from the Indonesian stock market.
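
    For illustration, the CVaR of a fixed integer-lot portfolio can be evaluated on historical or simulated scenarios as below; a genetic algorithm would use such a routine as its fitness function. All numbers (lots, lot sizes, prices, scenarios) are hypothetical.

      import numpy as np

      def portfolio_cvar(lots, lot_sizes, prices, scenario_returns, alpha=0.95):
          """Historical-scenario CVaR of a portfolio held in whole lots."""
          value = np.asarray(lots) * np.asarray(lot_sizes) * np.asarray(prices)
          losses = -(scenario_returns @ value)   # loss per scenario
          var = np.quantile(losses, alpha)       # value-at-risk cutoff
          return losses[losses >= var].mean()    # mean loss beyond VaR

      rng = np.random.default_rng(5)
      scenarios = rng.normal(0.0005, 0.02, size=(1000, 3))  # 1000 days, 3 assets
      cvar = portfolio_cvar(lots=[2, 5, 1], lot_sizes=[100, 100, 100],
                            prices=[50.0, 12.0, 80.0], scenario_returns=scenarios)
      print(f"95% CVaR: {cvar:.2f}")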

  17. Room 305/2 of the unit 4 of the Chernobyl ChNPP: its condition, evaluation of the fuel bulk

    International Nuclear Information System (INIS)

    Borovoj, A.A.; Pazukhin, Eh.M.; Lagunenko, A.S.

    1998-01-01

    The amount of spent nuclear fuel in room 305/2 of Unit 4 is considered. On the basis of direct observations, television and photographic surveys, chemical analyses of samples, and measurements of the maximum exposure dose rate during drilling, a detailed model of the relative positions of the main elements in the former core has been developed. The minimum fuel bulk in room 305/2 has been evaluated

  18. Big bang nucleosynthesis constraints on bulk neutrinos

    International Nuclear Information System (INIS)

    Goh, H.S.; Mohapatra, R.N.

    2002-01-01

    We examine the constraints imposed by the requirement of successful nucleosynthesis on models with one large extra hidden space dimension and a single bulk neutrino residing in this dimension. We solve the Boltzmann kinetic equation for the thermal distribution of the Kaluza-Klein modes and evaluate their contribution to the energy density at the big bang nucleosynthesis epoch to constrain the size of the extra dimension R^(-1) ≡ μ and the parameter sin^2(2θ), which characterizes the mixing between the active and bulk neutrinos

  19. Synthesis of Bulk Superconducting Magnesium Diboride

    Directory of Open Access Journals (Sweden)

    Margie Olbinado

    2002-06-01

    Full Text Available Bulk polycrystalline superconducting magnesium diboride, MgB2, samples were successfully prepared via a one-step sintering program at 750°C, in pure argon at a pressure of 1 atm. Both electrical resistivity and magnetic susceptibility measurements confirmed the superconductivity of the material at 39 K, with a transition width of 5 K. The polycrystalline nature, granular morphology, and composition of the sintered bulk material were confirmed using X-ray diffractometry (XRD), scanning electron microscopy (SEM), and energy dispersive X-ray analysis (EDX).

  20. Radiation-hardened bulk CMOS technology

    International Nuclear Information System (INIS)

    Dawes, W.R. Jr.; Habing, D.H.

    1979-01-01

    The evolutionary development of a radiation-hardened bulk CMOS technology is reviewed. The metal gate hardened CMOS status is summarized, including both radiation and reliability data. The development of a radiation-hardened bulk silicon gate process which was successfully implemented to a commercial microprocessor family and applied to a new, radiation-hardened, LSI standard cell family is also discussed. The cell family is reviewed and preliminary characterization data is presented. Finally, a brief comparison of the various radiation-hardened technologies with regard to performance, reliability, and availability is made

  1. Parameter uncertainty effects on variance-based sensitivity analysis

    International Nuclear Information System (INIS)

    Yu, W.; Harris, T.J.

    2009-01-01

    In the past several years there has been considerable commercial and academic interest in methods for variance-based sensitivity analysis. The industrial focus is motivated by the importance of attributing variance contributions to input factors. A more complete understanding of these relationships enables companies to achieve goals related to quality, safety and asset utilization. In a number of applications, it is possible to distinguish between two types of input variables: regressive variables and model parameters. Regressive variables are those that can be influenced by process design or by a control strategy. With model parameters, there are typically no opportunities to directly influence their variability. In this paper, we propose a new method to perform sensitivity analysis through a partitioning of the input variables into these two groupings: regressive variables and model parameters. A sequential analysis is proposed, in which a sensitivity analysis is first performed with respect to the regressive variables. In the second step, the uncertainty effects arising from the model parameters are included. This strategy can be quite useful in understanding process variability and in developing strategies to reduce overall variability. When this method is used for nonlinear models which are linear in the parameters, analytical solutions can be utilized. In the more general case of models that are nonlinear in both the regressive variables and the parameters, either first order approximations can be used, or numerically intensive methods must be employed
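
    The two-group partition can be illustrated with a pick-freeze estimate of the variance shares attributable to the regressive variables versus the parameters on a toy model; this is a generic Sobol-style computation, not the sequential method proposed in the paper.

      import numpy as np

      # Toy model y = theta1*x1 + theta2*x2^2 with regressive variables x
      # and uncertain parameters theta.
      def model(x, theta):
          return theta[:, 0] * x[:, 0] + theta[:, 1] * x[:, 1] ** 2

      rng = np.random.default_rng(6)
      n = 100_000
      x, xp = rng.normal(0, 1, (n, 2)), rng.normal(0, 1, (n, 2))
      th, thp = rng.normal(1, 0.1, (n, 2)), rng.normal(1, 0.1, (n, 2))

      y = model(x, th)
      # First-order share of x: freeze x, resample theta (and vice versa).
      s_x = np.cov(y, model(x, thp))[0, 1] / y.var()
      s_theta = np.cov(y, model(xp, th))[0, 1] / y.var()
      print(f"S_x ~ {s_x:.3f}, S_theta ~ {s_theta:.3f}")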

  2. Variance of indoor radon concentration: Major influencing factors

    Energy Technology Data Exchange (ETDEWEB)

    Yarmoshenko, I., E-mail: ivy@ecko.uran.ru [Institute of Industrial Ecology UB RAS, Sophy Kovalevskoy, 20, Ekaterinburg (Russian Federation); Vasilyev, A.; Malinovsky, G. [Institute of Industrial Ecology UB RAS, Sophy Kovalevskoy, 20, Ekaterinburg (Russian Federation); Bossew, P. [German Federal Office for Radiation Protection (BfS), Berlin (Germany); Žunić, Z.S. [Institute of Nuclear Sciences “Vinca”, University of Belgrade (Serbia); Onischenko, A.; Zhukovsky, M. [Institute of Industrial Ecology UB RAS, Sophy Kovalevskoy, 20, Ekaterinburg (Russian Federation)

    2016-01-15

    Variance of radon concentration in the dwelling atmosphere is analysed with regard to geogenic and anthropogenic influencing factors. The analysis includes a review of 81 national and regional indoor radon surveys with varying sampling pattern, sample size and duration of measurements, and a detailed consideration of two regional surveys (Sverdlovsk oblast, Russia and Niška Banja, Serbia). The analysis of the geometric standard deviation revealed that the main factors influencing the dispersion of indoor radon concentration over a territory are as follows: the area of the territory, the sample size, the characteristics of the measurement technique, the radon geogenic potential, building construction characteristics and living habits. As shown for Sverdlovsk oblast and the town of Niška Banja, the dispersion as quantified by the GSD is reduced by restricting to certain levels of these control factors. Application of the developed approach to characterization of the radon exposure of the world population is discussed. - Highlights: • Influence of the lithosphere and anthroposphere on the variance of indoor radon is found. • Level-by-level analysis reduces the GSD by a factor of 1.9. • The worldwide GSD is underestimated.

  3. Variance Component Selection With Applications to Microbiome Taxonomic Data

    Directory of Open Access Journals (Sweden)

    Jing Zhai

    2018-03-01

    Full Text Available High-throughput sequencing technology has enabled population-based studies of the role of the human microbiome in disease etiology and exposure response. Microbiome data are summarized as counts or composition of the bacterial taxa at different taxonomic levels. An important problem is to identify the bacterial taxa that are associated with a response. One method is to test the association of a specific taxon with phenotypes in a linear mixed effect model, which incorporates phylogenetic information among bacterial communities. Another type of approach considers all taxa in a joint model and achieves selection via a penalization method, which ignores phylogenetic information. In this paper, we consider regression analysis by treating bacterial taxa at different levels as multiple random effects. For each taxon, a kernel matrix is calculated based on distance measures in the phylogenetic tree and acts as one variance component in the joint model. Taxonomic selection is then achieved by the lasso (least absolute shrinkage and selection operator) penalty on the variance components. Our method integrates biological information into the variable selection problem and greatly improves selection accuracy. Simulation studies demonstrate the superiority of our method versus existing methods, for example, the group lasso. Finally, we apply our method to a longitudinal microbiome study of Human Immunodeficiency Virus (HIV) infected patients. We implement our method using the high performance computing language Julia. Software and detailed documentation are freely available at https://github.com/JingZhai63/VCselection.

  4. Worldwide variance in the potential utilization of Gamma Knife radiosurgery.

    Science.gov (United States)

    Hamilton, Travis; Dade Lunsford, L

    2016-12-01

    OBJECTIVE The role of Gamma Knife radiosurgery (GKRS) has expanded worldwide during the past 3 decades. The authors sought to evaluate whether experienced users vary in their estimate of its potential use. METHODS Sixty-six current Gamma Knife users from 24 countries responded to an electronic survey. They estimated the potential role of GKRS for benign and malignant tumors, vascular malformations, and functional disorders. These estimates were compared with published disease epidemiological statistics and the 2014 use reports provided by the Leksell Gamma Knife Society (16,750 cases). RESULTS Respondents reported no significant variation in the estimated use in many conditions for which GKRS is performed: meningiomas, vestibular schwannomas, and arteriovenous malformations. Significant variance in the estimated use of GKRS was noted for pituitary tumors, craniopharyngiomas, and cavernous malformations. For many current indications, the authors found significant variance in GKRS users based in the Americas, Europe, and Asia. Experts estimated that GKRS was used in only 8.5% of the 196,000 eligible cases in 2014. CONCLUSIONS Although there was a general worldwide consensus regarding many major indications for GKRS, significant variability was noted for several more controversial roles. This expert opinion survey also suggested that GKRS is significantly underutilized for many current diagnoses, especially in the Americas. Future studies should be conducted to investigate health care barriers to GKRS for many patients.

  5. Hidden temporal order unveiled in stock market volatility variance

    Directory of Open Access Journals (Sweden)

    Y. Shapira

    2011-06-01

    Full Text Available When analyzed by standard statistical methods, the time series of the daily returns of financial indices appear to behave as Markov random series with no apparent temporal order or memory. This empirical result seems counterintuitive, since investors are influenced by both short and long term past market behavior. Consequently much effort has been devoted to unveiling hidden temporal order in market dynamics. Here we show that temporal order is hidden in the series of the variance of the stocks' volatility. First we show that the correlation between the variances and the means of segments of these daily-return time series is very large, and thus cannot be the output of a random series unless it has some temporal order in it. Next we show that the temporal order does not show up in the series of daily returns, but rather in the variation of the corresponding volatility series. More specifically, we found that the behavior of the shuffled time series is equivalent to that of a random time series, while the original time series shows large deviations from the expected random behavior, which is the result of temporal structure. We found the same generic behavior in 10 different stock markets from 7 different countries. We also present an analysis of specially constructed sequences in order to better understand the origin of the observed temporal order in the market sequences. Each sequence was constructed from segments with equal numbers of elements taken from algebraic distributions with three different slopes.
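
    The shuffling test can be reproduced in miniature: compare the variance of segment variances for a synthetic volatility-clustered series and for its shuffled copy. The construction of the series below is invented for illustration and is not the authors' data or exact statistic.

      import numpy as np

      def var_of_segment_variances(x, seg_len=50):
          """Variance of the variances of consecutive non-overlapping segments."""
          n_seg = x.size // seg_len
          segs = x[:n_seg * seg_len].reshape(n_seg, seg_len)
          return segs.var(axis=1, ddof=1).var(ddof=1)

      rng = np.random.default_rng(7)
      # Synthetic volatility-clustered series: a slowly varying scale factor.
      scale = np.repeat(rng.uniform(0.5, 2.0, 40), 250)  # 40 regimes x 250 days
      series = rng.normal(0.0, 1.0, scale.size) * scale

      shuffled = rng.permutation(series)  # shuffling destroys temporal order
      print(f"original: {var_of_segment_variances(series):.3f}")
      print(f"shuffled: {var_of_segment_variances(shuffled):.3f}")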

  6. Waste Isolation Pilot Plant no-migration variance petition

    International Nuclear Information System (INIS)

    1990-01-01

    Section 3004 of RCRA allows EPA to grant a variance from the land disposal restrictions when a demonstration can be made that, to a reasonable degree of certainty, there will be no migration of hazardous constituents from the disposal unit for as long as the waste remains hazardous. Specific requirements for making this demonstration are found in 40 CFR 268.6, and EPA has published a draft guidance document to assist petitioners in preparing a variance request. Throughout the course of preparing this petition, technical staff from DOE, EPA, and their contractors have met frequently to discuss and attempt to resolve issues specific to radioactive mixed waste and the WIPP facility. The DOE believes it meets or exceeds all requirements set forth for making a successful "no-migration" demonstration. The petition presents information under five general headings: (1) waste information; (2) site characterization; (3) facility information; (4) assessment of environmental impacts, including the results of waste mobility modeling; and (5) analysis of uncertainties. Additional background and supporting documentation is contained in the 15 appendices to the petition, as well as in an extensive addendum published in October 1989

  7. Deterministic mean-variance-optimal consumption and investment

    DEFF Research Database (Denmark)

    Christiansen, Marcus; Steffensen, Mogens

    2013-01-01

    In dynamic optimal consumption–investment problems one typically aims to find an optimal control from the set of adapted processes. This is also the natural starting point in case of a mean-variance objective. In contrast, we solve the optimization problem with the special feature that the consumption rate and the investment proportion are constrained to be deterministic processes. As a result we get rid of a series of unwanted features of the stochastic solution including diffusive consumption, satisfaction points and consistency problems. Deterministic strategies typically appear in unit-linked life insurance contracts, where the life-cycle investment strategy is age dependent but wealth independent. We explain how optimal deterministic strategies can be found numerically and present an example from life insurance where we compare the optimal solution with suboptimal deterministic strategies...

  8. Determining the Optimal Portfolio Using the Conditional Mean Variance Model

    Directory of Open Access Journals (Sweden)

    I GEDE ERY NISCAHYANA

    2016-08-01

    Full Text Available When the returns of stock prices show the existence of autocorrelation and heteroscedasticity, conditional mean variance models are a suitable method to model the behavior of the stocks. In this thesis, the implementation of the conditional mean variance model for autocorrelated and heteroscedastic returns is discussed. The aim of this thesis was to assess the effect of autocorrelated and heteroscedastic returns on the optimal solution of a portfolio. The returns of four stocks, Fortune Mate Indonesia Tbk (FMII.JK), Bank Permata Tbk (BNLI.JK), Suryamas Dutamakmur Tbk (SMDM.JK) and Semen Gresik Indonesia Tbk (SMGR.JK), were modelled by a GARCH(1,1) model with standard innovations following the standard normal distribution and the t-distribution. The estimates were used to construct a portfolio. The optimal portfolio was found when the standard innovation used was the t-distribution with a standard deviation of 1.4532 and a mean of 0.8023, consisting of 0.9429 (94%) of FMII stock, 0.0473 (5%) of BNLI stock, 0% of SMDM stock, and 1% of SMGR stock.
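
    Assuming the third-party Python package arch is available, a GARCH(1,1) fit with Student-t innovations of the kind described above might look as follows; the return series here is simulated, not the four Indonesian stocks.

      import numpy as np
      from arch import arch_model  # third-party 'arch' package

      # Simulated daily returns in percent; heavy-tailed stand-in data.
      rng = np.random.default_rng(8)
      returns = rng.standard_t(df=5, size=1000)

      am = arch_model(returns, mean='Constant', vol='Garch', p=1, q=1, dist='t')
      res = am.fit(disp='off')
      print(res.params)                  # mu, omega, alpha[1], beta[1], nu
      print(res.forecast(horizon=1).variance.iloc[-1])  # next-step variance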

  9. Variance decomposition-based sensitivity analysis via neural networks

    International Nuclear Information System (INIS)

    Marseguerra, Marzio; Masini, Riccardo; Zio, Enrico; Cojazzi, Giacomo

    2003-01-01

    This paper illustrates a method for efficiently performing multiparametric sensitivity analyses of the reliability model of a given system. These analyses are of great importance for the identification of critical components in highly hazardous plants, such as nuclear or chemical ones, thus providing significant insights for their risk-based design and management. The technique used to quantify the importance of a component parameter with respect to the system model is based on a classical decomposition of the variance. When the model of the system is realistically complicated (e.g. by aging, stand-by, maintenance, etc.), its analytical evaluation soon becomes impractical and one is better off resorting to Monte Carlo simulation techniques which, however, can be computationally burdensome. Therefore, since the variance decomposition method requires a large number of system evaluations, each one to be performed by Monte Carlo, the need arises for substituting the Monte Carlo simulation model with a fast, approximated algorithm. Here we investigate an approach which makes use of neural networks, appropriately trained on the results of a Monte Carlo system reliability/availability evaluation, to quickly provide, with reasonable approximation, the values of the quantities of interest for the sensitivity analyses. The work was a joint effort between the Department of Nuclear Engineering of the Polytechnic of Milan, Italy, and the Institute for Systems, Informatics and Safety, Nuclear Safety Unit of the Joint Research Centre in Ispra, Italy, which sponsored the project

  10. Concentration variance decay during magma mixing: a volcanic chronometer.

    Science.gov (United States)

    Perugini, Diego; De Campos, Cristina P; Petrelli, Maurizio; Dingwell, Donald B

    2015-09-21

    The mixing of magmas is a common phenomenon in explosive eruptions. Concentration variance is a useful metric of this process and its decay (CVD) with time is an inevitable consequence during the progress of magma mixing. In order to calibrate this petrological/volcanological clock we have performed a time-series of high temperature experiments of magma mixing. The results of these experiments demonstrate that compositional variance decays exponentially with time. With this calibration the CVD rate (CVD-R) becomes a new geochronometer for the time lapse from initiation of mixing to eruption. The resultant novel technique is fully independent of the typically unknown advective history of mixing - a notorious uncertainty which plagues the application of many diffusional analyses of magmatic history. Using the calibrated CVD-R technique we have obtained mingling-to-eruption times for three explosive volcanic eruptions from Campi Flegrei (Italy) in the range of tens of minutes. These in turn imply ascent velocities of 5-8 meters per second. We anticipate the routine application of the CVD-R geochronometer to the eruptive products of active volcanoes in future in order to constrain typical "mixing to eruption" time lapses such that monitoring activities can be targeted at relevant timescales and signals during volcanic unrest.
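
    A minimal sketch of the calibration idea: fit an exponential decay to concentration-variance measurements and invert it for a mixing time. The data points and rate below are invented, not the experimental calibration from the paper.

      import numpy as np
      from scipy.optimize import curve_fit

      def cvd(t, v0, k):
          """Exponential concentration-variance decay."""
          return v0 * np.exp(-k * t)

      t_obs = np.array([0.0, 5.0, 10.0, 20.0, 40.0])  # minutes (invented)
      var_obs = np.array([1.00, 0.61, 0.36, 0.14, 0.02])

      (v0, k), _ = curve_fit(cvd, t_obs, var_obs, p0=(1.0, 0.1))
      # Invert for the time implied by an observed variance of 0.25.
      t_mix = np.log(v0 / 0.25) / k
      print(f"decay rate k = {k:.3f} 1/min, implied time = {t_mix:.1f} min")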

  11. Mean-Variance-Validation Technique for Sequential Kriging Metamodels

    International Nuclear Information System (INIS)

    Lee, Tae Hee; Kim, Ho Sung

    2010-01-01

    The rigorous validation of the accuracy of metamodels is an important topic in research on metamodel techniques. Although the leave-k-out cross-validation technique involves a considerably high computational cost, it cannot be used to measure the fidelity of metamodels. Recently, the mean 0 validation technique has been proposed to quantitatively determine the accuracy of metamodels. However, the use of the mean 0 validation criterion may lead to premature termination of the sampling process even if the kriging model is inaccurate. In this study, we propose a new validation technique based on the mean and variance of the response evaluated when a sequential sampling method, such as maximum entropy sampling, is used. The proposed validation technique is more efficient and accurate than the leave-k-out cross-validation technique because, instead of performing numerical integration, the kriging model is explicitly integrated to accurately evaluate the mean and variance of the response. The error in the proposed validation technique resembles a root mean squared error, so it can be used to determine a stopping criterion for the sequential sampling of metamodels

  12. PET image reconstruction: mean, variance, and optimal minimax criterion

    International Nuclear Information System (INIS)

    Liu, Huafeng; Guo, Min; Gao, Fei; Shi, Pengcheng; Xue, Liying; Nie, Jing

    2015-01-01

    Given the noisy nature of positron emission tomography (PET) measurements, it is critical to know the image quality and reliability as well as the expected radioactivity map (mean image) for both qualitative interpretation and quantitative analysis. While existing efforts have often been devoted to providing only the reconstructed mean image, we present a unified framework for joint estimation of the mean and corresponding variance of the radioactivity map based on an efficient optimal minimax criterion. The proposed framework formulates the PET image reconstruction problem as a transformation from system uncertainties to estimation errors, where the minimax criterion is adopted to minimize the estimation errors under possibly maximized system uncertainties. The estimation errors, in the form of a covariance matrix, express the measurement uncertainties in a complete way. The framework is then optimized in the H∞-norm sense and solved with the corresponding H∞ filter. Unlike conventional statistical reconstruction algorithms that rely on statistical modeling of the measurement data or noise, the proposed joint estimation starts from the point of view of signal energies and can handle anything from imperfect statistical assumptions to no a priori statistical assumptions at all. The performance and accuracy of the reconstructed mean and variance images are validated using Monte Carlo simulations. Experiments on phantom scans with a small animal PET scanner and real patient scans are also conducted to assess clinical potential.

  13. Argentine Population Genetic Structure: Large Variance in Amerindian Contribution

    Science.gov (United States)

    Seldin, Michael F.; Tian, Chao; Shigeta, Russell; Scherbarth, Hugo R.; Silva, Gabriel; Belmont, John W.; Kittles, Rick; Gamron, Susana; Allevi, Alberto; Palatnik, Simon A.; Alvarellos, Alejandro; Paira, Sergio; Caprarulo, Cesar; Guillerón, Carolina; Catoggio, Luis J.; Prigione, Cristina; Berbotto, Guillermo A.; García, Mercedes A.; Perandones, Carlos E.; Pons-Estel, Bernardo A.; Alarcon-Riquelme, Marta E.

    2011-01-01

    Argentine population genetic structure was examined using a set of 78 ancestry informative markers (AIMs) to assess the contributions of European, Amerindian, and African ancestry in 94 individual members of this population. Using the Bayesian clustering algorithm STRUCTURE, the mean European contribution was 78%, the Amerindian contribution was 19.4%, and the African contribution was 2.5%. Similar results were found using a weighted least mean squares method: European, 80.2%; Amerindian, 18.1%; and African, 1.7%. Consistent with previous studies, the current results showed very few individuals (four of 94) with greater than 10% African admixture. Notably, when individual admixture was examined, the Amerindian and European admixture showed a very large variance, and the individual Amerindian contribution ranged from 1.5 to 84.5% across the 94 Argentine subjects. These results indicate that admixture must be considered when clinical epidemiology or case-control genetic analyses are studied in this population. Moreover, the current study provides a set of informative SNPs that can be used to ascertain or control for this potentially hidden stratification. In addition, the large variance in admixture proportions in individual Argentine subjects shown by this study suggests that this population is appropriate for future admixture mapping studies. PMID:17177183

  14. The minimum wage in the Czech enterprises

    Directory of Open Access Journals (Sweden)

    Eva Lajtkepová

    2010-01-01

    Although the statutory minimum wage is not a new category, in the Czech Republic we encounter the definition and regulation of a minimum wage for the first time in the 1990 amendment to Act No. 65/1965 Coll., the Labour Code. The specific amount of the minimum wage and the conditions of its operation were subsequently determined by government regulation in February 1991. Since that time, the value of the minimum wage has been adjusted fifteen times (the last increase was in January 2007). The aim of this article is to present selected results of two surveys on the acceptance of the statutory minimum wage by Czech enterprises. The first survey makes use of data collected by questionnaire from 83 small and medium-sized enterprises in the South Moravia Region in 2005, the second of data from 116 enterprises across the entire Czech Republic (2007). The data have been processed by means of the standard methods of descriptive statistics and appropriate methods of statistical analysis (Spearman rank correlation coefficient, Kendall coefficient, χ2 independence test, Kruskal-Wallis test, and others).

  15. 46 CFR 148.04-23 - Unslaked lime in bulk.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 5 2010-10-01 2010-10-01 false Unslaked lime in bulk. 148.04-23 Section 148.04-23... HAZARDOUS MATERIALS IN BULK Special Additional Requirements for Certain Material § 148.04-23 Unslaked lime in bulk. (a) Unslaked lime in bulk must be transported in unmanned, all steel, double-hulled barges...

  16. Risk control and the minimum significant risk

    International Nuclear Information System (INIS)

    Seiler, F.A.; Alvarez, J.L.

    1996-01-01

    Risk management implies that the risk manager can, by his actions, exercise at least a modicum of control over the risk in question. In the terminology of control theory, a management action is a control signal imposed as feedback on the system to bring about a desired change in the state of the system. In the terminology of risk management, an action is taken to bring a predicted risk to lower values. Even if it is assumed that the management action taken is 100% effective and that the projected risk reduction is infinitely well known, there is a lower limit to the desired effects that can be achieved. It is based on the fact that all risks, such as the incidence of cancer, exhibit a degree of variability due to a number of extraneous factors such as age at exposure, sex, location, and some lifestyle parameters such as smoking or the consumption of alcohol. If the control signal is much smaller than the variability of the risk, the signal is lost in the noise and control is lost. This defines a minimum controllable risk based on the variability of the risk over the population considered. This quantity is the counterpart of the minimum significant risk which is defined by the uncertainties of the risk model. Both the minimum controllable risk and the minimum significant risk are evaluated for radiation carcinogenesis and are shown to be of the same order of magnitude. For a realistic management action, the assumptions of perfectly effective action and perfect model prediction made above have to be dropped, resulting in an effective minimum controllable risk which is determined by both risk limits. Any action below that effective limit is futile, but it is also unethical due to the ethical requirement of doing more good than harm. Finally, some implications of the effective minimum controllable risk on the use of the ALARA principle and on the evaluation of remedial action goals are presented

  17. 33 CFR 127.313 - Bulk storage.

    Science.gov (United States)

    2010-07-01

    ...) WATERFRONT FACILITIES WATERFRONT FACILITIES HANDLING LIQUEFIED NATURAL GAS AND LIQUEFIED HAZARDOUS GAS Waterfront Facilities Handling Liquefied Natural Gas Operations § 127.313 Bulk storage. (a) The operator...: (1) LNG. (2) LPG. (3) Vessel fuel. (4) Oily waste from vessels. (5) Solvents, lubricants, paints, and...

  18. Polymer-fullerene bulk heterojunction solar cells

    NARCIS (Netherlands)

    Janssen, RAJ; Hummelen, JC; Saricifti, NS

    Nanostructured phase-separated blends, or bulk heterojunctions, of conjugated polymers and fullerene derivatives form a very attractive approach to large-area, solid-state organic solar cells. The key feature of these cells is that they combine easy processing from solution on a variety of

  19. Bulk amorphous Mg-based alloys

    DEFF Research Database (Denmark)

    Pryds, Nini

    2004-01-01

    are discussed in this paper. On the basis of these measurements phase diagrams of the different systems were constructed. Finally, it is demonstrated that when pressing the bulk amorphous alloy onto metallic dies at temperatures within the supercooled liquid region, the alloy faithfully replicates the surface...

  20. Longitudinal bulk acoustic mass sensor

    DEFF Research Database (Denmark)

    Hales, Jan Harry; Teva, Jordi; Boisen, Anja

    2009-01-01

    Design, fabrication and characterization, in terms of mass sensitivity, is presented for a polycrystalline silicon longitudinal bulk acoustic cantilever. The device is operated in air at 51 MHz, resulting in a mass sensitivity of 100 Hz/fg (1 fg = 10⁻¹⁵ g). The initial characterization is cond...

  1. Bulk viscosity in 2SC quark matter

    International Nuclear Information System (INIS)

    Alford, Mark G; Schmitt, Andreas

    2007-01-01

    The bulk viscosity of three-flavour colour-superconducting quark matter originating from the nonleptonic process u + s ↔ u + d is computed. It is assumed that up and down quarks form Cooper pairs while the strange quark remains unpaired (2SC phase). A general derivation of the rate of strangeness production is presented, involving contributions from a multitude of different subprocesses, including subprocesses that involve different numbers of gapped quarks as well as creation and annihilation of particles in the condensate. The rate is then used to compute the bulk viscosity as a function of the temperature, for an external oscillation frequency typical of a compact star r-mode. We find that, for temperatures far below the critical temperature Tc for 2SC pairing, the bulk viscosity of colour-superconducting quark matter is suppressed relative to that of unpaired quark matter, but for T ≳ Tc/30 the colour-superconducting quark matter has a higher bulk viscosity. This is potentially relevant for the suppression of r-mode instabilities early in the life of a compact star.

  2. Combating wear in bulk solids handling plants

    Energy Technology Data Exchange (ETDEWEB)

    1986-01-01

    A total of five papers were presented at a seminar on problems of wear caused by the abrasive effects of materials in bulk handling. The papers cover the designer's viewpoint and practical experience from the steel, coal, cement and quarry industries, to create an awareness of possible solutions.

  3. THE OPTIMIZATION OF PLUSH YARNS BULKING PROCESS

    Directory of Open Access Journals (Sweden)

    VINEREANU Adam

    2014-05-01

    This paper presents the experiments that were conducted on the installation of continuous bulking and thermofixing “SUPERBA” type TVP-2S in order to optimize the plush yarn bulking process. Plush yarns Nm 6.5/2, made of a fibrous blend of 50% indigenous wool sort 41 and 50% PES, were considered. In the first stage, the installation performs a thermal treatment with a turbo-prevaporizer at a temperature lower than the thermofixing temperature, at atmospheric pressure, such that the plush yarns - deposited freely on a belt conveyor - bulk and contract uniformly. A mathematical modelling procedure was followed, working with a factorial program of the rotatable central composite type and two independent variables. After analyzing the parameters that have a direct influence on the bulking degree, the pre-vaporization temperature (coded x1, °C) and the velocity of the belt inside the pre-vaporizer (coded x2, m/min) were selected. The plush yarn diameter (coded y, mm) was chosen as the dependent variable. The coordinates of the optimal point were found, and this pair of values was then verified in practice. These coordinates are: x1,optim = 90 °C and x2,optim = 6.5 m/min. The conclusion is that the goal was accomplished: a good cover degree for double-plush carpets was obtained by reducing the number of tufts per unit surface.
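
    For readers unfamiliar with central composite designs, the sketch below fits a second-order response surface and locates its stationary point, which is how such optimal coordinates are typically obtained; the design points and responses are invented, not the paper's measurements.

```python
# Hedged sketch: quadratic response-surface fit over two factors, then
# solve grad y = 0 for the stationary (optimal) point.
import numpy as np

# Columns: pre-vaporization temperature x1 (deg C), belt velocity x2 (m/min)
X = np.array([[80, 5], [80, 8], [100, 5], [100, 8], [76, 6.5],
              [104, 6.5], [90, 4.4], [90, 8.6], [90, 6.5]])
y = np.array([3.1, 3.0, 3.2, 3.1, 3.0, 3.1, 3.0, 3.1, 3.4])  # diameter, mm

x1, x2 = X[:, 0], X[:, 1]
A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
b = np.linalg.lstsq(A, y, rcond=None)[0]    # b0, b1, b2, b11, b22, b12

H = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])   # Hessian of the model
g = -np.array([b[1], b[2]])
x_opt = np.linalg.solve(H, g)
print(f"optimal point: x1 = {x_opt[0]:.1f} degC, x2 = {x_opt[1]:.2f} m/min")
```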

  4. Characteristics of bulk liquid undercooling and crystallization ...

    Indian Academy of Sciences (India)

    Characteristics of bulk liquid undercooling and crystallization behaviors ... cooling rate is fixed, the change of undercooling depends on the melt processing tem- ... solidification and a deep knowledge of undercooling of ... evolution, to obtain the information for the nucleation and ..... When cooling rate is fixed, the change.

  5. A stereoscopic look into the bulk

    Energy Technology Data Exchange (ETDEWEB)

    Czech, Bartłomiej; Lamprou, Lampros; McCandlish, Samuel; Mosk, Benjamin [Stanford Institute for Theoretical Physics, Department of Physics, Stanford University,Stanford, CA 94305 (United States); Sully, James [Theory Group, SLAC National Accelerator LaboratoryMenlo Park, CA 94025 (United States)

    2016-07-26

    We present the foundation for a holographic dictionary with depth perception. The dictionary consists of natural CFT operators whose duals are simple, diffeomorphism-invariant bulk operators. The CFT operators of interest are the “OPE blocks,” contributions to the OPE from a single conformal family. In holographic theories, we show that the OPE blocks are dual at leading order in 1/N to integrals of effective bulk fields along geodesics or homogeneous minimal surfaces in anti-de Sitter space. One widely studied example of an OPE block is the modular Hamiltonian, which is dual to the fluctuation in the area of a minimal surface. Thus, our operators pave the way for generalizing the Ryu-Takayanagi relation to other bulk fields. Although the OPE blocks are non-local operators in the CFT, they admit a simple geometric description as fields in kinematic space — the space of pairs of CFT points. We develop the tools for constructing local bulk operators in terms of these non-local objects. The OPE blocks also allow for conceptually clean and technically simple derivations of many results known in the literature, including linearized Einstein’s equations and the relation between conformal blocks and geodesic Witten diagrams.

  6. Bulk viscous cosmology in early Universe

    Indian Academy of Sciences (India)

    The effect of bulk viscosity on the early evolution of the Universe for a spatially homogeneous and isotropic Robertson-Walker model is considered. Einstein's field equations are solved by using the `gamma-law' equation of state p = (γ − 1)ρ, where the adiabatic parameter γ(R) depends on the scale factor R of the model.

  7. Failure by fracture in bulk metal forming

    DEFF Research Database (Denmark)

    Silva, C.M.A.; Alves, Luis M.; Nielsen, Chris Valentin

    2015-01-01

    This paper revisits formability in bulk metal forming in the light of fundamental concepts of plasticity, ductile damage and crack opening modes. It proposes a new test to appraise the accuracy, reliability and validity of fracture loci associated with crack opening by tension and out-of-plane shear...

  8. Hexaferrite multiferroics: from bulk to thick films

    Science.gov (United States)

    Koutzarova, T.; Ghelev, Ch; Peneva, P.; Georgieva, B.; Kolev, S.; Vertruyen, B.; Closset, R.

    2018-03-01

    We report studies of the structural and microstructural properties of Sr3Co2Fe24O41 in bulk form and as thick films. The precursor powders for the bulk form were prepared following the sol-gel auto-combustion method. The prepared pellets were synthesized at 1200 °C to produce Sr3Co2Fe24O41. The XRD patterns of the bulk samples showed the characteristic peaks corresponding to the Z-type hexaferrite structure as the main phase, with second phases of CoFe2O4 and Sr3Fe2O7-x. The microstructure analysis of the cross-section of the bulk pellets revealed a hexagonal sheet structure. Large areas of packages of hexagonal sheets were observed in which the separate hexagonal particles were ordered along the c axis. Sr3Co2Fe24O41 thick films were deposited from a suspension containing the Sr3Co2Fe24O41 powder. The microstructural analysis of the thick films showed that the particles had the perfect hexagonal shape typical of hexaferrites.

  9. Minimum qualifications for nuclear criticality safety professionals

    International Nuclear Information System (INIS)

    Ketzlach, N.

    1990-01-01

    A Nuclear Criticality Technology and Safety Training Committee has been established within the U.S. Department of Energy (DOE) Nuclear Criticality Safety and Technology Project to review and, if necessary, develop standards for the training of personnel involved in nuclear criticality safety (NCS). The committee is exploring the need for developing a standard or other mechanism for establishing minimum qualifications for NCS professionals. The development of standards and regulatory guides for nuclear power plant personnel may serve as a guide in developing the minimum qualifications for NCS professionals

  10. A minimum achievable PV electrical generating cost

    International Nuclear Information System (INIS)

    Sabisky, E.S.

    1996-01-01

    The role and share of photovoltaic (PV) generated electricity in our nation's future energy arsenal is primarily dependent on its future production cost. This paper provides a framework for obtaining a minimum achievable electrical generating cost (a lower bound) for fixed, flat-plate photovoltaic systems. A cost of 2.8 ¢/kWh (1990 $) was derived for a plant located in the sunshine of the Southwestern USA, using a cost of money of 8%. In addition, a value of 22 ¢/Wp (1990 $) was estimated as a minimum module manufacturing cost/price.
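
    A lower-bound generating cost of this kind usually comes from a capital-recovery-factor calculation. The sketch below shows the arithmetic under invented plant inputs; it is not the paper's cost model.

```python
# Illustrative levelized-cost sketch for a fixed flat-plate PV plant,
# using a capital recovery factor at an 8% cost of money.
def lcoe_cents_per_kwh(capital_per_kw, om_per_kw_yr, cap_factor,
                       rate=0.08, life_yr=30):
    crf = rate * (1 + rate) ** life_yr / ((1 + rate) ** life_yr - 1)
    annual_kwh = 8760 * cap_factor                 # energy per kW of capacity
    annual_cost = crf * capital_per_kw + om_per_kw_yr
    return 100 * annual_cost / annual_kwh          # cents/kWh

# e.g. $1000/kW capital, $10/kW-yr O&M, 25% capacity factor (all invented)
print(f"{lcoe_cents_per_kwh(1000, 10, 0.25):.1f} cents/kWh")
```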

  11. Spatially tuned normalization explains attention modulation variance within neurons.

    Science.gov (United States)

    Ni, Amy M; Maunsell, John H R

    2017-09-01

    Spatial attention improves perception of attended parts of a scene, a behavioral enhancement accompanied by modulations of neuronal firing rates. These modulations vary in size across neurons in the same brain area. Models of normalization explain much of this variance in attention modulation with differences in tuned normalization across neurons (Lee J, Maunsell JHR. PLoS One 4: e4651, 2009; Ni AM, Ray S, Maunsell JHR. Neuron 73: 803-813, 2012). However, recent studies suggest that normalization tuning varies with spatial location both across and within neurons (Ruff DA, Alberts JJ, Cohen MR. J Neurophysiol 116: 1375-1386, 2016; Verhoef BE, Maunsell JHR. eLife 5: e17256, 2016). Here we show directly that attention modulation and normalization tuning do in fact covary within individual neurons, in addition to across neurons as previously demonstrated. We recorded the activity of isolated neurons in the middle temporal area of two rhesus monkeys as they performed a change-detection task that controlled the focus of spatial attention. Using the same two drifting Gabor stimuli and the same two receptive field locations for each neuron, we found that switching which stimulus was presented at which location affected both attention modulation and normalization in a correlated way within neurons. We present an equal-maximum-suppression spatially tuned normalization model that explains this covariance both across and within neurons: each stimulus generates equally strong suppression of its own excitatory drive, but its suppression of distant stimuli is typically less. This new model specifies how the tuned normalization associated with each stimulus location varies across space both within and across neurons, changing our understanding of the normalization mechanism and how attention modulations depend on this mechanism. NEW & NOTEWORTHY Tuned normalization studies have demonstrated that the variance in attention modulation size seen across neurons from the same cortical
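
    A generic tuned-normalization sketch, in the spirit of this literature but not the authors' exact parameterization, may clarify how attention and tuned suppression interact; all parameter values below are illustrative.

```python
# Each stimulus contributes excitatory drive E and suppressive drive S;
# attention multiplicatively scales the attended stimulus' drives.
def response(E1, E2, S1, S2, sigma=1.0, attn1=1.0, attn2=1.0):
    num = attn1 * E1 + attn2 * E2
    den = attn1 * S1 + attn2 * S2 + sigma
    return num / den

# Equal-maximum-suppression idea: each stimulus suppresses its own drive
# strongly (S_own ~ E), but suppresses the distant stimulus less.
E1, E2 = 10.0, 6.0
S1, S2 = 10.0, 3.0          # stimulus 2 suppresses less at this location
att_mod = response(E1, E2, S1, S2, attn1=2.0) / response(E1, E2, S1, S2)
print(f"attention modulation ratio: {att_mod:.2f}")
```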

  12. Kriging with Unknown Variance Components for Regional Ionospheric Reconstruction

    Directory of Open Access Journals (Sweden)

    Ling Huang

    2017-02-01

    Ionospheric delay effect is a critical issue that limits the accuracy of precise Global Navigation Satellite System (GNSS) positioning and navigation for single-frequency users, especially in mid- and low-latitude regions where variations in the ionosphere are larger. Kriging spatial interpolation techniques have recently been introduced to model the spatial correlation and variability of the ionosphere; they intrinsically assume that the ionosphere field is stochastically stationary but do not take random observational errors into account. In this paper, by treating the spatial statistical information on the ionosphere as prior knowledge and based on Total Electron Content (TEC) semivariogram analysis, we use Kriging techniques to spatially interpolate TEC values. By assuming that the stochastic models of both the ionospheric signals and the measurement errors are known only up to some unknown factors, we propose a new Kriging spatial interpolation method with unknown variance components for both the ionospheric signals and the TEC measurements. Variance component estimation has been integrated with Kriging to reconstruct regional ionospheric delays. The method has been applied to data from the Crustal Movement Observation Network of China (CMONOC) and compared with ordinary Kriging and polynomial interpolations with spherical cap harmonic functions, polynomial functions and low-degree spherical harmonic functions. The statistics of the results indicate that the daily ionospheric variations during the experimental period characterized by the proposed approach are in good agreement with the other methods, ranging from 10 to 80 TEC Units (TECU, 1 TECU = 1 × 10^16 electrons/m²) with an overall mean of 28.2 TECU. The proposed method can produce more appropriate estimations whose general TEC level is as smooth as ordinary Kriging but with a smaller standard deviation, around 3 TECU, than the others. The residual results show that the interpolation precision of the
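
    As a concrete reference point for the Kriging machinery discussed above, the sketch below solves a tiny ordinary-kriging system for TEC interpolation with an assumed spherical semivariogram; stations, observations and semivariogram parameters are invented.

```python
# Minimal ordinary-kriging sketch: kriging weights at a target point.
import numpy as np

def spherical_gamma(h, nugget=0.1, sill=1.0, rng_=10.0):
    h = np.asarray(h, dtype=float)
    g = nugget + (sill - nugget) * (1.5 * h / rng_ - 0.5 * (h / rng_) ** 3)
    return np.where(h < rng_, g, sill)

pts = np.array([[0, 0], [5, 1], [2, 6], [8, 8]])     # station coordinates
tec = np.array([25.0, 30.0, 28.0, 35.0])             # TEC observations
target = np.array([4.0, 4.0])

n = len(pts)
d = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
A = np.ones((n + 1, n + 1))
A[:n, :n] = spherical_gamma(d)                       # semivariogram matrix
A[-1, -1] = 0.0                                      # unbiasedness constraint
b = np.ones(n + 1)
b[:n] = spherical_gamma(np.linalg.norm(pts - target, axis=1))

w = np.linalg.solve(A, b)[:n]   # weights (last entry: Lagrange multiplier)
print(f"interpolated TEC at target: {w @ tec:.1f} TECU")
```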

  13. Integration of bulk piezoelectric materials into microsystems

    Science.gov (United States)

    Aktakka, Ethem Erkan

    Bulk piezoelectric ceramics, compared to deposited piezoelectric thin-films, provide greater electromechanical coupling and charge capacity, which are highly desirable in many MEMS applications. In this thesis, a technology platform is developed for wafer-level integration of bulk piezoelectric substrates on silicon, with a final film thickness of 5-100 µm. The characterized processes include reliable low-temperature (200 °C) AuIn diffusion bonding and parylene bonding of bulk-PZT on silicon, wafer-level lapping of bulk-PZT with high uniformity (±0.5 µm), and low-damage micro-machining of PZT films via dicing-saw patterning, laser ablation, and wet-etching. Preservation of ferroelectric and piezoelectric properties is confirmed with hysteresis and piezo-response measurements. The introduced technology offers higher material quality and unique advantages in fabrication flexibility over existing piezoelectric film deposition methods. In order to confirm the preserved bulk properties in the final film, diaphragm and cantilever beam actuators operating in the transverse mode are designed, fabricated and tested. The diaphragm structure and electrode shapes/sizes are optimized for maximum deflection through finite-element simulations. During tests of fabricated devices, greater than 12 µm peak-to-peak displacement is obtained by actuation of a 1 mm² diaphragm at 111 kHz with integration of a 50-80% efficient power management IC, which incorporates a supply-independent bias circuitry, an active diode for low-dropout rectification, a bias-flip system for higher efficiency, and a trickle battery charger. The overall system does not require a pre-charged battery, and has power consumption of <1 µW in active mode (measured) and <5 pA in sleep mode (simulated). Under 1 g vibration at 155 Hz, a 70 mF ultra-capacitor is charged from 0 V to 1.85 V in 50 minutes.

  14. Estimation of measurement variance in the context of environment statistics

    Science.gov (United States)

    Maiti, Pulakesh

    2015-02-01

    The object of environment statistics is to provide information on the environment, on its most important changes over time and across locations, and to identify the main factors that influence them. Ultimately, environment statistics would be required to produce higher-quality statistical information. For this, timely, reliable and comparable data are needed. The lack of proper and uniform definitions and of unambiguous classifications poses serious problems for procuring good-quality data, and these problems cause measurement errors. We consider the problem of estimating measurement variance so that some measures may be adopted to improve the quality of data on environmental goods and services and on value statements in economic terms. The measurement technique considered here is that of employing personal interviewers, and the sampling considered is two-stage sampling.
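
    To make the measurement-variance idea concrete, the sketch below separates interviewer (measurement) variance from residual variance with a balanced one-way ANOVA decomposition, as one might when interviewers are the first-stage units of a two-stage design. The data and variance components are simulated; this is not the paper's estimator.

```python
# Method-of-moments (ANOVA) estimators for a balanced one-way design.
import numpy as np

rng = np.random.default_rng(1)
n_interviewers, n_resp = 20, 15
true_sigma2_int, true_sigma2_e = 0.5, 2.0

effects = rng.normal(0, np.sqrt(true_sigma2_int), n_interviewers)
data = effects[:, None] + rng.normal(0, np.sqrt(true_sigma2_e),
                                     (n_interviewers, n_resp))

msb = n_resp * np.var(data.mean(axis=1), ddof=1)   # between-interviewer MS
msw = np.mean(np.var(data, axis=1, ddof=1))        # within-interviewer MS
sigma2_e_hat = msw
sigma2_int_hat = (msb - msw) / n_resp
print(f"interviewer variance ~ {sigma2_int_hat:.2f}, "
      f"residual ~ {sigma2_e_hat:.2f}")
```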

  15. Risk Management - Variance Minimization or Lower Tail Outcome Elimination

    DEFF Research Database (Denmark)

    Aabo, Tom

    2002-01-01

    This paper illustrates the profound difference between a risk management strategy of variance minimization and a risk management strategy of lower tail outcome elimination. Risk managers concerned about the variability of cash flows will tend to center their hedge decisions on their best guess on future cash flows (the budget), while risk managers concerned about costly lower tail outcomes will hedge (considerably) less depending on the level of uncertainty. A risk management strategy of lower tail outcome elimination is in line with theoretical recommendations in a corporate value-adding perspective. A cross-case study of blue-chip industrial companies partly supports the empirical use of a risk management strategy of lower tail outcome elimination but does not exclude other factors from (co-)driving the observations.

  16. Draft no-migration variance petition. Volume 1

    International Nuclear Information System (INIS)

    1995-01-01

    The Department of Energy is responsible for the disposition of transuranic (TRU) waste generated by national defense-related activities. Approximately 2.6 million cubic feet of this waste has been generated and is stored at various facilities across the country. The Waste Isolation Pilot Plant (WIPP) was sited and constructed to meet stringent disposal requirements. In order to permanently dispose of TRU waste, the DOE has elected to petition the US EPA for a variance from the Land Disposal Restrictions of RCRA. This document fulfills the reporting requirements for the petition. This report is Volume 1, which discusses the regulatory framework, site characterization, facility description, waste description, environmental impact analysis, monitoring, quality assurance, long-term compliance analysis, and regulatory compliance assessment.

  17. Static models, recursive estimators and the zero-variance approach

    KAUST Repository

    Rubino, Gerardo

    2016-01-07

    When evaluating dependability aspects of complex systems, most models belong to the static world, where time is not an explicit variable. These models suffer from the same problems as dynamic ones (stochastic processes), such as the frequent combinatorial explosion of the state spaces. In the Monte Carlo domain, one of the most significant difficulties is the rare event situation. In this talk, we describe this context and a recent technique that appears to be at the top performance level in the area, where we combined ideas that lead to very fast estimation procedures with another approach called zero-variance approximation. Both ideas produced a very efficient method that has the right theoretical property concerning robustness, the Bounded Relative Error one. Some examples illustrate the results.
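
    The zero-variance idea can be seen on a toy rare-event problem: if the sampling distribution is proportional to the integrand, the likelihood-ratio estimator has zero variance. The sketch below, with invented numbers, shows a case where a shifted sampler achieves this exactly; it is not the talk's method.

```python
# Estimate p = P(X > t) for X ~ Exp(1) via importance sampling.
import numpy as np

rng = np.random.default_rng(2)
t, n = 20.0, 10_000                 # rare threshold, sample size
p_true = np.exp(-t)

# Crude Monte Carlo essentially never sees the event.
crude = (rng.exponential(1.0, n) > t).mean()

# Sampling X = t + Exp(1) makes the likelihood ratio f/g = exp(-t) constant
# on the event region -- the zero-variance change of measure for this toy.
x = t + rng.exponential(1.0, n)
weights = np.exp(-x) / np.exp(-(x - t))      # f(x) / g(x) on x > t
estimate = weights.mean()

print(f"true {p_true:.3e}, crude {crude:.3e}, IS {estimate:.3e}")
print(f"IS relative error: {weights.std(ddof=1)/np.sqrt(n)/estimate:.2e}")
```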

  18. Batch variation between branchial cell cultures: An analysis of variance

    DEFF Research Database (Denmark)

    Hansen, Heinz Johs. Max; Grosell, M.; Kristensen, L.

    2003-01-01

    We present in detail how a statistical analysis of variance (ANOVA) is used to sort out the effect of an unexpected batch-to-batch variation between cell cultures. Two separate cultures of rainbow trout branchial cells were grown on permeable filter supports ("inserts"). They were supposed ... and by introducing the observed difference between batches as one of the factors in an expanded three-dimensional ANOVA, we were able to overcome an otherwise crucial lack of sufficiently reproducible duplicate values. We could thereby show that the effect of changing the apical medium was much more marked when the radioactive lipid precursors were added on the apical, rather than on the basolateral, side. The insert cell cultures were obviously polarized. We argue that it is not reasonable to reject troublesome experimental results when we do not know a priori that something went wrong. The ANOVA is a very useful
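
    A minimal sketch of the batch-as-a-factor idea follows, using simulated data and invented factor names; the model formula is an assumption in the spirit of the study, not its actual analysis.

```python
# Three-factor ANOVA with batch included as an explicit factor.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
batches, sides = ["A", "B"], ["apical", "basolateral"]
media, reps = ["ctrl", "test"], 4

rows = []
for b in batches:
    for s in sides:
        for m in media:
            for _ in range(reps):
                y = (1.0 + (b == "B") * 0.8                    # batch shift
                     + (s == "apical") * (m == "test") * 1.5   # interaction
                     + rng.normal(0, 0.3))
                rows.append((b, s, m, y))
df = pd.DataFrame(rows, columns=["batch", "side", "medium", "uptake"])

model = smf.ols("uptake ~ C(batch) + C(side) * C(medium)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # batch absorbed, interaction tested
```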

  19. Ant Colony Optimization for Markowitz Mean-Variance Portfolio Model

    Science.gov (United States)

    Deng, Guang-Feng; Lin, Woo-Tsong

    This work presents Ant Colony Optimization (ACO), which was initially developed as a meta-heuristic for combinatorial optimization, for solving the cardinality-constrained Markowitz mean-variance portfolio model (a nonlinear mixed quadratic programming problem). To our knowledge, an efficient algorithmic solution for this problem has not been proposed until now. Using heuristic algorithms in this case is imperative. Numerical solutions are obtained for five analyses of weekly price data for the following indices for the period March 1992 to September 1997: Hang Seng 31 in Hong Kong, DAX 100 in Germany, FTSE 100 in the UK, S&P 100 in the USA and Nikkei 225 in Japan. The test results indicate that ACO is much more robust and effective than particle swarm optimization (PSO), especially for low-risk investment portfolios.
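
    The paper's ACO implementation is not reproduced here, but the sketch below evaluates the cardinality-constrained mean-variance objective over random K-subsets, which is the search space a metaheuristic such as ACO explores; returns, cardinality and risk aversion are invented, and no long-only constraint is imposed.

```python
# Cardinality-constrained mean-variance objective, random-subset baseline.
import numpy as np

rng = np.random.default_rng(4)
n_assets, K, lam = 30, 10, 0.5        # universe, cardinality, risk aversion

R = rng.normal(0.001, 0.02, (500, n_assets))  # simulated weekly returns
mu, Sigma = R.mean(axis=0), np.cov(R, rowvar=False)

def objective(subset):
    # Minimum-variance weights within the subset: w ~ Sigma^-1 1, normalized.
    S = Sigma[np.ix_(subset, subset)]
    w = np.linalg.solve(S, np.ones(K))
    w /= w.sum()
    return lam * (w @ S @ w) - (1 - lam) * (mu[subset] @ w)

best = min(objective(sorted(rng.choice(n_assets, K, replace=False)))
           for _ in range(2000))
print(f"best objective over 2000 random K-subsets: {best:.6f}")
```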

  20. Cosmic variance in inflation with two light scalars

    Energy Technology Data Exchange (ETDEWEB)

    Bonga, Béatrice; Brahma, Suddhasattwa; Deutsch, Anne-Sylvie; Shandera, Sarah, E-mail: bpb165@psu.edu, E-mail: suddhasattwa.brahma@gmail.com, E-mail: asdeutsch@psu.edu, E-mail: shandera@gravity.psu.edu [Institute for Gravitation and the Cosmos and Physics Department, The Pennsylvania State University, University Park, PA, 16802 (United States)

    2016-05-01

    We examine the squeezed limit of the bispectrum when a light scalar with arbitrary non-derivative self-interactions is coupled to the inflaton. We find that when the hidden sector scalar is sufficiently light (m ≲ 0.1H), the coupling between long and short wavelength modes from the series of higher order correlation functions (from arbitrary order contact diagrams) causes the statistics of the fluctuations to vary in sub-volumes. This means that observations of primordial non-Gaussianity cannot be used to uniquely reconstruct the potential of the hidden field. However, the local bispectrum induced by mode-coupling from these diagrams always has the same squeezed limit, so the field's locally determined mass is not affected by this cosmic variance.

  1. Discretization of space and time: determining the values of minimum length and minimum time

    OpenAIRE

    Roatta , Luca

    2017-01-01

    Assuming that space and time can only have discrete values, we obtain the expressions for the minimum length and the minimum time interval. These values are found to coincide exactly with the Planck length and the Planck time, except for the presence of h instead of ħ.
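
    If the minimum length and time coincide with the Planck expressions with h in place of ħ, as the abstract states, they would read as follows (a sketch of the statement, not a derivation):

```latex
% Planck-type expressions with h in place of hbar
l_{\min} = \sqrt{\frac{h G}{c^{3}}}\,, \qquad
t_{\min} = \sqrt{\frac{h G}{c^{5}}} = \frac{l_{\min}}{c}
```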

  2. Variance, Violence, and Democracy: A Basic Microeconomic Model of Terrorism

    Directory of Open Access Journals (Sweden)

    John A. Sautter

    2010-01-01

    Much of the debate surrounding contemporary studies of terrorism focuses upon transnational terrorism. However, historical and contemporary evidence suggests that domestic terrorism is a more prevalent and pressing concern. A formal microeconomic model of terrorism is utilized here to understand acts of political violence in a domestic context within the domain of democratic governance. This article builds a very basic microeconomic model of terrorist decision making to hypothesize how a democratic government might influence the sorts of strategies that terrorists use. Mathematical models have been used to explain terrorist behavior in the past. However, the bulk of inquiries in this area has focused only on the relationship between terrorists and the government, or amongst terrorists themselves. Central to the interpretation of the terrorist conflict presented here is the idea that voters (or citizens) are also one of the important determinants of how a government will respond to acts of terrorism.

  3. MINIMUM AREAS FOR ELEMENTARY SCHOOL BUILDING FACILITIES.

    Science.gov (United States)

    Pennsylvania State Dept. of Public Instruction, Harrisburg.

    Minimum area space requirements in square footage for elementary school building facilities are presented, including facilities for instructional use, general use, and service use. Library, cafeteria, kitchen, storage, and multipurpose rooms should be sized for the projected enrollment of the building in accordance with the projection under the…

  4. Dirac's minimum degree condition restricted to claws

    NARCIS (Netherlands)

    Broersma, Haitze J.; Ryjacek, Z.; Schiermeyer, I.

    1997-01-01

    Let G be a graph on n ≥ 3 vertices. Dirac's minimum degree condition is the condition that all vertices of G have degree at least n/2. This is a well-known sufficient condition for the existence of a Hamilton cycle in G. We give related sufficiency conditions for the existence of a Hamilton cycle or a

  5. 7 CFR 33.10 - Minimum requirements.

    Science.gov (United States)

    2010-01-01

    ... Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... ISSUED UNDER AUTHORITY OF THE EXPORT APPLE ACT Regulations § 33.10 Minimum requirements. No person shall... shipment of apples to any foreign destination unless: (a) Apples grade at least U.S. No. 1 or U.S. No. 1...

  6. Minimum Risk Pesticide: Definition and Product Confirmation

    Science.gov (United States)

    Minimum risk pesticides pose little to no risk to human health or the environment and therefore are not subject to regulation under FIFRA. EPA does not do any pre-market review for such products or labels, but violative products are subject to enforcement.

  7. The Minimum Distance of Graph Codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Justesen, Jørn

    2011-01-01

    We study codes constructed from graphs where the code symbols are associated with the edges and the symbols connected to a given vertex are restricted to be codewords in a component code. In particular we treat such codes from bipartite expander graphs coming from Euclidean planes and other...... geometries. We give results on the minimum distances of the codes....

  8. Minimum maintenance solar pump | Assefa | Zede Journal

    African Journals Online (AJOL)

    A minimum maintenance solar pump (MMSP) has been simulated for Addis Ababa, taking solar meteorological data of global radiation, diffuse radiation and ambient air temperature as input to a computer program that has been developed. To increase the performance of the solar pump, by trapping the long-wave ...

  9. Context quantization by minimum adaptive code length

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Wu, Xiaolin

    2007-01-01

    Context quantization is a technique to deal with the issue of context dilution in high-order conditional entropy coding. We investigate the problem of context quantizer design under the criterion of minimum adaptive code length. A property of such context quantizers is derived for binary symbols....

  10. 7 CFR 35.13 - Minimum quantity.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Minimum quantity. 35.13 Section 35.13 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... part, transport or receive for transportation to any foreign destination, a shipment of 25 packages or...

  11. Minimum impact house prototype for sustainable building

    NARCIS (Netherlands)

    Götz, E.; Klenner, K.; Lantelme, M.; Mohn, A.; Sauter, S.; Thöne, J.; Zellmann, E.; Drexler, H.; Jauslin, D.

    2010-01-01

    The Minihouse is a prototype for a sustainable townhouse. On a site of only 29 sqm it offers 154 sqm of urban life. The project 'Minimum Impact House' addresses two important questions: How do we provide living space in the cities without destroying the landscape? How to improve sustainably the

  12. 49 CFR 639.27 - Minimum criteria.

    Science.gov (United States)

    2010-10-01

    ... dollar value to any non-financial factors that are considered by using performance-based specifications..., DEPARTMENT OF TRANSPORTATION CAPITAL LEASES Cost-Effectiveness § 639.27 Minimum criteria. In making the... used where possible and appropriate: (a) Operation costs; (b) Reliability of service; (c) Maintenance...

  13. Computing nonsimple polygons of minimum perimeter

    NARCIS (Netherlands)

    Fekete, S.P.; Haas, A.; Hemmer, M.; Hoffmann, M.; Kostitsyna, I.; Krupke, D.; Maurer, F.; Mitchell, J.S.B.; Schmidt, A.; Schmidt, C.; Troegel, J.

    2018-01-01

    We consider the Minimum Perimeter Polygon Problem (MP3): for a given set V of points in the plane, find a polygon P with holes that has vertex set V , such that the total boundary length is smallest possible. The MP3 can be considered a natural geometric generalization of the Traveling Salesman

  14. Minimum-B mirrors plus EBT principles

    International Nuclear Information System (INIS)

    Yoshikawa, S.

    1983-01-01

    Electrons are heated at the minimum-B location(s) created by the multipole field and the toroidal field. The resulting hot electrons can assist plasma confinement by (1) providing mirror confinement, (2) creating azimuthally symmetric toroidal confinement, or (3) creating a modified bumpy torus.

  15. Completeness properties of the minimum uncertainty states

    Science.gov (United States)

    Trifonov, D. A.

    1993-01-01

    The completeness properties of the Schrödinger minimum uncertainty states (SMUS) and of some of their subsets are considered. The invariant measures and the resolution-of-unity measures for the set of SMUS are constructed, and the representation of squeezing and correlating operators and of SMUS as superpositions of Glauber coherent states on the real line is elucidated.

  16. Minimum Description Length Shape and Appearance Models

    DEFF Research Database (Denmark)

    Thodberg, Hans Henrik

    2003-01-01

    The Minimum Description Length (MDL) approach to shape modelling is reviewed. It solves the point correspondence problem of selecting points on shapes defined as curves so that the points correspond across a data set. An efficient numerical implementation is presented and made available as open s...

  17. Faster Fully-Dynamic minimum spanning forest

    DEFF Research Database (Denmark)

    Holm, Jacob; Rotenberg, Eva; Wulff-Nilsen, Christian

    2015-01-01

    We give a new data structure for the fully-dynamic minimum spanning forest problem in simple graphs. Edge updates are supported in O(log^4 n / log log n) expected amortized time per operation, improving the O(log^4 n) amortized bound of Holm et al. (STOC'98, JACM'01). We also provide a deterministic data...

  18. Minimum Wage Effects throughout the Wage Distribution

    Science.gov (United States)

    Neumark, David; Schweitzer, Mark; Wascher, William

    2004-01-01

    This paper provides evidence on a wide set of margins along which labor markets can adjust in response to increases in the minimum wage, including wages, hours, employment, and ultimately labor income. Not surprisingly, the evidence indicates that low-wage workers are most strongly affected, while higher-wage workers are little affected. Workers…

  19. Asymptotics for the minimum covariance determinant estimator

    NARCIS (Netherlands)

    Butler, R.W.; Davies, P.L.; Jhun, M.

    1993-01-01

    Consistency is shown for the minimum covariance determinant (MCD) estimators of multivariate location and scale and asymptotic normality is shown for the former. The proofs are made possible by showing a separating ellipsoid property for the MCD subset of observations. An analogous property is shown

  20. Bulk solitary waves in elastic solids

    Science.gov (United States)

    Samsonov, A. M.; Dreiden, G. V.; Semenova, I. V.; Shvartz, A. G.

    2015-10-01

    A short and object-oriented conspectus of bulk solitary wave theory, numerical simulations and real experiments in condensed matter is given. Upon a brief description of the soliton history and development we focus on bulk solitary waves of strain, also known as waves of density and, sometimes, as elastic and/or acoustic solitons. We consider the problem of nonlinear bulk wave generation and detection in basic structural elements, rods, plates and shells, that are exhaustively studied and widely used in physics and engineering. However, this understanding is mostly valid for linear elasticity, whereas the dynamic nonlinear theory of these elements is still far from being completed. In order to show how the nonlinear waves can be used in various applications, we studied solitary elastic wave propagation along lengthy wave guides, and remarkably small attenuation of elastic solitons was proven in physical experiments. Both the theory and the generation of strain solitons in a shell, however, remained unsolved problems until recently, and we consider in more detail the nonlinear bulk wave propagation in a shell. We studied an axially symmetric deformation of an infinite nonlinearly elastic cylindrical shell without torsion. The problem for bulk longitudinal waves is shown to be reducible to a single equation, if a relation between the transversal displacement and the longitudinal strain is found. It is found that both the 1+1D and even the 1+2D problems for long travelling waves in nonlinear solids can be reduced to the Weierstrass equation for elliptic functions, which provides the solitary wave solutions as appropriate limits. We show that the accuracy of the boundary conditions on free lateral surfaces is of crucial importance for the solution, derive a single equation for the longitudinal nonlinear strain wave and show that the equation has, amongst others, a bidirectional solitary wave solution, which led us to successful physical experiments. We first observed the compression solitary wave in the

  1. Planetary tides during the Maunder sunspot minimum

    International Nuclear Information System (INIS)

    Smythe, C.M.; Eddy, J.A.

    1977-01-01

    Sun-centered planetary conjunctions and tidal potentials are here constructed for the AD 1645 to 1715 period of sunspot absence, referred to as the 'Maunder Minimum'. These are found to be effectively indistinguishable from patterns of conjunctions and power spectra of tidal potential in the present era of a well-established 11-year sunspot cycle. This places a new and difficult constraint on any tidal theory of sunspot formation. Problems arise in any direct gravitational theory due to the apparently insufficient forces and tidal heights involved. Proponents of the tidal hypothesis usually revert to trigger mechanisms, which are difficult to criticise or test by observation. Any tidal theory rests on the evidence of continued sunspot periodicity and the substantiation of a prolonged period of solar anomaly in the historical past. The Maunder Minimum was the most drastic change in the behaviour of solar activity in the last 300 years; sunspots virtually disappeared for a 70-year period and the 11-year cycle was probably absent. During that time, however, the nine planets were all in their orbits, and planetary conjunctions and tidal potentials were indistinguishable from those of the present era, in which the 11-year cycle is well established. This provides good evidence against the tidal theory. The pattern of planetary tidal forces during the Maunder Minimum was reconstructed to investigate the possibility that the multiple planet forces somehow fortuitously cancelled at the time, that is, that the positions of the slower-moving planets in the 17th and early 18th centuries were such that conjunctions and tidal potentials were reduced in number and force. There was no striking dissimilarity between the time of the Maunder Minimum and any period investigated. The failure of planetary conjunction patterns to reflect the drastic drop in sunspots during the Maunder Minimum casts doubt on the tidal theory of solar activity, but a more quantitative test

  2. Raman spectroscopic assessment of degree of conversion of bulk-fill resin composites--changes at 24 hours post cure.

    Science.gov (United States)

    Par, M; Gamulin, O; Marovic, D; Klaric, E; Tarle, Z

    2015-01-01

    The aim of this study was to determine the degree of conversion (DC) of solid and flowable bulk-fill composites immediately and after 24 hours, and to investigate the variations of DC at the surface and at depths up to 4 mm. Eight bulk-fill composites (Tetric EvoCeram Bulk Fill [shades IVA and IVB], Quixfil, X-tra fil, Venus Bulk Fill, X-tra Base, SDR, Filtek Bulk Fill) were investigated, and two conventional composites (GrandioSO, X-Flow) were used as controls. The samples (n = 5) were cured for 20 seconds with an irradiance of 1090 mW/cm². Raman spectroscopic measurements were made immediately after curing on the sample surfaces and, after 24 hours of dark storage, at the surface and at incremental depths up to 4 mm. Mean DC values were compared using repeated-measures analysis of variance (ANOVA) and the t-test for dependent samples. Surface DC values immediately after curing ranged from 59.1% to 71.8%, while the 24-hour post-cure values ranged from 71.3% to 86.1%. A significant increase of DC was observed 24 hours post cure for all bulk-fill composites, amounting to between 11.3% and 16.9%. The decrease of DC through depths up to 4 mm varied widely among bulk-fill composites, ranging from 2.9% to 19.7%. All bulk-fill composites presented a considerable 24-hour post-cure DC increase and a clinically acceptable DC at depths up to 4 mm. Conventional control composites were sufficiently cured only up to 2 mm, despite significant post-cure polymerization.

  3. Calm water resistance prediction of a bulk carrier using Reynolds averaged Navier-Stokes based solver

    Science.gov (United States)

    Rahaman, Md. Mashiur; Islam, Hafizul; Islam, Md. Tariqul; Khondoker, Md. Reaz Hasan

    2017-12-01

    Maneuverability and resistance prediction with suitable accuracy is essential for optimum ship design and propulsion power prediction. This paper aims at providing some of the maneuverability characteristics of a Japanese bulk carrier model, JBC, in calm water using two computational fluid dynamics solvers, SHIP Motion and OpenFOAM. The solvers are based on the Reynolds-averaged Navier-Stokes (RANS) method and solve structured grids using the Finite Volume Method (FVM). This paper presents the numerical results of the calm water test for the JBC model alongside available experimental results. The calm water test results include the total drag coefficient, average sinkage, and trim data. Visualization data for the pressure distribution on the hull surface and the free water surface have also been included. The paper concludes that the presented solvers predict the resistance and maneuverability characteristics of the bulk carrier with reasonable accuracy while utilizing minimum computational resources.

  4. Fluctuation effects in bulk polymer phase behavior

    International Nuclear Information System (INIS)

    Bates, F.S.; Rosedale, J.H.; Stepanek, P.; Lodge, T.P.; Wiltzius, P.; Hjelm R, Jr.; Fredrickson, G.H.

    1990-01-01

    Bulk polymer-polymer, and block copolymer, phase behaviors have traditionally been interpreted using mean-field theories. Recent small-angle neutron scattering (SANS) studies of critical phenomena in model binary polymer mixtures confirm that non-mean-field behavior is restricted to a narrow range of temperatures near the critical point, in close agreement with the Ginzburg criterion. In contrast, strong deviations from mean-field behavior are evident in SANS and rheological measurements on model block copolymers more than 50 °C above the order-disorder transition (ODT), which can be attributed to sizeable composition fluctuations. Such fluctuation effects undermine the mean-field assumption conventionally applied to bulk polymers, and result in qualitative changes in phase behavior, such as the elimination of a thermodynamic stability limit in these materials. The influence of fluctuation effects on block copolymer and binary mixture phase behavior is compared and contrasted in this presentation

  5. Nuclear Matter Bulk Parameter Scales and Correlations

    International Nuclear Information System (INIS)

    Santos, B. M.; Delfino, A.; Dutra, M.; Lourenço, O.

    2015-01-01

    We study the arising of correlations among some isovector bulk parameters in nonrelativistic and relativistic hadronic mean-field models. For the former, we investigate correlations in the nonrelativistic (NR) limit of relativistic point-coupling models. We provide analytical correlations, for the NR limit model, between the symmetry energy and its derivatives, namely, the symmetry energy slope, curvature, skewness and fourth-order derivative, discussing the conditions under which they are linear. We also show that some correlations presented in the NR limit model are reproduced by relativistic models presenting cubic and quartic self-interactions in the scalar field. As a direct application of such linear correlations, we remark on their association with possible crossing points in the density dependence of the linearly correlated bulk parameter. (author)

  6. Structural determinants in the bulk heterojunction.

    Science.gov (United States)

    Acocella, Angela; Höfinger, Siegfried; Haunschmid, Ernst; Pop, Sergiu C; Narumi, Tetsu; Yasuoka, Kenji; Yasui, Masato; Zerbetto, Francesco

    2018-02-21

    Photovoltaics is one of the key areas in renewable energy research with remarkable progress made every year. Here we consider the case of a photoactive material and study its structural composition and the resulting consequences for the fundamental processes driving solar energy conversion. A multiscale approach is used to characterize essential molecular properties of the light-absorbing layer. A selection of bulk-representative pairs of donor/acceptor molecules is extracted from the molecular dynamics simulation of the bulk heterojunction and analyzed at increasing levels of detail. Significantly increased ground state energies together with an array of additional structural characteristics are identified that all point towards an auxiliary role of the material's structural organization in mediating charge-transfer and -separation. Mechanistic studies of the type presented here can provide important insights into fundamental principles governing solar energy conversion in next-generation photovoltaic devices.

  7. ANFO bulk loading in coal mines

    Energy Technology Data Exchange (ETDEWEB)

    Gajjar, A.

    1987-08-01

    With India's total coal production projected to increase from 152 to 237 million tons by 1990, net additional production from new mines must be even greater because of substantial depletion at existing mines. This article discusses the best possible application of explosive techniques in open-cast coal mines to economize production cost. The most energy-efficient and safest explosive is ANFO (ammonium nitrate, fuel oil); however, manual charging with ANFO is not practical at this scale. Therefore, the solution is the application of bulk-loading systems for ANFO in giant mining operations. The cost of blasting per ton of coal production in India is in the range of Rs 25. The author suggests it will be the responsibility of mining engineers to see that the ANFO-based bulk-loading system is implemented and the cost of production per ton reduced to Rs 19.50.

  8. Nonlinear AC susceptibility, surface and bulk shielding

    Science.gov (United States)

    van der Beek, C. J.; Indenbom, M. V.; D'Anna, G.; Benoit, W.

    1996-02-01

    We calculate the nonlinear AC response of a thin superconducting strip in a perpendicular field, shielded by an edge current due to the geometrical barrier. A comparison with the results for infinite samples in a parallel field, screened by a surface barrier, and with those for screening by a bulk current in the critical state, shows that the AC response due to a barrier has general features that are independent of geometry and significantly different from those for screening by a bulk current in the critical state. Consequently, the nonlinear (global) AC susceptibility can be used to determine the origin of magnetic irreversibility. A comparison with experiments on a Bi2Sr2CaCu2O8+δ crystal shows that in this material, the low-frequency AC screening at high temperature is mainly due to screening by an edge current, and that this is the unique source of the nonlinear magnetic response at temperatures above 40 K.

  9. Multilayer Integrated Film Bulk Acoustic Resonators

    CERN Document Server

    Zhang, Yafei

    2013-01-01

    Multilayer Integrated Film Bulk Acoustic Resonators mainly introduces the theory, design, fabrication technology and application of a recently developed new type of device, multilayer integrated film bulk acoustic resonators, at the micro and nano scale involving microelectronic devices, integrated circuits, optical devices, sensors and actuators, acoustic resonators, micro-nano manufacturing, multilayer integration, device theory and design principles, etc. These devices can work at very high frequencies by using the newly developed theory, design, and fabrication technology of nano and micro devices. Readers in fields of IC, electronic devices, sensors, materials, and films etc. will benefit from this book by learning the detailed fundamentals and potential applications of these advanced devices. Prof. Yafei Zhang is the director of the Ministry of Education’s Key Laboratory for Thin Films and Microfabrication Technology, PRC; Dr. Da Chen was a PhD student in Prof. Yafei Zhang’s research group.

  10. Internal shear cracking in bulk metal forming

    DEFF Research Database (Denmark)

    Christiansen, Peter; Nielsen, Chris Valentin; Bay, Niels Oluf

    2017-01-01

    This paper presents an uncoupled ductile damage criterion for modelling the opening and propagation of internal shear cracks in bulk metal forming. The criterion is built upon the original work on the motion of a hole subjected to shear with superimposed tensile stress triaxiality, and its overall performance is evaluated by means of side-pressing formability tests in Aluminium AA2007-T6 subjected to different levels of pre-strain. Results show that the newly proposed criterion is able to combine simplicity with efficiency for predicting the onset of fracture and the crack propagation path for the entire ... cracking to internal cracks formed under three-dimensional states of stress that are typical of bulk metal forming.

  11. Induction detection of concealed bulk banknotes

    International Nuclear Information System (INIS)

    Fuller, Christopher; Chen, Antao

    2011-01-01

    Bulk cash smuggling is a serious issue that has grown in volume in recent years. By building on the magnetic characteristics of paper currency, induction sensing is found to be capable of quickly detecting large masses of banknotes. The results show that this method is effective in detecting bulk cash through concealing materials such as plastics, cardboards, fabrics and aluminum foil. The significant difference in the observed phase between the received signals caused by conducting materials and ferrite compounds, found in banknotes, provides a good indication that this process can overcome the interference by metal objects in a real sensing application. This identification strategy has the potential to not only detect the presence of banknotes, but also the number, while still eliminating false positives caused by metal objects

  12. Induction detection of concealed bulk banknotes

    Science.gov (United States)

    Fuller, Christopher; Chen, Antao

    2012-06-01

    The smuggling of bulk cash across borders is a serious issue that has increased in recent years. In an effort to curb the illegal transport of large numbers of paper bills, a detection scheme has been developed based on the magnetic characteristics of banknotes. The results show that volumes of paper currency can be detected through common concealing materials such as plastics, cardboard, and fabrics, making it a potential addition to border security methods. By observing the stark difference between the received signals caused by metal and by currency, the scheme can also reduce or eliminate false positives caused by metallic materials in the vicinity, while detecting both the presence and the number of concealed bulk notes.

  13. Bulk viscous cosmology with causal transport theory

    International Nuclear Information System (INIS)

    Piattella, Oliver F.; Fabris, Júlio C.; Zimdahl, Winfried

    2011-01-01

    We consider cosmological scenarios originating from a single imperfect fluid with bulk viscosity, and apply Eckart's theory and both the full and the truncated Müller-Israel-Stewart theories as descriptions of the non-equilibrium processes. Our principal objective is to investigate whether the dynamical properties of Dark Matter and Dark Energy can be described by a single viscous fluid, and how this description changes when a causal theory (Müller-Israel-Stewart, in both its full and truncated forms) is taken into account instead of Eckart's non-causal one. To this purpose, we find numerical solutions for the gravitational potential and compare its behaviour with the corresponding ΛCDM case. Eckart's theory and the full causal theory seem to be disfavoured, whereas the truncated theory leads to results similar to those of the ΛCDM model for a bulk viscous speed in the interval 10^-11 ≲ c_b^2 ≲ 10^-8.

  14. Continuous-Time Mean-Variance Portfolio Selection under the CEV Process

    OpenAIRE

    Ma, Hui-qiang

    2014-01-01

    We consider a continuous-time mean-variance portfolio selection model in which the stock price follows the constant elasticity of variance (CEV) process. The aim of this paper is to derive an optimal portfolio strategy and the efficient frontier. The mean-variance portfolio selection problem is formulated as a linearly constrained convex program. By employing the Lagrange multiplier method and stochastic optimal control theory, we obtain the optimal portfolio strategy and the mean-variance efficient frontier.

  15. Raman characterization of bulk ferromagnetic nanostructured graphite

    International Nuclear Information System (INIS)

    Pardo, Helena; Divine Khan, Ngwashi; Faccio, Ricardo; Araújo-Moreira, F.M.; Fernández-Werner, Luciana

    2012-01-01

    Raman spectroscopy was used to characterize bulk ferromagnetic graphite samples prepared by controlled oxidation of commercial pristine graphite powder. The G:D band intensity ratio, the shape and position of the 2D band, and the presence of a band around 2950 cm⁻¹ showed a high degree of disorder in the modified graphite sample, with a significant presence of exposed edges of graphitic planes as well as a high degree of attached hydrogen atoms.

  16. Depositing bulk or micro-scale electrodes

    Science.gov (United States)

    Shah, Kedar G.; Pannu, Satinderpall S.; Tolosa, Vanessa; Tooker, Angela C.; Sheth, Heeral J.; Felix, Sarah H.; Delima, Terri L.

    2016-11-01

    Thicker electrodes are provided on a microelectronic device using thermo-compression bonding. A thin-film electrical conducting layer forms electrical conduits, and bulk deposition provides an electrode layer on the thin-film electrical conducting layer. An insulating polymer layer encapsulates the thin-film electrical conducting layer and the electrode layer. Some of the insulating layer is removed to expose the electrode layer.

  17. Theory of thermal expansivity and bulk modulus

    International Nuclear Information System (INIS)

    Kumar, Munish

    2005-01-01

    The expressions for thermal expansivity and bulk modulus claimed by Shanker et al. to be new [Physica B 233 (1997) 78; 245 (1998) 190; J. Phys. Chem. Solids 59 (1998) 197] are compared with the high pressure-high temperature theory reported by Kumar and coworkers. It is concluded that the Shanker formulation, and the relations based on it, are equivalent to the approach of Kumar et al. up to second order.

  18. Depleted Bulk Heterojunction Colloidal Quantum Dot Photovoltaics

    KAUST Repository

    Barkhouse, D. Aaron R.

    2011-05-26

    The first solution-processed depleted bulk heterojunction colloidal quantum dot solar cells are presented. The architecture allows for high absorption with full depletion, thereby breaking the photon absorption/carrier extraction compromise inherent in planar devices. A record power conversion efficiency of 5.5% under simulated AM 1.5 illumination conditions is reported. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. A bulk viscosity driven inflationary model

    International Nuclear Information System (INIS)

    Waga, I.; Falcao, R.C.; Chanda, R.

    1985-01-01

    Bulk viscosity associated with the production of heavy particles during the GUT phase transition can lead to exponential or 'generalized' inflation. The proposed condition for inflation is independent of the details of the phase transition and remains unaltered in the presence of a cosmological constant. Such a mechanism avoids the extreme supercooling and reheating needed in the usual inflationary models, and the standard baryogenesis mechanism can be maintained. (Author)

  20. Nowcasting daily minimum air and grass temperature

    Science.gov (United States)

    Savage, M. J.

    2016-02-01

    Site-specific and accurate prediction of daily minimum air and grass temperatures, made available online several hours before their occurrence, would be of significant benefit to several economic sectors and for planning human activities. Site-specific and reasonably accurate nowcasts of the daily minimum temperature several hours before its occurrence, using measured sub-hourly temperatures from earlier in the morning as model inputs, were investigated. Various temperature models were tested for their ability to accurately nowcast daily minimum temperatures 2 or 4 h before sunrise. Temperature datasets used for the model nowcasts included sub-hourly grass and grass-surface (infrared) temperatures from one location in South Africa and air temperatures from four subtropical sites varying in altitude (USA and South Africa) and from one site in central sub-Saharan Africa. The nowcast models employed either exponential or square root functions to describe the rate of nighttime temperature decrease, inverted so as to determine the minimum temperature. The models were also applied in near real-time using an open web-based system to display the nowcasts, and the extrapolation algorithms for the site-specific nowcasts were implemented in a datalogger in an innovative and mathematically consistent manner. Comparison of model 1 (exponential) nowcasts against measured daily minimum air temperatures yielded root mean square errors (RMSEs) <1 °C for the 2-h ahead nowcasts. Model 2 (also exponential), for which a constant model coefficient (b = 2.2) was used, was usually slightly less accurate but still gave RMSEs <1 °C. Model 3 (square root) yielded increased RMSEs for the 2-h ahead comparisons between nowcasted and measured daily minimum air temperatures, increasing to 1.4 °C for some sites. For all sites and all models, the 4-h ahead air temperature nowcasts generally yielded increased RMSEs, <2.1 °C. Comparisons for all model nowcasts of the daily grass …
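
    To make the inverted-model idea above concrete, the sketch below fits a simple exponential night-cooling curve, T(t) = Tmin + (T0 - Tmin)·exp(-b·t), to synthetic pre-dawn temperatures and extrapolates it to sunrise. The functional form is the generic exponential described in the abstract; the data, coefficients, and 2-h lead time are illustrative assumptions, not the paper's calibrated models.

```python
# Minimal sketch of a 2-h-ahead minimum-temperature nowcast using an
# exponential night-cooling model. Data and coefficients are invented.
import numpy as np
from scipy.optimize import curve_fit

def cooling(t, t_min, t0, b):
    """Exponential decay toward an asymptotic minimum temperature."""
    return t_min + (t0 - t_min) * np.exp(-b * t)

rng = np.random.default_rng(0)

# Hypothetical sub-hourly air temperatures (deg C) over the 4 hours
# ending 2 h before sunrise; t is hours since the first sample.
t_obs = np.arange(0.0, 4.0, 0.25)
temp_obs = 6.0 + 4.0 * np.exp(-0.5 * t_obs) + rng.normal(0, 0.1, t_obs.size)

# Fit the three model parameters to the observed cooling curve.
popt, _ = curve_fit(cooling, t_obs, temp_obs,
                    p0=(temp_obs[-1] - 1.0, temp_obs[0], 0.3))
t_min_fit = popt[0]

# Extrapolate to sunrise (2 h after the last observation) for the nowcast.
t_sunrise = t_obs[-1] + 2.0
print(f"nowcast minimum near sunrise: {cooling(t_sunrise, *popt):.1f} deg C "
      f"(asymptotic Tmin = {t_min_fit:.1f} deg C)")
```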

  1. Evidence for Bulk Ripplocations in Layered Solids

    Science.gov (United States)

    Gruber, Jacob; Lang, Andrew C.; Griggs, Justin; Taheri, Mitra L.; Tucker, Garritt J.; Barsoum, Michel W.

    2016-09-01

    Plastically anisotropic/layered solids are ubiquitous in nature, and understanding how they deform is crucial in geology, nuclear engineering, and microelectronics, among other fields. Recently, a new defect termed a ripplocation, best described as an atomic-scale ripple, was proposed to explain deformation in two-dimensional solids. Herein, we leverage atomistic simulations of graphite to extend the ripplocation idea to bulk layered solids, and confirm that it is essentially a buckling phenomenon. In contrast to dislocations, bulk ripplocations have no Burgers vector and no polarity. In graphite, ripplocations are attracted to other ripplocations, both within the same layer and on adjacent layers, the latter resulting in kink boundaries. Furthermore, we present transmission electron microscopy evidence consistent with the existence of bulk ripplocations in Ti3SiC2. Ripplocations are a topological imperative, as they allow atomic layers to glide relative to each other without breaking the in-plane bonds. A more complete understanding of their mechanics and behavior is critically important, and could profoundly influence our current understanding of how graphite, layered silicates, the MAX phases, and many other plastically anisotropic/layered solids deform and accommodate strain.

  2. Accelerating universes driven by bulk particles

    International Nuclear Information System (INIS)

    Brito, F.A.; Cruz, F.F.; Oliveira, J.F.N.

    2005-01-01

    We consider our universe as a 3d domain wall embedded in a 5d Minkowski space-time, and address the problem of inflation and late-time acceleration driven by bulk particles colliding with the 3d domain wall. The expansion of our universe is mainly related to these bulk particles. Since our universe tends to be permeated by a large number of isolated structures as the temperature diminishes with the expansion, we model it as a 3d domain wall with an increasing number of internal structures. These structures could be unstable 2d domain walls evolving to Fermi balls, which are candidates for cold dark matter. The momentum transfer of bulk particles colliding with the 3d domain wall is related to the reflection coefficient, and we show a nontrivial dependence of the reflection coefficient on the number of internal dark matter structures inside the 3d domain wall. As the population of such structures increases, the velocity of the domain wall expansion also increases; the expansion is exponential at early times and polynomial at late times. We connect this picture with string/M-theory by considering BPS 3d domain walls with structures, which can appear through the bosonic sector of a five-dimensional supergravity theory.

  3. The pricing of long and short run variance and correlation risk in stock returns

    NARCIS (Netherlands)

    Cosemans, M.

    2011-01-01

    This paper studies the pricing of long- and short-run variance and correlation risk. The predictive power of the market variance risk premium for returns is driven by the correlation risk premium and the systematic part of individual variance premia. Furthermore, I find that aggregate volatility risk …

  4. Spot Variance Path Estimation and its Application to High Frequency Jump Testing

    NARCIS (Netherlands)

    Bos, C.S.; Janus, P.; Koopman, S.J.

    2012-01-01

    This paper considers spot variance path estimation from datasets of intraday high-frequency asset prices in the presence of diurnal variance patterns, jumps, leverage effects, and microstructure noise. We rely on parametric and nonparametric methods. The estimated spot variance path can be used to …

  5. Variance bias analysis for the Gelbard's batch method

    Energy Technology Data Exchange (ETDEWEB)

    Seo, Jae Uk; Shim, Hyung Jin [Seoul National Univ., Seoul (Korea, Republic of)

    2014-05-15

    In this paper, the variance and bias that arise when Gelbard's batch method is applied are derived analytically, and the real variance estimated from this bias is compared with the real variance calculated from replicas. When the batch method is applied to calculate the sample variance, covariance terms between tallies within the same batch are eliminated from the bias. With the 2-by-2 fission matrix problem, we could calculate the real variance regardless of whether the batch method was applied; however, as the batch size grew larger, the standard deviation of the real variance increased. In a Monte Carlo estimation, the sample variance serves as the statistical uncertainty of the estimate, but this value is smaller than the real variance because the sample variance is biased. To reduce this bias, Gelbard devised what is now called the Gelbard batch method. It has been demonstrated that the sample variance gets closer to the real variance when the batch method is applied; in other words, the bias is reduced. This fact is well known in the Monte Carlo field, but so far no one has given an analytical interpretation of it.
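
    The bias discussed above can be seen in a few lines of code: for positively correlated tallies, the naive sample variance of the mean is biased low, while grouping tallies into batches and using the spread of batch means (the essence of the batch method) largely removes the bias. The AR(1) surrogate for correlated tallies and all numbers below are illustrative assumptions, not the paper's fission matrix problem.

```python
# Batch-mean variance estimation vs the naive (biased) estimate for
# correlated "tallies", here an AR(1) surrogate for cycle correlation.
import numpy as np

rng = np.random.default_rng(1)
n, rho = 100_000, 0.8

# Correlated tallies: AR(1) process with lag-1 correlation rho.
x = np.empty(n)
x[0] = rng.normal()
for i in range(1, n):
    x[i] = rho * x[i - 1] + np.sqrt(1 - rho**2) * rng.normal()

def var_of_mean_naive(y):
    # Treats samples as independent: biased low for positive correlation.
    return y.var(ddof=1) / y.size

def var_of_mean_batched(y, batch_size):
    # Batch means are nearly decorrelated, so their spread is less biased.
    m = y.size // batch_size
    means = y[: m * batch_size].reshape(m, batch_size).mean(axis=1)
    return means.var(ddof=1) / m

print("naive  :", var_of_mean_naive(x))
print("batched:", var_of_mean_batched(x, batch_size=500))
# Large-n theory for AR(1): Var(mean) ~ (1+rho)/(1-rho) / n.
print("theory :", (1 + rho) / (1 - rho) / n)
```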

  6. Main Parameters Characterization of Bulk CMOS Cross-Like Hall Structures

    Directory of Open Access Journals (Sweden)

    Maria-Alexandra Paun

    2016-01-01

    A detailed analysis of the cross-like Hall cells integrated in a regular bulk CMOS technological process is performed. To this end, their main parameters have been evaluated using a three-dimensional physical model, yielding numerical information on the input resistance, Hall voltage, conduction current, and electrical potential distribution. Experimental results for the absolute sensitivity, offset, and offset temperature drift have also been provided. A quadratic behavior of the residual offset with temperature was obtained, and the temperature points leading to the minimum offset for the three Hall cells were identified.
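
    The quadratic-offset observation above suggests a simple recipe for locating the minimum-offset temperature: fit a parabola to offset-versus-temperature data and take the vertex at T = -b/(2a). The sketch below does exactly that on invented data; the actual offsets of the three Hall cells are not reproduced here.

```python
# Locate the minimum-offset temperature from a quadratic fit.
# The offset/temperature values are invented for illustration.
import numpy as np

temp = np.array([-40.0, -20.0, 0.0, 20.0, 40.0, 60.0, 80.0])      # deg C
offset_uv = np.array([62.0, 41.0, 28.0, 22.0, 24.0, 35.0, 53.0])  # microvolts

# Least-squares quadratic fit: offset ~ a*T**2 + b*T + c.
a, b, c = np.polyfit(temp, offset_uv, 2)

# The parabola's vertex gives the temperature of minimum residual offset.
t_min = -b / (2.0 * a)
print(f"minimum offset ~ {np.polyval([a, b, c], t_min):.1f} uV "
      f"at {t_min:.1f} deg C")
```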

  7. Sodium Flux Growth of Bulk Gallium Nitride

    Science.gov (United States)

    Von Dollen, Paul Martin

    This dissertation focused on the development of a novel apparatus and techniques for growth of bulk gallium nitride (GaN) crystals using the sodium flux method. Though several methods exist to produce bulk GaN, none have been commercialized on an industrial scale. The sodium flux method offers potentially lower-cost production due to relatively mild process conditions while maintaining high crystal quality, but the current equipment and methods for sodium flux growth of bulk GaN are generally not amenable to large-scale crystal growth or in situ investigation of growth processes, which has hampered progress. A key task was to prevent sodium loss or migration from the sodium-gallium growth melt while permitting N2 gas to access the growing crystal, which was accomplished by implementing a reflux condensing stem along with a reusable sealed capsule. The reflux condensing stem also enabled direct monitoring and control of the melt temperature, which has not been previously reported for the sodium flux method. Molybdenum-based materials were identified from a corrosion study as candidates for direct containment of the corrosive sodium-gallium melt. Successful introduction of these materials allowed implementation of a crucible-free containment system, which improved process control and can potentially reduce crystal impurity levels. Using the new growth system, the (0001) Ga-face (+c plane) growth rate was >50 μm/h, the highest bulk GaN growth rate reported for the sodium flux method. Omega X-ray rocking curve (ω-XRC) measurements indicated the presence of multiple grains, though full width at half maximum (FWHM) values for individual peaks were …; impurity concentrations were >10^20 atoms/cm^3, possibly due to reactor cleaning and handling procedures. This dissertation also introduced an in situ technique to correlate changes in N2 pressure with dissolution of nitrogen and precipitation of GaN from the sodium-gallium melt. Different stages of N2 pressure decay were identified and linked to …

  8. Waste Isolation Pilot Plant No-Migration Variance Petition

    International Nuclear Information System (INIS)

    1990-03-01

    The purpose of the WIPP No-Migration Variance Petition is to demonstrate, according to the requirements of RCRA section 3004(d) and 40 CFR section 268.6, that, to a reasonable degree of certainty, there will be no migration of hazardous constituents from the facility for as long as the wastes remain hazardous. The DOE submitted the petition to the EPA in March 1989. Upon completion of its initial review, the EPA provided to DOE a Notice of Deficiencies (NOD). DOE responded to the EPA's NOD and met with the EPA's reviewers of the petition several times during 1989. In August 1989, EPA requested that DOE submit significant additional information addressing a variety of topics, including waste characterization, ground water hydrology, geology and dissolution features, monitoring programs, the gas generation test program, and other aspects of the project. This additional information was provided to EPA in January 1990, when DOE submitted Revision 1 of the Addendum to the petition. For clarity and ease of review, this document includes all of these submittals, and the information has been updated where appropriate. This document is divided into the following sections: Introduction, 1.0; Facility Description, 2.0; Waste Description, 3.0; Site Characterization, 4.0; Environmental Impact Analysis, 5.0; Prediction and Assessment of Infrequent Events, 6.0; and References, 7.0.

  9. Mean-Variance Portfolio Selection with Margin Requirements

    Directory of Open Access Journals (Sweden)

    Yuan Zhou

    2013-01-01

    We study the continuous-time mean-variance portfolio selection problem in the situation where investors must pay margin for short selling. The problem is essentially a nonlinear stochastic optimal control problem, because the coefficients of the positive and negative parts of the control variables are different; the results for the stochastic linear-quadratic (LQ) problem therefore cannot be applied, and the solution of the corresponding Hamilton-Jacobi-Bellman (HJB) equation is not smooth. Li et al. (2002) studied the case when short selling is prohibited, and therefore only needed to consider the positive part of the control variables, whereas we must handle both the positive and the negative parts. The main difficulty is that the positive part and the negative part are not independent, so the previous results are not directly applicable. By decomposing the problem into several subproblems, we work out the solutions of the HJB equation in two disjoint regions and then prove that it is the viscosity solution of the HJB equation. Finally, we formulate the optimal portfolio and the efficient frontier, and present two examples showing how different margin rates affect the optimal solutions and the efficient frontier.

  10. Beyond the GUM: variance-based sensitivity analysis in metrology

    International Nuclear Information System (INIS)

    Lira, I

    2016-01-01

    Variance-based sensitivity analysis is a well-established tool for evaluating the contribution of the uncertainties in the inputs to the uncertainty in the output of a general mathematical model. While the literature on this subject is quite extensive, it has not found widespread use in metrological applications. In this article we present a succinct review of the fundamentals of sensitivity analysis, in a form that should be useful to most people familiar with the Guide to the Expression of Uncertainty in Measurement (GUM). Through two examples, it is shown that in linear measurement models, no new knowledge is gained by using sensitivity analysis that is not already available after the terms in the so-called 'law of propagation of uncertainties' have been computed. However, if the model behaves non-linearly in the neighbourhood of the best estimates of the input quantities, and if these quantities are assumed to be statistically independent, sensitivity analysis is definitely advantageous for gaining insight into how they can be ranked according to their importance in establishing the uncertainty of the measurand. (paper)
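
    As an illustration of the non-linear case described above, the sketch below estimates first-order variance-based (Sobol) sensitivity indices by Monte Carlo pick-freeze sampling for a small non-linear model with independent inputs. The model, input distributions, and sample sizes are assumptions made for illustration, not taken from the article.

```python
# First-order Sobol indices by pick-freeze (Saltelli-style) sampling
# for a deliberately nonlinear "measurand" with independent inputs.
import numpy as np

rng = np.random.default_rng(0)
N, d = 200_000, 3

def model(x):
    # Nonlinear toy measurement model: Y = X1*X2 + X3**2
    return x[:, 0] * x[:, 1] + x[:, 2] ** 2

# Two independent input sample blocks.
loc, scale = [10.0, 2.0, 1.0], [0.1, 0.05, 0.2]
A = rng.normal(loc, scale, size=(N, d))
B = rng.normal(loc, scale, size=(N, d))

fA, fB = model(A), model(B)
V = np.var(np.concatenate([fA, fB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]  # replace only input i ("pick-freeze")
    S_i = np.mean(fB * (model(ABi) - fA)) / V
    print(f"first-order index S_{i + 1} ~ {S_i:.3f}")
```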

  11. Scale dependence in species turnover reflects variance in species occupancy.

    Science.gov (United States)

    McGlinn, Daniel J; Hurlbert, Allen H

    2012-02-01

    Patterns of species turnover may reflect the processes driving community dynamics across scales. While the majority of studies on species turnover have examined pairwise comparison metrics (e.g., the average Jaccard dissimilarity), it has been proposed that the species-area relationship (SAR) also offers insight into patterns of species turnover because these two patterns may be analytically linked. However, these previous links only apply in a special case where turnover is scale invariant, and we demonstrate across three different plant communities that over 90% of the pairwise turnover values are larger than expected based on scale-invariant predictions from the SAR. Furthermore, the degree of scale dependence in turnover was negatively related to the degree of variance in the occupancy frequency distribution (OFD). These findings suggest that species turnover diverges from scale invariance, and as such pairwise turnover and the slope of the SAR are not redundant. Furthermore, models developed to explain the OFD should be linked with those developed to explain species turnover to achieve a more unified understanding of community structure.
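
    For readers unfamiliar with the pairwise metric mentioned above, the sketch below computes the average pairwise Jaccard dissimilarity, 1 - |A∩B|/|A∪B|, from a site-by-species presence/absence matrix. The matrix is invented for illustration; the paper's plant community data are not reproduced here.

```python
# Average pairwise Jaccard dissimilarity from a presence/absence matrix.
import numpy as np
from itertools import combinations

# Rows = sites (plots), columns = species; 1 = present. Invented data.
sites = np.array([
    [1, 1, 0, 1, 0, 0],
    [1, 0, 1, 1, 0, 1],
    [0, 1, 1, 0, 1, 0],
    [1, 1, 1, 0, 0, 1],
])

def jaccard_dissimilarity(a, b):
    shared = np.sum((a == 1) & (b == 1))
    union = np.sum((a == 1) | (b == 1))
    return 1.0 - shared / union

pairs = combinations(range(len(sites)), 2)
d_bar = np.mean([jaccard_dissimilarity(sites[i], sites[j])
                 for i, j in pairs])
print(f"average pairwise Jaccard dissimilarity: {d_bar:.3f}")
```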

  12. Improving computational efficiency of Monte Carlo simulations with variance reduction

    International Nuclear Information System (INIS)

    Turner, A.; Davis, A.

    2013-01-01

    CCFE performs Monte Carlo transport simulations on large and complex tokamak models such as ITER. Such simulations are challenging, since streaming and deep penetration effects are equally important. In order to make them tractable, both variance reduction (VR) techniques and parallel computing are used. It has been found that the application of VR techniques in such models significantly reduces the efficiency of parallel computation due to 'long histories'. VR in MCNP can be accomplished using energy-dependent weight windows. The weight window represents an 'average behaviour' of particles, and large deviations in the arriving weight of a particle give rise to extreme amounts of splitting being performed, and hence a long history. When running on parallel clusters, a long history can have a detrimental effect on the parallel efficiency: if one process is computing the long history, the other CPUs complete their batch of histories and wait idle. Furthermore, some long histories have been found to be effectively intractable. To combat this effect, CCFE has developed an adaptation of MCNP which dynamically adjusts the weight window where a large weight deviation is encountered. The method effectively 'de-optimises' the weight window, reducing the VR performance, but this is offset by a significant increase in parallel efficiency. Testing with a simple geometry has shown that the method does not bias the result. This 'long history method' has enabled CCFE to significantly improve the performance of MCNP calculations for ITER on parallel clusters, and will be beneficial for any geometry combining streaming and deep penetration effects. (authors)
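
    The 'de-optimisation' idea reads naturally as a cap on the splitting multiplicity. The toy sketch below applies a weight window with splitting and Russian roulette, limiting the number of daughters per splitting event; it is purely illustrative and is not the CCFE/MCNP implementation, whose internals are not reproduced here.

```python
# Toy weight-window splitting/roulette with a cap on splitting
# multiplicity to bound history length. Illustrative only.
import random

W_LOW, W_HIGH = 0.5, 2.0  # weight-window bounds in the current cell
MAX_SPLIT = 10            # cap on splitting to limit history length

def apply_weight_window(weight):
    """Return the list of daughter weights after the window is applied."""
    if weight > W_HIGH:
        # Split, but never into more than MAX_SPLIT daughters; capping
        # leaves daughters above W_HIGH for extreme weights, trading VR
        # performance for shorter histories.
        n = min(int(weight / W_HIGH) + 1, MAX_SPLIT)
        return [weight / n] * n
    if weight < W_LOW:
        # Russian roulette: survive with probability weight / W_LOW.
        if random.random() < weight / W_LOW:
            return [W_LOW]  # survivor carries the window's lower bound
        return []           # killed
    return [weight]

print(apply_weight_window(37.0))  # capped: 10 daughters of weight 3.7
print(apply_weight_window(0.05))  # usually [], occasionally [0.5]
```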

  13. Advanced Variance Reduction Strategies for Optimizing Mesh Tallies in MAVRIC

    International Nuclear Information System (INIS)

    Peplow, Douglas E.; Blakeman, Edward D; Wagner, John C

    2007-01-01

    More often than in the past, Monte Carlo methods are being used to compute fluxes or doses over large areas using mesh tallies (a set of region tallies defined on a mesh that overlays the geometry). For problems that demand that the uncertainty in each mesh cell be less than some set maximum, computation time is controlled by the cell with the largest uncertainty. This issue becomes quite troublesome in deep-penetration problems, and advanced variance reduction techniques are required to obtain reasonable uncertainties over large areas. The CADIS (Consistent Adjoint Driven Importance Sampling) methodology has been shown to very efficiently optimize the calculation of a response (flux or dose) for a single point or a small region using weight windows and a biased source based on the adjoint of that response. This has been incorporated into codes such as ADVANTG (based on MCNP) and the new sequence MAVRIC, which will be available in the next release of SCALE. In an effort to compute lower uncertainties everywhere in the problem, Larsen's group has also developed several methods to help distribute particles more evenly, based on forward estimates of flux. This paper focuses on the use of a forward estimate to weight the placement of the source in the adjoint calculation used by CADIS, which we refer to as a forward-weighted CADIS (FW-CADIS)
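
    A toy one-dimensional picture of the forward-weighted idea: a cheap forward flux estimate defines an adjoint source proportional to 1/phi_forward (so poorly sampled, low-flux regions receive high importance), and the resulting adjoint flux sets weight-window centers inversely proportional to it. The exponential attenuation kernel and all numbers below are illustrative assumptions, not the MAVRIC/ADVANTG machinery.

```python
# Toy 1-D sketch of forward-weighted adjoint-source construction and
# the resulting weight-window centers. Illustrative kernel and numbers.
import numpy as np

sigma = 0.5                         # total cross section (1/cm)
x = np.linspace(0.0, 20.0, 201)     # 1-D slab mesh
dx = x[1] - x[0]

# Toy transport kernel: exponential attenuation between mesh points.
K = np.exp(-sigma * np.abs(x[:, None] - x[None, :]))

# Forward flux from a unit source at the left boundary.
q_fwd = np.zeros_like(x)
q_fwd[0] = 1.0
phi_fwd = K @ q_fwd

# Forward-weighted adjoint source: ~1/phi_fwd over the tally mesh, so
# deep, poorly sampled regions are assigned high importance.
q_adj = 1.0 / phi_fwd
phi_adj = K @ q_adj * dx

# Weight-window centers inversely proportional to the adjoint flux:
# particles are split progressively as they penetrate the slab.
R = np.sum(q_fwd * phi_adj)         # normalization over the source
w_center = R / phi_adj
print("window centers (source -> deep):", w_center[::50].round(4))
```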

  14. A pattern recognition approach to transistor array parameter variance

    Science.gov (United States)

    da F. Costa, Luciano; Silva, Filipi N.; Comin, Cesar H.

    2018-06-01

    The properties of semiconductor devices, including bipolar junction transistors (BJTs), are known to vary substantially in terms of their parameters. In this work, an experimental approach including pattern recognition concepts and methods, such as principal component analysis (PCA) and linear discriminant analysis (LDA), was used to investigate the variation among BJTs belonging to integrated circuits known as transistor arrays. It was shown that a good deal of the device variance can be captured using only two PCA axes. It was also verified that, though the parameter variation is quite small for BJTs from the same array, larger variation arises between BJTs from distinct arrays, suggesting that device characteristics be considered in more critical analog designs. As a consequence of its supervised nature, LDA was able to provide a substantial separation of the BJTs into clusters corresponding to each transistor array. In addition, the LDA mapping into two dimensions revealed a clear relationship between the considered measurements. Interestingly, a specific mapping suggested by the PCA, involving the variation of total harmonic distortion expressed in terms of the average voltage gain, yielded an even better separation between the transistor array clusters. All in all, this work yielded interesting results from both the semiconductor engineering and pattern recognition perspectives.
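
    The PCA-then-LDA workflow described above is easy to reproduce on synthetic data: a handful of 'BJT parameters' with small within-array spread and larger between-array offsets. The parameter values, offsets, and spreads below are invented to mimic the reported behaviour, not measured values.

```python
# PCA (unsupervised) and LDA (supervised, labeled by array) on synthetic
# BJT parameters: current gain, V_BE, total harmonic distortion.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
n_arrays, per_array = 3, 40

# Small spread within an array, larger offsets between arrays (invented).
centers = np.array([[200.0, 0.65, 1.0],
                    [230.0, 0.67, 0.8],
                    [180.0, 0.63, 1.3]])
X = np.vstack([c + rng.normal(0, [5.0, 0.005, 0.05], size=(per_array, 3))
               for c in centers])
y = np.repeat(np.arange(n_arrays), per_array)

Xs = StandardScaler().fit_transform(X)

# Unsupervised view: two PCA axes capture most of the device variance.
pca = PCA(n_components=2).fit(Xs)
print("PCA explained variance ratio:",
      pca.explained_variance_ratio_.round(3))

# Supervised view: LDA separates the devices into per-array clusters.
lda = LinearDiscriminantAnalysis(n_components=2).fit(Xs, y)
print("LDA training accuracy:", lda.score(Xs, y))
```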

  15. Measurement of Minimum Bias Observables with ATLAS

    CERN Document Server

    Kvita, Jiri; The ATLAS collaboration

    2017-01-01

    The modelling of Minimum Bias (MB) is a crucial ingredient in the description of soft QCD processes. It is also highly relevant for simulating the environment at the LHC with many concurrent pp interactions ('pileup'). The ATLAS collaboration has provided new measurements of the inclusive charged-particle multiplicity and its dependence on transverse momentum and pseudorapidity, in special data sets with low LHC beam currents recorded at center-of-mass energies of 8 TeV and 13 TeV. The measurements cover a wide spectrum, using charged-particle selections with minimum transverse momentum of both 100 MeV and 500 MeV and in various phase-space regions of low and high charged-particle multiplicity.

  16. Comments on the 'minimum flux corona' concept

    International Nuclear Information System (INIS)

    Antiochos, S.K.; Underwood, J.H.

    1978-01-01

    Hearn's (1975) models of the energy balance and mass loss of stellar coronae, based on a 'minimum flux corona' concept, are critically examined. First, it is shown that the neglect of the relevant length scales for coronal temperature variation leads to an inconsistent computation of the total energy flux F. The stability arguments upon which the minimum flux concept is based are shown to be fallacious. Errors in the computation of the stellar wind contribution to the energy budget are identified. Finally, we criticize Hearn's (1977) suggestion that the model, with a value of the thermal conductivity modified by the magnetic field, can explain the difference between solar coronal holes and quiet coronal regions. (orig.)

  17. Minimum wakefield achievable by waveguide damped cavity

    International Nuclear Information System (INIS)

    Lin, X.E.; Kroll, N.M.

    1995-01-01

    The authors use an equivalent circuit to model a waveguide-damped cavity. Both the exponentially damped and the persistent (decaying as t^-3/2) components of the wakefield are derived from this model. The result shows that, for a cavity with resonant frequency a fixed interval above the waveguide cutoff, the persistent wakefield amplitude is inversely proportional to the external Q value of the damped mode. The competition of the two terms results in an optimal Q value, which gives a minimum wakefield as a function of the distance behind the source particle. The minimum wakefield increases when the resonant frequency approaches the waveguide cutoff. The results agree very well with computer simulation of a real cavity-waveguide system.
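
    The competition described above is easy to reproduce numerically: the damped component falls like exp(-ω0·t/(2Q)) while the persistent component scales like Q⁻¹·t^-3/2, so scanning Q at a fixed distance behind the source exhibits an interior minimum. The frequency and amplitudes below are illustrative assumptions, not fitted to any real cavity.

```python
# Scan the external Q for the minimum total wake at a fixed delay behind
# the source: damped term ~ exp(-w0*t/(2Q)), persistent term ~ t**-1.5/Q.
import numpy as np

w0 = 2 * np.pi * 15e9   # mode angular frequency (15 GHz, illustrative)
t = 1.0e-9              # delay behind the source particle (s)
A, B = 1.0, 1e-14       # illustrative amplitudes of the two components

Q = np.linspace(5, 200, 2000)
wake = A * np.exp(-w0 * t / (2 * Q)) + (B / Q) * t ** -1.5

q_opt = Q[np.argmin(wake)]
print(f"optimal external Q ~ {q_opt:.0f}, "
      f"minimum wake ~ {wake.min():.3e}")
```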

  18. Protocol for the verification of minimum criteria

    International Nuclear Information System (INIS)

    Gaggiano, M.; Spiccia, P.; Gaetano Arnetta, P.

    2014-01-01

    This Protocol has been prepared with reference to the provisions of article 8 of Legislative Decree No. 187 of May 26, 2000. Quality controls of radiological equipment fit within the larger 'quality assurance programme' and are intended to ensure the correct operation of the equipment and the maintenance of that state. Pursuing this objective guarantees that the radiological equipment subjected to these controls also meets the minimum acceptability criteria set out in Annex V of the aforementioned legislative decree, which establishes the conditions necessary for each piece of radiological equipment to perform the functions for which it was designed, built, and used. The Protocol is established for the purpose of quality control of Cone Beam Computed Tomography equipment and serves as a reference document, in the sense that compliance with the stated tolerances also ensures that the minimum acceptability requirements are met, where applicable.

  19. Low Streamflow Forcasting using Minimum Relative Entropy

    Science.gov (United States)

    Cui, H.; Singh, V. P.

    2013-12-01

    Minimum relative entropy spectral analysis is derived in this study and applied to forecast streamflow time series. The proposed method extends the autocorrelation structure in such a way that the relative entropy of the underlying process is minimized, so that the time series can be forecast. Different priors, such as uniform, exponential, and Gaussian assumptions, are used to estimate the spectral density, depending on the autocorrelation structure. Seasonal and nonseasonal low streamflow series obtained from the Colorado River (Texas) under drought conditions are successfully forecast using the proposed method. Minimum relative entropy determines the spectrum of the low streamflow series with higher resolution than conventional methods. The forecasted streamflow is compared to predictions using Burg's maximum entropy spectral analysis (MESA) and configurational entropy. The advantages and disadvantages of each method in forecasting low streamflow are discussed.
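
    As a point of reference for the spectral methods compared above, the sketch below implements the conventional autoregressive baseline: Yule-Walker estimation of AR coefficients from the autocovariance, followed by a forward-run forecast. This is the standard machinery that MESA and minimum relative entropy refine, not the proposed method itself; the synthetic seasonal series is an illustrative stand-in for the Colorado River data.

```python
# Conventional AR baseline: Yule-Walker fit, then a 12-step forecast.
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(3)
n, p = 600, 4  # series length, AR model order

# Synthetic seasonal low-flow series: annual cycle plus AR(1) anomalies.
t = np.arange(n)
x = 50 + 20 * np.sin(2 * np.pi * t / 12)
for i in range(1, n):
    x[i] += 0.6 * (x[i - 1] - 50 - 20 * np.sin(2 * np.pi * (i - 1) / 12))
x += rng.normal(0, 2, n)

# Yule-Walker: solve the Toeplitz system of autocovariances for the
# AR coefficients phi_1..phi_p.
xc = x - x.mean()
r = np.array([np.dot(xc[: n - k], xc[k:]) / n for k in range(p + 1)])
phi = np.linalg.solve(toeplitz(r[:p]), r[1 : p + 1])

# Forecast the next 12 steps by running the AR recursion forward.
hist = list(xc[-p:])
forecast = []
for _ in range(12):
    nxt = np.dot(phi, hist[::-1][:p])  # most recent value first
    forecast.append(nxt + x.mean())
    hist.append(nxt)
print("12-step forecast:", np.round(forecast, 1))
```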

  20. Characterization of phosphorus species in sediments from the Arabian Sea oxygen minimum zone: Combining sequential extractions and X-ray spectroscopy

    NARCIS (Netherlands)

    Kraal, Peter; Bostick, Benjamin C.; Behrends, Thilo; Reichart, Gert-Jan; Slomp, Caroline P.

    2015-01-01

    The bulk phosphorus (P) distribution in sediment samples from the oxygen minimum zone of the northern Arabian Sea was determined using two methods: sequential chemical extraction (the 'SEDEX' procedure) and X-ray absorption near-edge structure (XANES) spectroscopy at the phosphorus K-edge. Our …