WorldWideScience

Sample records for standardized minimum variance

  1. Minimum Variance Portfolios in the Brazilian Equity Market

    Directory of Open Access Journals (Sweden)

    Alexandre Rubesam

    2013-03-01

    We investigate minimum variance portfolios in the Brazilian equity market using different methods to estimate the covariance matrix, from the simple sample covariance to multivariate GARCH models. We compare the performance of the minimum variance portfolios to the following benchmarks: (i) the IBOVESPA equity index, (ii) an equally-weighted portfolio, (iii) the maximum Sharpe ratio portfolio, and (iv) the maximum growth portfolio. Our results show that the minimum variance portfolio has higher returns with lower risk compared to the benchmarks. We also consider long-short 130/30 minimum variance portfolios and obtain similar results. The minimum variance portfolio invests in relatively few stocks with low βs measured with respect to the IBOVESPA index, making it easily replicable by individual and institutional investors alike.
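
    For reference, the unconstrained global minimum variance portfolio described above has the closed form w = Σ⁻¹1 / (1ᵀΣ⁻¹1). A minimal sketch using the simplest of the covariance estimators compared in the paper (the sample covariance), with simulated returns standing in for BOVESPA data:

```python
import numpy as np

def gmv_weights(returns):
    """Global minimum variance weights w = S^-1 1 / (1' S^-1 1) for a
    T x N matrix of asset returns, with S the sample covariance."""
    S = np.cov(returns, rowvar=False)
    ones = np.ones(S.shape[0])
    x = np.linalg.solve(S, ones)   # solve S x = 1 instead of inverting S
    return x / (ones @ x)

rng = np.random.default_rng(0)
r = rng.normal(0.001, 0.02, size=(250, 10))  # 250 days, 10 hypothetical stocks
w = gmv_weights(r)
print(w.round(3), w.sum())                   # weights sum to 1
```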

  2. The Distribution of the Sample Minimum-Variance Frontier

    OpenAIRE

    Raymond Kan; Daniel R. Smith

    2008-01-01

    In this paper, we present a finite sample analysis of the sample minimum-variance frontier under the assumption that the returns are independent and multivariate normally distributed. We show that the sample minimum-variance frontier is a highly biased estimator of the population frontier, and we propose an improved estimator of the population frontier. In addition, we provide the exact distribution of the out-of-sample mean and variance of sample minimum-variance portfolios. This allows us t...
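
    The bias the authors document is easy to reproduce: with i.i.d. normal returns, the in-sample variance of the sample global-minimum-variance portfolio systematically understates the population value. A minimal sketch under assumed parameters (it illustrates the bias, not the paper's improved estimator):

```python
import numpy as np

rng = np.random.default_rng(1)
N, T, trials = 10, 60, 2000
Sigma = 0.04 * (0.7 * np.eye(N) + 0.3)       # assumed population covariance
ones = np.ones(N)
pop_var = 1.0 / (ones @ np.linalg.solve(Sigma, ones))

L = np.linalg.cholesky(Sigma)
est = []
for _ in range(trials):
    r = rng.standard_normal((T, N)) @ L.T    # T normal return observations
    S = np.cov(r, rowvar=False)
    est.append(1.0 / (ones @ np.linalg.solve(S, ones)))

# the average in-sample minimum variance sits well below the true value
print(pop_var, np.mean(est))
```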

  3. Towards a mathematical foundation of minimum-variance theory

    Energy Technology Data Exchange (ETDEWEB)

    Feng Jianfeng [COGS, Sussex University, Brighton (United Kingdom); Zhang Kewei [SMS, Sussex University, Brighton (United Kingdom); Wei Gang [Mathematical Department, Baptist University, Hong Kong (China)]

    2002-08-30

    The minimum-variance theory, which accounts for arm and eye movements with noisy signal inputs, was proposed by Harris and Wolpert (1998 Nature 394 780-4). Here we present a detailed theoretical analysis of the theory and obtain analytical solutions. Furthermore, we propose a new version of the minimum-variance theory which is more realistic for a biological system. For the new version we show numerically that the variance is considerably reduced. (author)

  4. A note on minimum-variance theory and beyond

    Energy Technology Data Exchange (ETDEWEB)

    Feng Jianfeng [Department of Informatics, Sussex University, Brighton, BN1 9QH (United Kingdom); Tartaglia, Giangaetano [Physics Department, Rome University 'La Sapienza', Rome 00185 (Italy); Tirozzi, Brunello [Physics Department, Rome University 'La Sapienza', Rome 00185 (Italy)]

    2004-04-30

    We revisit the minimum-variance theory proposed by Harris and Wolpert (1998 Nature 394 780-4), discuss the implications of the theory for modelling the firing patterns of single neurons, and analytically find the optimal control signals, trajectories and velocities. Under the rate coding assumption, input control signals employed in the minimum-variance theory should be Fitts processes rather than Poisson processes. Only if information is coded by interspike intervals are Poisson processes in agreement with the inputs employed in the minimum-variance theory. For the integrate-and-fire model with Fitts process inputs, interspike intervals of efferent spike trains are very irregular. We introduce diffusion approximations to approximate neural models with renewal process inputs and present theoretical results on calculating moments of interspike intervals of the integrate-and-fire model. Results in Feng et al (2002 J. Phys. A: Math. Gen. 35 7287-304) are generalized. In conclusion, we present a complete picture of the minimum-variance theory, ranging from input control signals to model outputs and to its implications for modelling the firing patterns of single neurons.

  6. PORTFOLIO COMPOSITION WITH MINIMUM VARIANCE: COMPARISON WITH MARKET BENCHMARKS

    Directory of Open Access Journals (Sweden)

    Daniel Menezes Cavalcante

    2016-07-01

    Portfolio optimization strategies are advocated as being able to allow the composition of stock portfolios that provide returns above market benchmarks. This study aims to determine whether portfolios based on the minimum variance strategy, optimized by Modern Portfolio Theory, are in fact able to achieve earnings above market benchmarks in Brazil. Time series of 36 securities traded on the BM&FBOVESPA were analyzed over a long period (1999-2012), with sample windows of 12, 36, 60 and 120 monthly observations. The results indicated that the minimum variance portfolio performance is superior to market benchmarks (CDI and IBOVESPA) in terms of return and risk-adjusted return, especially over medium- and long-term investment horizons.
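
    A rolling estimation of the kind used in the study can be sketched as follows, with the same window lengths. The returns are simulated stand-ins (with fewer assets than the study, kept below the shortest window so the sample covariance stays invertible), and the reward/risk ratio at the end is only a naive summary:

```python
import numpy as np

def backtest_gmv(returns, window):
    """Out-of-sample returns of a minimum variance portfolio re-estimated
    each month from the previous `window` observations."""
    T, N = returns.shape
    oos = []
    for t in range(window, T):
        S = np.cov(returns[t - window:t], rowvar=False)
        x = np.linalg.solve(S, np.ones(N))
        oos.append(returns[t] @ (x / x.sum()))
    return np.array(oos)

rng = np.random.default_rng(2)
monthly = rng.normal(0.01, 0.06, size=(168, 8))    # 14 years, 8 stocks
for window in (12, 36, 60, 120):
    r = backtest_gmv(monthly, window)
    print(window, r.mean() / r.std())              # naive reward/risk summary
```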

  7. Deviation of the Variances of Classical Estimators and Negative Integer Moment Estimator from Minimum Variance Bound with Reference to Maxwell Distribution

    Directory of Open Access Journals (Sweden)

    G. R. Pasha

    2006-07-01

    In this paper, we show how much the variances of the classical estimators, namely the maximum likelihood estimator and the moment estimator, deviate from the minimum variance bound when estimating for the Maxwell distribution. We also sketch this difference for the negative integer moment estimator. We note the poor performance of the negative integer moment estimator in this respect, while the maximum likelihood estimator attains the minimum variance bound and becomes an attractive choice.
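
    The minimum variance bound referred to above is the Cramér–Rao bound. In its standard form, for an unbiased estimator of a scalar parameter from n i.i.d. observations (the Maxwell-specific Fisher information is not reproduced here):

```latex
\operatorname{Var}(\hat{\theta}) \;\ge\; \frac{1}{n\, I(\theta)},
\qquad
I(\theta) \;=\; \mathbb{E}\!\left[\left(\frac{\partial}{\partial \theta}\,\log f(X;\theta)\right)^{\!2}\right].
```

    An estimator whose variance attains this bound, as the maximum likelihood estimator does here, is called efficient.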

  8. Minimum variance and variance of outgoing quality limit MDS-1(c1, c2) plans

    Science.gov (United States)

    Raju, C.; Vidya, R.

    2016-06-01

    In this article, the outgoing quality (OQ) and total inspection (TI) of multiple deferred state sampling plans MDS-1(c1,c2) are studied. It is assumed that the inspection is rejection rectification. Procedures for designing MDS-1(c1,c2) sampling plans with minimum variance of OQ and TI are developed. A procedure for obtaining a plan for a designated upper limit for the variance of the OQ (VOQL) is outlined.

  9. Portfolios Dominating Indices: Optimization with Second-Order Stochastic Dominance Constraints vs. Minimum and Mean Variance Portfolios

    Directory of Open Access Journals (Sweden)

    Neslihan Fidan Keçeci

    2016-10-01

    The paper compares portfolio optimization with Second-Order Stochastic Dominance (SSD) constraints with mean-variance and minimum variance portfolio optimization. As a distribution-free decision rule, stochastic dominance takes into account the entire distribution of return rather than some specific characteristic, such as variance. The paper is focused on practical applications of portfolio optimization and uses the Portfolio Safeguard (PSG) package, which has precoded modules for optimization with SSD constraints, mean-variance and minimum variance portfolio optimization. We have done in-sample and out-of-sample simulations for portfolios of stocks from the Dow Jones, S&P 100 and DAX indices. The considered portfolios SSD-dominate the Dow Jones, S&P 100 and DAX indices. Simulation demonstrated a superior performance of portfolios with SSD constraints versus mean-variance and minimum variance portfolios.
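
    Second-order stochastic dominance of one return sample over another can be checked empirically by comparing expected shortfalls below every threshold. A minimal grid-based sketch (not the PSG formulation used in the paper; data are simulated):

```python
import numpy as np

def ssd_dominates(x, y, grid=200):
    """True if sample x SSD-dominates sample y: for every threshold t the
    expected shortfall E[(t - x)+] is no larger than E[(t - y)+]."""
    ts = np.linspace(min(x.min(), y.min()), max(x.max(), y.max()), grid)
    sf_x = np.array([np.maximum(t - x, 0.0).mean() for t in ts])
    sf_y = np.array([np.maximum(t - y, 0.0).mean() for t in ts])
    return bool(np.all(sf_x <= sf_y + 1e-12))

rng = np.random.default_rng(3)
index = rng.normal(0.0004, 0.012, 1000)      # stand-in for index returns
portfolio = index + 0.0002                   # uniformly better, so dominates
print(ssd_dominates(portfolio, index))       # True
print(ssd_dominates(index, portfolio))       # False
```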

  10. Minimum variance Monte Carlo importance sampling with parametric dependence

    International Nuclear Information System (INIS)

    Ragheb, M.M.H.; Halton, J.; Maynard, C.W.

    1981-01-01

    An approach for Monte Carlo importance sampling with parametric dependence is proposed. It depends upon obtaining, by proper weighting over a single stage, the overall functional dependence of the variance on the importance function parameter over a broad range of its values. Results corresponding to minimum variance are adopted and other results rejected. Numerical calculations for the estimation of integrals are compared to crude Monte Carlo. The results explain the occurrence of effective biases (even though the theoretical bias is zero) and infinite variances which arise in calculations involving severe biasing and a moderate number of histories. Extension to particle transport applications is briefly discussed. The approach constitutes an extension of a theory on the application of Monte Carlo for the calculation of functional dependences, introduced by Frolov and Chentsov, to biasing, or importance sampling calculations; and is a generalization which avoids nonconvergence to the optimal values in some cases of a multistage method for variance reduction introduced by Spanier. (orig.)
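
    The core idea, mapping the variance as a function of an importance-function parameter and keeping the minimum variance value, can be illustrated on a toy rare-event integral with an exponentially tilted sampling density. In this sketch each parameter value is sampled separately, whereas the paper obtains the whole dependence from a single weighted stage; the numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

def tilted_estimate(mu, n=20000):
    """Estimate P(Z > 3) for Z ~ N(0,1) by sampling from N(mu,1) and
    weighting by the density ratio exp(-mu*x + mu^2/2)."""
    x = rng.normal(mu, 1.0, n)
    h = (x > 3.0) * np.exp(-mu * x + 0.5 * mu**2)
    return h.mean(), h.var()

for mu in (0.0, 1.0, 2.0, 3.0, 4.0, 5.0):    # mu = 0 is crude Monte Carlo
    est, var = tilted_estimate(mu)
    print(f"mu={mu:.0f}  estimate={est:.3e}  variance={var:.3e}")
# the variance dips near mu ~ 3, the region where the rare event lives
```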

  11. Investigating the minimum achievable variance in a Monte Carlo criticality calculation

    Energy Technology Data Exchange (ETDEWEB)

    Christoforou, Stavros; Eduard Hoogenboom, J. [Delft University of Technology, Mekelweg 15, 2629 JB Delft (Netherlands)]

    2008-07-01

    The sources of variance in a Monte Carlo criticality calculation are identified and their contributions analyzed. A zero-variance configuration is initially simulated using analytically calculated adjoint functions for biasing. From there, the various sources are analyzed. It is shown that the minimum threshold comes from the fact that the fission source is approximated. In addition, the merits of a simple variance reduction method, such as implicit capture, are shown when compared to an analog simulation. Finally, it is shown that when non-exact adjoint functions are used for biasing, the variance reduction is rather insensitive to the quality of the adjoints, suggesting that the generation of the adjoints should have as low a CPU cost as possible, in order to offset the CPU cost in the implementation of the biasing of a simulation. (authors)
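
    The merit of implicit capture mentioned above can be demonstrated on a toy slab transmission problem: both estimators are unbiased for the transmission probability, but the implicit-capture scores have lower variance. A sketch with made-up cross sections (not a criticality calculation):

```python
import numpy as np

rng = np.random.default_rng(5)
sig_t, sig_a, thickness = 1.0, 0.5, 5.0      # made-up cross sections

def analog():
    """Analog history: at each collision the particle is absorbed with
    probability sig_a / sig_t; score 1 if it crosses the slab."""
    x = 0.0
    while True:
        x += rng.exponential(1.0 / sig_t)
        if x >= thickness:
            return 1.0
        if rng.random() < sig_a / sig_t:
            return 0.0

def implicit_capture():
    """Implicit capture: never absorb; multiply the statistical weight by
    the survival probability at each collision instead."""
    x, w = 0.0, 1.0
    while True:
        x += rng.exponential(1.0 / sig_t)
        if x >= thickness:
            return w
        w *= 1.0 - sig_a / sig_t
        if w < 1e-6:                         # crude cutoff, no Russian roulette
            return 0.0

for history in (analog, implicit_capture):
    s = np.array([history() for _ in range(50000)])
    print(history.__name__, s.mean(), s.var())   # same mean, smaller variance
```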

  12. Linear-Array Photoacoustic Imaging Using Minimum Variance-Based Delay Multiply and Sum Adaptive Beamforming Algorithm

    OpenAIRE

    Mozaffarzadeh, Moein; Mahloojifar, Ali; Orooji, Mahdi; Kratkiewicz, Karl; Adabi, Saba; Nasiriavanaki, Mohammadreza

    2017-01-01

    In photoacoustic imaging (PA), the Delay-and-Sum (DAS) beamformer is a common beamforming algorithm with a simple implementation. However, it results in poor resolution and high sidelobes. To address these challenges, a new algorithm, namely Delay-Multiply-and-Sum (DMAS), was introduced, having lower sidelobes compared to DAS. To improve the resolution of DMAS, a novel beamformer is introduced using Minimum Variance (MV) adaptive beamforming combined with DMAS, so-called Minimum Variance-Based D...

  13. Double Minimum Variance Beamforming Method to Enhance Photoacoustic Imaging

    OpenAIRE

    Paridar, Roya; Mozaffarzadeh, Moein; Nasiriavanaki, Mohammadreza; Orooji, Mahdi

    2018-01-01

    One of the common algorithms used to reconstruct photoacoustic (PA) images is the non-adaptive Delay-and-Sum (DAS) beamformer. However, the quality of the reconstructed PA images obtained by DAS is not satisfying due to its high level of sidelobes and wide mainlobe. In contrast, adaptive beamformers, such as minimum variance (MV), result in an improved image compared to DAS. In this paper, a novel beamforming method, called Double MV (D-MV), is proposed to enhance the image quality compared to...

  14. A Minimum Variance Algorithm for Overdetermined TOA Equations with an Altitude Constraint.

    Energy Technology Data Exchange (ETDEWEB)

    Romero, Louis A; Mason, John J.

    2018-04-01

    We present a direct (non-iterative) method for solving for the location of a radio frequency (RF) emitter, or an RF navigation receiver, using four or more time of arrival (TOA) measurements and an assumed altitude above an ellipsoidal earth. Both the emitter tracking problem and the navigation application are governed by the same equations, but with slightly different interpretations of several variables. We treat the assumed altitude as a soft constraint, with a specified noise level, just as the TOA measurements are handled, with their respective noise levels. With 4 or more TOA measurements and the assumed altitude, the problem is overdetermined and is solved in the weighted least squares sense for the 4 unknowns, the 3-dimensional position and time. We call the new technique the TAQMV (TOA Altitude Quartic Minimum Variance) algorithm, and it achieves the minimum possible error variance for given levels of TOA and altitude estimate noise. The method algebraically produces four solutions: the least-squares solution, and potentially three other low-residual solutions, if they exist. In the lightly overdetermined cases where multiple local minima in the residual error surface are more likely to occur, this algebraic approach can produce all of the minima even when an iterative approach fails to converge. Algorithm performance in terms of solution error variance and divergence rate for baseline (iterative) and proposed approaches are given in tables.

  15. Portfolios dominating indices: Optimization with second-order stochastic dominance constraints vs. minimum and mean variance portfolios

    OpenAIRE

    Keçeci, Neslihan Fidan; Kuzmenko, Viktor; Uryasev, Stan

    2016-01-01

    The paper compares portfolio optimization with the Second-Order Stochastic Dominance (SSD) constraints with mean-variance and minimum variance portfolio optimization. As a distribution-free decision rule, stochastic dominance takes into account the entire distribution of return rather than some specific characteristic, such as variance. The paper is focused on practical applications of the portfolio optimization and uses the Portfolio Safeguard (PSG) package, which has precoded modules for op...

  17. An improved minimum variance beamforming applied to plane-wave imaging in medical ultrasound

    DEFF Research Database (Denmark)

    Deylami, Ali Mohades; Asl, Babak Mohammadzadeh; Jensen, Jørgen Arendt

    2016-01-01

    The Minimum Variance beamformer (MVB) is an adaptive beamformer which provides images with higher resolution and contrast in comparison with non-adaptive beamformers like delay and sum (DAS). It finds the weight vector of the beamformer by minimizing the output power while keeping the desired signal unchanged. We...
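
    The weight computation described here is the classic Capon solution: minimize w^H R w subject to w^H a = 1, giving w = R^-1 a / (a^H R^-1 a). A minimal sketch with diagonal loading as a common stabilizer (the array geometry and data are illustrative):

```python
import numpy as np

def mv_weights(R, a, loading=1e-2):
    """Capon / minimum variance weights: minimize w^H R w subject to
    w^H a = 1, i.e. w = R^-1 a / (a^H R^-1 a), with diagonal loading."""
    M = R.shape[0]
    Rl = R + loading * (np.trace(R).real / M) * np.eye(M)
    x = np.linalg.solve(Rl, a)
    return x / (a.conj() @ x)

M = 8
a = np.ones(M, dtype=complex)                  # broadside steering vector
rng = np.random.default_rng(6)
snap = rng.standard_normal((M, 100)) + 1j * rng.standard_normal((M, 100))
R = snap @ snap.conj().T / 100                 # sample covariance of snapshots
w = mv_weights(R, a)
print(abs(w.conj() @ a))                       # distortionless response: ~1
```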

  18. 29 CFR 1926.2 - Variances from safety and health standards.

    Science.gov (United States)

    2010-07-01

    29 CFR 1926.2 (Labor; Occupational Safety and Health Administration; revised 2010-07-01), Variances from safety and health standards: "(a) Variances from standards which are, or may be, published in this..."

  19. 12 CFR 564.4 - Minimum appraisal standards.

    Science.gov (United States)

    2010-01-01

    12 CFR 564.4 (Banks and Banking; Office of Thrift Supervision, Department of the Treasury; Appraisals; revised 2010-01-01), Minimum appraisal standards: "For federally related transactions, all appraisals shall, at a minimum: (a..."

  20. 77 FR 43196 - Minimum Internal Control Standards and Technical Standards

    Science.gov (United States)

    2012-07-24

    National Indian Gaming Commission, 25 CFR Parts 543 and 547, Minimum Internal Control Standards and Technical Standards. SUPPLEMENTARY INFORMATION: Part 543 addresses minimum internal control standards (MICS) for Class II gaming operations. The regulations require tribes to establish controls and implement...

  1. Minimum variance linear unbiased estimators of loss and inventory

    International Nuclear Information System (INIS)

    Stewart, K.B.

    1977-01-01

    The article illustrates a number of approaches for estimating the material balance inventory and a constant loss amount from the accountability data of a sequence of accountability periods. The approaches all lead to linear estimates that have minimum variance. Techniques are shown whereby ordinary least squares, weighted least squares and generalized least squares computer programs can be used. Two approaches are recursive in nature and lend themselves to small specialized computer programs. Another approach is developed that is easy to program, could be used with a desk calculator, and can be applied recursively from accountability period to accountability period. Some previous results are also reviewed that are very similar in approach to the present ones and vary only in the way net throughput measurements are statistically modeled. 5 refs.
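
    The generalized least squares route mentioned above yields the minimum variance linear unbiased (Gauss-Markov) estimator beta = (X' V^-1 X)^-1 X' V^-1 y for a model y = X beta + e with known error covariance V. A minimal sketch, with illustrative numbers standing in for accountability data:

```python
import numpy as np

def gls(X, y, V):
    """Gauss-Markov estimator beta = (X' V^-1 X)^-1 X' V^-1 y and its
    covariance, for y = X beta + e with known error covariance V."""
    ViX = np.linalg.solve(V, X)
    Viy = np.linalg.solve(V, y)
    cov = np.linalg.inv(X.T @ ViX)
    return cov @ (X.T @ Viy), cov

# Toy stand-in: one constant estimated from four measurements whose error
# variances are known but unequal (illustrative numbers only)
X = np.ones((4, 1))
y = np.array([10.2, 9.8, 10.5, 10.1])
V = np.diag([0.1, 0.1, 0.4, 0.4])
beta, cov = gls(X, y, V)
print(beta.ravel(), cov.ravel())     # precision-weighted mean and its variance
```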

  2. 5 CFR 551.601 - Minimum age standards.

    Science.gov (United States)

    2010-01-01

    5 CFR 551.601 (Administration under the Fair Labor Standards Act; Child Labor; revised 2010-01-01), Minimum age standards: "(a) 16-year... subject to its child labor provisions, with certain exceptions not applicable here. (b) 18-year minimum... occupation found and declared by the Secretary of Labor to be particularly hazardous for the employment of..."

  3. Interdependence of NAFTA capital markets: A minimum variance portfolio approach

    Directory of Open Access Journals (Sweden)

    López-Herrera Francisco

    2014-01-01

    We estimate the long-run relationships among NAFTA capital market returns and then calculate the weights of a "time-varying minimum variance portfolio" that includes the Canadian, Mexican, and USA capital markets between March 2007 and March 2009, a period of intense turbulence in international markets. Our results suggest that the behavior of NAFTA market investors is not consistent with that of a theoretical "risk-averse" agent during periods of high uncertainty and may be either considered irrational or attributed to a possible "home country bias". This finding represents valuable information for portfolio managers and contributes to a better understanding of the nature of the markets in which they invest. It also has practical implications for the design of international portfolio investment policies.

  4. The Achilles Heel of Normal Determinations via Minimum Variance Techniques: Worldline Dependencies

    Science.gov (United States)

    Ma, Z.; Scudder, J. D.; Omidi, N.

    2002-12-01

    Time series of data collected across current layers are usually organized by divining coordinate transformations (as from minimum variance) that permit a geometrical interpretation of the data collected. Almost without exception the current layer geometry is inferred by supposing that the current carrying layer is locally planar. Only after this geometry is "determined" can the various quantities predicted by theory be calculated, the precision of reconnection rates "measured", and the quantitative support for or against component reconnection be evaluated. This paper defines worldline traversals across fully resolved Hall two-fluid models of reconnecting current sheets (with varying sizes of guide fields) and across a 2-D hybrid solution of a supercritical shock layer. Along each worldline, various variance techniques are used to infer current sheet normals based on the data observed along that worldline alone. We then contrast these inferred normals with those known from the overview of the fully resolved spatial pictures of the layer. Absolute errors of 20 degrees in the normal are quite commonplace, but errors of 40-90 degrees are also implied, especially for worldlines that make more and more oblique angles to the true current sheet normal. These mistaken "inferences" are traceable to the degree that the data collected sample 2-D variations within these layers. While it is not surprising that these variance techniques give incorrect normals in the presence of layers that possess 2-D variations, it is illuminating that such large errors need not be signalled by the traditional error formulae for the error cones on normals that have been previously used to estimate the errors of normal choices. Frequently the absolute errors that depend on the worldline path can be 10 times the random error that formulae would predict based on eigenvalues of the covariance matrix. A given time series cannot be associated in any a priori way with a specific worldline.
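
    The minimum variance technique being critiqued estimates a boundary normal as the eigenvector of the field covariance matrix with the smallest eigenvalue. A minimal sketch on synthetic data for a locally planar layer (the benign case; the paper's point is that 2-D structure breaks this picture):

```python
import numpy as np

def mva_normal(B):
    """Minimum variance analysis: the estimated boundary normal is the
    eigenvector of the covariance of the (N, 3) field series that belongs
    to the smallest eigenvalue."""
    evals, evecs = np.linalg.eigh(np.cov(B, rowvar=False))  # ascending order
    return evecs[:, 0], evals

# Synthetic 1-D layer: the field rotates in the x-y plane, normal along z
rng = np.random.default_rng(7)
t = np.linspace(-1.0, 1.0, 400)
B = np.c_[np.tanh(5 * t), 1 / np.cosh(5 * t), 0.05 * np.ones_like(t)]
B += 0.02 * rng.standard_normal(B.shape)
n, evals = mva_normal(B)
print(n, evals)   # n ~ (0, 0, +-1); a small eigenvalue ratio flags degeneracy
```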

  5. A Visual Model for the Variance and Standard Deviation

    Science.gov (United States)

    Orris, J. B.

    2011-01-01

    This paper shows how the variance and standard deviation can be represented graphically by looking at each squared deviation as a graphical object--in particular, as a square. A series of displays show how the standard deviation is the size of the average square.
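
    The representation is easy to state numerically: each deviation spans a square, the variance is the area of the average square, and the standard deviation is its side length. A short illustration:

```python
import numpy as np

data = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
areas = (data - data.mean()) ** 2   # one square per deviation
variance = areas.mean()             # area of the average square
std = np.sqrt(variance)             # side length of that square
print(variance, std)                # 4.0 and 2.0 for this data set
```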

  6. Unbiased minimum variance estimator of a matrix exponential function. Application to Boltzmann/Bateman coupled equations solving

    International Nuclear Information System (INIS)

    Dumonteil, E.; Diop, C. M.

    2009-01-01

    This paper derives an unbiased minimum variance estimator (UMVE) of a matrix exponential function of a normal mean. The result is then used to propose a reference scheme to solve Boltzmann/Bateman coupled equations with Monte Carlo transport codes. The last section presents numerical results on a simple example. (authors)

  7. Experimental performance assessment of the sub-band minimum variance beamformer for ultrasound imaging

    DEFF Research Database (Denmark)

    Diamantis, Konstantinos; Greenaway, Alan H.; Anderson, Tom

    2017-01-01

    Recent progress in adaptive beamforming techniques for medical ultrasound has shown that current resolution limits can be surpassed. One method of obtaining improved lateral resolution is the Minimum Variance (MV) beamformer. The frequency domain implementation of this method effectively divides the broadband ultrasound signals into sub-bands (MVS) to conform with the narrow-band assumption of the original MV theory. This approach is investigated here using experimental Synthetic Aperture (SA) data from wire and cyst phantoms. A 7 MHz linear array transducer is used with the SARUS experimental...

  8. Eigenspace-Based Minimum Variance Adaptive Beamformer Combined with Delay Multiply and Sum: Experimental Study

    OpenAIRE

    Mozaffarzadeh, Moein; Mahloojifar, Ali; Nasiriavanaki, Mohammadreza; Orooji, Mahdi

    2017-01-01

    Delay and sum (DAS) is the most common beamforming algorithm in linear-array photoacoustic imaging (PAI) as a result of its simple implementation. However, it leads to low resolution and high sidelobes. Delay multiply and sum (DMAS) was used to address the incapabilities of DAS, providing a higher image quality. However, the resolution improvement is not good enough compared to eigenspace-based minimum variance (EIBMV). In this paper, the EIBMV beamformer has been combined with DMAS algebra...

  9. Linear-array photoacoustic imaging using minimum variance-based delay multiply and sum adaptive beamforming algorithm

    Science.gov (United States)

    Mozaffarzadeh, Moein; Mahloojifar, Ali; Orooji, Mahdi; Kratkiewicz, Karl; Adabi, Saba; Nasiriavanaki, Mohammadreza

    2018-02-01

    In photoacoustic imaging, the delay-and-sum (DAS) beamformer is a common beamforming algorithm with a simple implementation. However, it results in poor resolution and high sidelobes. To address these challenges, a new algorithm, namely delay-multiply-and-sum (DMAS), was introduced, having lower sidelobes compared to DAS. To improve the resolution of DMAS, a beamformer is introduced using minimum variance (MV) adaptive beamforming combined with DMAS, so-called minimum variance-based DMAS (MVB-DMAS). It is shown that expanding the DMAS equation results in multiple terms representing a DAS algebra. It is proposed to use the MV adaptive beamformer instead of the existing DAS. MVB-DMAS is evaluated numerically and experimentally. In particular, at the depth of 45 mm, MVB-DMAS results in about 31, 18, and 8 dB sidelobe reduction compared to DAS, MV, and DMAS, respectively. The quantitative results of the simulations show that MVB-DMAS leads to improvement in full-width-half-maximum of about 96%, 94%, and 45% and in signal-to-noise ratio of about 89%, 15%, and 35% compared to DAS, DMAS, and MV, respectively. In particular, at the depth of 33 mm in the experimental images, MVB-DMAS results in about 20 dB sidelobe reduction in comparison with the other beamformers.
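
    The DMAS algebra referred to above combines the delayed channel samples pairwise, after a signed square root that keeps the dimensionality of the data. A minimal sketch of DAS and DMAS for one pre-delayed pixel (MVB-DMAS then replaces the inner summations of the expansion with MV-weighted ones; data are illustrative):

```python
import numpy as np

def das(x):
    """Delay and sum for one pixel: x holds the time-aligned channel samples."""
    return x.sum()

def dmas(x):
    """Delay multiply and sum: sum over channel pairs i < j of the products
    of the signed square roots of the aligned samples."""
    y = np.sign(x) * np.sqrt(np.abs(x))
    return (y.sum() ** 2 - (y ** 2).sum()) / 2.0

rng = np.random.default_rng(8)
aligned = 1.0 + 0.1 * rng.standard_normal(64)   # coherent signal plus noise
print(das(aligned), dmas(aligned))
```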

  10. Using variances to comply with resource conservation and recovery act treatment standards

    International Nuclear Information System (INIS)

    Ranek, N.L.

    2002-01-01

    When a waste generated, treated, or disposed of at a site in the United States is classified as hazardous under the Resource Conservation and Recovery Act and is destined for land disposal, the waste manager responsible for that site must select an approach to comply with land disposal restrictions (LDR) treatment standards. This paper focuses on the approach of obtaining a variance from existing, applicable LDR treatment standards. It describes the types of available variances, which include (1) determination of equivalent treatment (DET); (2) treatability variance; and (3) treatment variance for contaminated soil. The process for obtaining each type of variance is also described. Data are presented showing that historically the U.S. Environmental Protection Agency (EPA) processed DET petitions within one year of their date of submission. However, a 1999 EPA policy change added public participation to the DET petition review, which may lengthen processing time in the future. Regarding site-specific treatability variances, data are presented showing an EPA processing time of between 10 and 16 months. Only one generically applicable treatability variance has been granted, which took 30 months to process. No treatment variances for contaminated soil, which were added to the federal LDR program in 1998, are identified as having been granted.

  12. A phantom study on temporal and subband Minimum Variance adaptive beamforming

    DEFF Research Database (Denmark)

    Diamantis, Konstantinos; Voxen, Iben Holfort; Greenaway, Alan H.

    2014-01-01

    This paper compares experimentally temporal and subband implementations of the Minimum Variance (MV) adaptive beamformer for medical ultrasound imaging. The performance of the two approaches is tested by comparing wire phantom measurements, obtained by the research ultrasound scanner SARUS. A 7 MHz BK8804 linear transducer was used to scan a wire phantom in which wires are separated by 10 mm. Performance is then evaluated by the lateral Full-Width-Half-Maximum (FWHM), the Peak Sidelobe Level (PSL), and the computational load. Beamformed single emission responses are also compared with those from the conventional Delay-and-Sum (DAS) beamformer. The FWHM measured at the depth of 46.6 mm is 0.02 mm (0.09λ) for both adaptive methods, while the corresponding values for Hanning and Boxcar weights are 0.64 and 0.44 mm respectively. Between the MV beamformers a -2 dB difference in PSL is noticed in favor...
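
    The FWHM figure of merit used here can be computed from a beamformed lateral profile by interpolating the half-maximum crossings. A small sketch on a Gaussian profile (for which FWHM = 2.355 sigma):

```python
import numpy as np

def fwhm(x, y):
    """Full width at half maximum of a single-peaked profile y(x), found by
    linear interpolation at the two half-maximum crossings."""
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    i, j = above[0], above[-1]
    left = np.interp(half, [y[i - 1], y[i]], [x[i - 1], x[i]])
    right = np.interp(half, [y[j + 1], y[j]], [x[j + 1], x[j]])
    return right - left

x = np.linspace(-2.0, 2.0, 801)
y = np.exp(-0.5 * (x / 0.3) ** 2)   # Gaussian lateral profile, sigma = 0.3
print(fwhm(x, y), 2.355 * 0.3)      # both ~ 0.706
```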

  13. Minimum variance optimal rate allocation for multiplexed H.264/AVC bitstreams.

    Science.gov (United States)

    Tagliasacchi, Marco; Valenzise, Giuseppe; Tubaro, Stefano

    2008-07-01

    Consider the problem of transmitting multiple video streams to fulfill a constant bandwidth constraint. The available bit budget needs to be distributed across the sequences in order to meet some optimality criteria. For example, one might want to minimize the average distortion or, alternatively, minimize the distortion variance, in order to keep almost constant quality among the encoded sequences. By working in the rho-domain, we propose a low-delay rate allocation scheme that, at each time instant, provides a closed form solution for either of the aforementioned problems. We show that minimizing the distortion variance instead of the average distortion leads, for each of the multiplexed sequences, to a coding penalty of less than 0.5 dB in terms of average PSNR. In addition, our analysis provides an explicit relationship between model parameters and this loss. In order to smooth the distortion also along time, we accommodate a shared encoder buffer to compensate for rate fluctuations. Although the proposed scheme is general, and it can be adopted for any video and image coding standard, we provide experimental evidence by transcoding bitstreams encoded using the state-of-the-art H.264/AVC standard. The results of our simulations reveal that it is possible to achieve distortion smoothing both in time and across the sequences, without sacrificing coding efficiency.

  14. 40 CFR 268.44 - Variance from a treatment standard.

    Science.gov (United States)

    2010-07-01

    ... complete petition may be requested as needed to send to affected states and Regional Offices. (e) The... provide an opportunity for public comment. The final decision on a variance from a treatment standard will... than) the concentrations necessary to minimize short- and long-term threats to human health and the...

  15. Minimum reporting standards for clinical research on groin pain in athletes

    DEFF Research Database (Denmark)

    Delahunt, Eamonn; Thorborg, Kristian; Khan, Karim M

    2015-01-01

    Groin pain in athletes is a priority area for sports physiotherapy and sports medicine research. Heterogeneous studies with low methodological quality dominate research related to groin pain in athletes. Low-quality studies undermine the external validity of research findings and limit the ability to generalise findings to the target patient population. Minimum reporting standards for research on groin pain in athletes are overdue. We propose a set of minimum reporting standards based on best available evidence to be utilised in future research on groin pain in athletes. Minimum reporting standards are provided in relation to: (1) study methodology, (2) study participants and injury history, (3) clinical examination, (4) clinical assessment and (5) radiology. Adherence to these minimum reporting standards will strengthen the quality and transparency of research conducted on groin pain in athletes...

  16. Impact of HIPAA's minimum necessary standard on genomic data sharing.

    Science.gov (United States)

    Evans, Barbara J; Jarvik, Gail P

    2018-04-01

    This article provides a brief introduction to the Health Insurance Portability and Accountability Act of 1996 (HIPAA) Privacy Rule's minimum necessary standard, which applies to sharing of genomic data, particularly clinical data, following the 2013 Privacy Rule revisions. This research used the Thomson Reuters Westlaw database and law library resources in its legal analysis of the HIPAA privacy tiers and the impact of the minimum necessary standard on genomic data sharing. We considered relevant example cases of genomic data-sharing needs. In a climate of stepped-up HIPAA enforcement, this standard is of concern to laboratories that generate, use, and share genomic information. How data-sharing activities are characterized (whether for research, public health, or clinical interpretation and medical practice support) affects how the minimum necessary standard applies and its overall impact on data access and use. There is no clear regulatory guidance on how to apply HIPAA's minimum necessary standard when considering the sharing of information in the data-rich environment of genomic testing. Laboratories that perform genomic testing should engage with policy makers to foster sound, well-informed policies and appropriate characterization of data-sharing activities to minimize adverse impacts on day-to-day workflows.

  17. Multi-period fuzzy mean-semi variance portfolio selection problem with transaction cost and minimum transaction lots using genetic algorithm

    Directory of Open Access Journals (Sweden)

    Mohammad Ali Barati

    2016-04-01

    Multi-period models of portfolio selection have been developed in the literature under certain assumptions. In this study, for the first time, the portfolio selection problem is modeled based on mean-semivariance with transaction costs and minimum transaction lots, considering functional constraints and fuzzy parameters. Functional constraints such as transaction costs and minimum transaction lots were included, and the return parameters of the assets were considered as trapezoidal fuzzy numbers. An efficient genetic algorithm (GA) was designed, results were analyzed using numerical instances, and sensitivity analyses were executed. In the numerical study, the problem was solved based on the presence or absence of each mode of constraints, including transaction costs and minimum transaction lots. In addition, with the use of sensitivity analysis, the results of the model were presented under variations of the minimum expected rate over the programming periods.
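
    Semivariance, the downside risk measure underlying the model, penalizes only returns below a target. A minimal crisp-number sketch (the paper's trapezoidal fuzzy parameters, transaction costs, lot constraints and GA are beyond this illustration):

```python
import numpy as np

def semivariance(returns, weights, target=0.0):
    """Portfolio semivariance: mean squared shortfall of the portfolio
    return below `target`; upside deviations are not penalized."""
    shortfall = np.minimum(returns @ weights - target, 0.0)
    return np.mean(shortfall ** 2)

rng = np.random.default_rng(9)
r = rng.normal(0.01, 0.05, size=(120, 5))   # 120 months, 5 assets (simulated)
w = np.full(5, 0.2)                          # equal weights
print(semivariance(r, w, target=0.01))
```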

  18. 25 CFR 542.7 - What are the minimum internal control standards for bingo?

    Science.gov (United States)

    2010-04-01

    25 CFR 542.7 (Indians; Human Services, Minimum Internal Control Standards; revised 2010-04-01), What are the minimum internal control standards for bingo?: "...utilized, alternate documentation and/or procedures that provide at least the level of control described by..."

  19. Effects of Important Parameters Variations on Computing Eigenspace-Based Minimum Variance Weights for Ultrasound Tissue Harmonic Imaging

    OpenAIRE

    Heidari, Mehdi Haji; Mozaffarzadeh, Moein; Manwar, Rayyan; Nasiriavanaki, Mohammadreza

    2018-01-01

    In recent years, the minimum variance (MV) beamforming has been widely studied due to its high resolution and contrast in B-mode Ultrasound imaging (USI). However, the performance of the MV beamformer is degraded at the presence of noise, as a result of the inaccurate covariance matrix estimation which leads to a low quality image. Second harmonic imaging (SHI) provides many advantages over the conventional pulse-echo USI, such as enhanced axial and lateral resolutions. However, the low signa...

  20. 25 CFR 542.14 - What are the minimum internal control standards for the cage?

    Science.gov (United States)

    2010-04-01

    25 CFR 542.14 (Indians; Human Services, Minimum Internal Control Standards; revised 2010-04-01), What are the minimum internal control standards for the cage?: "...and/or procedures that provide at least the level of control described by the standards in this..."

  1. 25 CFR 542.8 - What are the minimum internal control standards for pull tabs?

    Science.gov (United States)

    2010-04-01

    25 CFR 542.8 (Indians; Human Services, Minimum Internal Control Standards; revised 2010-04-01), What are the minimum internal control standards for pull tabs?: "...and/or procedures that provide at least the level of control described by the standards in this..."

  2. Portfolio optimization with mean-variance model

    Science.gov (United States)

    Hoe, Lam Weng; Siew, Lam Weng

    2016-06-01

    Investors wish to achieve the target rate of return at the minimum level of risk in their investment. Portfolio optimization is an investment strategy that can be used to minimize the portfolio risk while achieving the target rate of return. The mean-variance model has been proposed for portfolio optimization; it is an optimization model that aims to minimize the portfolio risk, which is the portfolio variance. The objective of this study is to construct the optimal portfolio using the mean-variance model. The data of this study consist of weekly returns of 20 component stocks of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI). The results of this study show that the portfolio composition of the stocks is different. Moreover, investors can get the return at the minimum level of risk with the constructed optimal mean-variance portfolio.

  3. Experimental performance assessment of the sub-band minimum variance beamformer for ultrasound imaging

    DEFF Research Database (Denmark)

    Diamantis, Konstantinos; Greenaway, Alan H.; Anderson, Tom

    2017-01-01

    Recent progress in adaptive beamforming techniques for medical ultrasound has shown that current resolution limits can be surpassed. One method of obtaining improved lateral resolution is the Minimum Variance (MV) beamformer. The frequency domain implementation of this method effectively divides the broadband ultrasound signals into sub-bands (MVS) to conform with the narrow-band assumption of the original MV theory. This approach is investigated here using experimental Synthetic Aperture (SA) data from wire and cyst phantoms. A 7 MHz linear array transducer is used with the SARUS experimental ultrasound scanner for the data acquisition. The lateral resolution and the contrast obtained are evaluated and compared with those from the conventional Delay-and-Sum (DAS) beamformer and the MV temporal implementation (MVT). From the wire phantom the Full-Width-at-Half-Maximum (FWHM) measured at a depth...

  4. Minimum quality standards and international trade

    DEFF Research Database (Denmark)

    Baltzer, Kenneth Thomas

    2011-01-01

    This paper investigates the impact of a non-discriminating minimum quality standard (MQS) on trade and welfare when the market is characterized by imperfect competition and asymmetric information. A simple partial equilibrium model of an international Cournot duopoly is presented in which a domes...... prefer different levels of regulation. As a result, international trade disputes are likely to arise even when regulation is non-discriminating....

  5. Iterative Minimum Variance Beamformer with Low Complexity for Medical Ultrasound Imaging.

    Science.gov (United States)

    Deylami, Ali Mohades; Asl, Babak Mohammadzadeh

    2018-06-04

    The minimum variance beamformer (MVB) improves the resolution and contrast of medical ultrasound images compared with the delay and sum (DAS) beamformer. The weight vector of this beamformer must be calculated for each imaging point independently, at the cost of increased computational complexity. The large number of necessary calculations limits the application of this beamformer in real-time systems. A beamformer is proposed based on the MVB with lower computational complexity while preserving its advantages. This beamformer avoids matrix inversion, which is the most complex part of the MVB, by solving the optimization problem iteratively. The received signals from two imaging points close together do not vary much in medical ultrasound imaging. Therefore, using the previously optimized weight vector for one point as the initial weight vector for the new neighboring point can improve the convergence speed and decrease the computational complexity. The proposed method was applied on several data sets, and it has been shown that the method can regenerate the results obtained by the MVB while the order of complexity is decreased from O(L^3) to O(L^2). Copyright © 2018 World Federation for Ultrasound in Medicine and Biology. Published by Elsevier Inc. All rights reserved.
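
    An inversion-free iteration of the kind described, warm-started from the neighboring imaging point, can be sketched with projected gradient steps on min w^H R w subject to w^H a = 1. This is a simple illustrative scheme, not necessarily the authors' exact update:

```python
import numpy as np

def mv_iterative(R, a, w0=None, steps=200, step_scale=0.5):
    """Approximate Capon weights without matrix inversion: gradient steps
    on w^H R w, re-projected onto the constraint plane w^H a = 1."""
    mu = step_scale / np.linalg.norm(R, 2)               # safe step size
    w = a / (a.conj() @ a) if w0 is None else w0.copy()
    for _ in range(steps):
        w = w - mu * (R @ w)                             # descent direction R w
        w = w - a * ((a.conj() @ w) - 1.0) / (a.conj() @ a)   # project back
    return w

M = 6
a = np.ones(M, dtype=complex)
rng = np.random.default_rng(10)
s = rng.standard_normal((M, 200)) + 1j * rng.standard_normal((M, 200))
R1 = s @ s.conj().T / 200
w1 = mv_iterative(R1, a)
R2 = R1 + 0.05 * np.eye(M)             # "neighboring point": similar statistics
w2 = mv_iterative(R2, a, w0=w1, steps=20)               # warm start, few steps
exact = np.linalg.solve(R2, a)
exact = exact / (a.conj() @ exact)
print(np.linalg.norm(w2 - exact))      # small, and shrinks with more steps
```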

  6. Proposed minimum reporting standards for chemical analysis Chemical Analysis Working Group (CAWG) Metabolomics Standards Initiative (MSI)

    Science.gov (United States)

    Amberg, Alexander; Barrett, Dave; Beale, Michael H.; Beger, Richard; Daykin, Clare A.; Fan, Teresa W.-M.; Fiehn, Oliver; Goodacre, Royston; Griffin, Julian L.; Hankemeier, Thomas; Hardy, Nigel; Harnly, James; Higashi, Richard; Kopka, Joachim; Lane, Andrew N.; Lindon, John C.; Marriott, Philip; Nicholls, Andrew W.; Reily, Michael D.; Thaden, John J.; Viant, Mark R.

    2013-01-01

    There is a general consensus that supports the need for standardized reporting of metadata or information describing large-scale metabolomics and other functional genomics data sets. Reporting of standard metadata provides a biological and empirical context for the data, facilitates experimental replication, and enables the re-interrogation and comparison of data by others. Accordingly, the Metabolomics Standards Initiative is building a general consensus concerning the minimum reporting standards for metabolomics experiments, of which the Chemical Analysis Working Group (CAWG) is a member. This article proposes the minimum reporting standards related to the chemical analysis aspects of metabolomics experiments including: sample preparation, experimental analysis, quality control, metabolite identification, and data pre-processing. These minimum standards currently focus mostly upon mass spectrometry and nuclear magnetic resonance spectroscopy due to the popularity of these techniques in metabolomics. However, additional input concerning other techniques is welcomed and can be provided via the CAWG on-line discussion forum at http://msi-workgroups.sourceforge.net/ or via Msi-workgroups-feedback@lists.sourceforge.net. Further, community input related to this document can also be provided via this electronic forum. PMID:24039616

  7. 25 CFR 542.11 - What are the minimum internal control standards for pari-mutuel wagering?

    Science.gov (United States)

    2010-04-01

    25 CFR 542.11 (Indians; Interior, Human Services, Minimum Internal Control Standards; revised 2010-04-01), What are the minimum internal control standards for pari-mutuel wagering?: "...documentation and/or procedures that provide at least the level of control described by the standards in this..."

  8. The Need for Higher Minimum Staffing Standards in U.S. Nursing Homes

    Science.gov (United States)

    Harrington, Charlene; Schnelle, John F.; McGregor, Margaret; Simmons, Sandra F.

    2016-01-01

    Many U.S. nursing homes have serious quality problems, in part, because of inadequate levels of nurse staffing. This commentary focuses on two issues. First, there is a need for higher minimum nurse staffing standards for U.S. nursing homes based on multiple research studies showing a positive relationship between nursing home quality and staffing and the benefits of implementing higher minimum staffing standards. Studies have identified the minimum staffing levels necessary to provide care consistent with the federal regulations, but many U.S. facilities have dangerously low staffing. Second, the barriers to staffing reform are discussed. These include economic concerns about costs and a focus on financial incentives. The enforcement of existing staffing standards has been weak, and strong nursing home industry political opposition has limited efforts to establish higher standards. Researchers should study the ways to improve staffing standards and new payment, regulatory, and political strategies to improve nursing home staffing and quality. PMID:27103819

  9. 48 CFR 22.1002-4 - Application of the Fair Labor Standards Act minimum wage.

    Science.gov (United States)

    2010-10-01

    48 CFR 22.1002-4 (Federal Acquisition Regulations System; Service Contract Act of 1965, as Amended; revised 2010-10-01), Application of the Fair Labor Standards Act minimum wage: "...its employees working on the contract less than the minimum wage specified in section 6(a)(1) of the..."

  10. Eigenspace-based minimum variance adaptive beamformer combined with delay multiply and sum: experimental study

    Science.gov (United States)

    Mozaffarzadeh, Moein; Mahloojifar, Ali; Nasiriavanaki, Mohammadreza; Orooji, Mahdi

    2018-02-01

    Delay and sum (DAS) is the most common beamforming algorithm in linear-array photoacoustic imaging (PAI) as a result of its simple implementation. However, it leads to low resolution and high sidelobes. Delay multiply and sum (DMAS) was used to address the incapabilities of DAS, providing a higher image quality. However, the resolution improvement is not good enough compared to eigenspace-based minimum variance (EIBMV). In this paper, the EIBMV beamformer has been combined with DMAS algebra, called EIBMV-DMAS, using the expansion of the DMAS algorithm. The proposed method is used as the reconstruction algorithm in linear-array PAI. EIBMV-DMAS is experimentally evaluated, where the quantitative and qualitative results show that it outperforms DAS, DMAS and EIBMV. The proposed method reduces the sidelobes by about 365%, 221% and 40%, compared to DAS, DMAS and EIBMV, respectively. Moreover, EIBMV-DMAS improves the SNR by about 158%, 63% and 20%, respectively.

  11. 25 CFR 542.17 - What are the minimum internal control standards for complimentary services or items?

    Science.gov (United States)

    2010-04-01

    25 CFR 542.17 (Indians; Interior, Human Services, Minimum Internal Control Standards; revised 2010-04-01), What are the minimum internal control standards for complimentary services or items?: "(a) Each Tribal gaming regulatory authority or..."

  12. Multidimensional adaptive testing with a minimum error-variance criterion

    NARCIS (Netherlands)

    van der Linden, Willem J.

    1997-01-01

    The case of adaptive testing under a multidimensional logistic response model is addressed. An adaptive algorithm is proposed that minimizes the (asymptotic) variance of the maximum-likelihood (ML) estimator of a linear combination of abilities of interest. The item selection criterion is a simple...

  13. Output Power Control of Wind Turbine Generator by Pitch Angle Control using Minimum Variance Control

    Science.gov (United States)

    Senjyu, Tomonobu; Sakamoto, Ryosei; Urasaki, Naomitsu; Higa, Hiroki; Uezato, Katsumi; Funabashi, Toshihisa

    In recent years, there have been problems such as the exhaustion of fossil fuels, e.g., coal and oil, and environmental pollution resulting from their consumption. Effective utilization of renewable energies such as wind energy is expected instead of fossil fuels. Wind energy is not constant and windmill output is proportional to the cube of wind speed, which causes the generated power of wind turbine generators (WTGs) to fluctuate. In order to reduce the fluctuating components, one method is to control the pitch angle of the windmill blades. In this paper, output power leveling of a wind turbine generator by pitch angle control using adaptive control is proposed. A self-tuning regulator is used in the adaptive control, and the control input is determined by minimum variance control. With the proposed controller it is possible to compensate the control input to alleviate generated power fluctuation. Simulation results using an actual detailed model of a wind power system show the effectiveness of the proposed controller.
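
    A self-tuning regulator with a minimum variance control law can be sketched for a first-order ARX model y(t+1) = a y(t) + b u(t) + e(t+1): recursive least squares tracks (a, b), and the control u(t) = -a_hat y(t) / b_hat cancels the predictable part of the next output, leaving only the noise. A textbook illustration, not the paper's pitch-angle controller:

```python
import numpy as np

rng = np.random.default_rng(11)
a_true, b_true = 0.9, 0.5       # unknown plant: y(t+1) = a y(t) + b u(t) + e
theta = np.zeros(2)             # RLS estimates of (a, b)
P = 100.0 * np.eye(2)           # RLS covariance
y, u, outputs = 0.0, 0.0, []

for t in range(500):
    y_new = a_true * y + b_true * u + 0.1 * rng.standard_normal()
    phi = np.array([y, u])                       # regressor for y_new
    k = P @ phi / (1.0 + phi @ P @ phi)          # RLS gain
    theta = theta + k * (y_new - phi @ theta)    # parameter update
    P = P - np.outer(k, phi @ P)
    y = y_new
    outputs.append(y)
    a_hat, b_hat = theta                         # minimum variance control:
    u = -a_hat * y / b_hat if abs(b_hat) > 1e-3 else 0.0   # cancel a_hat * y

print(np.var(outputs[250:]))    # settles near the noise variance 0.01
```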

  14. 25 CFR 547.15 - What are the minimum technical standards for electronic data communications between system...

    Science.gov (United States)

    2010-04-01

    25 CFR 547.15 (Indians; National Indian Gaming...; revised 2010-04-01), What are the minimum technical standards for electronic data communications between system components?: "This section provides minimum standards for electronic data..."

  15. MCNP variance reduction overview

    International Nuclear Information System (INIS)

    Hendricks, J.S.; Booth, T.E.

    1985-01-01

    The MCNP code is rich in variance reduction features. Standard variance reduction methods found in most Monte Carlo codes are available as well as a number of methods unique to MCNP. We discuss the variance reduction features presently in MCNP as well as new ones under study for possible inclusion in future versions of the code

  16. 25 CFR 542.4 - How do these regulations affect minimum internal control standards established in a Tribal-State...

    Science.gov (United States)

    2010-04-01

    25 CFR 542.4 (Indians; National Indian Gaming Commission, Department of the Interior; Human Services, Minimum Internal Control Standards; revised 2010-04-01), How do these regulations affect minimum internal control standards established in a Tribal-State compact?: "(a) If there is a..."

  17. Nonlinear unbiased minimum-variance filter for Mars entry autonomous navigation under large uncertainties and unknown measurement bias.

    Science.gov (United States)

    Xiao, Mengli; Zhang, Yongbo; Fu, Huimin; Wang, Zhihua

    2018-05-01

    A high-precision navigation algorithm is essential for the future Mars pinpoint landing mission. The unknown inputs caused by large uncertainties of atmospheric density and aerodynamic coefficients, as well as unknown measurement biases, may cause large estimation errors in conventional Kalman filters. This paper proposes a derivative-free version of the nonlinear unbiased minimum variance filter for Mars entry navigation. The filter solves this problem by estimating the state and the unknown measurement biases simultaneously, with a derivative-free character, leading to a high-precision algorithm for Mars entry navigation. IMU/radio beacon integrated navigation is introduced in the simulation, and the results show that, with or without radio blackout, the proposed filter achieves an accurate state estimation, much better than the conventional unscented Kalman filter, demonstrating its suitability as a high-precision Mars entry navigation algorithm. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  18. 40 CFR 260.31 - Standards and criteria for variances from classification as a solid waste.

    Science.gov (United States)

    2010-07-01

    40 CFR 260.31 (Protection of Environment; Environmental Protection Agency; Solid Wastes; Hazardous Waste Management System: General; Rulemaking Petitions; revised 2010-07-01), Standards and criteria for variances from classification as a solid waste: "(a) The..."

  19. 25 CFR 12.31 - Are there any minimum employment standards for Indian country law enforcement personnel?

    Science.gov (United States)

    2010-04-01

    25 CFR 12.31 (Indians; Bureau of Indian Affairs, Department of the Interior; Law and Order, Indian Country Law Enforcement, Qualifications and Training Requirements; revised 2010-04-01): Are there any minimum employment standards for Indian country law enforcement personnel?

  20. Estimation of measurement variances

    International Nuclear Information System (INIS)

    Anon.

    1981-01-01

    In the previous two sessions, it was assumed that the measurement error variances were known quantities when the variances of the safeguards indices were calculated. These known quantities are actually estimates based on historical data and on data generated by the measurement program. Session 34 discusses how measurement error parameters are estimated for different situations, and the various error types are considered. The purpose of the session is to enable participants to: (1) estimate systematic error variances from standards data; (2) estimate random error variances from replicate measurement data; and (3) perform a simple analysis of variance to characterize the measurement error structure when biases vary over time.
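
    Point (2), estimating a random error variance from replicate measurements, amounts to pooling within-item sums of squares. A minimal sketch with made-up replicate data:

```python
import numpy as np

def random_error_variance(replicates):
    """Pooled within-item variance from replicate measurements: the random
    error variance estimate, with sum(n_i - 1) degrees of freedom."""
    ss = sum(((np.asarray(r) - np.mean(r)) ** 2).sum() for r in replicates)
    dof = sum(len(r) - 1 for r in replicates)
    return ss / dof

# Three items, each measured repeatedly (illustrative numbers)
items = [[10.1, 10.3, 10.2], [4.9, 5.2], [7.7, 7.5, 7.9, 7.6]]
print(random_error_variance(items))
```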

  1. 25 CFR 36.20 - Standard V-Minimum academic programs/school calendar.

    Science.gov (United States)

    2010-04-01

    25 CFR 36.20 (Indians; Bureau of Indian Affairs, Department of the Interior; Education, Minimum Academic Standards for the Basic Education of Indian Children and National Criteria for Dormitory...; revised 2010-04-01), Standard V, Minimum academic programs/school calendar: "...physical education, music, etc.) which are directly related to or affect student instruction shall provide..."

  2. 76 FR 53817 - Minimum Internal Control Standards for Class II Gaming

    Science.gov (United States)

    2011-08-30

    Department of the Interior, National Indian Gaming Commission, 25 CFR Parts 542 and 543, Minimum Internal Control Standards for Class II Gaming. AGENCY: National Indian Gaming Commission, Interior. ACTION: Final rule; delay of effective date and request for comments. SUMMARY: The National Indian Gaming...

  3. 77 FR 60625 - Minimum Internal Control Standards for Class II Gaming

    Science.gov (United States)

    2012-10-04

    Department of the Interior, National Indian Gaming Commission, 25 CFR Parts 542 and 543, RIN 3141-AA-37, Minimum Internal Control Standards for Class II Gaming. AGENCY: National Indian Gaming Commission. ACTION: Final rule; delay of effective date; suspension. SUMMARY: The National Indian Gaming Commission...

  4. A comparison between temporal and subband minimum variance adaptive beamforming

    Science.gov (United States)

    Diamantis, Konstantinos; Voxen, Iben H.; Greenaway, Alan H.; Anderson, Tom; Jensen, Jørgen A.; Sboros, Vassilis

    2014-03-01

    This paper compares the performance of temporal and subband Minimum Variance (MV) beamformers for medical ultrasound imaging. Both adaptive methods provide an optimized set of apodization weights but are implemented in the time and frequency domains, respectively. Their performance is evaluated with simulated synthetic aperture data obtained from Field II and is quantified by the Full-Width-Half-Maximum (FWHM), the Peak-Side-Lobe level (PSL) and the contrast level. From a point phantom, a full sequence of 128 emissions, with one transducer element transmitting and all 128 elements receiving each time, provides a FWHM of 0.03 mm (0.14λ) for both implementations at a depth of 40 mm. This value is more than 20 times smaller than the one achieved by conventional beamforming. The corresponding values of PSL are -58 dB and -63 dB for the time- and frequency-domain MV beamformers, while a value no lower than -50 dB can be obtained from either Boxcar or Hanning weights. Interestingly, a single emission with central element #64 as the transmitting aperture provides results comparable to the full sequence. The values of FWHM are 0.04 mm and 0.03 mm, and those of PSL are -42 dB and -46 dB, for the temporal and subband approaches. From a cyst phantom and for 128 emissions, the contrast level is calculated at -54 dB and -63 dB, respectively, at the same depth, with the initial shape of the cyst being preserved, in contrast to conventional beamforming. The difference between the two adaptive beamformers is less significant in the case of a single emission, with the contrast level being estimated at -42 dB for the time-domain and -43 dB for the frequency-domain implementation. For the estimation of a single MV weight of a low-resolution image formed by a single emission, 0.44 × 10^9 calculations per second are required for the temporal approach. The same numbers for the subband approach are 0.62 × 10^9 for the point and 1.33 × 10^9 for the cyst phantom. The comparison demonstrates similar imaging performance for the two MV implementations.
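
    The MV apodization weights that both implementations compute solve min wᴴRw subject to wᴴa = 1, giving w = R⁻¹a / (aᴴR⁻¹a). A minimal numpy sketch with invented element data and a trivial steering vector (diagonal loading added for numerical stability):

        import numpy as np

        def mv_weights(snapshots, steer, diag_load=1e-3):
            """Minimum variance (Capon) apodization weights.

            snapshots : (n_elements, n_samples) complex element data
            steer     : (n_elements,) steering vector toward the focal point
            """
            n, m = snapshots.shape
            R = snapshots @ snapshots.conj().T / m             # sample covariance
            R += diag_load * np.trace(R).real / n * np.eye(n)  # diagonal loading
            Ri_a = np.linalg.solve(R, steer)
            return Ri_a / (steer.conj() @ Ri_a)   # w = R^-1 a / (a^H R^-1 a)

        # toy usage: 8-element array, unit steering vector (broadside focus)
        rng = np.random.default_rng(1)
        data = rng.standard_normal((8, 64)) + 1j * rng.standard_normal((8, 64))
        w = mv_weights(data, np.ones(8, dtype=complex))
        print(w.sum())   # distortionless constraint w^H a = 1: sum is ~1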

  5. 75 FR 55269 - Minimum Internal Control Standards for Class II Gaming

    Science.gov (United States)

    2010-09-10

    ... DEPARTMENT OF THE INTERIOR National Indian Gaming Commission 25 CFR Parts 542 and 543 RIN 3141-AA-37 Minimum Internal Control Standards for Class II Gaming AGENCY: National Indian Gaming Commission. ACTION: Delay of effective date of final rule; request for comments. SUMMARY: The National Indian Gaming...

  6. Disasters And Minimum Health Standards In Disaster Response

    Directory of Open Access Journals (Sweden)

    Sibel GOGEN

    Full Text Available Millions of people are affected by natural or man-made disasters all over the world. The number of people affected by disasters is increasing globally, due to global climate change, increasing poverty, low living standards, inadequate infrastructure, the lack of early-response systems, and abuse of natural resources, as well as nuclear weapons, wars and conflicts, terrorist actions, migration, displacement, and population movements. Ninety-five percent of the lives lost to disasters occur in underdeveloped or developing countries. Turkey is a developing country that is highly affected by disasters. Coping with disasters requires not only national action plans but also international action plans and cooperation. Since all disasters have direct and indirect effects on health, applying minimum health standards in disaster response will reduce morbidity and mortality rates. In this paper, water supply and sanitation, vector control, waste control, burial of corpses, nutrition, and minimum health standards in disaster response are reviewed. [TAF Prev Med Bull 2004; 3(12): 296-306]

  7. 38 CFR 17.155 - Minimum standards of safety and quality for automotive adaptive equipment.

    Science.gov (United States)

    2010-07-01

    ... safety and quality for automotive adaptive equipment. 17.155 Section 17.155 Pensions, Bonuses, and... Minimum standards of safety and quality for automotive adaptive equipment. (a) The Under Secretary for... officials that it meets implicit standards of safety and quality adopted by the industry or as later...

  8. 78 FR 2797 - Federal Motor Vehicle Safety Standards; Minimum Sound Requirements for Hybrid and Electric Vehicles

    Science.gov (United States)

    2013-01-14

    ... Sound Requirements for Hybrid and Electric Vehicles; Draft Environmental Assessment for Rulemaking To Establish Minimum Sound Requirements for Hybrid and Electric Vehicles; Proposed Rules #0;#0;Federal Register...-0148] RIN 2127-AK93 Federal Motor Vehicle Safety Standards; Minimum Sound Requirements for Hybrid and...

  9. Performance Measurement Implementation Of Minimum Service Standards For Basic Education Based On The Balanced Scorecard

    Directory of Open Access Journals (Sweden)

    Budiman Rusli

    2015-08-01

    Full Text Available The policy on Minimum Service Standards for Basic Education has been rolled out since 2002 by the minister, in accordance with Decree No. 129a/U/2004 on Minimum Service Standards for Education, and has been continually updated, most recently by Regulation of the Minister of Education and Culture No. 23 of 2013. All district and municipal governments were expected to reach 100 percent achievement on each of the indicators listed in the minimum service standards by the end of 2014. Achievement on each indicator, however, is only one measure of the performance of a local government education department. Unfortunately, almost all regions, including the local government of Tangerang Regency, failed to reach the announced targets for the 27 indicators. It is therefore necessary to measure the performance of local authorities, particularly their education departments. One sufficiently modern measurement approach is the Balanced Scorecard (BSC). The Balanced Scorecard is a comprehensive contemporary management tool that measures company performance not only from the financial perspective but also through non-financial perspectives such as customers, internal business processes, and learning and growth. The approach was originally suited to large multinational companies, since it is expensive to implement, but it can be used to measure a company's profit performance alongside a combination of long-term and short-term strategies. The Balanced Scorecard can also be applied, with a few modifications, to measuring the performance of public-sector services, including performance measurement of the Minimum Service Standards for Basic Education.

  10. Projected electricity savings from implementing minimum energy efficiency standard for household refrigerators in Malaysia

    International Nuclear Information System (INIS)

    Mahlia, T.M.I.; Masjuki, H.H.; Saidur, R.; Choudhury, I.A.; NoorLeha, A.R.

    2003-01-01

    The Malaysian economy has grown rapidly in the last two decades. This growth has increased the ownership of household electrical appliances, especially refrigerator-freezers; almost every house in Malaysia owns one. The Malaysia Energy Center has considered implementing a minimum energy efficiency standard for household refrigerator-freezers sometime in the coming year. This paper attempts to predict the amount of energy savings in the residential sector from implementing a minimum energy efficiency standard for household refrigerator-freezers. The calculations are based on the growth of refrigerator-freezer ownership in Malaysian households. By implementing the program in 2004, about 8722 GWh will be saved by the year 2013. Efficiency improvement of this appliance will therefore have a significant impact on future electricity consumption in Malaysia
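
    The stock-accounting arithmetic behind such projections can be sketched in a few lines; every number below (initial stock, growth rate, replacement share, and unit consumption before and after the standard) is invented for illustration and is not from the paper:

        # Stock-based savings sketch: units that meet the standard replace
        # part of the stock each year; savings accumulate accordingly.
        years = range(2004, 2014)
        stock, growth = 5_000_000, 0.06     # assumed stock and annual growth
        sales_share = 0.12                  # share of stock replaced per year
        uec_base, uec_std = 450.0, 380.0    # kWh/yr per unit, before/after MEPS

        efficient_units = 0.0
        total_savings_gwh = 0.0
        for y in years:
            stock *= 1 + growth
            efficient_units += sales_share * stock   # new units meet the standard
            total_savings_gwh += efficient_units * (uec_base - uec_std) / 1e6

        print(f"cumulative savings by {max(years)}: {total_savings_gwh:.0f} GWh")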

  11. Minimum Information about T Regulatory Cells: A Step toward Reproducibility and Standardization

    Directory of Open Access Journals (Sweden)

    Anke Fuchs

    2018-01-01

    Full Text Available Cellular therapies with CD4+ T regulatory cells (Tregs) hold the promise of efficacious treatment for a variety of autoimmune and allergic diseases, as well as posttransplant complications. Nevertheless, current manufacturing of Tregs as a cellular medicinal product varies between laboratories, which in turn hampers precise comparison of results between studies. While the number of clinical trials testing Tregs is already substantial, it seems crucial to provide some standardized characteristics of Treg products in order to minimize the problem. We previously developed reporting guidelines called minimum information about tolerogenic antigen-presenting cells, which allow the comparison between different preparations of tolerance-inducing antigen-presenting cells. Building on this experience, here we describe the minimum information about Tregs (MITREG). It is important to note that MITREG does not dictate how investigators should generate or characterize Tregs, but it does require investigators to report their Treg data in a consistent and transparent manner. We hope this will, therefore, be a useful tool facilitating standardized reporting on the manufacturing of Tregs, either for research purposes or for clinical application. In this way, MITREG might also be an important step toward more standardized and reproducible testing of Treg preparations in clinical applications.

  12. The Implementation of Minimum Service Standards (MMS on Public Service for Health Services Sector in Bondowoso, Indonesia

    Directory of Open Access Journals (Sweden)

    Untung Kuzairi

    2018-04-01

    Full Text Available One of the health policies implemented by hospitals is the minimum service standards (MSS). MSS is a benchmark of hospital service quality in providing services to the public. Regarding health service quality, it was found in the field that the achievement of the MSS indicators at the dr. H. Koesnadi General Hospital in Bondowoso, Indonesia in 2016 still did not meet the targets of the hospital service standard (type B) or the hospital minimum service standard (MSS). This fact shows that the quality of health services at the dr. H. Koesnadi General Hospital is still low. This research therefore aims to describe the implementation of the minimum service standards policy and to analyze the obstacles to its implementation at the dr. H. Koesnadi General Hospital, using Edward III's concept as the analytical tool. The research employed a qualitative design with a phenomenological approach. The results show that the implementation of the MSS policy at the dr. H. Koesnadi General Hospital still did not run well. This was due to several factors, such as communication, bureaucratic structure, resources, disposition (attitude), and leadership in controlling sectoral ego. Sectoral ego can be shaped by the educational background of specialist doctors who still adhere to seniority, and by the lack of individual effort by implementors in building interpersonal communication and conflict management.

  13. 25 CFR 547.16 - What are the minimum standards for game artwork, glass, and rules?

    Science.gov (United States)

    2010-04-01

    ..., and rules? 547.16 Section 547.16 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR... § 547.16 What are the minimum standards for game artwork, glass, and rules? This section provides standards for the display of game artwork, the displays on belly or top glass, and the display and...

  14. Setting a national minimum standard for health benefits: how do state benefit mandates compare with benefits in large-group plans?

    Science.gov (United States)

    Frey, Allison; Mika, Stephanie; Nuzum, Rachel; Schoen, Cathy

    2009-06-01

    Many proposed health insurance reforms would establish a federal minimum benefit standard--a baseline set of benefits to ensure that people have adequate coverage and financial protection when they purchase insurance. Currently, benefit mandates are set at the state level; these vary greatly across states and generally target specific areas rather than set an overall standard for what qualifies as health insurance. This issue brief considers what a broad federal minimum standard might look like by comparing existing state benefit mandates with the services and providers covered under the Federal Employees Health Benefits Program (FEHBP) Blue Cross and Blue Shield standard benefit package, an example of minimum creditable coverage that reflects current standard practice among employer-sponsored health plans. With few exceptions, benefits in the FEHBP standard option either meet or exceed those that state mandates require, indicating that a broad-based national benefit standard would include most existing state benefit mandates.

  15. A Study of the Causes of Man-Hour Variance of Naval Shipyard Work Standards (The National Shipbuilding Research Program)

    National Research Council Canada - National Science Library

    Bunch, Howard M

    1989-01-01

    This paper is a presentation of the results of a study conducted at a U.S. Navy shipyard during 1987 concerning the relationship between engineering standards and the variances that were occurring in production budget and charged manhours...

  16. Creation of minimum standard tool for palliative care in India and self-evaluation of palliative care programs using it

    Directory of Open Access Journals (Sweden)

    M R Rajagopal

    2014-01-01

    Full Text Available Background: It is important to ensure that minimum standards for palliative care based on available resources are clearly defined and achieved. Aims: (1) Creation of minimum National Standards for Palliative Care for India. (2) Development of a tool for self-evaluation of palliative care organizations. (3) Evaluation of the tool in India. In 2006, Pallium India assembled a working group at the national level to develop minimum standards. The standards were to be evaluated by palliative care services in the country. Materials and Methods: The working group prepared a "standards" document, which had two parts - the first composed of eight "essential" components and the second, 22 "desirable" components. The working group sent the document to 86 hospice and palliative care providers nationwide, requesting them to self-evaluate their palliative care services based on the standards document, on a modified Likert scale. Results: Forty-nine (57%) palliative care organizations responded, and their self-evaluation of services based on the standards tool was analyzed. The majority of the palliative care providers met most of the standards identified as essential by the working group. A variable percentage of organizations had satisfied the desirable components of the standards. Conclusions: We demonstrated that the "standards tool" could be applied effectively in practice for self-evaluation of quality of palliative care services.

  17. Analysis of conditional genetic effects and variance components in developmental genetics.

    Science.gov (United States)

    Zhu, J

    1995-12-01

    A genetic model with additive-dominance effects and genotype x environment interactions is presented for quantitative traits with time-dependent measures. The genetic model for phenotypic means at time t conditional on phenotypic means measured at previous time (t-1) is defined. Statistical methods are proposed for analyzing conditional genetic effects and conditional genetic variance components. Conditional variances can be estimated by minimum norm quadratic unbiased estimation (MINQUE) method. An adjusted unbiased prediction (AUP) procedure is suggested for predicting conditional genetic effects. A worked example from cotton fruiting data is given for comparison of unconditional and conditional genetic variances and additive effects.

  18. Computing the Expected Value and Variance of Geometric Measures

    DEFF Research Database (Denmark)

    Staals, Frank; Tsirogiannis, Constantinos

    2017-01-01

    distance (MPD), the squared Euclidean distance from the centroid, and the diameter of the minimum enclosing disk. We also describe an efficient (1-e)-approximation algorithm for computing the mean and variance of the mean pairwise distance. We implemented three of our algorithms and we show that our...

  19. Cost-benefit analysis of implementing minimum energy efficiency standards for household refrigerator-freezers in Malaysia

    International Nuclear Information System (INIS)

    Mahlia, T.M.I.; Masjuki, H.H.; Saidur, R.; Amalina, M.A.

    2004-01-01

    The ownership of household electrical appliances, especially refrigerator-freezers, has increased rapidly in Malaysia; almost every household in the country has one. To reduce energy consumption in this sector, the refrigerator is one of the top priorities of the energy efficiency program for household appliances. The Malaysian authorities are considering implementing minimum energy efficiency standards for refrigerator-freezers sometime in the coming year. This paper attempts a cost-benefit analysis of implementing minimum energy efficiency standards for household refrigerator-freezers in Malaysia. The calculations were made based on the growth of refrigerator ownership in Malaysian households. The number of refrigerator-freezers has increased from 175,842 units in 1970 to 4,196,486 in 2000, and is projected to reach about 11,293,043 units in 2020. The appliance accounts for about 26.3% of electricity consumption in a single household. Efficiency improvement of this appliance will therefore have a significant impact on future electricity consumption in the country. Furthermore, it is found that implementing an energy efficiency standard for household refrigerator-freezers is economically justified

  20. The influence of SO4 and NO3 to the acidity (pH) of rainwater using minimum variance quadratic unbiased estimation (MIVQUE) and maximum likelihood methods

    Science.gov (United States)

    Dilla, Shintia Ulfa; Andriyana, Yudhie; Sudartianto

    2017-03-01

    Acid rain causes many harmful effects. It is formed by two strong acids, sulfuric acid (H2SO4) and nitric acid (HNO3), where the sulfuric acid is derived from SO2 and the nitric acid from NOx (x = 1, 2). The purpose of this research is to determine the influence of the SO4 and NO3 levels contained in rain on the acidity (pH) of rainwater. The data are incomplete panel data with a two-way error component model. Panel data are a collection of observations observed over time; the panel is said to be incomplete if each individual has a different number of observations. The model used in this research is a random effects model (REM). Minimum variance quadratic unbiased estimation (MIVQUE) is used to estimate the variance of the error components, while maximum likelihood estimation is used to estimate the parameters. As a result, we obtain the following model: Ŷ* = 0.41276446 - 0.00107302X1 + 0.00215470X2.
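
    The random-effects estimation step can be sketched with statsmodels, noting two assumptions: the file and column names below are invented, and statsmodels fits the variance components by (RE)ML rather than MIVQUE, so this only approximates the paper's procedure:

        import pandas as pd
        import statsmodels.formula.api as smf

        # df columns (invented names): pH, SO4, NO3, station (panel individual)
        df = pd.read_csv("rainwater.csv")

        # Random effects model: random intercept per station.  statsmodels
        # estimates the variance components by (RE)ML, not MIVQUE, so this
        # is only a stand-in for the paper's estimator.
        model = smf.mixedlm("pH ~ SO4 + NO3", df, groups=df["station"])
        result = model.fit(reml=True)
        print(result.summary())   # fixed effects ~ the fitted equation above
        print(result.cov_re)      # estimated random-effect variance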

  1. Hedging with stock index futures: downside risk versus the variance

    NARCIS (Netherlands)

    Brouwer, F.; Nat, van der M.

    1995-01-01

    In this paper we investigate hedging a stock portfolio with stock index futures. Instead of defining the hedge ratio as the minimum variance hedge ratio, we consider several measures of downside risk: the semivariance according to Markowitz [1959] and the various lower partial moments according to

  2. Expected Stock Returns and Variance Risk Premia

    DEFF Research Database (Denmark)

    Bollerslev, Tim; Zhou, Hao

    risk premium with the P/E ratio results in an R2 for the quarterly returns of more than twenty-five percent. The results depend crucially on the use of "model-free", as opposed to standard Black-Scholes, implied variances, and realized variances constructed from high-frequency intraday, as opposed...

  3. A Robust Statistics Approach to Minimum Variance Portfolio Optimization

    Science.gov (United States)

    Yang, Liusha; Couillet, Romain; McKay, Matthew R.

    2015-12-01

    We study the design of portfolios under a minimum risk criterion. The performance of the optimized portfolio relies on the accuracy of the estimated covariance matrix of the portfolio asset returns. For large portfolios, the number of available market returns is often of similar order to the number of assets, so that the sample covariance matrix performs poorly as a covariance estimator. Additionally, financial market data often contain outliers which, if not correctly handled, may further corrupt the covariance estimation. We address these shortcomings by studying the performance of a hybrid covariance matrix estimator based on Tyler's robust M-estimator and on Ledoit-Wolf's shrinkage estimator while assuming samples with heavy-tailed distribution. Employing recent results from random matrix theory, we develop a consistent estimator of (a scaled version of) the realized portfolio risk, which is minimized by optimizing online the shrinkage intensity. Our portfolio optimization method is shown via simulations to outperform existing methods both for synthetic and real market data.
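
    Once a covariance estimate Σ̂ is in hand, the global-minimum-variance weights are w = Σ̂⁻¹1 / (1ᵀΣ̂⁻¹1). A sketch using plain Ledoit-Wolf shrinkage from scikit-learn (the paper's hybrid Tyler/Ledoit-Wolf estimator with online-tuned shrinkage intensity is not reproduced here):

        import numpy as np
        from sklearn.covariance import LedoitWolf

        def gmv_weights(returns):
            """Global minimum variance weights from a T x N return matrix."""
            sigma = LedoitWolf().fit(returns).covariance_   # shrunk covariance
            ones = np.ones(sigma.shape[0])
            w = np.linalg.solve(sigma, ones)
            return w / w.sum()            # w = S^-1 1 / (1' S^-1 1)

        # toy usage on simulated daily returns: 250 days, 50 assets
        rng = np.random.default_rng(2)
        r = rng.standard_normal((250, 50)) * 0.01
        w = gmv_weights(r)
        print(w.sum(), w @ np.cov(r, rowvar=False) @ w)   # 1.0, realized variance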

  4. Continuous-Time Mean-Variance Portfolio Selection with Random Horizon

    International Nuclear Information System (INIS)

    Yu, Zhiyong

    2013-01-01

    This paper examines the continuous-time mean-variance optimal portfolio selection problem with random market parameters and random time horizon. Treating this problem as a linearly constrained stochastic linear-quadratic optimal control problem, I explicitly derive the efficient portfolios and efficient frontier in closed forms based on the solutions of two backward stochastic differential equations. Some related issues such as a minimum variance portfolio and a mutual fund theorem are also addressed. All the results are markedly different from those in the problem with deterministic exit time. A key part of my analysis involves proving the global solvability of a stochastic Riccati equation, which is interesting in its own right

  5. Continuous-Time Mean-Variance Portfolio Selection with Random Horizon

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Zhiyong, E-mail: yuzhiyong@sdu.edu.cn [Shandong University, School of Mathematics (China)

    2013-12-15

    This paper examines the continuous-time mean-variance optimal portfolio selection problem with random market parameters and random time horizon. Treating this problem as a linearly constrained stochastic linear-quadratic optimal control problem, I explicitly derive the efficient portfolios and efficient frontier in closed forms based on the solutions of two backward stochastic differential equations. Some related issues such as a minimum variance portfolio and a mutual fund theorem are also addressed. All the results are markedly different from those in the problem with deterministic exit time. A key part of my analysis involves proving the global solvability of a stochastic Riccati equation, which is interesting in its own right.

  6. 77 FR 32444 - Minimum Internal Control Standards

    Science.gov (United States)

    2012-06-01

    ... definitions, add and amend existing definitions; amend the term ``variance'' as it applies to establishing an... and audit and accounting procedures into their respective sections. DATES: Submit comments on or... Commission agrees that it does not intend to limit the definition of charitable organizations to those with a...

  7. DFT-based channel estimation and noise variance estimation techniques for single-carrier FDMA

    OpenAIRE

    Huang, G; Nix, AR; Armour, SMD

    2010-01-01

    Practical frequency domain equalization (FDE) systems generally require knowledge of the channel and the noise variance to equalize the received signal in a frequency-selective fading channel. Accurate channel estimate and noise variance estimate are thus desirable to improve receiver performance. In this paper we investigate the performance of the denoise channel estimator and the approximate linear minimum mean square error (A-LMMSE) channel estimator with channel power delay profile (PDP) ...
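
    The denoise channel estimator mentioned can be sketched as follows: transform the least-squares frequency-domain estimate to the time domain, keep only the taps within the assumed channel length, and read the noise variance off the discarded taps. A minimal sketch under those assumptions, with all sizes invented:

        import numpy as np

        def dft_denoise(h_ls_freq, channel_len):
            """DFT-based denoising of a least-squares channel estimate.

            h_ls_freq   : (N,) complex LS estimate over N subcarriers
            channel_len : assumed max number of time-domain taps (e.g. CP length)
            Returns (denoised frequency response, noise-variance estimate).
            """
            n = len(h_ls_freq)
            h_time = np.fft.ifft(h_ls_freq)
            # taps beyond the channel length contain only noise
            noise_var = np.mean(np.abs(h_time[channel_len:]) ** 2) * n
            h_time[channel_len:] = 0.0          # zero the noise-only taps
            return np.fft.fft(h_time), noise_var

        # toy check: a pure-noise input recovers its per-subcarrier variance
        rng = np.random.default_rng(3)
        noise = (rng.standard_normal(256) + 1j * rng.standard_normal(256)) / np.sqrt(2)
        _, var_hat = dft_denoise(noise, 16)
        print(var_hat)   # ~1.0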

  8. 25 CFR 547.14 - What are the minimum technical standards for electronic random number generation?

    Science.gov (United States)

    2010-04-01

    ... random number generation? 547.14 Section 547.14 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF... CLASS II GAMES § 547.14 What are the minimum technical standards for electronic random number generation...) Unpredictability; and (3) Non-repeatability. (b) Statistical Randomness.(1) Numbers produced by an RNG shall be...

  9. 25 CFR 547.12 - What are the minimum technical standards for downloading on a Class II gaming system?

    Science.gov (United States)

    2010-04-01

    ... on a Class II gaming system? 547.12 Section 547.12 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM TECHNICAL STANDARDS FOR GAMING EQUIPMENT USED WITH THE PLAY... gaming system? This section provides standards for downloading on a Class II gaming system. (a) Downloads...

  10. 9 CFR 354.210 - Minimum standards for sanitation, facilities, and operating procedures in official plants.

    Science.gov (United States)

    2010-01-01

    ... 9 Animals and Animal Products 2 2010-01-01 2010-01-01 false Minimum standards for sanitation, facilities, and operating procedures in official plants. 354.210 Section 354.210 Animals and Animal Products... sanitation, facilities, and operating procedures in official plants. The provisions of §§ 354.210 to 354.247...

  11. Joint Adaptive Mean-Variance Regularization and Variance Stabilization of High Dimensional Data.

    Science.gov (United States)

    Dazard, Jean-Eudes; Rao, J Sunil

    2012-07-01

    The paper addresses a common problem in the analysis of high-dimensional high-throughput "omics" data, which is parameter estimation across multiple variables in a set of data where the number of variables is much larger than the sample size. Among the problems posed by this type of data are that variable-specific estimators of variances are not reliable and variable-wise test statistics have low power, both due to a lack of degrees of freedom. In addition, it has been observed in this type of data that the variance increases as a function of the mean. We introduce a non-parametric adaptive regularization procedure that is innovative in that: (i) it employs a novel "similarity statistic"-based clustering technique to generate local-pooled or regularized shrinkage estimators of population parameters, (ii) the regularization is done jointly on population moments, benefiting from C. Stein's result on inadmissibility, which implies that the usual sample variance estimator is improved by a shrinkage estimator using information contained in the sample mean. From these joint regularized shrinkage estimators, we derive regularized t-like statistics and show in simulation studies that they offer more statistical power in hypothesis testing than their standard sample counterparts, or regular common-value shrinkage estimators, or when the information contained in the sample mean is simply ignored. Finally, we show that these estimators feature interesting properties of variance stabilization and normalization that can be used for preprocessing high-dimensional multivariate data. The method is available as an R package, called 'MVR' ('Mean-Variance Regularization'), downloadable from the CRAN website.
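
    The flavor of variance shrinkage can be conveyed with a much simpler common-value scheme that shrinks per-variable variances toward their pooled mean; the MVR package's clustering-based local pooling and joint mean-variance regularization are not reproduced here:

        import numpy as np

        def shrink_variances(x, lam=0.5):
            """Shrink per-variable sample variances toward the pooled variance.

            x   : (n_samples, n_vars) data matrix with n_vars >> n_samples
            lam : shrinkage intensity in [0, 1] (0 = raw, 1 = fully pooled)
            """
            s2 = x.var(axis=0, ddof=1)    # unreliable per-variable variances
            pooled = s2.mean()            # borrow strength across variables
            return (1 - lam) * s2 + lam * pooled

        rng = np.random.default_rng(4)
        data = rng.standard_normal((6, 10_000))   # 6 samples, 10k variables
        s2_shrunk = shrink_variances(data)
        # the spread of the variance estimates is reduced by the shrinkage
        print(data.var(axis=0, ddof=1).std(), s2_shrunk.std())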

  12. 25 CFR 547.11 - What are the minimum technical standards for money and credit handling?

    Science.gov (United States)

    2010-04-01

    ... GAMES § 547.11 What are the minimum technical standards for money and credit handling? This section... interface is: (i) Involved in the play of a game; (ii) In audit mode, recall mode or any test mode; (iii...) For machine-readable vouchers and coupons, a bar code or other form of machine readable representation...

  13. Hybrid biasing approaches for global variance reduction

    International Nuclear Information System (INIS)

    Wu, Zeyun; Abdel-Khalik, Hany S.

    2013-01-01

    A new variant of the Monte Carlo-deterministic (DT) hybrid variance reduction approach based on Gaussian process theory is presented for accelerating convergence of Monte Carlo simulation and compared with the Forward-Weighted Consistent Adjoint Driven Importance Sampling (FW-CADIS) approach implemented in the SCALE package from Oak Ridge National Laboratory. The new approach, denoted the Gaussian process approach, treats the responses of interest as normally distributed random processes. The Gaussian process approach improves the selection of the weight windows of simulated particles by identifying a subspace that captures the dominant sources of statistical response variations. Like the FW-CADIS approach, the Gaussian process approach utilizes particle importance maps obtained from deterministic adjoint models to derive weight-window biasing. In contrast to the FW-CADIS approach, the Gaussian process approach identifies the response correlations (via a covariance matrix) and employs them to reduce the computational overhead required for global variance reduction (GVR) purposes. The effective rank of the covariance matrix identifies the minimum number of uncorrelated pseudo responses, which are employed to bias simulated particles. Numerical experiments, serving as a proof of principle, are presented to compare the Gaussian process and FW-CADIS approaches in terms of the global reduction in standard deviation of the estimated responses. - Highlights: ► Hybrid Monte Carlo-deterministic method based on a Gaussian process model is introduced. ► Method employs a deterministic model to calculate response correlations. ► Method employs correlations to bias Monte Carlo transport. ► Method compared to the FW-CADIS methodology in the SCALE code. ► An order of magnitude speed-up is achieved for a PWR core model.
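
    The effective-rank step, finding how many uncorrelated pseudo responses capture most of the response variation, amounts to an eigenvalue cutoff on the response covariance matrix; a minimal sketch with an invented covariance:

        import numpy as np

        def effective_rank(cov, frac=0.99):
            """Smallest number of eigen-directions capturing `frac` of the trace."""
            evals = np.linalg.eigvalsh(cov)[::-1]        # descending eigenvalues
            cum = np.cumsum(evals) / evals.sum()
            return int(np.searchsorted(cum, frac) + 1)

        # toy covariance: many responses driven by a few underlying modes
        rng = np.random.default_rng(5)
        modes = rng.standard_normal((3, 100))      # 3 true modes, 100 responses
        cov = modes.T @ modes + 1e-4 * np.eye(100)
        print(effective_rank(cov))                 # typically 3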

  14. Modelling volatility by variance decomposition

    DEFF Research Database (Denmark)

    Amado, Cristina; Teräsvirta, Timo

    In this paper, we propose two parametric alternatives to the standard GARCH model. They allow the variance of the model to have a smooth time-varying structure of either additive or multiplicative type. The suggested parameterisations describe both nonlinearity and structural change in the condit...

  15. Development of a treatability variance guidance document for US DOE mixed-waste streams

    International Nuclear Information System (INIS)

    Scheuer, N.; Spikula, R.; Harms, T.

    1990-03-01

    In response to the US Department of Energy's (DOE's) anticipated need for variances from the Resource Conservation and Recovery Act (RCRA) Land Disposal Restrictions (LDRs), a treatability variance guidance document was prepared. The guidance manual is for use by DOE facilities and operations offices. The manual was prepared as a part of an ongoing effort by DOE-EH to provide guidance for the operations offices and facilities to comply with the RCRA (LDRs). A treatability variance is an alternative treatment standard granted by EPA for a restricted waste. Such a variance is not an exemption from the requirements of the LDRs, but rather is an alternative treatment standard that must be met before land disposal. The manual, Guidance For Obtaining Variance From the Treatment Standards of the RCRA Land Disposal Restrictions (1), leads the reader through the process of evaluating whether a variance from the treatment standard is a viable approach and through the data-gathering and data-evaluation processes required to develop a petition requesting a variance. The DOE review and coordination process is also described and model language for use in petitions for DOE radioactive mixed waste (RMW) is provided. The guidance manual focuses on RMW streams, however the manual also is applicable to nonmixed, hazardous waste streams. 4 refs

  16. 29 CFR 4.2 - Payment of minimum wage specified in section 6(a)(1) of the Fair Labor Standards Act of 1938...

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 1 2010-07-01 2010-07-01 true Payment of minimum wage specified in section 6(a)(1) of the... and Procedures § 4.2 Payment of minimum wage specified in section 6(a)(1) of the Fair Labor Standards... employees shall pay any employees engaged in such work less than the minimum wage specified in section 6(a...

  17. An elementary components of variance analysis for multi-center quality control

    International Nuclear Information System (INIS)

    Munson, P.J.; Rodbard, D.

    1977-01-01

    The serious variability of RIA results from different laboratories indicates the need for multi-laboratory collaborative quality control (QC) studies. Statistical analysis methods for such studies using an 'analysis of variance with components of variance estimation' are discussed. This technique allocates the total variance into components corresponding to between-laboratory, between-assay, and residual or within-assay variability. Components of variance analysis also provides an intelligent way to combine the results of several QC samples run at different levels, from which we may decide whether any component varies systematically with dose level; if not, pooling of estimates becomes possible. We consider several possible relationships of the standard deviation to the laboratory mean. Each relationship corresponds to an underlying statistical model, and an appropriate analysis technique. Tests for homogeneity of variance may be used to determine if an appropriate model has been chosen, although the exact functional relationship of standard deviation to lab mean may be difficult to establish. Appropriate graphical display of the data aids visual understanding of the data. A plot of the ranked standard deviation vs. the ranked laboratory mean is a convenient way to summarize a QC study. This plot also allows determination of the rank correlation, which indicates a net relationship of variance to laboratory mean. (orig.)
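
    For the simplest balanced one-way case (laboratories as a random factor), the components-of-variance arithmetic reduces to the classical ANOVA mean squares; a sketch with invented replicate data:

        import numpy as np

        def one_way_components(groups):
            """Between- and within-group variance components from replicates.

            groups : list of 1-D arrays, one per laboratory (equal sizes assumed).
            """
            k = len(groups)                  # number of laboratories
            n = len(groups[0])               # replicates per laboratory
            means = np.array([g.mean() for g in groups])
            grand = means.mean()
            msb = n * ((means - grand) ** 2).sum() / (k - 1)   # between-lab MS
            msw = np.mean([g.var(ddof=1) for g in groups])     # within-lab MS
            return max((msb - msw) / n, 0.0), msw   # (between, within)

        rng = np.random.default_rng(6)
        labs = [10 + rng.normal() + rng.normal(size=8) for _ in range(12)]
        print(one_way_components(labs))      # both components ~1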

  18. Electricity savings from implementation of minimum energy efficiency standard for TVs in Malaysia

    Energy Technology Data Exchange (ETDEWEB)

    Varman, M.; Masjuki, H.H.; Mahlia, T.M.I. [University of Malaya, Kuala Lumpur (Malaysia). Department of Mechanical Engineering

    2005-06-01

    The popularization of 24 h pay-TV, interactive video games, web-TV, VCD and DVD in Malaysia are poised to have a large impact on overall TV electricity consumption in the country. With the increasing of overall TV energy consumption, energy efficiency standards are one of highly effective policies for decreasing electricity consumption in the residential sector. Energy efficiency standards are also capable of reducing consumer's electricity bill and contribute towards positive environmental impacts. This paper attempts to predict the amount of energy that can be saved in the residential sector by implementing minimum energy efficiency standard for television sets in Malaysia. Over the past 30 years, television ownership in Malaysian residents has increased from 186,036 units in 1970 to 2,741,640 units in 1991. This figure is expected to reach 6,201,316 units in the year 2010. Hence, efficiency improvement for this appliance will have a significant impact on the future of electricity consumption in this country. (author)

  19. Monte Carlo variance reduction approaches for non-Boltzmann tallies

    International Nuclear Information System (INIS)

    Booth, T.E.

    1992-12-01

    Quantities that depend on the collective effects of groups of particles cannot be obtained from the standard Boltzmann transport equation. Monte Carlo estimates of these quantities are called non-Boltzmann tallies and have become increasingly important recently. Standard Monte Carlo variance reduction techniques were designed for tallies based on individual particles rather than groups of particles. Experience with non-Boltzmann tallies and analog Monte Carlo has demonstrated the severe limitations of analog Monte Carlo for many non-Boltzmann tallies. In fact, many calculations absolutely require variance reduction methods to achieve practical computation times. Three different approaches to variance reduction for non-Boltzmann tallies are described and shown to be unbiased. The advantages and disadvantages of each of the approaches are discussed

  20. Obtaining variances from the treatment standards of the RCRA Land Disposal Restrictions

    International Nuclear Information System (INIS)

    1990-05-01

    The Resource Conservation and Recovery Act (RCRA) Land Disposal Restrictions (LDRs) [40 CFR 268] impose specific requirements for treatment of RCRA hazardous wastes prior to disposal. Before the LDRs, many hazardous wastes could be land disposed at an appropriately designed and permitted facility without undergoing treatment. Thus, the LDRs constitute a major change in the regulations governing hazardous waste. EPA does not regulate the radioactive component of radioactive mixed waste (RMW). However, the hazardous waste component of an RMW is subject to RCRA LDR regulations. DOE facilities that manage hazardous wastes (including radioactive mixed wastes) may have to alter their waste-management practices to comply with the regulations. The purpose of this document is to aid DOE facilities and operations offices in determining (1) whether a variance from the treatment standard should be sought and (2) which type (treatability or equivalency) of petition is appropriate. The document also guides the user in preparing the petition. It shall be noted that the primary responsibility for the development of the treatability petition lies with the generator of the waste. 2 figs., 1 tab

  1. Improved estimation of the variance in Monte Carlo criticality calculations

    International Nuclear Information System (INIS)

    Hoogenboom, J. Eduard

    2008-01-01

    Results for the effective multiplication factor in Monte Carlo criticality calculations are often obtained from averages over a number of cycles or batches after convergence of the fission source distribution to the fundamental mode. The standard deviation of the effective multiplication factor is then also obtained from the k_eff results over these cycles. As the number of cycles will be rather small, the estimate of the variance or standard deviation in k_eff will not be very reliable, certainly not for the first few cycles after source convergence. In this paper the statistics for k_eff are based on the generation of new fission neutron weights during each history in a cycle. It is shown that this gives much more reliable results for the standard deviation even after a small number of cycles. Attention is also paid to the variance of the variance (VoV) and the standard deviation of the standard deviation. A derivation is given of how to obtain an unbiased estimate for the VoV, even for a small number of samples. (authors)

  2. Improved estimation of the variance in Monte Carlo criticality calculations

    Energy Technology Data Exchange (ETDEWEB)

    Hoogenboom, J. Eduard [Delft University of Technology, Delft (Netherlands)

    2008-07-01

    Results for the effective multiplication factor in Monte Carlo criticality calculations are often obtained from averages over a number of cycles or batches after convergence of the fission source distribution to the fundamental mode. The standard deviation of the effective multiplication factor is then also obtained from the k_eff results over these cycles. As the number of cycles will be rather small, the estimate of the variance or standard deviation in k_eff will not be very reliable, certainly not for the first few cycles after source convergence. In this paper the statistics for k_eff are based on the generation of new fission neutron weights during each history in a cycle. It is shown that this gives much more reliable results for the standard deviation even after a small number of cycles. Attention is also paid to the variance of the variance (VoV) and the standard deviation of the standard deviation. A derivation is given of how to obtain an unbiased estimate for the VoV, even for a small number of samples. (authors)
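
    The cycle statistics referred to in both records can be sketched directly from per-cycle k_eff values; the VoV below is a simple plug-in fourth-moment estimate, not the unbiased estimator derived in the paper, and the data are invented:

        import numpy as np

        def cycle_statistics(keff):
            """Mean k_eff, std. dev. of the mean, and a variance-of-variance."""
            k = np.asarray(keff, dtype=float)
            n = len(k)
            s2 = k.var(ddof=1)                   # sample variance of cycles
            m4 = ((k - k.mean()) ** 4).mean()    # fourth central moment
            # plug-in estimate of Var(S^2); biased for small n, unlike the
            # unbiased estimator derived in the paper
            vov = m4 / n - s2 ** 2 * (n - 3) / (n * (n - 1))
            return k.mean(), np.sqrt(s2 / n), vov

        rng = np.random.default_rng(7)
        keff_cycles = 1.0 + 0.001 * rng.standard_normal(50)   # toy cycle data
        print(cycle_statistics(keff_cycles))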

  3. Variance computations for functional of absolute risk estimates.

    Science.gov (United States)

    Pfeiffer, R M; Petracci, E

    2011-07-01

    We present a simple influence function based approach to compute the variances of estimates of absolute risk and functions of absolute risk. We apply this approach to criteria that assess the impact of changes in the risk factor distribution on absolute risk for an individual and at the population level. As an illustration we use an absolute risk prediction model for breast cancer that includes modifiable risk factors in addition to standard breast cancer risk factors. Influence function based variance estimates for absolute risk and the criteria are compared to bootstrap variance estimates.
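
    The influence-function recipe, Var(θ̂) ≈ (1/n²) Σᵢ IFᵢ², is easy to illustrate on a simple functional such as the mean, for which IFᵢ = xᵢ − x̄; the paper applies the same device to the far more involved absolute-risk functionals:

        import numpy as np

        def if_variance_of_mean(x):
            """Influence-function variance estimate for the sample mean:
            IF_i = x_i - mean, so Var(mean) ~ sum(IF_i**2) / n**2."""
            x = np.asarray(x, dtype=float)
            inf = x - x.mean()               # influence function values
            return (inf ** 2).sum() / len(x) ** 2

        rng = np.random.default_rng(8)
        x = rng.exponential(size=1000)
        print(if_variance_of_mean(x), x.var() / len(x))   # the two agree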

  4. 25 CFR 547.10 - What are the minimum standards for Class II gaming system critical events?

    Science.gov (United States)

    2010-04-01

    ...: Event Definition and action to be taken (i) Player interface power off during play This condition is reported by the affected component(s) to indicate power has been lost during game play. (ii) Player... INTERIOR HUMAN SERVICES MINIMUM TECHNICAL STANDARDS FOR GAMING EQUIPMENT USED WITH THE PLAY OF CLASS II...

  5. A method for minimum risk portfolio optimization under hybrid uncertainty

    Science.gov (United States)

    Egorova, Yu E.; Yazenin, A. V.

    2018-03-01

    In this paper, we investigate a minimum risk portfolio model under hybrid uncertainty when the profitability of financial assets is described by fuzzy random variables. According to Feng, the variance of a portfolio is defined as a crisp value. To aggregate fuzzy information the weakest (drastic) t-norm is used. We construct an equivalent stochastic problem of the minimum risk portfolio model and specify the stochastic penalty method for solving it.

  6. 25 CFR 547.9 - What are the minimum technical standards for Class II gaming system accounting functions?

    Science.gov (United States)

    2010-04-01

    ... gaming system accounting functions? 547.9 Section 547.9 Indians NATIONAL INDIAN GAMING COMMISSION... accounting functions? This section provides standards for accounting functions used in Class II gaming systems. (a) Required accounting data.The following minimum accounting data, however named, shall be...

  7. [Falling Short of Minimum Volume Standards, Exemptions and Their Consequences from 2018 Onwards. Complex Procedures on Oesophagus and Pancreas in German Hospitals from 2006 to 2014].

    Science.gov (United States)

    de Cruppé, Werner; Geraedts, Max

    2018-03-16

    The minimum volume standards for hospitals in Germany, in force since 2004, provide four exemptions for non-complying hospitals. This study investigates the extent and importance of these exemptions for complex procedures on the oesophagus and pancreas for all non-complying hospitals and for the revised minimum volume regulations in force since the beginning of 2018. Longitudinal, descriptive analyses of data on minimum volume standards and their exemptions for complex procedures on the oesophagus and pancreas, as presented by the hospital quality report cards of the reporting years from 2006 to 2014. For each year and both procedures, about 120 hospitals with some 500 cases report non-compliance with the minimum volume standards. Of these a third report no exemptions (with 180 procedures), a third state emergencies (110), and another third report exemptions due to internal hospital restructuring (210). Ensuring geographical access to care as an exemption is of no importance. After the three year exemption period for installation of a new service line, 20% of the hospitals with procedures on the oesophagus and 30% on the pancreas complied with the minimum volume standards. After the two-year period for staff realignment, the figures were 40 and 50%, respectively. Exemptions do not entirely explain all procedures performed by hospitals not complying with the minimum volume standards. The revised minimum volume regulations' restructuring of exemptions to "emergencies" and "new or renewed service lines" with a two year exemption period, are concordant with the empirical findings of this study. Georg Thieme Verlag KG Stuttgart · New York.

  8. Estimate of Technical Potential for Minimum Efficiency Performance Standards in 13 Major World Economies

    Energy Technology Data Exchange (ETDEWEB)

    Letschert, Virginie [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Desroches, Louis-Benoit [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ke, Jing [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); McNeil, Michael [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2012-07-01

    As part of the ongoing effort to estimate the foreseeable impacts of aggressive minimum efficiency performance standards (MEPS) programs in the world’s major economies, Lawrence Berkeley National Laboratory (LBNL) has developed a scenario to analyze the technical potential of MEPS in 13 major economies around the world. The “best available technology” (BAT) scenario seeks to determine the maximum potential savings that would result from diffusion of the most efficient available technologies in these major economies.

  9. An elementary components of variance analysis for multi-centre quality control

    International Nuclear Information System (INIS)

    Munson, P.J.; Rodbard, D.

    1978-01-01

    The serious variability of RIA results from different laboratories indicates the need for multi-laboratory collaborative quality-control (QC) studies. Simple graphical display of data in the form of histograms is useful but insufficient. The paper discusses statistical analysis methods for such studies using an ''analysis of variance with components of variance estimation''. This technique allocates the total variance into components corresponding to between-laboratory, between-assay, and residual or within-assay variability. Problems with RIA data, e.g. severe non-uniformity of variance and/or departure from a normal distribution, violate some of the usual assumptions underlying analysis of variance. In order to correct these problems, it is often necessary to transform the data before analysis by using a logarithmic, square-root, percentile, ranking, RIDIT, ''Studentizing'' or other transformation. Non-metric transformations such as ranks or percentiles protect against the undue influence of outlying observations, but discard much intrinsic information. Several possible relationships of the standard deviation to the laboratory mean are considered. Each relationship corresponds to an underlying statistical model and an appropriate analysis technique. Tests for homogeneity of variance may be used to determine whether an appropriate model has been chosen, although the exact functional relationship of standard deviation to laboratory mean may be difficult to establish. Appropriate graphical display aids visual understanding of the data. A plot of the ranked standard deviation versus the ranked laboratory mean is a convenient way to summarize a QC study. This plot also allows determination of the rank correlation, which indicates a net relationship of variance to laboratory mean

  10. 5 CFR 551.301 - Minimum wage.

    Science.gov (United States)

    2010-01-01

    ... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Minimum wage. 551.301 Section 551.301... FAIR LABOR STANDARDS ACT Minimum Wage Provisions Basic Provision § 551.301 Minimum wage. (a)(1) Except... employees wages at rates not less than the minimum wage specified in section 6(a)(1) of the Act for all...

  11. Cost of and soil loss on "minimum-standard" forest truck roads constructed in the central Appalachians

    Science.gov (United States)

    J. N. Kochenderfer; G. W. Wendel; H. Clay Smith

    1984-01-01

    A "minimum-standard" forest truck road that provides efficient and environmentally acceptable access for several forest activities is described. Cost data are presented for eight of these roads constructed in the central Appalachians. The average cost per mile excluding gravel was $8,119. The range was $5,048 to $14,424. Soil loss was measured from several...

  12. 25 CFR 547.7 - What are the minimum technical hardware standards applicable to Class II gaming systems?

    Science.gov (United States)

    2010-04-01

    ... applicable to Class II gaming systems? 547.7 Section 547.7 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM TECHNICAL STANDARDS FOR GAMING EQUIPMENT USED WITH THE PLAY... gaming systems? (a) General requirements. (1) The Class II gaming system shall operate in compliance with...

  13. Why risk is not variance: an expository note.

    Science.gov (United States)

    Cox, Louis Anthony Tony

    2008-08-01

    Variance (or standard deviation) of return is widely used as a measure of risk in financial investment risk analysis applications, where mean-variance analysis is applied to calculate efficient frontiers and undominated portfolios. Why, then, do health, safety, and environmental (HS&E) and reliability engineering risk analysts insist on defining risk more flexibly, as being determined by probabilities and consequences, rather than simply by variances? This note suggests an answer by providing a simple proof that mean-variance decision making violates the principle that a rational decisionmaker should prefer higher to lower probabilities of receiving a fixed gain, all else being equal. Indeed, simply hypothesizing a continuous increasing indifference curve for mean-variance combinations at the origin is enough to imply that a decisionmaker must find unacceptable some prospects that offer a positive probability of gain and zero probability of loss. Unlike some previous analyses of limitations of variance as a risk metric, this expository note uses only simple mathematics and does not require the additional framework of von Neumann Morgenstern utility theory.
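
    The violation is easy to exhibit numerically: score prospects by a mean-variance utility U = μ − λσ² and compare a gamble that can only gain against the status quo; λ and the payoffs below are invented for illustration:

        # A gamble paying G with probability p and 0 otherwise can never
        # lose, yet the mean-variance score U = mean - lam * variance ranks
        # it below the status quo once G is large enough.
        lam, p = 1.0, 0.5
        for G in (1, 2, 10):
            mean = p * G
            var = p * (1 - p) * G ** 2
            print(f"G={G:2d}: U = {mean - lam * var:7.2f}   (status quo: U = 0)")
        # G=10 gives U = -20.00: the no-loss gamble is rejected outright.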

  14. Demonstration of a zero-variance based scheme for variance reduction to a mini-core Monte Carlo calculation

    Energy Technology Data Exchange (ETDEWEB)

    Christoforou, Stavros, E-mail: stavros.christoforou@gmail.com [Kirinthou 17, 34100, Chalkida (Greece); Hoogenboom, J. Eduard, E-mail: j.e.hoogenboom@tudelft.nl [Department of Applied Sciences, Delft University of Technology (Netherlands)

    2011-07-01

    A zero-variance based scheme is implemented and tested in the MCNP5 Monte Carlo code. The scheme is applied to a mini-core reactor using the adjoint function obtained from a deterministic calculation for biasing the transport kernels. It is demonstrated that the variance of the k_eff estimate is halved compared to a standard criticality calculation. In addition, the biasing does not affect source distribution convergence of the system. However, since the code lacked optimisations for speed, we were not able to demonstrate an appropriate increase in the efficiency of the calculation, because of the higher CPU time cost. (author)

  15. [Physical medicine in hospital. Minimum standards in a physical medical department in acute inpatient areas in rheumatology].

    Science.gov (United States)

    Reißhauer, A; Liebl, M E

    2012-07-01

    Standards for what should be available in terms of equipment and services in a department of physical medicine caring for acute inpatients do not exist in Germany. The profile of a department determines the therapeutic services it focuses on and hence the technical facilities required. The German catalogue of operations and procedures defines minimum thresholds for treatment. In the opinion of the authors a department caring for inpatients with acute rheumatic diseases must, as a minimum, have the facilities and equipment necessary for offering thermotherapeutic treatment. Staff trained in physical therapeutic procedures and occupational therapy is also crucial. Moreover, it is desirable that the staff should be trained in manual therapy.

  16. Minimum Wages and Poverty

    OpenAIRE

    Fields, Gary S.; Kanbur, Ravi

    2005-01-01

    Textbook analysis tells us that in a competitive labor market, the introduction of a minimum wage above the competitive equilibrium wage will cause unemployment. This paper makes two contributions to the basic theory of the minimum wage. First, we analyze the effects of a higher minimum wage in terms of poverty rather than in terms of unemployment. Second, we extend the standard textbook model to allow for incomesharing between the employed and the unemployed. We find that there are situation...

  17. 29 CFR 570.2 - Minimum age standards.

    Science.gov (United States)

    2010-07-01

    ... and well-being (see subpart C of this part); and (ii) The Act sets an 18-year minimum age with respect... hazardous for the employment of minors of such age or detrimental to their health or well-being (see subpart... regulation or by order that the employment of employees between the ages of 14 and 16 years in occupations...

  18. Variance analysis refines overhead cost control.

    Science.gov (United States)

    Cooper, J C; Suver, J D

    1992-02-01

    Many healthcare organizations may not fully realize the benefits of standard cost accounting techniques because they fail to routinely report volume variances in their internal reports. If overhead allocation is routinely reported on internal reports, managers can determine whether billing remains current or lost charges occur. Healthcare organizations' use of standard costing techniques can lead to more realistic performance measurements and information system improvements that alert management to losses from unrecovered overhead in time for corrective action.
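
    The volume-variance arithmetic the authors recommend reporting is straightforward; a sketch under the standard fixed-overhead definitions, with all figures invented:

        # Fixed-overhead volume variance: the overhead left unabsorbed when
        # actual activity falls short of the budgeted level.
        budgeted_overhead = 120_000.0   # budgeted fixed overhead for the period
        budgeted_volume = 10_000        # budgeted activity (e.g., patient days)
        actual_volume = 9_200

        rate = budgeted_overhead / budgeted_volume     # standard rate per unit
        applied = rate * actual_volume                 # overhead absorbed
        volume_variance = budgeted_overhead - applied  # unfavorable if positive
        print(f"rate={rate:.2f}, applied={applied:.0f}, "
              f"volume variance={volume_variance:.0f} (unfavorable)")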

  19. 25 CFR 547.6 - What are the minimum technical standards for enrolling and enabling Class II gaming system...

    Science.gov (United States)

    2010-04-01

    ... and enabling Class II gaming system components? 547.6 Section 547.6 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM TECHNICAL STANDARDS FOR GAMING EQUIPMENT USED... enabling Class II gaming system components? (a) General requirements. Class II gaming systems shall provide...

  20. Variance and covariance calculations for nuclear materials accounting using ''MAVARIC''

    International Nuclear Information System (INIS)

    Nasseri, K.K.

    1987-07-01

    Determination of the detection sensitivity of a materials accounting system to the loss of special nuclear material (SNM) requires (1) obtaining a relation for the variance of the materials balance by propagation of the instrument errors for the measured quantities that appear in the materials balance equation and (2) substituting measured values and their error standard deviations into this relation and calculating the variance of the materials balance. MAVARIC (Materials Accounting VARIance Calculations) is a custom spreadsheet, designed using the second release of Lotus 1-2-3, that significantly reduces the effort required to make the necessary variance (and covariance) calculations needed to determine the detection sensitivity of a materials accounting system. Predefined macros within the spreadsheet allow the user to carry out long, tedious procedures with only a few keystrokes. MAVARIC requires that the user enter the following data into one of four data tables, depending on the type of the term in the materials balance equation: the SNM concentration, the bulk mass (or solution volume), the measurement error standard deviations, and the number of measurements made during an accounting period. The user can also specify if there are correlations between transfer terms. Based on these data entries, MAVARIC can calculate the variance of the materials balance and the square root of this variance, from which the detection sensitivity of the accounting system can be determined

  2. Demonstration of a zero-variance based scheme for variance reduction to a mini-core Monte Carlo calculation

    International Nuclear Information System (INIS)

    Christoforou, Stavros; Hoogenboom, J. Eduard

    2011-01-01

    A zero-variance based scheme is implemented and tested in the MCNP5 Monte Carlo code. The scheme is applied to a mini-core reactor using the adjoint function obtained from a deterministic calculation for biasing the transport kernels. It is demonstrated that the variance of the k_eff estimate is halved compared to a standard criticality calculation. In addition, the biasing does not affect the source distribution convergence of the system. However, since the code lacked speed optimisations, we were not able to demonstrate a corresponding increase in the efficiency of the calculation, because of the higher CPU time cost. (author)
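
    The principle behind a zero-variance scheme — bias the sampling so that every history carries the same score — can be seen in a one-dimensional toy problem. This is a generic importance-sampling illustration under an assumed exponential model, not the MCNP5 implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
a, n = 5.0, 100_000                  # estimate P(X > a) = exp(-a) for X ~ Exp(1)

analog = (rng.exponential(1.0, n) > a).astype(float)

# "zero-variance" biasing: sample only where the contribution is nonzero,
# with q(y) = exp(-(y - a)) on [a, inf); the weight f(y)/q(y) = exp(-a) is constant
y = a + rng.exponential(1.0, n)
weighted = np.exp(-y) / np.exp(-(y - a))

for est in (analog, weighted):
    print(f"{est.mean():.6f} +/- {est.std(ddof=1) / np.sqrt(n):.6f}")
```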

  3. DETERMINING THE OPTIMAL PORTFOLIO USING THE CONDITIONAL MEAN VARIANCE MODEL

    Directory of Open Access Journals (Sweden)

    I GEDE ERY NISCAHYANA

    2016-08-01

    When stock returns exhibit autocorrelation and heteroscedasticity, conditional mean variance models are a suitable method to model the behavior of the stocks. In this thesis, the implementation of the conditional mean variance model for autocorrelated and heteroscedastic returns was discussed. The aim was to assess the effect of autocorrelated and heteroscedastic returns on the optimal solution of a portfolio. The returns of four stocks, Fortune Mate Indonesia Tbk (FMII.JK), Bank Permata Tbk (BNLI.JK), Suryamas Dutamakmur Tbk (SMDM.JK) and Semen Gresik Indonesia Tbk (SMGR.JK), were estimated by a GARCH(1,1) model with standard innovations following the standard normal distribution and the t-distribution. The estimates were used to construct a portfolio. The optimal portfolio was found when the standard innovation used was the t-distribution, with a standard deviation of 1.4532 and a mean of 0.8023, consisting of 0.9429 (94%) of FMII stock, 0.0473 (5%) of BNLI stock, 0% of SMDM stock, and 1% of SMGR stock.
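
    A minimal sketch of the estimation step, assuming the Python `arch` package and a constant-correlation combination of the univariate GARCH(1,1) fits (the thesis itself may combine the estimates differently):

```python
import numpy as np
import pandas as pd
from arch import arch_model   # assumes the `arch` package is available

def garch_covariance(returns: pd.DataFrame) -> np.ndarray:
    """One-step-ahead covariance: a GARCH(1,1) with Student-t innovations
    per asset, glued together with the constant sample correlation."""
    vols = []
    for col in returns:
        res = arch_model(100 * returns[col], vol="Garch", p=1, q=1,
                         dist="t").fit(disp="off")
        vols.append(np.sqrt(res.forecast(horizon=1).variance.iloc[-1, 0]) / 100)
    d = np.diag(vols)
    return d @ returns.corr().to_numpy() @ d

def min_variance_weights(cov: np.ndarray) -> np.ndarray:
    """Fully invested minimum-variance weights (shorting allowed)."""
    w = np.linalg.solve(cov, np.ones(cov.shape[0]))
    return w / w.sum()
```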

  4. Minimum critical mass systems

    International Nuclear Information System (INIS)

    Dam, H. van; Leege, P.F.A. de

    1987-01-01

    An analysis is presented of thermal systems with minimum critical mass, based on the use of materials with optimum neutron moderating and reflecting properties. The optimum fissile material distributions in the systems are obtained by calculations with standard computer codes, extended with a routine for flat fuel importance search. It is shown that in the minimum critical mass configuration a considerable part of the fuel is positioned in the reflector region. For ²³⁹Pu a minimum critical mass of 87 g is found, which is the lowest value reported hitherto. (author)

  5. STUDY LINKS SOLVING THE MAXIMUM TASK OF LINEAR CONVOLUTION «EXPECTED RETURNS-VARIANCE» AND THE MINIMUM VARIANCE WITH RESTRICTIONS ON RETURNS

    Directory of Open Access Journals (Sweden)

    Maria S. Prokhorova

    2014-01-01

    The article deals with the problem of finding an optimal securities portfolio using convolutions of the expected portfolio return and the portfolio variance. The value of the risk coefficient at which the problem of minimizing variance under a constraint on returns is equivalent to maximizing the linear convolution of the «expected returns-variance» criteria is obtained. An automated method for finding the optimal portfolio is proposed, and the results of the study are demonstrated on its basis.
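
    The link in the title can be made concrete with the closed form of the convolution problem: maximize μᵀw − λ·wᵀΣw subject to the budget constraint 1ᵀw = 1. A sketch (shorting allowed; λ is the risk coefficient):

```python
import numpy as np

def convolution_portfolio(mu, sigma, lam):
    """Maximize mu'w - lam * w'Sigma w subject to sum(w) = 1."""
    inv = np.linalg.inv(sigma)
    ones = np.ones(len(mu))
    # Lagrange multiplier chosen so that the weights sum to one
    gamma = (ones @ inv @ mu - 2.0 * lam) / (ones @ inv @ ones)
    return inv @ (mu - gamma * ones) / (2.0 * lam)

# as lam grows, the solution tends to the global minimum-variance portfolio
mu = np.array([0.08, 0.12, 0.10])
sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.06]])
print(convolution_portfolio(mu, sigma, lam=2.0))
```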

  6. Gravity interpretation of dipping faults using the variance analysis method

    International Nuclear Information System (INIS)

    Essa, Khalid S

    2013-01-01

    A new algorithm is developed to estimate simultaneously the depth and the dip angle of a buried fault from normalized gravity gradient data. The algorithm utilizes numerical first horizontal derivatives computed from the observed gravity anomaly, using filters of successive window lengths, to estimate the depth and the dip angle of a buried dipping fault structure. For a fixed window length, the depth is estimated in a least-squares sense for each dip angle. The method is based on computing the variance of the depths determined from all horizontal gradient anomaly profiles using the least-squares method for each dip angle. The minimum variance is used as a criterion for determining the correct dip angle and depth of the buried structure. When the correct dip angle is used, the variance of the depths is always less than the variances computed using wrong dip angles. The technique can be applied not only to the true residuals, but also to the measured Bouguer gravity data. The method is applied to synthetic data with and without random errors, and to two field examples from Egypt and Scotland. In all cases examined, the estimated depths and other model parameters are found to be in good agreement with the actual values. (paper)
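
    The selection rule itself is a short loop; `estimate_depth` below is a hypothetical placeholder for the paper's least-squares depth fit from the horizontal-gradient profile at a given window length.

```python
import numpy as np

def best_dip_and_depth(profile, windows, dip_angles, estimate_depth):
    """Scan candidate dip angles; for each, collect the least-squares depths
    over all window lengths and keep the dip whose depths scatter least.
    `estimate_depth(profile, window, dip)` is a user-supplied routine."""
    best = (np.inf, None, None)
    for dip in dip_angles:
        depths = np.array([estimate_depth(profile, s, dip) for s in windows])
        if depths.var() < best[0]:
            best = (depths.var(), dip, depths.mean())
    variance, dip, depth = best
    return {"dip": dip, "depth": depth, "variance": variance}
```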

  7. Thermospheric mass density model error variance as a function of time scale

    Science.gov (United States)

    Emmert, J. T.; Sutton, E. K.

    2017-12-01

    In the increasingly crowded low-Earth orbit environment, accurate estimation of orbit prediction uncertainties is essential for collision avoidance. Poor characterization of such uncertainty can result in unnecessary and costly avoidance maneuvers (false positives) or disregard of a collision risk (false negatives). Atmospheric drag is a major source of orbit prediction uncertainty, and is particularly challenging to account for because it exerts a cumulative influence on orbital trajectories and is therefore not amenable to representation by a single uncertainty parameter. To address this challenge, we examine the variance of measured accelerometer-derived and orbit-derived mass densities with respect to predictions by thermospheric empirical models, using the data-minus-model variance as a proxy for model uncertainty. Our analysis focuses mainly on the power spectrum of the residuals, and we construct an empirical model of the variance as a function of time scale (from 1 hour to 10 years), altitude, and solar activity. We find that the power spectral density approximately follows a power-law process but with an enhancement near the 27-day solar rotation period. The residual variance increases monotonically with altitude between 250 and 550 km. There are two components to the variance dependence on solar activity: one component is 180 degrees out of phase (largest variance at solar minimum), and the other component lags 2 years behind solar maximum (largest variance in the descending phase of the solar cycle).
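
    A hedged sketch of the spectral part of such an analysis, run here on synthetic data-minus-model residuals (the real study uses accelerometer- and orbit-derived densities):

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
n = 24 * 365 * 4                             # four years of hourly residuals
t_days = np.arange(n) / 24.0
# synthetic residuals: red noise (power-law PSD) plus a 27-day modulation
resid = (np.cumsum(rng.normal(0.0, 0.01, n))
         + 0.3 * np.sin(2 * np.pi * t_days / 27.0))

f, psd = signal.welch(resid, fs=24.0, nperseg=8192)  # frequencies in cycles/day
mask = f > 0
slope, _ = np.polyfit(np.log10(f[mask]), np.log10(psd[mask]), 1)
print(f"fitted spectral slope ~ {slope:.2f}; expect a bump near f = 1/27 c/d")
```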

  8. Variance bias analysis for the Gelbard's batch method

    Energy Technology Data Exchange (ETDEWEB)

    Seo, Jae Uk; Shim, Hyung Jin [Seoul National Univ., Seoul (Korea, Republic of)

    2014-05-15

    In this paper, the variance and the bias are derived analytically for the case where Gelbard's batch method is applied, and the real variance estimated from this bias is compared with the real variance calculated from replicas. When the batch method is applied to calculate the sample variance, covariance terms between tallies within a batch are eliminated from the bias. With the 2 by 2 fission matrix problem, the real variance could be calculated regardless of whether the batch method was applied; however, as the batch size grew, the standard deviation of the real variance increased. When we perform a Monte Carlo estimation, we obtain a sample variance as its statistical uncertainty. However, this value is smaller than the real variance because the sample variance is biased. To reduce this bias, Gelbard devised what is called Gelbard's batch method, and it has been demonstrated that the sample variance gets closer to the real variance when the batch method is applied; in other words, the bias is reduced. This fact is well known in the MC field, but so far no one has given an analytical interpretation of it.
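
    A generic batch-means sketch reproduces the effect described above: with correlated tallies the naive variance of the mean is biased low, and batching largely removes the bias.

```python
import numpy as np

def batch_variance(samples, batch_size):
    """Estimate Var[sample mean] from the spread of batch means."""
    n_batches = len(samples) // batch_size
    means = (samples[: n_batches * batch_size]
             .reshape(n_batches, batch_size).mean(axis=1))
    return means.var(ddof=1) / n_batches

# AR(1) tallies mimic the cycle-to-cycle correlation of a criticality run
rng = np.random.default_rng(2)
x = np.empty(100_000)
x[0] = 0.0
for i in range(1, len(x)):
    x[i] = 0.9 * x[i - 1] + rng.normal()

naive = x.var(ddof=1) / len(x)          # ignores correlation: biased low
print(f"naive {naive:.2e}  vs  batched {batch_variance(x, 1000):.2e}")
```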

  9. Analysis of Minimum Efficiency Performance Standards for Residential General Service Lighting in Chile

    Energy Technology Data Exchange (ETDEWEB)

    Letschert, Virginie E.; McNeil, Michael A.; Leiva Ibanez, Francisco Humberto; Ruiz, Ana Maria; Pavon, Mariana; Hall, Stephen

    2011-06-01

    Minimum Efficiency Performance Standards (MEPS) have been chosen as part of Chile's national energy efficiency action plan. As a first MEPS, the Ministry of Energy has decided to focus on a regulation for lighting that would ban the sale of inefficient bulbs, effectively phasing out the use of incandescent lamps. Following major economies such as the US (EISA, 2007), the EU (Ecodesign, 2009) and Australia (AS/NZS, 2008), which planned a phase-out based on minimum efficacy requirements, the Ministry of Energy has undertaken the impact analysis of a MEPS on the residential lighting sector. Fundacion Chile (FC) and Lawrence Berkeley National Laboratory (LBNL) collaborated with the Ministry of Energy and the National Energy Efficiency Program (Programa Pais de Eficiencia Energetica, or PPEE) in order to produce a techno-economic analysis of this future policy measure. LBNL has developed for CLASP (CLASP, 2007) a spreadsheet tool called the Policy Analysis Modeling System (PAMS) that allows for evaluation of costs and benefits at the consumer level but also a wide range of impacts at the national level, such as energy savings, net present value of savings, greenhouse gas (CO2) emission reductions and avoided capacity generation due to a specific policy. Because historically Chile has followed European schemes in energy efficiency programs (test procedures, labelling program definitions), we take the Ecodesign commission regulation No 244/2009 as a starting point when defining our phase-out program, which means a tiered phase-out based on minimum efficacy per lumen category. The following data were collected in order to perform the techno-economic analysis: (1) retail prices, efficiency and wattage category in the current market, (2) usage data (hours of lamp use per day), and (3) stock data, penetration of efficient lamps in the market. Using these data, PAMS calculates the costs and benefits of efficiency standards from two distinct but related perspectives: (1) The

  10. Standard for supply security. A minimum standard to guarantee the balance between electricity demand and supply for the long term

    International Nuclear Information System (INIS)

    Scheepers, M.J.J.; Van Werven, M.J.N.; Seebregts, A.J.; Poort, J.P.; De Nooij, M.; Baarsma, B.E.

    2004-05-01

    The development and use of a minimum reliability standard in the Dutch electricity market to guarantee an adequate balance between electricity demand and supply in the longer term are discussed. This standard can be based on the duration of a power outage and the related costs for society relative to the costs to prevent the power outage. The reliability standard can be translated in an adequacy standard when the reliability of foreign electricity supply to the Dutch market is taken into account. With a theoretical analysis and an assessment of the use of standards in foreign electricity markets and other sectors this study provides a survey of the use of standards in securing public interests. In electricity markets reliability standards can be used obligatory or only to inform market participants of the adequacy of supply preferred by consumers. If no standard is used, the market should rely on the economic incentives provided by contracts and liability. This study proposes to use a reliability standard for calculating the required generation capacity in an ex-ante market analysis using different future scenarios. On the basis of several market indicators, expected market developments can be monitored. Assessment of the market developments relative to the required generation capacity will give a signal to market participants with respect to the expected adequacy in the longer term (7 to 10 years). The assessment and the resulting signal should help to improve market transparency and assist producers, suppliers and consumers in their decisions towards an effective and efficient response on long-term market developments. Market monitoring results can be used by the government to take specific action, if necessary, to reduce barriers to invest. However, more general policy measures should not be linked to the monitoring results since this could provoke strategic behaviour.

  11. Some novel inequalities for fuzzy variables on the variance and its rational upper bound

    Directory of Open Access Journals (Sweden)

    Xiajie Yi

    2016-02-01

    Variance is of great significance in measuring the degree of deviation, which has gained extensive usage in many fields in practical scenarios. The definition of the variance on the basis of the credibility measure was first put forward in 2002. Following this idea, the calculation of the accurate value of the variance for some special fuzzy variables, like the symmetric and asymmetric triangular fuzzy numbers and the Gaussian fuzzy numbers, is presented in this paper, which turns out to be far more complicated. Thus, in order to better implement variance in real-life projects like risk control and quality management, we suggest a rational upper bound of the variance based on an inequality, together with its calculation formula, which can largely simplify the calculation process within a reasonable range. Meanwhile, some discussions between the variance and its rational upper bound are presented to show the rationality of the latter. Furthermore, two inequalities regarding the rational upper bound of variance and standard deviation of the sum of two fuzzy variables and their individual variances and standard deviations are proved. Subsequently, some numerical examples are illustrated to show the effectiveness and the feasibility of the proposed inequalities.

  12. Assessment of ulnar variance: a radiological investigation in a Dutch population

    Energy Technology Data Exchange (ETDEWEB)

    Schuurman, A.H. [Dept. of Plastic, Reconstructive and Hand Surgery, University Medical Centre, Utrecht (Netherlands); Dept. of Plastic Surgery, University Medical Centre, Utrecht (Netherlands); Maas, M.; Dijkstra, P.F. [Dept. of Radiology, Univ. of Amsterdam (Netherlands); Kauer, J.M.G. [Dept. of Anatomy and Embryology, Univ. of Nijmegen (Netherlands)

    2001-11-01

    Objective: A radiological study was performed to evaluate ulnar variance in 68 Dutch patients using an electronic digitizer compared with Palmer's concentric circle method. Using the digitizer method only, the effect of different wrist positions and grip on ulnar variance was then investigated. Finally the distribution of ulnar variance in the selected patients was investigated also using the digitizer method. Design and patients: All radiographs were performed with the wrist in a standard zero-rotation position (posteroanterior) and in supination (anteroposterior). Palmer's concentric circle method and an electronic digitizer connected to a personal computer were used to measure ulnar variance. The digitizer consists of a Plexiglas plate with an electronically activated grid beneath it. A radiograph is placed on the plate and a cursor activates a point on the grid. Three plots are marked on the radius and one plot on the most distal part of the ulnar head. The digitizer then determines the difference between a radius passing through the radius plots and the ulnar plot. Results and conclusions: Using the concentric circle method we found an ulna plus predominance, but an ulna minus predominance when using the digitizer method. Overall the ulnar variance distribution for Palmer's method was 41.9% ulna plus, 25.7% neutral and 32.4% ulna minus variance, and for the digitizer method was 40.4% ulna plus, 1.5% neutral and 58.1% ulna minus. The percentage ulnar variance greater than 1 mm on standard radiographs increased from 23% to 58% using the digitizer, with maximum grip, clearly demonstrating the (dynamic) effect of grip on ulnar variance. This almost threefold increase was found to be a significant difference. Significant differences were found between ulnar variance when different wrist positions were compared. (orig.)
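
    One plausible reading of the digitizer computation — fit a circle through the three radius points and take the signed distance of the ulnar point from that circle — is sketched below; the actual device may measure along the forearm axis instead, so treat this as an assumption.

```python
import numpy as np

def circle_through(p1, p2, p3):
    """Centre and radius of the circle through three points."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a = np.array([[x2 - x1, y2 - y1], [x3 - x1, y3 - y1]], float)
    b = 0.5 * np.array([x2**2 - x1**2 + y2**2 - y1**2,
                        x3**2 - x1**2 + y3**2 - y1**2])
    centre = np.linalg.solve(a, b)
    return centre, np.hypot(*(np.asarray(p1) - centre))

def ulnar_variance(radius_pts, ulna_pt):
    """Signed distance (mm) of the ulnar point outside (+) or inside (-)
    the circle defined by the three radius points."""
    centre, r = circle_through(*radius_pts)
    return np.hypot(*(np.asarray(ulna_pt) - centre)) - r
```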

  13. MIQE précis: Practical implementation of minimum standard guidelines for fluorescence-based quantitative real-time PCR experiments

    NARCIS (Netherlands)

    Bustin, S.A.; Beaulieu, J.F.; Huggett, J.; Jaggi, R.; Kibenge, F.S.; Olsvik, P.A.; Penning, L.C.; Toegel, S.

    2010-01-01

  14. Minimum wage development in the Russian Federation

    OpenAIRE

    Bolsheva, Anna

    2012-01-01

    The aim of this paper is to analyze the effectiveness of the minimum wage policy at the national level in Russia and its impact on living standards in the country. The analysis showed that the national minimum wage in Russia does not serve its original purpose of protecting the lowest wage earners and has no substantial effect on poverty reduction. The national subsistence minimum is too low and cannot be considered an adequate criterion for the setting of the minimum wage. The minimum wage d...

  15. Validation by simulation of a clinical trial model using the standardized mean and variance criteria.

    Science.gov (United States)

    Abbas, Ismail; Rovira, Joan; Casanovas, Josep

    2006-12-01

    To develop and validate a model of a clinical trial that evaluates the changes in cholesterol level as a surrogate marker for lipodystrophy in HIV subjects under alternative antiretroviral regimes, i.e., treatment with protease inhibitors vs. a combination of nevirapine and other antiretroviral drugs. Five simulation models were developed based on different assumptions on treatment variability and the pattern of cholesterol reduction over time. The last recorded cholesterol level, the difference from the baseline, the average difference from the baseline and the level evolution are the considered endpoints. Specific validation criteria, based on a standardized distance in means and variances within plus or minus 10%, were used to compare the real and the simulated data. The validity criterion was met by all models for the considered endpoints; however, only two models met the validity criterion when all endpoints were considered. The model based on the assumption that the within-subjects variability of cholesterol levels changes over time is the one that minimizes the validity criterion, with a standardized distance equal to or less than plus or minus 1%. Simulation is a useful technique for calibration, estimation, and evaluation of models, which allows us to relax the often overly restrictive assumptions regarding parameters required by analytical approaches. The validity criterion can also be used to select the preferred model for design optimization, until additional data are obtained allowing an external validation of the model.
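
    A minimal sketch of the acceptance rule, under one reading of the plus-or-minus 10% standardized-distance criterion (the paper's exact definition may differ):

```python
import numpy as np

def validates(real, simulated, tol=0.10):
    """Accept the simulated endpoint if mean and variance each fall within
    a +/-10% standardized distance of the real data (one reading of the
    paper's criterion; the exact definition may differ)."""
    d_mean = abs(np.mean(simulated) - np.mean(real)) / np.std(real, ddof=1)
    d_var = (abs(np.var(simulated, ddof=1) - np.var(real, ddof=1))
             / np.var(real, ddof=1))
    return d_mean <= tol and d_var <= tol
```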

  16. Cumulative prospect theory and mean variance analysis. A rigorous comparison

    OpenAIRE

    Hens, Thorsten; Mayer, Janos

    2012-01-01

    We compare asset allocations derived for cumulative prospect theory(CPT) based on two different methods: Maximizing CPT along the mean–variance efficient frontier and maximizing it without that restriction. We find that with normally distributed returns the difference is negligible. However, using standard asset allocation data of pension funds the difference is considerable. Moreover, with derivatives like call options the restriction to the mean-variance efficient frontier results in a siza...

  17. Minimum qualifications for nuclear criticality safety professionals

    International Nuclear Information System (INIS)

    Ketzlach, N.

    1990-01-01

    A Nuclear Criticality Technology and Safety Training Committee has been established within the U.S. Department of Energy (DOE) Nuclear Criticality Safety and Technology Project to review and, if necessary, develop standards for the training of personnel involved in nuclear criticality safety (NCS). The committee is exploring the need for developing a standard or other mechanism for establishing minimum qualifications for NCS professionals. The development of standards and regulatory guides for nuclear power plant personnel may serve as a guide in developing the minimum qualifications for NCS professionals

  18. 29 CFR 4.159 - General minimum wage.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 1 2010-07-01 2010-07-01 true General minimum wage. 4.159 Section 4.159 Labor Office of... General minimum wage. The Act, in section 2(b)(1), provides generally that no contractor or subcontractor... a contract less than the minimum wage specified under section 6(a)(1) of the Fair Labor Standards...

  19. Adoption of projected mortality table for the Slovenian market using the Poisson log-bilinear model to test the minimum standard for valuing life annuities

    Directory of Open Access Journals (Sweden)

    Darko Medved

    2015-01-01

    With the introduction of Solvency II a consistent market approach to the valuation of insurance assets and liabilities is required. For the best estimate of life annuity provisions one should estimate the longevity risk of the insured population in Slovenia. In this paper the current minimum standard in Slovenia for calculating pension annuities is tested using the Lee-Carter model. In particular, the mortality of the Slovenian population is projected using the best fit from the stochastic mortality projections method. The projected mortality statistics are then corrected with the selection effect and compared with the current minimum standard.
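
    The classical Lee-Carter decomposition underlying such projections is a rank-one SVD; a minimal sketch (the paper additionally uses the Poisson log-bilinear refinement and a selection correction):

```python
import numpy as np

def lee_carter(log_m):
    """Classical Lee-Carter fit via SVD: log m(x,t) ~ a_x + b_x * k_t,
    with the usual constraints sum(b) = 1 and sum(k) = 0.
    `log_m` is an (ages x years) array of log central death rates."""
    a = log_m.mean(axis=1)                       # age pattern a_x
    U, s, Vt = np.linalg.svd(log_m - a[:, None], full_matrices=False)
    b = U[:, 0] / U[:, 0].sum()                  # age sensitivities b_x
    k = s[0] * Vt[0] * U[:, 0].sum()             # mortality index k_t
    return a, b, k
```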

  20. Comparing estimates of genetic variance across different relationship models.

    Science.gov (United States)

    Legarra, Andres

    2016-02-01

    Use of relationships between individuals to estimate genetic variances and heritabilities via mixed models is standard practice in human, plant and livestock genetics. Different models or information for relationships may give different estimates of genetic variances. However, comparing these estimates across different relationship models is not straightforward as the implied base populations differ between relationship models. In this work, I present a method to compare estimates of variance components across different relationship models. I suggest referring genetic variances obtained using different relationship models to the same reference population, usually a set of individuals in the population. Expected genetic variance of this population is the estimated variance component from the mixed model times a statistic, Dk, which is the average self-relationship minus the average (self- and across-) relationship. For most typical models of relationships, Dk is close to 1. However, this is not true for very deep pedigrees, for identity-by-state relationships, or for non-parametric kernels, which tend to overestimate the genetic variance and the heritability. Using mice data, I show that heritabilities from identity-by-state and kernel-based relationships are overestimated. Weighting these estimates by Dk scales them to a base comparable to genomic or pedigree relationships, avoiding wrong comparisons, for instance, "missing heritabilities". Copyright © 2015 Elsevier Inc. All rights reserved.
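
    The rescaling the abstract proposes is a one-liner; in the sketch below, K is any relationship matrix (pedigree, genomic, identity-by-state or kernel) over the reference individuals, and sigma2_u is the variance component estimated under K.

```python
import numpy as np

def dk_statistic(K: np.ndarray) -> float:
    """Average self-relationship minus the average (self and across) relationship."""
    return float(np.mean(np.diag(K)) - np.mean(K))

def base_genetic_variance(sigma2_u: float, K: np.ndarray) -> float:
    """Genetic variance referred to the chosen reference population."""
    return sigma2_u * dk_statistic(K)
```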

  1. Requirements for a minimum standard of care for phenylketonuria: the patients’ perspective

    Science.gov (United States)

    2013-01-01

    Phenylketonuria (PKU, ORPHA716) is an inherited disorder that affects about one in every 10,000 children born in Europe. Early and continuous application of a modified diet is largely successful in preventing the devastating brain damage associated with untreated PKU. The management of PKU is inconsistent: there are few national guidelines, and these tend to be incomplete and implemented sporadically. In this article, the first-ever pan-European patient/carer perspective on optimal PKU care, the European Society for Phenylketonuria and Allied Disorders (E.S.PKU) proposes recommendations for a minimum standard of care for PKU, to underpin the development of a new pan-European guideline for the management of PKU. New standards of best practice should guarantee equal access to screening, treatment and monitoring throughout Europe. Screening protocols and interpretation of screening results should be standardised. Experienced Centres of Expertise are required, in line with current European Union policy, to guarantee a defined standard of multidisciplinary treatment and care for all medical and social aspects of PKU. Women of childbearing age require especially intensive management, due to the severe risks to the foetus conferred by uncontrolled PKU. All aspects of treatment should be reimbursed to ensure uniform access across Europe to guideline-driven, evidence-based care. The E.S.PKU urges healthcare professionals caring for people with PKU to take the lead in developing evidence based guidelines on PKU, while continuing to play an active role in serving as the voice of patients and their families, whose lives are affected by the condition. PMID:24341788

  2. The solar and interplanetary causes of the recent minimum in geomagnetic activity (MGA23): a combination of midlatitude small coronal holes, low IMF BZ variances, low solar wind speeds and low solar magnetic fields

    Directory of Open Access Journals (Sweden)

    B. T. Tsurutani

    2011-05-01

    Minima in geomagnetic activity (MGA) at Earth at the ends of SC23 and SC22 have been identified. The two MGAs (called MGA23 and MGA22, respectively) were present in 2009 and 1997, delayed from the sunspot number minima in 2008 and 1996 by ~1/2–1 years. Part of the solar and interplanetary causes of the MGAs were exceptionally low solar (and thus low interplanetary) magnetic fields. Another important factor in MGA23 was the disappearance of equatorial and low latitude coronal holes and the appearance of midlatitude coronal holes. The location of the holes relative to the ecliptic plane led to low solar wind speeds and low IMF Bz variances (σBz²) and normalized variances (σBz²/B0²) at Earth, with concomitant reduced solar wind-magnetospheric energy coupling. One result was the lowest ap indices in the history of ap recording. The results presented here are used to comment on the possible solar and interplanetary causes of the low geomagnetic activity that occurred during the Maunder Minimum.

  3. Variance decomposition in stochastic simulators.

    Science.gov (United States)

    Le Maître, O P; Knio, O M; Moraes, A

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
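
    The flavor of such a decomposition can be conveyed with a generic pick-freeze estimator of first-order Sobol indices; the paper's specific construction (one standardized Poisson process per reaction channel) is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

def model(x):
    """Toy 'simulator' whose variance splits 1:4 between its two inputs."""
    return x[:, 0] + 2.0 * x[:, 1]

n = 200_000
A, B = rng.normal(size=(n, 2)), rng.normal(size=(n, 2))
fA = model(A)

for i in range(2):
    ABi = B.copy()
    ABi[:, i] = A[:, i]              # all inputs resampled except input i
    fABi = model(ABi)
    s_i = (np.mean(fA * fABi) - fA.mean() * fABi.mean()) / fA.var()
    print(f"S_{i + 1} ~ {s_i:.3f}")  # expect ~0.2 and ~0.8
```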

  7. 12 CFR 3.11 - Standards for determination of appropriate individual minimum capital ratios.

    Science.gov (United States)

    2010-01-01

    .../or affiliate(s); (d) The bank's liquidity, capital, risk asset and other ratios compared to the... individual minimum capital ratios. 3.11 Section 3.11 Banks and Banking COMPTROLLER OF THE CURRENCY, DEPARTMENT OF THE TREASURY MINIMUM CAPITAL RATIOS; ISSUANCE OF DIRECTIVES Establishment of Minimum Capital...

  8. Solving portfolio selection problems with minimum transaction lots based on conditional-value-at-risk

    Science.gov (United States)

    Setiawan, E. P.; Rosadi, D.

    2017-01-01

    Portfolio selection problems conventionally mean 'minimizing the risk, given a certain level of returns' from some financial assets. This problem is frequently solved with quadratic or linear programming methods, depending on the risk measure used in the objective function. However, the solutions obtained by these methods are real numbers, which may cause problems in real applications because each asset usually has its minimum transaction lot. Classical approaches considering minimum transaction lots were developed based on the linear Mean Absolute Deviation (MAD), variance (as in Markowitz's model), and semi-variance as risk measures. In this paper we investigate portfolio selection methods with minimum transaction lots using conditional value at risk (CVaR) as the risk measure. The mean-CVaR methodology involves only the part of the tail of the distribution that contributes to high losses. This approach works better with non-symmetric return probability distributions. Solutions of this method can be found with genetic algorithm (GA) methods. We provide real examples using stocks from the Indonesian stock market.
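
    For reference, the risk measure itself is straightforward to evaluate on sampled losses; a historical-estimator sketch (losses counted positive):

```python
import numpy as np

def cvar(losses, alpha=0.95):
    """Historical CVaR: expected loss in the worst (1 - alpha) tail."""
    var = np.quantile(losses, alpha)
    return losses[losses >= var].mean()

rng = np.random.default_rng(4)
losses = -rng.standard_t(df=4, size=100_000) * 0.02   # heavy-tailed daily P&L
print(f"VaR95 = {np.quantile(losses, 0.95):.4f}, CVaR95 = {cvar(losses):.4f}")
```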

  9. Continuous-Time Mean-Variance Portfolio Selection: A Stochastic LQ Framework

    International Nuclear Information System (INIS)

    Zhou, X.Y.; Li, D.

    2000-01-01

    This paper is concerned with a continuous-time mean-variance portfolio selection model that is formulated as a bicriteria optimization problem. The objective is to maximize the expected terminal return and minimize the variance of the terminal wealth. By putting weights on the two criteria one obtains a single-objective stochastic control problem which, however, is not in standard form due to the variance term involved. It is shown that this nonstandard problem can be 'embedded' into a class of auxiliary stochastic linear-quadratic (LQ) problems. The stochastic LQ control model proves to be an appropriate and effective framework to study the mean-variance problem in light of recent developments on general stochastic LQ problems with indefinite control weighting matrices. This gives rise to the efficient frontier in closed form for the original portfolio selection problem.
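
    Schematically, the embedding replaces the troublesome squared-expectation term with a family of genuine LQ objectives; a sketch of the idea, with details omitted:

```latex
\max_{u}\ \mathbb{E}[X_T] - w\,\operatorname{Var}(X_T)
   = \max_{u}\ \mathbb{E}\bigl[X_T - w X_T^2\bigr] + w\,\bigl(\mathbb{E}[X_T]\bigr)^2 .
% The (E[X_T])^2 term breaks the standard LQ form; embed instead into
\max_{u}\ \mathbb{E}\bigl[\lambda X_T - w X_T^2\bigr], \qquad \lambda \in \mathbb{R},
% a family of standard stochastic LQ problems whose solutions, optimized
% over \lambda, contain the mean--variance efficient strategies.
```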

  10. An International Standard for specifying the minimum potency of anti-D blood-grouping reagents: evaluation of a candidate preparation in an international collaborative study

    NARCIS (Netherlands)

    Thorpe, S. J.; Fox, B.; Heath, A. B.; Scott, M.; de Haas, M.; Kochman, S.; Padilla, A.

    2006-01-01

    The aim of this study was to evaluate a lyophilized monoclonal immunoglobulin M (IgM) anti-D preparation for use as an International Standard to specify a recommended minimum acceptable potency of anti-D blood-grouping reagents. The candidate International Standard (99/836) for specifying the

  11. Downside Variance Risk Premium

    OpenAIRE

    Feunou, Bruno; Jahan-Parvar, Mohammad; Okou, Cedric

    2015-01-01

    We propose a new decomposition of the variance risk premium in terms of upside and downside variance risk premia. The difference between upside and downside variance risk premia is a measure of skewness risk premium. We establish that the downside variance risk premium is the main component of the variance risk premium, and that the skewness risk premium is a priced factor with significant prediction power for aggregate excess returns. Our empirical investigation highlights the positive and s...

  12. Discussion on variance reduction technique for shielding

    Energy Technology Data Exchange (ETDEWEB)

    Maekawa, Fujio [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1998-03-01

    As a task of the engineering design activity of the International Thermonuclear Experimental Reactor (ITER), a shielding experiment on type 316 stainless steel (SS316) and on the compound system of SS316 and water was carried out using the D-T neutron source of FNS at the Japan Atomic Energy Research Institute. In the analyses, however, enormous working time and computing time were required to determine the Weight Window parameters, and variance reduction by the Weight Window method of the MCNP code proved limiting and complicated. To avoid this difficulty, the effectiveness of variance reduction by the cell importance method was investigated. The calculation conditions for all cases are shown. As the results, the distribution of the fractional standard deviation (FSD) of the neutron and gamma-ray fluxes along the shield depth is reported. There is an optimal importance assignment: when the importance is increased at the same rate as the neutron or gamma-ray flux attenuates, optimal variance reduction is achieved. (K.I.)
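
    The cell importance idea can be demonstrated on a purely absorbing slab: raising the importance cell by cell splits the penetrating particles and shrinks the FSD of the transmission tally. A toy sketch, not the MCNP implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
SIGMA_T, N_CELLS, CELL_W = 1.0, 5, 1.0   # purely absorbing slab, 5 cells of 1 mfp

def transmit_one(importances):
    """One source history with splitting at cell boundaries (no roulette;
    importance ratios assumed to be whole numbers)."""
    bank, score = [(0, 1.0)], 0.0        # (cell index, statistical weight)
    while bank:
        cell, w = bank.pop()
        # flight distance is memoryless, so it can be resampled in each cell
        if rng.exponential(1.0 / SIGMA_T) < CELL_W:
            continue                     # absorbed inside this cell
        if cell + 1 == N_CELLS:
            score += w                   # crossed the far face: transmitted
            continue
        n = int(importances[cell + 1] / importances[cell])
        bank.extend([(cell + 1, w / n)] * n)
    return score

for imp in ([1, 1, 1, 1, 1], [1, 2, 4, 8, 16]):   # analog vs doubling importance
    s = np.array([transmit_one(imp) for _ in range(20_000)])
    print(imp, f"T = {s.mean():.5f} +/- {s.std(ddof=1) / np.sqrt(len(s)):.5f}")
```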

  13. Confronting Regulatory Cost and Quality Expectations. An Exploration of Technical Change in Minimum Efficiency Performance Standards

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, Margaret [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Stanford Univ., CA (United States); Spurlock, C. Anna [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Yang, Hung-Chia [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2015-09-21

    The dual purpose of this project was to contribute to basic knowledge about the interaction between regulation and innovation and to inform the cost and benefit expectations related to technical change which are embedded in the rulemaking process of an important area of national regulation. The area of regulation focused on here is minimum efficiency performance standards (MEPS) for appliances and other energy-using products. Relevant both to U.S. climate policy and energy policy for buildings, MEPS remove certain product models from the market that do not meet specified efficiency thresholds.

  14. Mean-variance portfolio allocation with a value at risk constraint

    OpenAIRE

    Enrique Sentana

    2001-01-01

    In this paper, I first provide a simple unifying approach to static Mean-Variance analysis and Value at Risk, which highlights their similarities and differences. Then I use it to explain how fund managers can take investment decisions that satisfy the VaR restrictions imposed on them by regulators, within the well-known Mean-Variance allocation framework. I do so by introducing a new type of line to the usual mean-standard deviation diagram, called IsoVaR, which represents all the portfolios ...

  15. 76 FR 4061 - Standard Instrument Approach Procedures, and Takeoff Minimums and Obstacle Departure Procedures...

    Science.gov (United States)

    2011-01-24

    ..., Randolph County, Takeoff Minimums and Obstacle DP, Orig El Dorado, KS, Captain Jack Thomas/El Dorado... Minimums and Obstacle DP, Amdt 3 Cook, MN, Cook Muni, Takeoff Minimums and Obstacle DP, Orig Duluth, MN...

  16. R package MVR for Joint Adaptive Mean-Variance Regularization and Variance Stabilization.

    Science.gov (United States)

    Dazard, Jean-Eudes; Xu, Hua; Rao, J Sunil

    2011-01-01

    We present an implementation in the R language for statistical computing of our recent non-parametric joint adaptive mean-variance regularization and variance stabilization procedure. The method is specifically suited for handling difficult problems posed by high-dimensional multivariate datasets (p ≫ n paradigm), such as in 'omics'-type data, among which are that the variance is often a function of the mean, variable-specific estimators of variances are not reliable, and test statistics have low power due to a lack of degrees of freedom. The implementation offers a complete set of features including: (i) a normalization and/or variance stabilization function, (ii) computation of mean-variance-regularized t and F statistics, (iii) generation of diverse diagnostic plots, (iv) synthetic and real 'omics' test datasets, (v) a computationally efficient implementation, using C interfacing, and an option for parallel computing, (vi) a manual and documentation on how to set up a cluster. To make each feature as user-friendly as possible, only one subroutine per functionality is to be handled by the end-user. It is available as an R package, called MVR ('Mean-Variance Regularization'), downloadable from CRAN.

  17. Analysis of ulnar variance as a risk factor for developing scaphoid nonunion.

    Science.gov (United States)

    Lirola-Palmero, S; Salvà-Coll, G; Terrades-Cladera, F J

    2015-01-01

    Ulnar variance may be a risk factor for developing scaphoid nonunion. A review was made of the posteroanterior wrist radiographs of 95 patients who were diagnosed with a scaphoid fracture. All fractures with displacement of less than 1 mm treated conservatively were included, and ulnar variance was measured in all patients on standard posteroanterior wrist radiographs. Eighteen patients (19%) developed scaphoid nonunion, with a mean ulnar variance of -1.34 (+/- 0.85) mm (CI -2.25 to -0.41). Seventy-seven patients (81%) healed correctly, with a mean ulnar variance of -0.04 (+/- 1.85) mm (CI -0.46 to 0.38). A significant difference was observed in the distribution of ulnar variance (p<.007) between patients with ulnar variance less than -1 mm and those with ulnar variance greater than -1 mm: patients with ulnar variance less than -1 mm had a greater risk of developing scaphoid nonunion, OR 4.58 (CI 1.51 to 13.89), p<.007. Copyright © 2014 SECOT. Published by Elsevier Espana. All rights reserved.

  18. 12 CFR 932.8 - Minimum liquidity requirements.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 7 2010-01-01 2010-01-01 false Minimum liquidity requirements. 932.8 Section... CAPITAL STANDARDS FEDERAL HOME LOAN BANK CAPITAL REQUIREMENTS § 932.8 Minimum liquidity requirements. In addition to meeting the deposit liquidity requirements contained in § 965.3 of this chapter, each Bank...

  19. Simulation study on heterogeneous variance adjustment for observations with different measurement error variance

    DEFF Research Database (Denmark)

    Pitkänen, Timo; Mäntysaari, Esa A; Nielsen, Ulrik Sander

    2013-01-01

    The Nordic Holstein yield evaluation model describes all available milk, protein and fat test-day yields from Denmark, Finland and Sweden. In its current form all variance components are estimated from observations recorded under conventional milking systems, and the model for heterogeneity of variance correction is developed for the same observations. As automated milking systems are becoming more popular, the current evaluation model needs to be enhanced to account for the different measurement error variances of observations from automated milking systems. In this simulation study, different models and different approaches to account for heterogeneous variance when observations have different measurement error variances were investigated. Based on the results we propose to upgrade the currently applied models and to calibrate the heterogeneous variance adjustment method to yield the same genetic...

  20. [Development of a computerized system using standard nursing language for creation of a nursing minimum data set].

    Science.gov (United States)

    D'Agostino, Fabio; Vellone, Ercole; Tontini, Francesco; Zega, Maurizio; Alvaro, Rosaria

    2012-01-01

    The aim of a nursing data set is to provide useful information for assessing the level of care and the state of health of the population. Currently, both in Italy and in other countries, this data is incomplete due to the lack of structured nursing documentation, making it indispensable to develop a Nursing Minimum Data Set (NMDS) using standard nursing language to evaluate care, costs and health requirements. The aim of the project described is to create a computer system using standard nursing terms, with dedicated software that will aid the decision-making process and provide the relative documentation. This will make it possible to monitor nursing activity and costs and their impact on patients' health; adequate training and involvement of nursing staff will play a fundamental role.

  1. Variance function estimation for immunoassays

    International Nuclear Information System (INIS)

    Raab, G.M.; Thompson, R.; McKenzie, I.

    1980-01-01

    A computer program is described which implements a recently described, modified likelihood method of determining an appropriate weighting function to use when fitting immunoassay dose-response curves. The relationship between the variance of the response and its mean value is assumed to have an exponential form, and the best fit to this model is determined from the within-set variability of many small sets of repeated measurements. The program estimates the parameter of the exponential function with its estimated standard error, and tests the fit of the experimental data to the proposed model. Output options include a list of the actual and fitted standard deviation of the set of responses, a plot of actual and fitted standard deviation against the mean response, and an ordered list of the 10 sets of data with the largest ratios of actual to fitted standard deviation. The program has been designed for a laboratory user without computing or statistical expertise. The test-of-fit has proved valuable for identifying outlying responses, which may be excluded from further analysis by being set to negative values in the input file. (Auth.)
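
    A rough least-squares stand-in for the fitting step (the program itself uses a modified likelihood method; its 'exponential form' is read here as a power of the mean, var(y) ≈ α·mean(y)^θ):

```python
import numpy as np

def fit_variance_function(replicate_sets):
    """Fit var(y) = alpha * mean(y)**theta across many small sets of
    repeated measurements, by least squares on the log scale."""
    means = np.array([np.mean(s) for s in replicate_sets])
    variances = np.array([np.var(s, ddof=1) for s in replicate_sets])
    keep = (means > 0) & (variances > 0)
    theta, log_alpha = np.polyfit(np.log(means[keep]),
                                  np.log(variances[keep]), 1)
    # fitted 1/var(y) can then serve as weights when fitting dose-response curves
    return np.exp(log_alpha), theta
```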

  2. Cumulative Prospect Theory, Option Returns, and the Variance Premium

    NARCIS (Netherlands)

    Baele, Lieven; Driessen, Joost; Ebert, Sebastian; Londono Yarce, J.M.; Spalt, Oliver

    The variance premium and the pricing of out-of-the-money (OTM) equity index options are major challenges to standard asset pricing models. We develop a tractable equilibrium model with Cumulative Prospect Theory (CPT) preferences that can overcome both challenges. The key insight is that the

  3. Education and Criminal Justice: The Educational Approach to Prison Administration. The United Nations Standard Minimum Rules for the Treatment of Prisoners.

    Science.gov (United States)

    Morin, Lucien; Cosman, J. W.

    The United Nations Standard Minimum Rules for the Treatment of Prisoners do not express the basic principle that would support a serious educational approach to prison administration. The crucial missing rationale is the concept of the inherent dignity of the individual human prisoner. This concept has certain basic educational implications,…

  4. Minimum variance rooting of phylogenetic trees and implications for species tree reconstruction.

    Science.gov (United States)

    Mai, Uyen; Sayyari, Erfan; Mirarab, Siavash

    2017-01-01

    Phylogenetic trees inferred using commonly-used models of sequence evolution are unrooted, but the root position matters both for interpretation and downstream applications. This issue has been long recognized; however, whether the potential for discordance between the species tree and gene trees impacts methods of rooting a phylogenetic tree has not been extensively studied. In this paper, we introduce a new method of rooting a tree based on its branch length distribution; our method, which minimizes the variance of root to tip distances, is inspired by the traditional midpoint rerooting and is justified when deviations from the strict molecular clock are random. Like midpoint rerooting, the method can be implemented in a linear time algorithm. In extensive simulations that consider discordance between gene trees and the species tree, we show that the new method is more accurate than midpoint rerooting, but its relative accuracy compared to using outgroups to root gene trees depends on the size of the dataset and levels of deviations from the strict clock. We show high levels of error for all methods of rooting estimated gene trees due to factors that include effects of gene tree discordance, deviations from the clock, and gene tree estimation error. Our simulations, however, did not reveal significant differences between two equivalent methods for species tree estimation that use rooted and unrooted input, namely, STAR and NJst. Nevertheless, our results point to limitations of existing scalable rooting methods.
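
    The criterion is easy to prototype at node resolution (the paper's algorithm optimizes over points along branches, in linear time); a brute-force sketch on a toy tree:

```python
import numpy as np

def root_to_tip(adj, root, leaves):
    """Distances from `root` to each leaf; the tree is an adjacency dict
    {node: [(neighbour, branch_length), ...]}."""
    dist, stack = {root: 0.0}, [root]
    while stack:
        u = stack.pop()
        for v, bl in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + bl
                stack.append(v)
    return np.array([dist[leaf] for leaf in leaves])

def min_variance_root(adj, leaves):
    """Node whose root-to-tip distances have the smallest variance."""
    return min(adj, key=lambda node: root_to_tip(adj, node, leaves).var())

# toy unrooted tree with a single internal node u
adj = {
    "A": [("u", 1.0)], "B": [("u", 1.0)], "C": [("u", 3.0)],
    "u": [("A", 1.0), ("B", 1.0), ("C", 3.0)],
}
print(min_variance_root(adj, ["A", "B", "C"]))   # -> "u"
```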

  5. 12 CFR 931.3 - Minimum investment in capital stock.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 7 2010-01-01 2010-01-01 false Minimum investment in capital stock. 931.3... CAPITAL STANDARDS FEDERAL HOME LOAN BANK CAPITAL STOCK § 931.3 Minimum investment in capital stock. (a) A Bank shall require each member to maintain a minimum investment in the capital stock of the Bank, both...

  6. Minimum information about a marker gene sequence (MIMARKS) and minimum information about any (x) sequence (MIxS) specifications

    DEFF Research Database (Denmark)

    Yilmaz, Pelin; Kottmann, Renzo; Field, Dawn

    2011-01-01

    Here we present a standard developed by the Genomic Standards Consortium (GSC) for reporting marker gene sequences--the minimum information about a marker gene sequence (MIMARKS). We also introduce a system for describing the environment from which a biological sample originates. The 'environment...

  7. Variance estimation for complex indicators of poverty and inequality using linearization techniques

    Directory of Open Access Journals (Sweden)

    Guillaume Osier

    2009-12-01

    The paper presents the Eurostat experience in calculating measures of precision, including standard errors, confidence intervals and design effect coefficients - the ratio of the variance of a statistic under the actual sample design to the variance of that statistic under a simple random sample of the same size - for the "Laeken" indicators, that is, a set of complex indicators of poverty and inequality which had been set out in the framework of the EU-SILC project (European Statistics on Income and Living Conditions). The Taylor linearization method (Tepping, 1968; Woodruff, 1971; Wolter, 1985; Tille, 2000) is a well-established method to obtain variance estimators for nonlinear statistics such as ratios, correlation or regression coefficients. It consists of approximating a nonlinear statistic with a linear function of the observations by using first-order Taylor series expansions. Then, an easily found variance estimator of the linear approximation is used as an estimator of the variance of the nonlinear statistic. Although the Taylor linearization method handles all the nonlinear statistics which can be expressed as a smooth function of estimated totals, the approach fails to encompass the "Laeken" indicators, since the latter have more complex mathematical expressions. Consequently, a generalized linearization method (Deville, 1999), which relies on the concept of the influence function (Hampel, Ronchetti, Rousseeuw and Stahel, 1986), has been implemented. After presenting the EU-SILC instrument and the main target indicators for which variance estimates are needed, the paper elaborates on the main features of the linearization approach based on influence functions. Ultimately, estimated standard errors, confidence intervals and design effect coefficients obtained from this approach are presented and discussed.
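
    For a plain ratio R = Y/X under simple random sampling, the linearization recipe is a few lines; a sketch (the Laeken indicators require the influence-function generalization instead):

```python
import numpy as np

def ratio_variance_srs(y, x):
    """Taylor-linearized variance of R = sum(y)/sum(x) under simple random
    sampling: linearize with z_i = (y_i - R*x_i)/mean(x), then apply the
    ordinary variance-of-a-mean formula to z."""
    n = len(y)
    R = y.sum() / x.sum()
    z = (y - R * x) / x.mean()
    return z.var(ddof=1) / n   # ignores the finite-population correction
```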

  8. A COSMIC VARIANCE COOKBOOK

    International Nuclear Information System (INIS)

    Moster, Benjamin P.; Rix, Hans-Walter; Somerville, Rachel S.; Newman, Jeffrey A.

    2011-01-01

    Deep pencil beam surveys (<1 deg²) are of fundamental importance for studying the high-redshift universe. However, inferences about galaxy population properties (e.g., the abundance of objects) are in practice limited by 'cosmic variance'. This is the uncertainty in observational estimates of the number density of galaxies arising from the underlying large-scale density fluctuations. This source of uncertainty can be significant, especially for surveys which cover only small areas and for massive high-redshift galaxies. Cosmic variance for a given galaxy population can be determined using predictions from cold dark matter theory and the galaxy bias. In this paper, we provide tools for experiment design and interpretation. For a given survey geometry, we present the cosmic variance of dark matter as a function of mean redshift z̄ and redshift bin size Δz. Using a halo occupation model to predict galaxy clustering, we derive the galaxy bias as a function of mean redshift for galaxy samples of a given stellar mass range. In the linear regime, the cosmic variance of these galaxy samples is the product of the galaxy bias and the dark matter cosmic variance. We present a simple recipe using a fitting function to compute cosmic variance as a function of the angular dimensions of the field, z̄, Δz, and stellar mass m_*. We also provide tabulated values and a software tool. The accuracy of the resulting cosmic variance estimates (δσ_v/σ_v) is shown to be better than 20%. We find that for GOODS at z̄ = 2 and with Δz = 0.5, the relative cosmic variance of galaxies with m_* > 10¹¹ M_⊙ is ∼38%, while it is ∼27% for GEMS and ∼12% for COSMOS. For galaxies of m_* ∼ 10¹⁰ M_⊙, the relative cosmic variance is ∼19% for GOODS, ∼13% for GEMS, and ∼6% for COSMOS. This implies that cosmic variance is a significant source of uncertainty at z̄ = 2 for small fields and massive galaxies, while for larger fields and intermediate mass galaxies, cosmic

  9. How the Weak Variance of Momentum Can Turn Out to be Negative

    Science.gov (United States)

    Feyereisen, M. R.

    2015-05-01

    Weak values are average quantities, therefore investigating their associated variance is crucial in understanding their place in quantum mechanics. We develop the concept of a position-postselected weak variance of momentum as cohesively as possible, building primarily on material from Moyal (Mathematical Proceedings of the Cambridge Philosophical Society, Cambridge University Press, Cambridge, 1949) and Sonego (Found Phys 21(10):1135, 1991). The weak variance is defined in terms of the Wigner function, using a standard construction from probability theory. We show this corresponds to a measurable quantity, which is not itself a weak value. It also leads naturally to a connection between the imaginary part of the weak value of momentum and the quantum potential. We study how the negativity of the Wigner function causes negative weak variances, and the implications this has on a class of `subquantum' theories. We also discuss the role of weak variances in studying determinism, deriving the classical limit from a variational principle.

  10. Mean-Variance portfolio optimization when each asset has individual uncertain exit-time

    Directory of Open Access Journals (Sweden)

    Reza Keykhaei

    2016-12-01

    Full Text Available The standard Markowitz Mean-Variance optimization model is a single-period portfolio selection approach in which the exit-time (or time-horizon) is deterministic. In this paper we study the Mean-Variance portfolio selection problem with uncertain exit-time when each asset has an individual uncertain exit-time, which generalizes the Markowitz model. We provide some conditions under which the optimal portfolio of the generalized problem is independent of the exit-time distributions. Also, it is shown that under some general circumstances, the sets of optimal portfolios in the generalized model and the standard model are the same.
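
    For reference, the single-period Markowitz problem that the paper generalizes has a closed-form solution when short sales are allowed. The following self-contained Python sketch (synthetic inputs, equality constraints only) minimizes w'Σw subject to w'μ = target and w'1 = 1.

        import numpy as np

        def markowitz_weights(mu, Sigma, target):
            # closed-form mean-variance weights via the two Lagrange multipliers
            ones = np.ones(len(mu))
            Si = np.linalg.inv(Sigma)
            a, b, c = ones @ Si @ ones, ones @ Si @ mu, mu @ Si @ mu
            d = a * c - b ** 2
            lam = (c - b * target) / d
            gam = (a * target - b) / d
            return Si @ (lam * ones + gam * mu)

        mu = np.array([0.05, 0.08, 0.12])
        Sigma = np.array([[0.04, 0.01, 0.00],
                          [0.01, 0.09, 0.02],
                          [0.00, 0.02, 0.16]])
        w = markowitz_weights(mu, Sigma, target=0.09)
        print(w, w @ mu, w @ Sigma @ w)   # weights, portfolio return, variance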

  11. Extent of Implementation of Minimum Standards of Basic Education for the Realisation of the Second Millennium Development Goal in Bayelsa State

    Science.gov (United States)

    Ogochukwu, Emeka; Gbendu, Olaowei Godiva

    2015-01-01

    The study was carried out in Salga Education Zone of Bayelsa State specifically to determine the extent of implementation of the minimum standards for basic education in order to ensure the realization of the second millennium development goal. The study adopted the descriptive research design. The population of the study comprised all the…

  12. School Audits and School Improvement: Exploring the Variance Point Concept in Kentucky's... Schools

    Directory of Open Access Journals (Sweden)

    Robert Lyons

    2011-01-01

    Full Text Available As a diagnostic intervention (Bowles, Churchill, Effrat, & McDermott, 2002) for schools failing to meet school improvement goals, Kentucky used a scholastic audit process based on nine standards and 88 associated indicators called the Standards and Indicators for School Improvement (SISI). Schools are rated on a scale of 1–4 on each indicator, with a score of 3 considered fully functional (Kentucky Department of Education [KDE], 2002). As part of enacting the legislation, KDE was also required to audit a random sample of schools that did meet school improvement goals, thereby identifying practices present in improving schools that are not present in those failing to improve. These practices were referred to as variance points and were reported to school leaders annually. Variance points have differed from year to year, and the methodology used by KDE was unclear. Moreover, variance points were reported for all schools without differentiating based upon the level of school (elementary, middle, or high). In this study, we established a transparent methodology for variance point determination that differentiates between elementary, middle, and high schools.

  13. Efficient Scores, Variance Decompositions and Monte Carlo Swindles.

    Science.gov (United States)

    1984-08-28

    Then a version of Pythagoras' theorem gives the variance decomposition (6.1): var_P0(T) = var_P0(S) + var_P0(T - S). One way to see this is to note ... complete sufficient statistics for (β, σ), and that the standardized residuals (y - Xβ̂)/σ̂ are ancillary. Basu's sufficiency-ancillarity theorem
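
    A minimal Python illustration of such a swindle: for normal location samples, the sample mean S is complete sufficient and the median-minus-mean difference is ancillary, so Basu's theorem makes them independent and var(T) = var(S) + var(T - S) holds exactly. Estimating only var(T - S) by Monte Carlo, with var(S) = 1/n known analytically, is the swindle; the sample sizes below are arbitrary choices for the demonstration.

        import numpy as np

        rng = np.random.default_rng(1)
        n, reps = 25, 20000

        samples = rng.normal(size=(reps, n))
        T = np.median(samples, axis=1)   # statistic of interest
        S = samples.mean(axis=1)         # complete sufficient statistic

        var_S_exact = 1.0 / n            # known analytically
        var_diff = (T - S).var(ddof=1)   # small, so precisely estimated by MC

        print("swindle estimate of var(T):", var_S_exact + var_diff)
        print("direct estimate of var(T) :", T.var(ddof=1))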

  14. A load factor based mean-variance analysis for fuel diversification

    Energy Technology Data Exchange (ETDEWEB)

    Gotham, Douglas; Preckel, Paul; Ruangpattana, Suriya [State Utility Forecasting Group, Purdue University, West Lafayette, IN (United States); Muthuraman, Kumar [McCombs School of Business, University of Texas, Austin, TX (United States); Rardin, Ronald [Department of Industrial Engineering, University of Arkansas, Fayetteville, AR (United States)

    2009-03-15

    Fuel diversification implies the selection of a mix of generation technologies for long-term electricity generation. The goal is to strike a good balance between reduced costs and reduced risk. The method of analysis that has been advocated and adopted for such studies is the mean-variance portfolio analysis pioneered by Markowitz (Markowitz, H., 1952. Portfolio selection. Journal of Finance 7(1), 77-91). However, the standard mean-variance methodology does not account for the ability of various fuels/technologies to adapt to varying loads. Such analysis often provides results that are easily dismissed by regulators and practitioners as unacceptable, since load cycles play critical roles in fuel selection. To account for such issues and still retain the convenience and elegance of the mean-variance approach, we propose a variant of the mean-variance analysis using the decomposition of the load into various types and utilizing the load factors of each load type. We also illustrate the approach using data for the state of Indiana and demonstrate the ability of the model to provide useful insights. (author)

  15. Spectral Ambiguity of Allan Variance

    Science.gov (United States)

    Greenhall, C. A.

    1996-01-01

    We study the extent to which knowledge of Allan variance and other finite-difference variances determines the spectrum of a random process. The variance of first differences is known to determine the spectrum. We show that, in general, the Allan variance does not. A complete description of the ambiguity is given.
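
    For readers who want to experiment, here is a minimal non-overlapping Allan variance in Python (the standard two-sample form σ²(τ) = ½⟨(ȳ_{k+1} - ȳ_k)²⟩ over block averages; the white-noise input is synthetic):

        import numpy as np

        def allan_variance(y, m):
            # non-overlapping Allan variance at averaging factor m (tau = m*tau0)
            k = len(y) // m
            block_means = y[:k * m].reshape(k, m).mean(axis=1)
            return 0.5 * np.mean(np.diff(block_means) ** 2)

        rng = np.random.default_rng(0)
        white = rng.normal(size=100_000)
        for m in (1, 10, 100):
            print(m, allan_variance(white, m))   # falls off ~1/m for white noise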

  16. 12 CFR 567.2 - Minimum regulatory capital requirement.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 5 2010-01-01 2010-01-01 false Minimum regulatory capital requirement. 567.2... Regulatory Capital Requirements § 567.2 Minimum regulatory capital requirement. (a) To meet its regulatory capital requirement a savings association must satisfy each of the following capital standards: (1) Risk...

  17. Depolarization in Delivering Public Services? Impacts of Minimum Service Standards (MSS) on the Quality of Health Services in Indonesia

    Directory of Open Access Journals (Sweden)

    Mohammad Roudo

    2016-03-01

    Full Text Available Some scholars argue that decentralization policy tends to create polarization, i.e. an increase of inequality/disparity among districts. To deal with this problem, Minimum Service Standards (MSS) were introduced as a key strategy in decentralizing Indonesia. In this research, we tried to find out, through MSS performance measurements, whether imposing standards can be effective in a decentralized system by examining their impact on polarization/depolarization in the delivery of public services, specifically in the health sector. This question is basically a response to the common criticism that decentralization is good at creating equality between central government and local governments but often does not work to achieve equality among local governments. Using self-assessment data from a sample of 54 of the 534 districts in Indonesia, from 2010 to 2013, we found that depolarization in the delivery of public services could potentially occur among regions by reducing the gap between their public service performance and the targets of MSS. We acknowledge that there are weaknesses in the validity of the self-assessment data, caused by a lack of knowledge and skills to execute the self-assessment according to the official guidelines, by the overrating of target achievements, and by the lack of data from independent sources to confirm the self-assessment outcomes. We also acknowledge that differences in financial capacity are still the main determinant of why one district is more successful in achieving the MSS targets than another. Keywords. Decentralization, Public Service, Minimum Standard Service

  18. Use of variance techniques to measure dry air-surface exchange rates

    Science.gov (United States)

    Wesely, M. L.

    1988-07-01

    The variances of fluctuations of scalar quantities can be measured and interpreted to yield indirect estimates of their vertical fluxes in the atmospheric surface layer. Strong correlations among scalar fluctuations indicate a similarity of transfer mechanisms, which is utilized in some of the variance techniques. The ratios of the standard deviations of two scalar quantities, for example, can be used to estimate the flux of one if the flux of the other is measured, without knowledge of atmospheric stability. This is akin to a modified Bowen ratio approach. Other methods such as the normalized standard-deviation technique and the correlation-coefficient technique can be utilized effectively if atmospheric stability is evaluated and certain semi-empirical functions are known. In these cases, iterative calculations involving measured variances of fluctuations of temperature and vertical wind velocity can be used in place of direct flux measurements. For a chemical sensor whose output is contaminated by non-atmospheric noise, covariances with fluctuations of scalar quantities measured with a very good signal-to-noise ratio can be used to extract the needed standard deviation. Field measurements have shown that many of these approaches are successful for gases such as ozone and sulfur dioxide, as well as for temperature and water vapor, and could be extended to other trace substances. In humid areas, it appears that water vapor fluctuations often have a higher degree of correlation to fluctuations of other trace gases than do temperature fluctuations; this makes water vapor a more reliable companion or “reference” scalar. These techniques provide some reliable research approaches but, for routine or operational measurement, they are limited by the need for fast-response sensors. Also, all variance approaches require some independent means to estimate the direction of the flux.
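
    A minimal sketch of the standard-deviation-ratio technique described above, assuming perfect scalar similarity between the trace gas and a well-measured reference scalar such as water vapor; the data here are synthetic and the sign convention (taken from the correlation of the fluctuations) is an illustrative assumption.

        import numpy as np

        def variance_technique_flux(c, ref, flux_ref):
            # flux of scalar c estimated from a reference scalar's flux and the
            # ratio of the standard deviations of their fluctuations
            c_f, r_f = c - c.mean(), ref - ref.mean()
            sign = np.sign(np.corrcoef(c_f, r_f)[0, 1])
            return sign * flux_ref * c_f.std(ddof=1) / r_f.std(ddof=1)

        rng = np.random.default_rng(0)
        w = rng.normal(size=5000)                 # water vapor fluctuations
        c = 0.4 * w + rng.normal(0, 0.1, 5000)    # correlated trace-gas signal
        print(variance_technique_flux(c, w, flux_ref=1.0))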

  19. The Canadian minimum dataset for chronic low back pain research: a cross-cultural adaptation of the National Institutes of Health Task Force Research Standards.

    Science.gov (United States)

    Lacasse, Anaïs; Roy, Jean-Sébastien; Parent, Alexandre J; Noushi, Nioushah; Odenigbo, Chúk; Pagé, Gabrielle; Beaudet, Nicolas; Choinière, Manon; Stone, Laura S; Ware, Mark A

    2017-01-01

    To better standardize clinical and epidemiological studies about the prevalence, risk factors, prognosis, impact and treatment of chronic low back pain, a minimum data set was developed by the National Institutes of Health (NIH) Task Force on Research Standards for Chronic Low Back Pain. The aim of the present study was to develop a culturally adapted questionnaire that could be used for chronic low back pain research among French-speaking populations in Canada. The adaptation of the French-Canadian version of the minimum data set was achieved according to guidelines for the cross-cultural adaptation of self-reported measures (double forward-backward translation, expert committee, pretest among 35 patients with pain in the low back region). Minor cultural adaptations were also incorporated into the English version by the expert committee (e.g., items about race/ethnicity, education level). This cross-cultural adaptation provides an equivalent French-Canadian version of the minimum data set questionnaire and a culturally adapted English-Canadian version. Modifications made to the original NIH minimum data set were minimized to facilitate comparison between the Canadian and American versions. The present study is a first step toward the use of a culturally adapted instrument for phenotyping French- and English-speaking low back pain patients in Canada. Clinicians and researchers will recognize the importance of this standardized tool and are encouraged to incorporate it into future research studies on chronic low back pain.

  20. 47 CFR 25.205 - Minimum angle of antenna elevation.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 2 2010-10-01 2010-10-01 false Minimum angle of antenna elevation. 25.205 Section 25.205 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES SATELLITE COMMUNICATIONS Technical Standards § 25.205 Minimum angle of antenna elevation. (a) Earth station...

  1. The SME gauge sector with minimum length

    Science.gov (United States)

    Belich, H.; Louzada, H. L. C.

    2017-12-01

    We study the gauge sector of the Standard Model Extension (SME) with the Lorentz covariant deformed Heisenberg algebra associated to the minimum length. In order to find and estimate corrections, we clarify whether the violation of Lorentz symmetry and the existence of a minimum length are independent phenomena or are, in some way, related. With this goal, we analyze the dispersion relations of this theory.

  2. The SME gauge sector with minimum length

    Energy Technology Data Exchange (ETDEWEB)

    Belich, H.; Louzada, H.L.C. [Universidade Federal do Espirito Santo, Departamento de Fisica e Quimica, Vitoria, ES (Brazil)

    2017-12-15

    We study the gauge sector of the Standard Model Extension (SME) with the Lorentz covariant deformed Heisenberg algebra associated to the minimum length. In order to find and estimate corrections, we clarify whether the violation of Lorentz symmetry and the existence of a minimum length are independent phenomena or are, in some way, related. With this goal, we analyze the dispersion relations of this theory. (orig.)

  3. Heterogeneity of variance and its implications on dairy cattle breeding

    African Journals Online (AJOL)

    Milk yield data (n = 12307) from 116 Holstein-Friesian herds were grouped into three production environments based on mean and standard deviation of herd 305-day milk yield and evaluated for within herd variation using univariate animal model procedures. Variance components were estimated by derivative free REML ...

  4. 76 FR 61038 - Standard Instrument Approach Procedures, and Takeoff Minimums and Obstacle Departure Procedures...

    Science.gov (United States)

    2011-10-03

    ... types of SIAPs and the effective dates of the associated Takeoff Minimums and ODPs. This amendment also..., TN, Memphis Intl, Takeoff Minimums and Obstacle DP, Amdt 3 Dallas, TX, Dallas-Love Field, Takeoff...

  5. 75 FR 6364 - Process for Requesting a Variance From Vegetation Standards for Levees and Floodwalls

    Science.gov (United States)

    2010-02-09

    ..., channels, or shoreline or river-bank protection systems such as revetments, sand dunes, and barrier...) toe (subject to preexisting right-of-way). f. The vegetation variance process is not a mechanism to...

  6. The minimum measurable dose of the sensitive Harshaw TLDs

    International Nuclear Information System (INIS)

    Ben-Shachar, B.; German, U.; Naim, E.

    1991-01-01

    The TL-dose response was measured for the sensitive Harshaw manufactured phosphors (CaF2:Dy and CaF2:Tm), taking chips from the same batch and from different batches. The relative standard deviations were fitted to a semiempirical expression, from which the minimum measurable doses were derived and compared to the minimum measurable dose calculated by taking 3 times the standard deviation of unirradiated chips. The contribution of the individual calibration of each TLD chip was checked, as well

  7. 76 FR 8291 - Standard Instrument Approach Procedures, and Takeoff Minimums and Obstacle Departure Procedures...

    Science.gov (United States)

    2011-02-14

    ... specifies the types of SIAPs and the effective dates of the associated Takeoff Minimums and ODPs. This..., Takeoff Minimums and Obstacle DP, Orig Dallas, TX, Dallas Love Field, RNAV (GPS) Z RWY 13L, Orig-B...

  8. Increased gender variance in autism spectrum disorders and attention deficit hyperactivity disorder.

    Science.gov (United States)

    Strang, John F; Kenworthy, Lauren; Dominska, Aleksandra; Sokoloff, Jennifer; Kenealy, Laura E; Berl, Madison; Walsh, Karin; Menvielle, Edgardo; Slesaransky-Poe, Graciela; Kim, Kyung-Eun; Luong-Tran, Caroline; Meagher, Haley; Wallace, Gregory L

    2014-11-01

    Evidence suggests over-representation of autism spectrum disorders (ASDs) and behavioral difficulties among people referred for gender issues, but rates of the wish to be the other gender (gender variance) among different neurodevelopmental disorders are unknown. This chart review study explored rates of gender variance as reported by parents on the Child Behavior Checklist (CBCL) in children with different neurodevelopmental disorders. Children with ASD (N = 147, 24 females and 123 males), attention deficit hyperactivity disorder (ADHD; N = 126, 38 females and 88 males), or a medical neurodevelopmental disorder (N = 116, 57 females and 59 males) were compared with two non-referred groups [a control sample (N = 165, 61 females and 104 males) and non-referred participants in the CBCL standardization sample (N = 1,605, 754 females and 851 males)]. Significantly greater proportions of participants with ASD (5.4%) or ADHD (4.8%) had parent-reported gender variance than in the combined medical group (1.7%) or the non-referred comparison groups (0-0.7%). As compared to non-referred comparisons, participants with ASD were 7.59 times more likely to express gender variance; participants with ADHD were 6.64 times more likely to express gender variance. The medical neurodevelopmental disorder group did not differ from non-referred samples in likelihood of expressing gender variance. Gender variance was related to elevated emotional symptoms in ADHD, but not in ASD. After accounting for sex ratio differences between the neurodevelopmental disorder and non-referred comparison groups, gender variance occurred equally in females and males.

  9. Bounds on Minimum Energy per Bit for Optical Wireless Relay Channels

    Directory of Open Access Journals (Sweden)

    A. D. Raza

    2014-09-01

    Full Text Available An optical wireless relay channel (OWRC) is the classical three-node network consisting of source, relay and destination nodes with optical wireless connectivity. The channel law is assumed Gaussian. This paper studies the bounds on the minimum energy per bit required for reliable communication over an OWRC. It is shown that the capacity of an OWRC is concave and the energy per bit is monotonically increasing in the square of the peak optical signal power, and consequently the minimum energy per bit is inversely proportional to the square root of the asymptotic capacity at low signal to noise ratio. This has been used to develop upper and lower bounds on energy per bit as a function of peak signal power, mean to peak power ratio, and variance of channel noise. The upper and lower bounds on minimum energy per bit derived in this paper correspond respectively to the decode-and-forward lower bound and the min-max cut upper bound on OWRC capacity.

  10. Variance analysis of forecasted streamflow maxima in a wet temperate climate

    Science.gov (United States)

    Al Aamery, Nabil; Fox, James F.; Snyder, Mark; Chandramouli, Chandra V.

    2018-05-01

    Coupling global climate models, hydrologic models and extreme value analysis provides a method to forecast streamflow maxima; however, the elusive variance structure of the results hinders confidence in application. Directly correcting the bias of forecasts using the relative change between forecast and control simulations has been shown to marginalize hydrologic uncertainty, reduce model bias, and remove systematic variance when predicting mean monthly and mean annual streamflow, prompting our investigation of streamflow maxima. We assess the variance structure of streamflow maxima using realizations of emission scenario, global climate model type and project phase, downscaling methods, bias correction, extreme value methods, and hydrologic model inputs and parameterization. Results show that the relative change of streamflow maxima did not depend on systematic variance from the annual maxima versus peak-over-threshold method applied, although we stress that researchers must strictly adhere to the rules of extreme value theory when applying the peak-over-threshold method. Regardless of which method is applied, extreme value model fitting does add variance to the projection, and the variance is an increasing function of the return period. Unlike the relative change of mean streamflow, results show that the variance of the maxima's relative change depended on all climate model factors tested as well as hydrologic model inputs and calibration. Ensemble projections forecast an increase of streamflow maxima for 2050 with pronounced forecast standard error, including increases of +30(±21), +38(±34) and +51(±85)% for 2-, 20- and 100-year streamflow events for the wet temperate region studied. The variance of the maxima projections was dominated by climate model factors and extreme value analyses.
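
    A toy Python sketch of the relative-change ("delta") bias correction that motivates the study; the numbers are hypothetical and the paper's exact implementation may differ.

        import numpy as np

        observed_maxima = np.array([120.0, 150.0, 180.0])   # hypothetical, m^3/s
        control_run     = np.array([100.0, 130.0, 160.0])   # GCM control period
        forecast_run    = np.array([130.0, 165.0, 200.0])   # GCM forecast period

        # scale observations by the forecast/control ratio, which removes
        # systematic model bias while retaining the projected relative change
        projected = observed_maxima * forecast_run / control_run
        print(projected)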

  11. From Minimum Wage to Standard Work Hour: HKSAR Labour Politics in Regime Change

    Directory of Open Access Journals (Sweden)

    Lawrence K. K. Ho

    2013-01-01

    Full Text Available This paper aims to highlight the significance of labour issues – namely, the minimum wage (MW) and standard working hours (SWH) – in shaping candidates' electoral platforms in the 2012 chief executive (CE) election of the Hong Kong Special Administrative Region (HKSAR) under the sovereignty of the People's Republic of China (PRC). We first offer a brief review of labour politics regarding the MW case as a precursor to the SWH drafting and enactment process. We then provide an analytical delineation of some of the labour and socio-economic dimensions of the CE electoral contest by comparing the candidates' campaign planks in relation to SWH. We then attempt to predict the likely course of the SWH debate under the leadership of Leung Chun-ying, who eventually won the CE election and assumed power on 1 July 2012. We conclude by examining Leung's social engineering attempts to increase popular support amongst low- and middle-income (LMI) households as part of his long-term strategy for the 2017 CE elections and his broader Beijing-entrusted political agenda.

  12. The mean and variance of phylogenetic diversity under rarefaction.

    Science.gov (United States)

    Nipperess, David A; Matsen, Frederick A

    2013-06-01

    Phylogenetic diversity (PD) depends on sampling depth, which complicates the comparison of PD between samples of different depth. One approach to dealing with differing sample depth for a given diversity statistic is to rarefy, which means to take a random subset of a given size of the original sample. Exact analytical formulae for the mean and variance of species richness under rarefaction have existed for some time, but no such solution exists for PD. We have derived exact formulae for the mean and variance of PD under rarefaction. We confirm that these formulae are correct by comparing the exact solutions for the mean and variance to those calculated by repeated random (Monte Carlo) subsampling of a dataset of stem counts of woody shrubs of Toohey Forest, Queensland, Australia. We also demonstrate the application of the method using two examples: identifying hotspots of mammalian diversity in Australasian ecoregions, and characterising the human vaginal microbiome. There is a very high degree of correspondence between the analytical and random subsampling methods for calculating the mean and variance of PD under rarefaction, although the Monte Carlo method requires a large number of random draws to converge on the exact solution for the variance. Rarefaction of mammalian PD of ecoregions in Australasia to a common standard of 25 species reveals very different rank orderings of ecoregions, indicating quite different hotspots of diversity than those obtained for unrarefied PD. The application of these methods to the vaginal microbiome shows that a classical score used to quantify bacterial vaginosis is correlated with the shape of the rarefaction curve. The analytical formulae for the mean and variance of PD under rarefaction are both exact and more efficient than repeated subsampling. Rarefaction of PD allows for many applications where comparisons of samples of different depth are required.
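
    The classical species-richness analogue of these formulae is compact enough to show here; the PD versions derived in the paper replace species indicators with branch-length-weighted terms. A sketch with a Monte Carlo cross-check (the abundance counts are synthetic):

        import numpy as np
        from math import comb

        def mean_richness_rarefied(counts, m):
            # exact expected richness in a random subsample of size m
            # (classical hypergeometric rarefaction formula)
            N = sum(counts)
            return sum(1 - comb(N - n_i, m) / comb(N, m) for n_i in counts)

        def mc_richness_rarefied(counts, m, reps=2000, seed=0):
            rng = np.random.default_rng(seed)
            pool = np.repeat(np.arange(len(counts)), counts)
            draws = [len(np.unique(rng.choice(pool, m, replace=False)))
                     for _ in range(reps)]
            return np.mean(draws), np.var(draws, ddof=1)

        counts = [50, 20, 10, 5, 2, 1]
        print(mean_richness_rarefied(counts, 10))   # exact mean
        print(mc_richness_rarefied(counts, 10))     # Monte Carlo mean, variance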

  13. 29 CFR 510.22 - Industries eligible for minimum wage phase-in.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 3 2010-07-01 2010-07-01 false Industries eligible for minimum wage phase-in. 510.22... REGULATIONS IMPLEMENTATION OF THE MINIMUM WAGE PROVISIONS OF THE 1989 AMENDMENTS TO THE FAIR LABOR STANDARDS ACT IN PUERTO RICO Classification of Industries § 510.22 Industries eligible for minimum wage phase-in...

  14. Genetic variance in micro-environmental sensitivity for milk and milk quality in Walloon Holstein cattle.

    Science.gov (United States)

    Vandenplas, J; Bastin, C; Gengler, N; Mulder, H A

    2013-09-01

    Animals that are robust to environmental changes are desirable in the current dairy industry. Genetic differences in micro-environmental sensitivity can be studied through heterogeneity of residual variance between animals. However, residual variance between animals is usually assumed to be homogeneous in traditional genetic evaluations. The aim of this study was to investigate genetic heterogeneity of residual variance by estimating variance components in residual variance for milk yield, somatic cell score, contents in milk (g/dL) of 2 groups of milk fatty acids (i.e., saturated and unsaturated fatty acids), and the content in milk of one individual fatty acid (i.e., oleic acid, C18:1 cis-9), for first-parity Holstein cows in the Walloon Region of Belgium. A total of 146,027 test-day records from 26,887 cows in 747 herds were available. All cows had at least 3 records and a known sire. These sires had at least 10 cows with records and each herd × test-day had at least 5 cows. The 5 traits were analyzed separately based on fixed lactation curve and random regression test-day models for the mean. Estimation of variance components was performed by iteratively running an expectation-maximization REML algorithm implemented via double hierarchical generalized linear models. Based on fixed lactation curve test-day mean models, heritability for residual variances ranged between 1.01×10^-3 and 4.17×10^-3 for all traits. The genetic standard deviation in residual variance (i.e., approximately the genetic coefficient of variation of residual variance) ranged between 0.12 and 0.17. Therefore, some genetic variance in micro-environmental sensitivity existed in the Walloon Holstein dairy cattle for the 5 studied traits. The standard deviations due to herd × test-day and permanent environment in residual variance ranged between 0.36 and 0.45 for herd × test-day effect and between 0.55 and 0.97 for permanent environmental effect. Therefore, nongenetic effects also

  15. Solution of the problem of the identified minimum for the tri-variate ...

    Indian Academy of Sciences (India)

    tified minimum is considered below has zero means and distinct variances. The solution ... and a non-singular covariance matrix Σ, where Σ_ij = ρ_ij σ_i σ_j for i ... (i) through (iv) above, we can use (4.29) to identify a²_21, a²_31, a²_12, a²_32 uniquely. Now we consider (4.28). In this case, there are two possibilities: (A²_1, B²

  16. Reexamining financial and economic predictability with new estimators of realized variance and variance risk premium

    DEFF Research Database (Denmark)

    Casas, Isabel; Mao, Xiuping; Veiga, Helena

    This study explores the predictive power of new estimators of the equity variance risk premium and conditional variance for future excess stock market returns, economic activity, and financial instability, both during and after the last global financial crisis. These estimators are obtained from...... time-varying coefficient models are the ones showing considerably higher predictive power for stock market returns and financial instability during the financial crisis, suggesting that an extreme volatility period requires models that can adapt quickly to turmoil........ Moreover, a comparison of the overall results reveals that the conditional variance gains predictive power during the global financial crisis period. Furthermore, both the variance risk premium and conditional variance are determined to be predictors of future financial instability, whereas conditional...

  17. Minimum information about a single amplified genome (MISAG) and a metagenome-assembled genome (MIMAG) of bacteria and archaea

    Energy Technology Data Exchange (ETDEWEB)

    Bowers, Robert M.; Kyrpides, Nikos C.; Stepanauskas, Ramunas; Harmon-Smith, Miranda; Doud, Devin; Reddy, T. B. K.; Schulz, Frederik; Jarett, Jessica; Rivers, Adam R.; Eloe-Fadrosh, Emiley A.; Tringe, Susannah G.; Ivanova, Natalia N.; Copeland, Alex; Clum, Alicia; Becraft, Eric D.; Malmstrom, Rex R.; Birren, Bruce; Podar, Mircea; Bork, Peer; Weinstock, George M.; Garrity, George M.; Dodsworth, Jeremy A.; Yooseph, Shibu; Sutton, Granger; Glöckner, Frank O.; Gilbert, Jack A.; Nelson, William C.; Hallam, Steven J.; Jungbluth, Sean P.; Ettema, Thijs J. G.; Tighe, Scott; Konstantinidis, Konstantinos T.; Liu, Wen-Tso; Baker, Brett J.; Rattei, Thomas; Eisen, Jonathan A.; Hedlund, Brian; McMahon, Katherine D.; Fierer, Noah; Knight, Rob; Finn, Rob; Cochrane, Guy; Karsch-Mizrachi, Ilene; Tyson, Gene W.; Rinke, Christian; Kyrpides, Nikos C.; Schriml, Lynn; Garrity, George M.; Hugenholtz, Philip; Sutton, Granger; Yilmaz, Pelin; Meyer, Folker; Glöckner, Frank O.; Gilbert, Jack A.; Knight, Rob; Finn, Rob; Cochrane, Guy; Karsch-Mizrachi, Ilene; Lapidus, Alla; Meyer, Folker; Yilmaz, Pelin; Parks, Donovan H.; Eren, A. M.; Schriml, Lynn; Banfield, Jillian F.; Hugenholtz, Philip; Woyke, Tanja

    2017-08-08

    We present two standards developed by the Genomic Standards Consortium (GSC) for reporting bacterial and archaeal genome sequences. Both are extensions of the Minimum Information about Any (x) Sequence (MIxS). The standards are the Minimum Information about a Single Amplified Genome (MISAG) and the Minimum Information about a Metagenome-Assembled Genome (MIMAG); they cover, but are not limited to, assembly quality and estimates of genome completeness and contamination. These standards can be used in combination with other GSC checklists, including the Minimum Information about a Genome Sequence (MIGS), Minimum Information about a Metagenomic Sequence (MIMS), and Minimum Information about a Marker Gene Sequence (MIMARKS). Community-wide adoption of MISAG and MIMAG will facilitate more robust comparative genomic analyses of bacterial and archaeal diversity.

  18. 78 FR 7650 - Standard Instrument Approach Procedures, and Takeoff Minimums and Obstacle Departure Procedures...

    Science.gov (United States)

    2013-02-04

    ... amendments may have been issued previously by the FAA in a Flight Data Center (FDC) Notice to Airmen (NOTAM... close and immediate relationship between these SIAPs, Takeoff Minimums and ODPs, and safety in air... Vernon, WA, Skagit Rgnl, Takeoff Minimums and Obstacle DP, Amdt 2 Greybull, WY, South Big Horn County...

  19. Right on Target, or Is it? The Role of Distributional Shape in Variance Targeting

    Directory of Open Access Journals (Sweden)

    Stanislav Anatolyev

    2015-08-01

    Full Text Available Estimation of GARCH models can be simplified by augmenting quasi-maximum likelihood (QML) estimation with variance targeting, which reduces the degree of parameterization and facilitates estimation. We compare the two approaches and investigate, via simulations, how non-normality features of the return distribution affect the quality of estimation of the volatility equation and corresponding value-at-risk predictions. We find that most GARCH coefficients and associated predictions are more precisely estimated when no variance targeting is employed. Bias properties are exacerbated for a heavier-tailed distribution of standardized returns, while the distributional asymmetry has little or moderate impact; these phenomena tend to be more pronounced under variance targeting. Some effects further intensify if one uses ML based on a leptokurtic distribution in place of normal QML. The sample size also has a more favorable effect on estimation precision when no variance targeting is used. Thus, if computational costs are not prohibitive, variance targeting should probably be avoided.
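
    To show what variance targeting amounts to in practice, here is a minimal Gaussian-QML GARCH(1,1) sketch in Python where omega is tied to the sample variance so that only (alpha, beta) are estimated; the simulated heavy-tailed returns and starting values are illustrative choices, not the paper's design.

        import numpy as np
        from scipy.optimize import minimize

        def garch11_vt_negloglik(params, r):
            # Gaussian QML negative log-likelihood under variance targeting
            alpha, beta = params
            if alpha < 0 or beta < 0 or alpha + beta >= 1:
                return np.inf
            omega = r.var() * (1.0 - alpha - beta)   # variance targeting
            s2 = np.empty_like(r)
            s2[0] = r.var()
            for t in range(1, len(r)):
                s2[t] = omega + alpha * r[t - 1] ** 2 + beta * s2[t - 1]
            return 0.5 * np.sum(np.log(s2) + r ** 2 / s2)

        rng = np.random.default_rng(0)
        r = 0.01 * rng.standard_t(df=6, size=2000)   # heavy-tailed toy returns
        fit = minimize(garch11_vt_negloglik, x0=[0.05, 0.90], args=(r,),
                       method="Nelder-Mead")
        print(fit.x)   # estimated (alpha, beta); omega follows from targeting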

  20. A zero-variance-based scheme for variance reduction in Monte Carlo criticality

    Energy Technology Data Exchange (ETDEWEB)

    Christoforou, S.; Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands)

    2006-07-01

    A zero-variance scheme is derived and proven theoretically for criticality cases, and a simplified transport model is used for numerical demonstration. It is shown in practice that by appropriate biasing of the transition and collision kernels, a significant reduction in variance can be achieved. This is done using the adjoint forms of the emission and collision densities, obtained from a deterministic calculation, according to the zero-variance scheme. By using an appropriate algorithm, the figure of merit of the simulation increases by up to a factor of 50, with the possibility of an even larger improvement. In addition, it is shown that the biasing speeds up the convergence of the initial source distribution. (authors)
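
    The zero-variance principle is easiest to see on a toy integral: if samples are drawn from a density proportional to the integrand, every weighted sample equals the answer and the estimator's variance vanishes. The catch, addressed in the work above by deterministic adjoint solutions, is that the ideal density requires the answer itself. A Python sketch under these assumptions:

        import numpy as np

        rng = np.random.default_rng(0)
        f = lambda x: x ** 2          # estimate I = E[f(X)], X ~ Uniform(0,1)

        x = rng.uniform(size=100_000)
        naive = f(x)                  # crude Monte Carlo, variance > 0

        # zero-variance density p*(x) = f(x)/I = 3x^2, sampled by inverse CDF
        xs = rng.uniform(size=100_000) ** (1.0 / 3.0)
        weighted = f(xs) / (3 * xs ** 2)   # equals 1/3 for every sample

        print(naive.mean(), naive.var())
        print(weighted.mean(), weighted.var())   # variance ~ 0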

  1. A zero-variance-based scheme for variance reduction in Monte Carlo criticality

    International Nuclear Information System (INIS)

    Christoforou, S.; Hoogenboom, J. E.

    2006-01-01

    A zero-variance scheme is derived and proven theoretically for criticality cases, and a simplified transport model is used for numerical demonstration. It is shown in practice that by appropriate biasing of the transition and collision kernels, a significant reduction in variance can be achieved. This is done using the adjoint forms of the emission and collision densities, obtained from a deterministic calculation, according to the zero-variance scheme. By using an appropriate algorithm, the figure of merit of the simulation increases by up to a factor of 50, with the possibility of an even larger improvement. In addition, it is shown that the biasing speeds up the convergence of the initial source distribution. (authors)

  2. Bounds on the Higgs mass in the standard model and the minimal supersymmetric standard model

    CERN Document Server

    Quiros, M.

    1995-01-01

    Depending on the Higgs-boson and top-quark masses, M_H and M_t, the effective potential of the Standard Model can develop a non-standard minimum for values of the field much larger than the weak scale. In those cases the standard minimum becomes metastable and the possibility of decay to the non-standard one arises. Comparison of the decay rate to the non-standard minimum at finite (and zero) temperature with the corresponding expansion rate of the Universe allows one to identify the region, in the (M_H, M_t) plane, where the Higgs field is sitting at the standard electroweak minimum. In the Minimal Supersymmetric Standard Model, approximate analytical expressions for the Higgs mass spectrum and couplings are worked out, providing an excellent approximation to the numerical results which include all next-to-leading-log corrections. An appropriate treatment of squark decoupling allows one to consider large values of the stop and/or sbottom mixing parameters and thus fix a reliable upper bound on the mass o...

  3. Analysis of Variance with Summary Statistics in Microsoft® Excel®

    Science.gov (United States)

    Larson, David A.; Hsu, Ko-Cheng

    2010-01-01

    Students regularly are asked to solve Single Factor Analysis of Variance problems given only the sample summary statistics (number of observations per category, category means, and corresponding category standard deviations). Most undergraduate students today use Excel for data analysis of this type. However, Excel, like all other statistical…

  4. Waterberg coal characteristics and SO2 minimum emissions standards in South African power plants.

    Science.gov (United States)

    Makgato, Stanford S; Chirwa, Evans M Nkhalambayausi

    2017-10-01

    Key characteristics of coal samples from the supply stock to the newly commissioned South African National Power Utility's (Eskom's) Medupi Power Station - which receives its supply coal from the Waterberg coalfield in Lephalale (Limpopo Province, South Africa) - were evaluated. Conventional coal characterisation, such as proximate and ultimate analysis as well as determination of sulphur forms in the coal samples, was carried out following ASTM and ISO standards. The coal was classified as medium-sulphur coal, with sulphur content in the range 1.15-1.49 wt.%; pyritic sulphur (≥0.51 wt.%) and organic sulphur (≥0.49 wt.%) accounted for the bulk of the total sulphur in the coal. Maceral analyses showed that vitrinite was the dominant maceral (up to 51.8 vol.%), whereas inertinite, liptinite, reactive semifusinite and visible minerals occurred in proportions of 22.6 vol.%, 2.9 vol.%, 5.3 vol.% and 17.5 vol.%, respectively. Theoretical calculations were developed and used to predict the resultant SO2 emissions from the combustion of the Waterberg coal in a typical power plant. The sulphur content requirements to comply with the minimum emissions standards of 3500 mg/Nm³ and 500 mg/Nm³ were found to be ≤1.37 wt.% and ≤0.20 wt.%, respectively.
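
    The order of magnitude of such a theoretical calculation can be sketched with a simple mass balance: each kg of sulphur burned yields 64/32 = 2 kg of SO2, and dividing by a specific flue-gas volume gives a concentration. The flue-gas volume per kg of coal below is an assumed placeholder, so the output only illustrates the method, not the paper's exact figures.

        MW_RATIO = 64.0 / 32.0   # kg SO2 formed per kg S burned

        def so2_mg_per_nm3(sulphur_wt_pct, flue_gas_nm3_per_kg=10.0):
            # flue_gas_nm3_per_kg is an assumed, coal- and boiler-specific value
            s_per_kg_coal = sulphur_wt_pct / 100.0
            so2_mg = s_per_kg_coal * MW_RATIO * 1e6   # mg SO2 per kg coal
            return so2_mg / flue_gas_nm3_per_kg

        print(so2_mg_per_nm3(1.37))   # ~2700 mg/Nm^3 under these assumptions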

  5. 76 FR 47985 - Standard Instrument Approach Procedures, and Takeoff Minimums and Obstacle Departure Procedures...

    Science.gov (United States)

    2011-08-08

    ... and Takeoff Minimums and ODP copies may be obtained from: 1. FAA Public Inquiry Center (APA-200), FAA... Minimums and ODPs, in addition to their complex nature and the need for a special format make publication...

  6. Long Term Care Minimum Data Set (MDS)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Long-Term Care Minimum Data Set (MDS) is a standardized, primary screening and assessment tool of health status that forms the foundation of the comprehensive...

  7. A general transform for variance reduction in Monte Carlo simulations

    International Nuclear Information System (INIS)

    Becker, T.L.; Larsen, E.W.

    2011-01-01

    This paper describes a general transform to reduce the variance of the Monte Carlo estimate of some desired solution, such as flux or biological dose. This transform implicitly includes many standard variance reduction techniques, including source biasing, collision biasing, the exponential transform for path-length stretching, and weight windows. Rather than optimizing each of these techniques separately or choosing semi-empirical biasing parameters based on the experience of a seasoned Monte Carlo practitioner, this General Transform unites all these variance reduction techniques to achieve one objective: a distribution of Monte Carlo particles that attempts to optimize the desired solution. Specifically, this transform allows Monte Carlo particles to be distributed according to the user's specification by using information obtained from a computationally inexpensive deterministic simulation of the problem. For this reason, we consider the General Transform to be a hybrid Monte Carlo/deterministic method. The numerical results confirm that the General Transform distributes particles according to the user-specified distribution and generally provides reasonable results for shielding applications. (author)

  8. A random variance model for detection of differential gene expression in small microarray experiments.

    Science.gov (United States)

    Wright, George W; Simon, Richard M

    2003-12-12

    Microarray techniques provide a valuable way of characterizing the molecular nature of disease. Unfortunately, expense and limited specimen availability often lead to studies with small sample sizes. This makes accurate estimation of variability difficult, since variance estimates made on a gene-by-gene basis will have few degrees of freedom, and the assumption that all genes share equal variance is unlikely to be true. We propose a model by which the within-gene variances are drawn from an inverse gamma distribution, whose parameters are estimated across all genes. This results in a test statistic that is a minor variation of those used in standard linear models. We demonstrate that the model assumptions are valid on experimental data, and that the model has more power than standard tests to pick up large changes in expression, while not increasing the rate of false positives. This method is incorporated into BRB-ArrayTools version 3.0 (http://linus.nci.nih.gov/BRB-ArrayTools.html). ftp://linus.nci.nih.gov/pub/techreport/RVM_supplement.pdf
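
    The flavor of the resulting test statistic can be conveyed by a generic empirical-Bayes "shrunken variance" t-test: the gene-specific pooled variance is pulled toward a prior value that carries extra degrees of freedom. The shrinkage form below is an assumed, generic parameterization for illustration only; the exact statistic and its inverse gamma fit are given in the paper.

        import numpy as np

        def moderated_t(x1, x2, d0, s0_sq):
            # shrink the pooled variance toward prior s0_sq with d0 prior dof
            n1, n2 = len(x1), len(x2)
            df = n1 + n2 - 2
            ss = ((x1 - x1.mean()) ** 2).sum() + ((x2 - x2.mean()) ** 2).sum()
            s2_shrunk = (ss + d0 * s0_sq) / (df + d0)
            se = np.sqrt(s2_shrunk * (1.0 / n1 + 1.0 / n2))
            return (x1.mean() - x2.mean()) / se   # ~ t with df + d0 dof

        rng = np.random.default_rng(0)
        print(moderated_t(rng.normal(0, 1, 4), rng.normal(1, 1, 4),
                          d0=4, s0_sq=1.0))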

  9. The Impact of Minimum Energy Performance Standards (MEPS) Regulation on Electricity Saving in Malaysia

    Science.gov (United States)

    Fatihah Salleh, Siti; Eqwan Roslan, Mohd; Isa, Aishah Mohd; Faizal Basri Nair, Mohd; Syafiqah Salleh, Siti

    2018-03-01

    One of Malaysia's key strategies to promote efficient energy use in the country is to implement minimum energy performance standards (MEPS) through the Electricity Regulations (Amendment) 2013. Five selected electrical appliances (refrigerator, air conditioner, television, domestic fans and lamp fittings) must comply with the MEPS requirement in order to be sold on the Malaysian market. Manufacturers, importers or distributors are issued a Certificate of Approval (COA) if their products are MEPS-compliant. In 2015, 1,215 COAs were issued, but the number of MEPS products in the market is unknown. This work collects sales data from major manufacturers to estimate the annual sales of MEPS appliances and the cumulative electricity consumption and electricity saving. It was found that most products sold have a 3-star rating and above. By 2015, the total cumulative electricity saving gained from MEPS implementation was 3,645 GWh, with air conditioners being the highest contributor (30%). In the future, it is recommended that more MEPS products and related incentives be introduced to further improve the efficiency of energy use in Malaysia.

  10. Portfolio optimization using median-variance approach

    Science.gov (United States)

    Wan Mohd, Wan Rosanisah; Mohamad, Daud; Mohamed, Zulkifli

    2013-04-01

    Optimization models have been applied in many decision-making problems, particularly in portfolio selection. Since the introduction of Markowitz's theory of portfolio selection, various approaches based on mathematical programming have been introduced, such as mean-variance, mean-absolute deviation, mean-variance-skewness and conditional value-at-risk (CVaR), mainly to maximize return and minimize risk. However, most of these approaches assume that the data are normally distributed, which is not generally true. As an alternative, in this paper, we employ the median-variance approach to improve portfolio optimization. This approach accommodates both normal and non-normal distributions of data. With this representation, we analyze and compare the rate of return and risk of mean-variance and median-variance based portfolios consisting of 30 stocks from Bursa Malaysia. The results of this study show that the median-variance approach is capable of producing a lower risk for each level of return than the mean-variance approach.

  11. Variable selection for confounder control, flexible modeling and Collaborative Targeted Minimum Loss-based Estimation in causal inference

    Science.gov (United States)

    Schnitzer, Mireille E.; Lok, Judith J.; Gruber, Susan

    2015-01-01

    This paper investigates the appropriateness of integrating flexible propensity score modeling (nonparametric or machine learning approaches) into semiparametric models for the estimation of a causal quantity, such as the mean outcome under treatment. We begin with an overview of some of the issues involved in knowledge-based and statistical variable selection in causal inference and the potential pitfalls of automated selection based on the fit of the propensity score. Using a simple example, we directly show the consequences of adjusting for pure causes of the exposure when using inverse probability of treatment weighting (IPTW). Such variables are likely to be selected when using a naive approach to model selection for the propensity score. We describe how the method of Collaborative Targeted minimum loss-based estimation (C-TMLE; van der Laan and Gruber, 2010) capitalizes on the collaborative double robustness property of semiparametric efficient estimators to select covariates for the propensity score based on the error in the conditional outcome model. Finally, we compare several approaches to automated variable selection in low- and high-dimensional settings through a simulation study. From this simulation study, we conclude that using IPTW with flexible prediction for the propensity score can result in inferior estimation, while Targeted minimum loss-based estimation and C-TMLE may benefit from flexible prediction and remain robust to the presence of variables that are highly correlated with treatment. However, in our study, standard influence function-based methods for the variance underestimated the standard errors, resulting in poor coverage under certain data-generating scenarios. PMID:26226129

  12. 7 CFR 35.13 - Minimum quantity.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Minimum quantity. 35.13 Section 35.13 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... part, transport or receive for transportation to any foreign destination, a shipment of 25 packages or...

  13. Efficient Cardinality/Mean-Variance Portfolios

    OpenAIRE

    Brito, R. Pedro; Vicente, Luís Nunes

    2014-01-01

    We propose a novel approach to handle cardinality in portfolio selection, by means of a biobjective cardinality/mean-variance problem, allowing the investor to analyze the efficient tradeoff between return-risk and number of active positions. Recent progress in multiobjective optimization without derivatives allows us to robustly compute (in-sample) the whole cardinality/mean-variance efficient frontier, for a variety of data sets and mean-variance models. Our results s...

  14. 24 CFR 200.925a - Multifamily and care-type minimum property standards.

    Science.gov (United States)

    2010-04-01

    ... COMMISSIONER, DEPARTMENT OF HOUSING AND URBAN DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Minimum Property..., electrical, and elevators. (3) For purposes of this paragraph, a state or local code regulates an area if it...

  15. Genetic heterogeneity of within-family variance of body weight in Atlantic salmon (Salmo salar).

    Science.gov (United States)

    Sonesson, Anna K; Odegård, Jørgen; Rönnegård, Lars

    2013-10-17

    Canalization is defined as the stability of a genotype against minor variations in both environment and genetics. Genetic variation in the degree of canalization causes heterogeneity of within-family variance. The aims of this study are twofold: (1) quantify genetic heterogeneity of (within-family) residual variance in Atlantic salmon and (2) test whether the observed heterogeneity of (within-family) residual variance can be explained by simple scaling effects. Analysis of body weight in Atlantic salmon using a double hierarchical generalized linear model (DHGLM) revealed substantial heterogeneity of within-family variance. The 95% prediction interval for within-family variance ranged from ~0.4 to 1.2 kg², implying that the within-family variance of the most extreme high families is expected to be approximately three times larger than that of the extreme low families. For cross-sectional data, a DHGLM with an animal mean sub-model resulted in severe bias, while a corresponding sire-dam model was appropriate. Heterogeneity of variance was not sensitive to Box-Cox transformations of phenotypes, which implies that heterogeneity of variance exists beyond what would be expected from simple scaling effects. Substantial heterogeneity of within-family variance was found for body weight in Atlantic salmon. A tendency towards higher variance with higher means (scaling effects) was observed, but heterogeneity of within-family variance existed beyond what could be explained by simple scaling effects. For cross-sectional data, using the animal mean sub-model in the DHGLM resulted in biased estimates of variance components, which differed substantially both from a standard linear mean animal model and from a sire-dam DHGLM model. Although genetic differences in canalization were observed, selection for increased canalization is difficult, because there is limited individual information for the variance sub-model, especially when based on cross-sectional data. Furthermore, potential macro

  16. Approximation errors during variance propagation

    International Nuclear Information System (INIS)

    Dinsmore, Stephen

    1986-01-01

    Risk and reliability analyses are often performed by constructing and quantifying large fault trees. The inputs to these models are component failure events whose probabilities of occurrence are best represented as random variables. This paper examines the errors inherent in two approximation techniques used to calculate the top event's variance from the inputs' variances. Two sample fault trees are evaluated, and several three-dimensional plots illustrating the magnitude of the error over a wide range of input means and variances are given.
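
    The first-order (delta-method) approximation that such techniques rest on is easy to compare against Monte Carlo on a two-input OR gate; the input means and variances below are arbitrary illustrative values.

        import numpy as np

        rng = np.random.default_rng(0)

        # top event of an OR gate: f(p1, p2) = 1 - (1 - p1)(1 - p2)
        m = np.array([0.05, 0.10])          # input means
        v = np.array([0.02, 0.03]) ** 2     # input variances

        # first-order variance propagation: var ~= sum((df/dp_i)^2 * v_i)
        grad = np.array([1 - m[1], 1 - m[0]])
        var_first_order = np.sum(grad ** 2 * v)

        # Monte Carlo reference
        p = rng.normal(m, np.sqrt(v), size=(200_000, 2))
        top = 1 - (1 - p[:, 0]) * (1 - p[:, 1])
        print(var_first_order, top.var())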

  17. Minimum Wage Policy and Country’s Technical Efficiency

    OpenAIRE

    Karim, Mohd Zaini Abd; Chan, Sok-Gee; Hassan, Sallahuddin

    2016-01-01

    Recently, the government has decided that Malaysia would introduce a minimum wage policy. However, some quarters argued against the idea of a nationwide minimum wage asserting that it will lead to an increase in the cost of doing business and thus will hurt Malaysian competitiveness. Although standard economic theory unambiguously implies that wage floors have a negative impact on employment, the existing empirical literature is not so clear. Some studies have found the expected negative impa...

  18. 7 CFR 33.10 - Minimum requirements.

    Science.gov (United States)

    2010-01-01

    ... Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... ISSUED UNDER AUTHORITY OF THE EXPORT APPLE ACT Regulations § 33.10 Minimum requirements. No person shall... shipment of apples to any foreign destination unless: (a) Apples grade at least U.S. No. 1 or U.S. No. 1...

  19. 40 CFR 260.32 - Variances to be classified as a boiler.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 25 2010-07-01 2010-07-01 false Variances to be classified as a boiler... be classified as a boiler. In accordance with the standards and criteria in § 260.10 (definition of “boiler”), and the procedures in § 260.33, the Administrator may determine on a case-by-case basis that...

  20. The phenotypic variance gradient - a novel concept.

    Science.gov (United States)

    Pertoldi, Cino; Bundgaard, Jørgen; Loeschcke, Volker; Barker, James Stuart Flinton

    2014-11-01

    Evolutionary ecologists commonly use reaction norms, which show the range of phenotypes produced by a set of genotypes exposed to different environments, to quantify the degree of phenotypic variance and the magnitude of plasticity of morphometric and life-history traits. Significant differences among the values of the slopes of the reaction norms are interpreted as significant differences in phenotypic plasticity, whereas significant differences among phenotypic variances (variance or coefficient of variation) are interpreted as differences in the degree of developmental instability or canalization. We highlight some potential problems with this approach to quantifying phenotypic variance and suggest a novel and more informative way to plot reaction norms: namely "a plot of log (variance) on the y-axis versus log (mean) on the x-axis, with a reference line added". This approach gives an immediate impression of how the degree of phenotypic variance varies across an environmental gradient, taking into account the consequences of the scaling effect of the variance with the mean. The evolutionary implications of the variation in the degree of phenotypic variance, which we call a "phenotypic variance gradient", are discussed together with its potential interactions with variation in the degree of phenotypic plasticity and canalization.
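
    A minimal matplotlib rendering of the proposed plot; the trait values are invented, and the slope-2 reference line (the scaling expected when the coefficient of variation is constant, since variance proportional to mean² gives slope 2 in log-log space) is one reasonable choice of reference, not a prescription from the paper.

        import numpy as np
        import matplotlib.pyplot as plt

        means = np.array([2.0, 3.5, 5.0, 8.0])          # trait mean per environment
        variances = np.array([0.20, 0.70, 1.20, 4.00])  # trait variance per environment

        x, y = np.log(means), np.log(variances)
        plt.scatter(x, y)
        plt.plot(x, y[0] + 2.0 * (x - x[0]), "--")      # slope-2 reference line
        plt.xlabel("log(mean)")
        plt.ylabel("log(variance)")
        plt.show()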

  1. The minimum wage in the Czech enterprises

    Directory of Open Access Journals (Sweden)

    Eva Lajtkepová

    2010-01-01

    Full Text Available Although the statutory minimum wage is not a new category, in the Czech Republic we encounter the definition and regulation of a minimum wage for the first time in the 1990 amendment to Act No. 65/1965 Coll., the Labour Code. The specific amount of the minimum wage and the conditions of its operation were subsequently determined by government regulation in February 1991. Since that time, the value of the minimum wage has been adjusted fifteen times (the last increase was in January 2007). The aim of this article is to present selected results of two surveys on the acceptance of the statutory minimum wage by Czech enterprises. The first survey uses data collected by questionnaire from 83 small and medium-sized enterprises in the South Moravia Region in 2005; the second uses data from 116 enterprises across the entire Czech Republic (2007). The data have been processed by means of standard methods of descriptive statistics and appropriate methods of statistical analysis (Spearman rank correlation coefficient, Kendall coefficient, χ² independence test, Kruskal-Wallis test, and others).

  2. 77 FR 58707 - Minimum Internal Control Standards

    Science.gov (United States)

    2012-09-21

    ..., if any), counted, inventoried, and secured by an authorized agent. (ii) Bingo card inventory records... comprehensive standards for the entire Class II gaming environment. The new sections include, for example: Card games; drop and count; surveillance; and gaming promotions and player tracking. The amendments also...

  3. Evolution of Genetic Variance during Adaptive Radiation.

    Science.gov (United States)

    Walter, Greg M; Aguirre, J David; Blows, Mark W; Ortiz-Barrientos, Daniel

    2018-04-01

    Genetic correlations between traits can concentrate genetic variance into fewer phenotypic dimensions that can bias evolutionary trajectories along the axis of greatest genetic variance and away from optimal phenotypes, constraining the rate of evolution. If genetic correlations limit adaptation, rapid adaptive divergence between multiple contrasting environments may be difficult. However, if natural selection increases the frequency of rare alleles after colonization of new environments, an increase in genetic variance in the direction of selection can accelerate adaptive divergence. Here, we explored adaptive divergence of an Australian native wildflower by examining the alignment between divergence in phenotype mean and divergence in genetic variance among four contrasting ecotypes. We found divergence in mean multivariate phenotype along two major axes represented by different combinations of plant architecture and leaf traits. Ecotypes also showed divergence in the level of genetic variance in individual traits and the multivariate distribution of genetic variance among traits. Divergence in multivariate phenotypic mean aligned with divergence in genetic variance, with much of the divergence in phenotype among ecotypes associated with changes in trait combinations containing substantial levels of genetic variance. Overall, our results suggest that natural selection can alter the distribution of genetic variance underlying phenotypic traits, increasing the amount of genetic variance in the direction of natural selection and potentially facilitating rapid adaptive divergence during an adaptive radiation.

  4. [Hospitals failing minimum volumes in 2004: reasons and consequences].

    Science.gov (United States)

    Geraedts, M; Kühnen, C; Cruppé, W de; Blum, K; Ohmann, C

    2008-02-01

    In 2004 Germany introduced annual minimum volumes nationwide on five surgical procedures: kidney, liver, stem cell transplantation, complex oesophageal, and pancreatic interventions. Hospitals that fail to reach the minimum volumes are no longer allowed to perform the respective procedures unless they raise one of eight legally accepted exceptions. The goal of our study was to investigate how many hospitals fell short of the minimum volumes in 2004, whether and how this was justified, and whether hospitals that failed the requirements experienced any consequences. We analysed data on meeting the minimum volume requirements in 2004 that all German hospitals were obliged to publish as part of their biannual structured quality reports. We performed telephone interviews: a) with all hospitals not achieving the minimum volumes for complex oesophageal, and pancreatic interventions, and b) with the national umbrella organisations of all German sickness funds. In 2004, one quarter of all German acute care hospitals (N=485) performed 23,128 procedures where minimum volumes applied. 197 hospitals (41%) did not meet at least one of the minimum volumes. These hospitals performed N=715 procedures (3.1%) where the minimum volumes were not met. In 43% of these cases the hospitals raised legally accepted exceptions. In 33% of the cases the hospitals argued using reasons that were not legally acknowledged. 69% of those hospitals that failed to achieve the minimum volumes for complex oesophageal and pancreatic interventions did not experience any consequences from the sickness funds. However, one third of those hospitals reported that the sickness funds addressed the issue and partially announced consequences for the future. The sickness funds' umbrella organisations stated that there were only sparse activities related to the minimum volumes and that neither uniform registrations nor uniform proceedings in case of infringements of the standards had been agreed upon. In spite of the

  5. Split-plot fractional designs: Is minimum aberration enough?

    DEFF Research Database (Denmark)

    Kulahci, Murat; Ramirez, Jose; Tobias, Randy

    2006-01-01

    Split-plot experiments are commonly used in industry for product and process improvement. Recent articles on designing split-plot experiments concentrate on minimum aberration as the design criterion. Minimum aberration has been criticized as a design criterion for completely randomized fractional factorial designs, and alternative criteria, such as the maximum number of clear two-factor interactions, have been suggested (Wu and Hamada (2000)). The need for alternatives to minimum aberration is even more acute for split-plot designs. In a standard split-plot design, there are several types of two-factor interactions … for completely randomized designs. Consequently, we provide a modified version of the maximum number of clear two-factor interactions design criterion to be used for split-plot designs.

  6. Confidence Interval Approximation For Treatment Variance In ...

    African Journals Online (AJOL)

    In a random effects model with a single factor, variation is partitioned into two components: residual error variance and treatment variance. While a confidence interval can be constructed for the residual error variance, it is not possible to construct an exact confidence interval for the treatment variance. This is because the treatment …

  7. Variance of indoor radon concentration: Major influencing factors

    Energy Technology Data Exchange (ETDEWEB)

    Yarmoshenko, I., E-mail: ivy@ecko.uran.ru [Institute of Industrial Ecology UB RAS, Sophy Kovalevskoy, 20, Ekaterinburg (Russian Federation); Vasilyev, A.; Malinovsky, G. [Institute of Industrial Ecology UB RAS, Sophy Kovalevskoy, 20, Ekaterinburg (Russian Federation); Bossew, P. [German Federal Office for Radiation Protection (BfS), Berlin (Germany); Žunić, Z.S. [Institute of Nuclear Sciences “Vinca”, University of Belgrade (Serbia); Onischenko, A.; Zhukovsky, M. [Institute of Industrial Ecology UB RAS, Sophy Kovalevskoy, 20, Ekaterinburg (Russian Federation)

    2016-01-15

    Variance of the radon concentration in dwelling atmospheres is analysed with regard to geogenic and anthropogenic influencing factors. The analysis includes a review of 81 national and regional indoor radon surveys with varying sampling patterns, sample sizes and durations of measurement, and a detailed consideration of two regional surveys (Sverdlovsk oblast, Russia, and Niška Banja, Serbia). The analysis of the geometric standard deviation (GSD) revealed that the main factors influencing the dispersion of indoor radon concentration over a territory are the area of the territory, the sample size, the characteristics of the measurement technique, the radon geogenic potential, building construction characteristics and living habits. As shown for Sverdlovsk oblast and the town of Niška Banja, the dispersion as quantified by the GSD is reduced by restricting the analysis to fixed levels of these control factors. Application of the developed approach to the characterization of the radon exposure of the world population is discussed. - Highlights: • Influence of the lithosphere and anthroposphere on the variance of indoor radon is found. • Level-by-level analysis reduces the GSD by a factor of 1.9. • The worldwide GSD is underestimated.

  8. 78 FR 63873 - Minimum Internal Control Standards

    Science.gov (United States)

    2013-10-25

    ...--though less stringent than--the drop and count process for player interfaces and card tables. By removing... count standards for player interfaces and card games, and intends to address the issue comprehensively... surveillance of kiosks and for the collection and count of their contents. The Commission published a proposed...

  9. Handling nonnormality and variance heterogeneity for quantitative sublethal toxicity tests.

    Science.gov (United States)

    Ritz, Christian; Van der Vliet, Leana

    2009-09-01

    The advantages of using regression-based techniques to derive endpoints from environmental toxicity data are clear, and slowly, this superior analytical technique is gaining acceptance. As use of regression-based analysis becomes more widespread, some of the associated nuances and potential problems come into sharper focus. Looking at data sets that cover a broad spectrum of standard test species, we noticed that some model fits to data failed to meet two key assumptions, variance homogeneity and normality, that are necessary for correct statistical analysis via regression-based techniques. Failure to meet these assumptions is often caused by reduced variance at the concentrations showing severe adverse effects. Although commonly used with linear regression analysis, transformation of only the response variable is not appropriate when fitting data using nonlinear regression techniques. Through analysis of sample data sets, including Lemna minor, Eisenia andrei (terrestrial earthworm), and algae, we show that both the so-called Box-Cox transformation and use of the Poisson distribution can help to correct variance heterogeneity and nonnormality and so allow nonlinear regression analysis to be implemented. Both the Box-Cox transformation and the Poisson distribution can be readily implemented into existing protocols for statistical analysis. By correcting for nonnormality and variance heterogeneity, these two statistical tools can be used to encourage the transition to regression-based analysis and the deprecation of less desirable and less flexible analytical techniques, such as linear interpolation.
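
    As a rough illustration of the Box-Cox route described above (a minimal sketch on synthetic data, not the authors' protocol; the log-logistic curve, parameter values and shift constant are assumptions), one can estimate the transformation parameter on the responses and then fit the nonlinear model on the transformed scale:

    ```python
    import numpy as np
    from scipy import stats, optimize

    rng = np.random.default_rng(1)
    conc = np.repeat([0.1, 0.3, 1.0, 3.0, 10.0, 30.0], 5)   # exposure concentrations
    mean = 100.0 / (1.0 + (conc / 1.5) ** 2)                # log-logistic mean response
    resp = rng.normal(mean, 1.0 + 0.10 * mean)              # variance shrinks at high doses

    shift = -resp.min() + 1.0                               # Box-Cox needs positive data
    transformed, lam = stats.boxcox(resp + shift)           # ML estimate of lambda

    def loglogistic(x, top, ec50, slope):
        return top / (1.0 + (x / ec50) ** slope)

    def model_on_transformed_scale(x, top, ec50, slope):
        # apply the same fixed-lambda transform to the model mean
        return stats.boxcox(loglogistic(x, top, ec50, slope) + shift, lam)

    popt, _ = optimize.curve_fit(
        model_on_transformed_scale, conc, transformed,
        p0=(100.0, 1.5, 2.0),
        bounds=([1.0, 0.01, 0.1], [1000.0, 100.0, 10.0]),
    )
    print(f"lambda = {lam:.2f}, estimated EC50 = {popt[1]:.2f}")
    ```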

  10. Analysis of covariance with pre-treatment measurements in randomized trials under the cases that covariances and post-treatment variances differ between groups.

    Science.gov (United States)

    Funatogawa, Takashi; Funatogawa, Ikuko; Shyr, Yu

    2011-05-01

    When primary endpoints of randomized trials are continuous variables, the analysis of covariance (ANCOVA) with pre-treatment measurements as a covariate is often used to compare two treatment groups. In the ANCOVA, equal slopes (coefficients of pre-treatment measurements) and equal residual variances are commonly assumed. However, random allocation guarantees only equal variances of pre-treatment measurements. Unequal covariances and variances of post-treatment measurements indicate unequal slopes and, usually, unequal residual variances. For non-normal data with unequal covariances and variances of post-treatment measurements, it is known that the ANCOVA with equal slopes and equal variances using an ordinary least-squares method provides an asymptotically normal estimator for the treatment effect. However, the asymptotic variance of the estimator differs from the variance estimated from a standard formula, and its properties are unclear. Furthermore, the asymptotic properties of the ANCOVA with equal slopes and unequal variances using a generalized least-squares method are unclear. In this paper, we consider non-normal data with unequal covariances and variances of post-treatment measurements, and examine the asymptotic properties of the ANCOVA with equal slopes using the variance estimated from a standard formula. Analytically, we show that the actual type I error rate, and thus the coverage, of the ANCOVA with equal variances is asymptotically at the nominal level under equal sample sizes. That of the ANCOVA with unequal variances using a generalized least-squares method is asymptotically at the nominal level even under unequal sample sizes. In conclusion, the ANCOVA with equal slopes can be asymptotically justified under random allocation. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
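
    The setting above is easy to mimic numerically. The sketch below (synthetic data; HC3 is one common sandwich estimator, chosen here for illustration rather than taken from the paper) fits the equal-slope ANCOVA by ordinary least squares and contrasts the standard-formula standard error with a heteroskedasticity-robust one:

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(7)
    n = 200
    group = rng.integers(0, 2, n)                    # randomized allocation
    pre = rng.normal(50.0, 10.0, n)                  # pre-treatment variances equal by design
    # Post-treatment values with unequal slopes and unequal residual variances:
    post = 5.0 + (0.8 + 0.3 * group) * pre + 4.0 * group \
           + rng.normal(0.0, 5.0 + 5.0 * group, n)

    X = sm.add_constant(np.column_stack([group, pre]))
    fit = sm.OLS(post, X).fit()                      # equal-slope, equal-variance ANCOVA
    robust = fit.get_robustcov_results(cov_type="HC3")
    print("treatment effect estimate:", round(fit.params[1], 2))
    print("standard-formula SE:", round(fit.bse[1], 3),
          "| robust (sandwich) SE:", round(robust.bse[1], 3))
    ```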

  11. An update on modeling dose-response relationships: Accounting for correlated data structure and heterogeneous error variance in linear and nonlinear mixed models.

    Science.gov (United States)

    Gonçalves, M A D; Bello, N M; Dritz, S S; Tokach, M D; DeRouchey, J M; Woodworth, J C; Goodband, R D

    2016-05-01

    Advanced methods for dose-response assessments are used to estimate the minimum concentrations of a nutrient that maximizes a given outcome of interest, thereby determining nutritional requirements for optimal performance. Contrary to standard modeling assumptions, experimental data often present a design structure that includes correlations between observations (i.e., blocking, nesting, etc.) as well as heterogeneity of error variances; either can mislead inference if disregarded. Our objective is to demonstrate practical implementation of linear and nonlinear mixed models for dose-response relationships accounting for correlated data structure and heterogeneous error variances. To illustrate, we modeled data from a randomized complete block design study to evaluate the standardized ileal digestible (SID) Trp:Lys ratio dose-response on G:F of nursery pigs. A base linear mixed model was fitted to explore the functional form of G:F relative to Trp:Lys ratios and assess model assumptions. Next, we fitted 3 competing dose-response mixed models to G:F, namely a quadratic polynomial (QP) model, a broken-line linear (BLL) ascending model, and a broken-line quadratic (BLQ) ascending model, all of which included heteroskedastic specifications, as dictated by the base model. The GLIMMIX procedure of SAS (version 9.4) was used to fit the base and QP models and the NLMIXED procedure was used to fit the BLL and BLQ models. We further illustrated the use of a grid search of initial parameter values to facilitate convergence and parameter estimation in nonlinear mixed models. Fit between competing dose-response models was compared using a maximum likelihood-based Bayesian information criterion (BIC). The QP, BLL, and BLQ models fitted on G:F of nursery pigs yielded BIC values of 353.7, 343.4, and 345.2, respectively, thus indicating a better fit of the BLL model. The BLL breakpoint estimate of the SID Trp:Lys ratio was 16.5% (95% confidence interval [16.1, 17.0]). Problems with
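
    For readers unfamiliar with broken-line fits, the sketch below (ordinary least squares on synthetic data, not the mixed models with heteroskedastic errors used in the study; the data-generating values are invented) contrasts a quadratic polynomial with a broken-line linear ascending model and compares them by BIC:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(0)
    x = np.repeat(np.linspace(14.5, 24.5, 6), 8)            # SID Trp:Lys ratios, %
    y = np.minimum(0.02 * x, 0.02 * 16.5) + rng.normal(0.0, 0.01, x.size)

    def qp(x, a, b, c):                                     # quadratic polynomial
        return a + b * x + c * x ** 2

    def bll(x, plateau, slope, bp):                         # broken-line linear ascending
        return plateau + slope * np.minimum(x - bp, 0.0)    # flat above the breakpoint

    def bic(resid, k):
        n = resid.size
        return n * np.log(np.mean(resid ** 2)) + k * np.log(n)

    for name, f, p0 in [("QP", qp, (0.0, 0.02, 0.0)),
                        ("BLL", bll, (0.33, 0.02, 16.0))]:
        popt, _ = curve_fit(f, x, y, p0=p0, maxfev=10000)
        print(name, "BIC =", round(bic(y - f(x, *popt), len(popt)), 1))
    print("BLL breakpoint estimate:", round(popt[2], 2))    # popt holds the BLL fit
    ```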

  12. Twenty-Five Years of Applications of the Modified Allan Variance in Telecommunications.

    Science.gov (United States)

    Bregni, Stefano

    2016-04-01

    The Modified Allan Variance (MAVAR) was originally defined in 1981 for measuring frequency stability in precision oscillators. Due to its outstanding accuracy in discriminating power-law noise, it attracted significant interest among telecommunications engineers since the early 1990s, when it was approved as a standard measure in international standards, redressed as Time Variance (TVAR), for specifying the time stability of network synchronization signals and of equipment clocks. A dozen years later, the usage of MAVAR was also introduced for Internet traffic analysis to estimate self-similarity and long-range dependence. Further, in this field, it demonstrated superior accuracy and sensitivity, better than most popular tools already in use. This paper surveys the last 25 years of progress in extending the field of application of the MAVAR in telecommunications. First, the rationale and principles of the MAVAR are briefly summarized. Its adaptation as TVAR for specification of timing stability is presented. The usage of MAVAR/TVAR in telecommunications standards is reviewed. Examples of measurements on real telecommunications equipment clocks are presented, providing an overview on their actual performance in terms of MAVAR. Moreover, applications of MAVAR to network traffic analysis are surveyed. The superior accuracy of MAVAR in estimating long-range dependence is emphasized by highlighting some remarkable practical examples of real network traffic analysis.
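
    The MAVAR itself is straightforward to compute from evenly spaced phase (time-error) samples. A minimal numpy sketch of the textbook estimator, together with its TVAR rescaling, might look as follows (the function names and the white-phase-noise test signal are our own):

    ```python
    import numpy as np

    def mod_allan_var(x, m, tau0=1.0):
        """Mod sigma_y^2(m * tau0) from phase data x with averaging factor m."""
        x = np.asarray(x, dtype=float)
        N = x.size
        if N < 3 * m + 1:
            raise ValueError("need at least 3*m + 1 phase samples")
        d = x[2 * m:] - 2.0 * x[m:-m] + x[:-2 * m]      # second differences at lag m
        s = np.convolve(d, np.ones(m), mode="valid")    # moving sums of m differences
        tau = m * tau0
        return np.mean(s ** 2) / (2.0 * m ** 2 * tau ** 2)

    def time_var(x, m, tau0=1.0):
        """TVAR(tau) = tau^2 / 3 * ModAVAR(tau)."""
        tau = m * tau0
        return tau ** 2 / 3.0 * mod_allan_var(x, m, tau0)

    # White phase noise: ModAVAR should fall off roughly as tau^-3.
    x = np.random.default_rng(3).normal(size=100_000)
    for m in (1, 4, 16, 64):
        print(m, mod_allan_var(x, m))
    ```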

  13. Validation of consistency of Mendelian sampling variance.

    Science.gov (United States)

    Tyrisevä, A-M; Fikse, W F; Mäntysaari, E A; Jakobsen, J; Aamand, G P; Dürr, J; Lidauer, M H

    2018-03-01

    Experiences from international sire evaluation indicate that the multiple-trait across-country evaluation method is sensitive to changes in genetic variance over time. Top bulls from birth year classes with inflated genetic variance will benefit, hampering reliable ranking of bulls. However, none of the methods available today enable countries to validate their national evaluation models for heterogeneity of genetic variance. We describe a new validation method to fill this gap comprising the following steps: estimating within-year genetic variances using Mendelian sampling and its prediction error variance, fitting a weighted linear regression between the estimates and the years under study, identifying possible outliers, and defining a 95% empirical confidence interval for a possible trend in the estimates. We tested the specificity and sensitivity of the proposed validation method with simulated data using a real data structure. Moderate (M) and small (S) size populations were simulated under 3 scenarios: a control with homogeneous variance and 2 scenarios with yearly increases in phenotypic variance of 2 and 10%, respectively. Results showed that the new method was able to estimate genetic variance accurately enough to detect bias in genetic variance. Under the control scenario, the trend in genetic variance was practically zero in setting M. Testing cows with an average birth year class size of more than 43,000 in setting M showed that tolerance values are needed for both the trend and the outlier tests to detect only cases with a practical effect in larger data sets. Regardless of the magnitude (yearly increases in phenotypic variance of 2 or 10%) of the generated trend, it deviated statistically significantly from zero in all data replicates for both cows and bulls in setting M. In setting S with a mean of 27 bulls in a year class, the sampling error and thus the probability of a false-positive result clearly increased. Still, overall estimated genetic
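
    The trend-test step can be sketched in a few lines. Below is our simplification (synthetic yearly variance estimates and prediction error variances, not the authors' data or software): a weighted linear regression of within-year genetic variance estimates on birth year, with a confidence interval for the trend.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(8)
    years = np.arange(2000, 2015)
    pev = np.full(years.size, 0.03 ** 2)          # prediction error variances (assumed)
    est_var = 1.0 + 0.01 * (years - 2000) + rng.normal(0.0, 0.03, years.size)

    X = sm.add_constant(years - years.mean())     # centre the year covariate
    fit = sm.WLS(est_var, X, weights=1.0 / pev).fit()
    slope = fit.params[1]
    low, high = fit.conf_int()[1]
    print(f"trend = {slope:.4f} per year, 95% CI [{low:.4f}, {high:.4f}]")
    ```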

  14. Least-squares variance component estimation

    NARCIS (Netherlands)

    Teunissen, P.J.G.; Amiri-Simkooei, A.R.

    2007-01-01

    Least-squares variance component estimation (LS-VCE) is a simple, flexible and attractive method for the estimation of unknown variance and covariance components. LS-VCE is simple because it is based on the well-known principle of LS; it is flexible because it works with a user-defined weight
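
    A minimal sketch of the least-squares idea behind LS-VCE (heavily simplified: zero-mean observations, known cofactor matrices, and an empirical covariance from repeated samples; the real method works with a functional model and a user-defined weight matrix) fits the covariance as a linear combination of cofactor matrices:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n, reps = 6, 2000
    Q1 = np.eye(n)                                        # white-noise cofactor matrix
    Q2 = np.fromfunction(lambda i, j: 0.9 ** np.abs(i - j), (n, n))  # correlated cofactor
    true = (2.0, 0.5)                                     # variance components to recover
    y = rng.multivariate_normal(np.zeros(n), true[0] * Q1 + true[1] * Q2, size=reps)

    C = y.T @ y / reps                                    # empirical covariance matrix
    A = np.column_stack([Q1.ravel(), Q2.ravel()])         # design matrix in vec space
    sigmas, *_ = np.linalg.lstsq(A, C.ravel(), rcond=None)
    print("estimated variance components:", np.round(sigmas, 3))   # ~ (2.0, 0.5)
    ```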

  15. Minimum Information about a Cardiac Electrophysiology Experiment (MICEE): standardised reporting for model reproducibility, interoperability, and data sharing

    NARCIS (Netherlands)

    Quinn, T. A.; Granite, S.; Allessie, M. A.; Antzelevitch, C.; Bollensdorff, C.; Bub, G.; Burton, R. A. B.; Cerbai, E.; Chen, P. S.; Delmar, M.; DiFrancesco, D.; Earm, Y. E.; Efimov, I. R.; Egger, M.; Entcheva, E.; Fink, M.; Fischmeister, R.; Franz, M. R.; Garny, A.; Giles, W. R.; Hannes, T.; Harding, S. E.; Hunter, P. J.; Iribe, G.; Jalife, J.; Johnson, C. R.; Kass, R. S.; Kodama, I.; Koren, G.; Lord, P.; Markhasin, V. S.; Matsuoka, S.; McCulloch, A. D.; Mirams, G. R.; Morley, G. E.; Nattel, S.; Noble, D.; Olesen, S. P.; Panfilov, A. V.; Trayanova, N. A.; Ravens, U.; Richard, S.; Rosenbaum, D. S.; Rudy, Y.; Sachs, F.; Sachse, F. B.; Saint, D. A.; Schotten, U.; Solovyova, O.; Taggart, P.; Tung, L.; Varró, A.; Volders, P. G.; Wang, K.; Weiss, J. N.; Wettwer, E.; White, E.; Wilders, R.; Winslow, R. L.; Kohl, P.

    2011-01-01

    Cardiac experimental electrophysiology is in need of a well-defined Minimum Information Standard for recording, annotating, and reporting experimental data. As a step towards establishing this, we present a draft standard, called Minimum Information about a Cardiac Electrophysiology Experiment (MICEE).

  16. The"minimum information about an environmental sequence" (MIENS) specification

    Energy Technology Data Exchange (ETDEWEB)

    Yilmaz, P.; Kottmann, R.; Field, D.; Knight, R.; Cole, J.R.; Amaral-Zettler, L.; Gilbert, J.A.; Karsch-Mizrachi, I.; Johnston, A.; Cochrane, G.; Vaughan, R.; Hunter, C.; Park, J.; Morrison, N.; Rocca-Serra, P.; Sterk, P.; Arumugam, M.; Baumgartner, L.; Birren, B.W.; Blaser, M.J.; Bonazzi, V.; Bork, P.; Buttigieg, P. L.; Chain, P.; Costello, E.K.; Huot-Creasy, H.; Dawyndt, P.; DeSantis, T.; Fierer, N.; Fuhrman, J.; Gallery, R.E.; Gibbs, R.A.; Giglio, M.G.; Gil, I. San; Gonzalez, A.; Gordon, J.I.; Guralnick, R.; Hankeln, W.; Highlander, S.; Hugenholtz, P.; Jansson, J.; Kennedy, J.; Knights, D.; Koren, O.; Kuczynski, J.; Kyrpides, N.; Larsen, R.; Lauber, C.L.; Legg, T.; Ley, R.E.; Lozupone, C.A.; Ludwig, W.; Lyons, D.; Maguire, E.; Methe, B.A.; Meyer, F.; Nakieny, S.; Nelson, K.E.; Nemergut, D.; Neufeld, J.D.; Pace, N.R.; Palanisamy, G.; Peplies, J.; Peterson, J.; Petrosino, J.; Proctor, L.; Raes, J.; Ratnasingham, S.; Ravel, J.; Relman, D.A.; Assunta-Sansone, S.; Schriml, L.; Sodergren, E.; Spor, A.; Stombaugh, J.; Tiedje, J.M.; Ward, D.V.; Weinstock, G.M.; Wendel, D.; White, O.; Wikle, A.; Wortman, J.R.; Glockner, F.O.; Bushman, F.D.; Charlson, E.; Gevers, D.; Kelley, S.T.; Neubold, L.K.; Oliver, A.E.; Pruesse, E.; Quast, C.; Schloss, P.D.; Sinha, R.; Whitely, A.

    2010-10-15

    We present the Genomic Standards Consortium's (GSC) 'Minimum Information about an ENvironmental Sequence' (MIENS) standard for describing marker genes. Adoption of MIENS will enhance our ability to analyze natural genetic diversity across the Tree of Life as it is currently being documented by massive DNA sequencing efforts from myriad ecosystems in our ever-changing biosphere.

  17. Robust Sequential Covariance Intersection Fusion Kalman Filtering over Multi-agent Sensor Networks with Measurement Delays and Uncertain Noise Variances

    Institute of Scientific and Technical Information of China (English)

    QI Wen-Juan; ZHANG Peng; DENG Zi-Li

    2014-01-01

    This paper deals with the problem of designing a robust sequential covariance intersection (SCI) fusion Kalman filter for clustering multi-agent sensor network systems with measurement delays and uncertain noise variances. The sensor network is partitioned into clusters by the nearest-neighbor rule. Using the minimax robust estimation principle, based on the worst-case conservative sensor network system with conservative upper bounds on the noise variances, and applying the unbiased linear minimum variance (ULMV) optimal estimation rule, we present a two-layer SCI fusion robust steady-state Kalman filter which reduces the communication and computation burdens, saves energy, and guarantees that the actual filtering error variances have a less conservative upper bound. A Lyapunov equation method for robustness analysis is proposed, by which the robustness of the local and fused Kalman filters is proved. The concept of robust accuracy is presented and the robust accuracy relations of the local and fused robust Kalman filters are proved. It is shown that the robust accuracy of the global SCI fuser is higher than those of the local SCI fusers, and that the robust accuracies of all SCI fusers are higher than that of each local robust Kalman filter. A simulation example for a tracking system verifies the robustness and the robust accuracy relations.
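
    The covariance intersection step at the core of such fusers admits a compact sketch (generic pairwise CI with a trace-minimizing weight, not the paper's sequential multi-cluster filter; the example estimates are invented):

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    def ci_fuse(x1, P1, x2, P2):
        """Covariance intersection of two estimates with unknown cross-covariance."""
        I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
        trace = lambda w: np.trace(np.linalg.inv(w * I1 + (1.0 - w) * I2))
        w = minimize_scalar(trace, bounds=(1e-6, 1.0 - 1e-6), method="bounded").x
        P = np.linalg.inv(w * I1 + (1.0 - w) * I2)
        x = P @ (w * I1 @ x1 + (1.0 - w) * I2 @ x2)
        return x, P

    x1, P1 = np.array([1.0, 0.0]), np.diag([1.0, 4.0])    # invented local estimates
    x2, P2 = np.array([1.2, 0.1]), np.diag([4.0, 1.0])
    x, P = ci_fuse(x1, P1, x2, P2)
    print("fused state:", np.round(x, 3), "| fused trace:", round(np.trace(P), 3))
    ```

    A sequential fuser applies such a pairwise step repeatedly, folding in one local estimate at a time.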

  18. Genetic variants influencing phenotypic variance heterogeneity.

    Science.gov (United States)

    Ek, Weronica E; Rask-Andersen, Mathias; Karlsson, Torgny; Enroth, Stefan; Gyllensten, Ulf; Johansson, Åsa

    2018-03-01

    Most genetic studies identify genetic variants associated with disease risk or with the mean value of a quantitative trait. More rarely, genetic variants associated with variance heterogeneity are considered. In this study, we have identified such variance single-nucleotide polymorphisms (vSNPs) and examined whether these represent biological gene × gene or gene × environment interactions or statistical artifacts caused by multiple linked genetic variants influencing the same phenotype. We have performed a genome-wide study to identify vSNPs associated with variance heterogeneity in DNA methylation levels. Genotype data from over 10 million single-nucleotide polymorphisms (SNPs), and DNA methylation levels at over 430 000 CpG sites, were analyzed in 729 individuals. We identified vSNPs for 7195 CpG sites (P < …), a subset of which were also associated with mean DNA methylation levels. We further showed that variance heterogeneity between genotypes mainly represents additional, often rare, SNPs in linkage disequilibrium (LD) with the respective vSNP and, for some vSNPs, multiple low-frequency variants co-segregating with one of the vSNP alleles. Therefore, our results suggest that variance heterogeneity of DNA methylation mainly represents phenotypic effects by multiple SNPs, rather than biological interactions. Such effects may also be important for interpreting variance heterogeneity of more complex clinical phenotypes.
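
    A common first step in such vQTL scans is a variance-heterogeneity test per SNP. The sketch below (synthetic genotypes and methylation values, not the study's pipeline) uses the Brown-Forsythe, i.e. median-centred Levene, test:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(11)
    genotype = rng.integers(0, 3, 729)                   # 0/1/2 minor-allele copies
    sd = np.array([1.0, 1.0, 1.8])[genotype]             # pure variance effect, no mean effect
    methylation = rng.normal(0.5, 0.05 * sd)

    groups = [methylation[genotype == g] for g in (0, 1, 2)]
    stat, p = stats.levene(*groups, center="median")     # Brown-Forsythe variant
    print(f"Brown-Forsythe W = {stat:.2f}, p = {p:.2e}")
    ```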

  19. MMSE-based algorithm for joint signal detection, channel and noise variance estimation for OFDM systems

    CERN Document Server

    Savaux, Vincent

    2014-01-01

    This book presents an algorithm for the detection of an orthogonal frequency division multiplexing (OFDM) signal in a cognitive radio context by means of a joint and iterative channel and noise estimation technique. Based on the minimum mean square criterion, it performs an accurate detection of a user in a frequency band, by achieving a quasi-optimal channel and noise variance estimation if the signal is present, and by estimating the noise level in the band if the signal is absent. Organized into three chapters, the first chapter provides the background against which the system model is pr
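
    Although the book's algorithm is iterative and jointly detects the signal, the underlying linear MMSE estimate is the textbook one. A minimal sketch (our assumptions: a known exponential channel correlation and known noise variance at pilot positions):

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    n, sigma2 = 16, 0.1                                   # pilots and noise variance (assumed)
    R = np.fromfunction(lambda i, j: 0.8 ** np.abs(i - j), (n, n))  # channel correlation
    h = rng.multivariate_normal(np.zeros(n), R)           # true channel at pilot positions
    y = h + rng.normal(0.0, np.sqrt(sigma2), n)           # least-squares (noisy) estimate

    W = R @ np.linalg.inv(R + sigma2 * np.eye(n))         # Wiener/MMSE smoothing matrix
    h_mmse = W @ y
    print("LS   MSE:", round(float(np.mean((y - h) ** 2)), 4))
    print("MMSE MSE:", round(float(np.mean((h_mmse - h) ** 2)), 4))
    ```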

  20. Recommending a minimum English proficiency standard for entry-level nursing.

    Science.gov (United States)

    O'Neill, Thomas R; Tannenbaum, Richard J; Tiffen, Jennifer

    2005-01-01

    When nurses who are educated internationally immigrate to the United States, they are expected to have English language proficiency in order to function as a competent nurse. The purpose of this research was to provide sufficient information to the National Council of State Boards of Nursing (NCSBN) to make a defensible recommended passing standard for English proficiency. This standard was based upon the Test of English as a Foreign Language (TOEFL). A large panel of nurses and nurse regulators (N = 25) was convened to determine how much English proficiency is required to be minimally competent as an entry-level nurse. Two standard setting procedures, the Simulated Minimally Competent Candidate (SMCC) procedure and the Examinee Paper Selection Method, were combined to produce recommendations for each panelist. In conjunction with collateral information, these recommendations were reviewed by the NCSBN Examination Committee, which decided upon an NCSBN recommended standard, a TOEFL score of 220. Because the adoption of this standard rests entirely with the individual state, NCSBN has little more to do with implementing the standard, other than answering questions and providing documentation about the standard.

  1. Variational Variance Reduction for Monte Carlo Criticality Calculations

    International Nuclear Information System (INIS)

    Densmore, Jeffery D.; Larsen, Edward W.

    2001-01-01

    A new variational variance reduction (VVR) method for Monte Carlo criticality calculations was developed. This method employs (a) a variational functional that is more accurate than the standard direct functional, (b) a representation of the deterministically obtained adjoint flux that is especially accurate for optically thick problems with high scattering ratios, and (c) estimates of the forward flux obtained by Monte Carlo. The VVR method requires no nonanalog Monte Carlo biasing, but it may be used in conjunction with Monte Carlo biasing schemes. Some results are presented from a class of criticality calculations involving alternating arrays of fuel and moderator regions

  2. 76 FR 61723 - Agency Information Collection Activities: Minimum Standards for Driver's Licenses and...

    Science.gov (United States)

    2011-10-05

    ... lawful status will only be allowed to demonstrate U.S. citizenship. The state must retain copies or... copies of these documents must be retained for a minimum of seven years. Digital images of these...

  3. 76 FR 42132 - Agency Information Collection Activities: Minimum Standards for Driver's Licenses and...

    Science.gov (United States)

    2011-07-18

    ... lawful status will only be allowed to demonstrate U.S. citizenship. The state must retain copies or... copies of these documents must be retained for a minimum of seven years. Digital images of these...

  4. Intercentre variance in patient reported outcomes is lower than objective rheumatoid arthritis activity measures

    DEFF Research Database (Denmark)

    Khan, Nasim Ahmed; Spencer, Horace Jack; Nikiphorou, Elena

    2017-01-01

    Objective: To assess intercentre variability in the ACR core set measures, DAS28 based on three variables (DAS28v3) and Routine Assessment of Patient Index Data 3 in a multinational study. Methods: Seven thousand and twenty-three patients were recruited (84 centres; 30 countries) using a standard … built to adjust for the remaining ACR core set measures (for each ACR core set measure or each composite index), socio-demographics and medical characteristics. ANOVA and analysis of covariance models yielded similar results, and ANOVA tables were used to present the variance attributable to recruiting centre. Results: The proportion of variance attributable to recruiting centre was lower for patient-reported outcomes (PROs: pain, HAQ, patient global) compared with objective measures (joint counts, ESR, physician global) in all models. In the full model, variance in PROs attributable to recruiting …

  5. Anesthesiologists' perceptions of minimum acceptable work habits of nurse anesthetists.

    Science.gov (United States)

    Logvinov, Ilana I; Dexter, Franklin; Hindman, Bradley J; Brull, Sorin J

    2017-05-01

    Work habits are non-technical skills that are an important part of job performance. Although non-technical skills are usually evaluated on a relative basis (i.e., "grading on a curve"), the validity of evaluation on an absolute basis (i.e., against a "minimum passing score") needs to be determined. Survey and observational study. None. None. The theme of "work habits" was assessed using a modification of Dannefer et al.'s 6-item scale, with scores ranging from 1 (lowest performance) to 5 (highest performance). E-mail invitations were sent to all consultant and fellow anesthesiologists at Mayo Clinic in Florida, Arizona, and Minnesota. Because work habits expectations can be generational, the survey was designed for adjustment based on all invited (responding or non-responding) anesthesiologists' year of graduation from residency. The overall mean ± standard deviation of the score for anesthesiologists' minimum expectations of nurse anesthetists' work habits was 3.64 ± 0.66 (N=48). Minimum acceptable scores were correlated with the year of graduation from anesthesia residency (linear regression P=0.004). Adjusting for survey non-response using all N=207 anesthesiologists, the mean of the minimum acceptable work habits score adjusted for year of graduation was 3.69 (standard error 0.02). The minimum expectations for nurse anesthetists' work habits were compared with observational data obtained from the University of Iowa. Among 8940 individual nurse anesthetist work habits scores, only 2.6% were below 3.69; mean work habits scores were significantly greater than the Mayo estimate (3.69) of the minimum expectations (all P < 0.001). The work habits of nurse anesthetists within departments should not be compared with an appropriate minimum score (i.e., of 3.69). Instead, work habits scores should be analyzed based on relative reporting among anesthetists. Copyright © 2017 Elsevier Inc. All rights reserved.

  6. Standard Deviation for Small Samples

    Science.gov (United States)

    Joarder, Anwar H.; Latif, Raja M.

    2006-01-01

    Neater representations for variance are given for small sample sizes, especially for 3 and 4. With these representations, variance can be calculated without a calculator if sample sizes are small and observations are integers, and an upper bound for the standard deviation is immediate. Accessible proofs of lower and upper bounds are presented for…

  7. Minimum Distance Estimation on Time Series Analysis With Little Data

    National Research Council Canada - National Science Library

    Tekin, Hakan

    2001-01-01

    … Minimum distance estimation has been shown to outperform standard approaches, including maximum likelihood estimators and least squares, in estimating statistical distribution parameters with very small data sets …

  8. The migratory impact of minimum wage legislation: Puerto Rico, 1970-1987.

    Science.gov (United States)

    Santiago, C E

    1993-01-01

    "This study examines the impact of minimum wage setting on labor migration. A multiple time series framework is applied to monthly data for Puerto Rico from 1970-1987. The results show that net emigration from Puerto Rico to the United States fell in response to significant changes in the manner in which minimum wage policy was conducted, particularly after 1974. The extent of commuter type labor migration between Puerto Rico and the United States is influenced by minimum wage policy, with potentially important consequences for human capital investment and long-term standards of living." excerpt

  9. Speed Variance and Its Influence on Accidents.

    Science.gov (United States)

    Garber, Nicholas J.; Gadirau, Ravi

    A study was conducted to investigate the traffic engineering factors that influence speed variance and to determine to what extent speed variance affects accident rates. Detailed analyses were carried out to relate speed variance with posted speed limit, design speeds, and other traffic variables. The major factor identified was the difference…

  10. MINIMUM VARIANCE BETA ESTIMATION WITH DYNAMIC CONSTRAINTS,

    Science.gov (United States)

    … developed (at AFETR) and is being used to isolate the primary error sources in the beta estimation task. This computer program is additionally used to … determine what success in beta estimation can be achieved with foreseeable instrumentation accuracies. Results are included that illustrate the effects on …

  11. Volatility and variance swaps : A comparison of quantitative models to calculate the fair volatility and variance strike

    OpenAIRE

    Röring, Johan

    2017-01-01

    Volatility is a common risk measure in the field of finance that describes the magnitude of an asset’s up and down movement. From only being a risk measure, volatility has become an asset class of its own and volatility derivatives enable traders to get an isolated exposure to an asset’s volatility. Two kinds of volatility derivatives are volatility swaps and variance swaps. The problem with volatility swaps and variance swaps is that they require estimations of the future variance and volati...

  12. Local variances in biomonitoring

    International Nuclear Information System (INIS)

    Wolterbeek, H.T.

    1999-01-01

    The present study deals with the (larger-scaled) biomonitoring survey and specifically focuses on the sampling site. In most surveys, the sampling site is simply selected or defined as a spot of (geographical) dimensions which is small relative to the dimensions of the total survey area. Implicitly it is assumed that the sampling site is essentially homogeneous with respect to the investigated variation in survey parameters. As such, the sampling site is mostly regarded as 'the basic unit' of the survey. As a logical consequence, the local (sampling site) variance should also be seen as a basic and important characteristic of the survey. During the study, work is carried out to gain more knowledge of the local variance. Multiple sampling is carried out at a specific site (tree bark, mosses, soils), multi-elemental analyses are carried out by NAA, and local variances are investigated by conventional statistics, factor analytical techniques, and bootstrapping. Consequences of the outcomes are discussed in the context of sampling, sample handling and survey quality. (author)

  13. Autonomous estimation of Allan variance coefficients of onboard fiber optic gyro

    International Nuclear Information System (INIS)

    Song Ningfang; Yuan Rui; Jin Jing

    2011-01-01

    Satellite motion included in the gyro output disturbs the estimation of the Allan variance coefficients of an onboard fiber optic gyro. Moreover, as a standard method for noise analysis of fiber optic gyros, the Allan variance requires too much offline computation and data storage to be applied to online estimation. In addition, with the development of deep space exploration, satellites increasingly require autonomy, including autonomous fault diagnosis and reconfiguration. To overcome these barriers and meet the demands of satellite autonomy, we present a new autonomous method for the estimation of the Allan variance coefficients, including the rate ramp, rate random walk, bias instability, angular random walk and quantization noise coefficients. In the method, we calculate differences between the angle increments of the star sensor and the gyro to remove satellite motion from the gyro output, and propose a state-space model using a nonlinear adaptive filter technique for quantities previously obtained from offline data techniques such as the Allan variance method. Simulations show the method correctly estimates the Allan variance coefficients, R = 2.7965×10^-4 °/h^2, K = 1.1714×10^-3 °/h^1.5, B = 1.3185×10^-3 °/h, N = 5.982×10^-4 °/h^0.5 and Q = 5.197×10^-7 °, in real time, and tracks the degradation of gyro performance from initial values, R = 0.651 °/h^2, K = 0.801 °/h^1.5, B = 0.385 °/h, N = 0.0874 °/h^0.5 and Q = 8.085×10^-5 °, to final estimates, R = 9.548 °/h^2, K = 9.524 °/h^1.5, B = 2.234 °/h, N = 0.5594 °/h^0.5 and Q = 5.113×10^-4 °, due to gamma radiation in space. The technique proposed here effectively isolates satellite motion and requires no data storage or support from the ground.

  14. The Minimum Core for Numeracy Audit and Test

    CERN Document Server

    Patmore, Mark

    2008-01-01

    This book supports trainee teachers in the Lifelong Learning Sector in the assessment of their numeracy knowledge. A self-audit section is included to help trainees understand their level of competence and confidence in numeracy and will help them identify any gaps in their knowledge and skills. This is followed by exercises and activities to support and enhance learning. The book covers all the content of the LLUK standards for the minimum core for numeracy. Coverage and assessment of the minimum core have to be embedded in all Certificate and Diploma courses leading to QTLS and ATLS status.

  15. A comparison of 3-D computed tomography versus 2-D radiography measurements of ulnar variance and ulnolunate distance during forearm rotation.

    Science.gov (United States)

    Kawanishi, Y; Moritomo, H; Omori, S; Kataoka, T; Murase, T; Sugamoto, K

    2014-06-01

    Positive ulnar variance is associated with ulnar impaction syndrome, and ulnar variance is reported to increase with pronation. However, radiographic measurement can be affected markedly by the incident angle of the X-ray beam. We performed three-dimensional (3-D) computed tomography measurements of ulnar variance and ulnolunate distance during forearm rotation and compared these with plain radiographic measurements in 15 healthy wrists. From supination to pronation, ulnar variance increased in all cases on the radiographs; mean ulnar variance increased significantly and mean ulnolunate distance decreased significantly. However, on 3-D imaging, ulnar variance decreased in 12 cases on moving into pronation and increased in three cases; neither the mean ulnar variance nor the mean ulnolunate distance changed significantly. Our results suggest that the forearm position in which ulnar variance increases varies among individuals. This may explain why some patients with ulnar impaction syndrome complain of wrist pain exacerbated by forearm supination. It also suggests that standard radiographic assessments of ulnar variance are unreliable. © The Author(s) 2013.

  16. A Note on the Kinks at the Mean Variance Frontier

    OpenAIRE

    Vörös, J.; Kriens, J.; Strijbosch, L.W.G.

    1997-01-01

    In this paper the standard portfolio case with short sales restrictions is analyzed. Dybvig pointed out that if there is a kink at a risky portfolio on the efficient frontier, then the securities in this portfolio have equal expected returns, and the converse of this statement is false. A sufficient condition for the existence of kinks at the efficient frontier is given here, and a new procedure is used to derive the efficient frontier, i.e. the characteristics of the mean variance frontier.

  17. Energy Efficiency: The Implementation of Minimum Energy Performance Standard (MEPS Application on Home Appliances for Residential

    Directory of Open Access Journals (Sweden)

    Rahman K.A

    2016-01-01

    Generally, Minimum Energy Performance Standards (MEPS) have spread across many countries, especially developed ones. However, most consumers do not know about MEPS, and without sufficient knowledge much energy has been wasted. The aim of this study is to review the implementation of MEPS in Asian countries and to compare the electricity consumption of home appliances with and without a star rating. To fulfil the objectives of the study, the equipment had to be chosen correctly and studied properly, and the home appliances had to be selected so that they could be matched and compared correctly. The results were analysed using graphs and tables, which make the comparisons between appliances easier to understand. The results show that home appliances with MEPS are more energy efficient than those without MEPS, which supports MEPS as a means of educating consumers about energy saving.

  18. Dynamic Mean-Variance Asset Allocation

    OpenAIRE

    Basak, Suleyman; Chabakauri, Georgy

    2009-01-01

    Mean-variance criteria remain prevalent in multi-period problems, and yet not much is known about their dynamically optimal policies. We provide a fully analytical characterization of the optimal dynamic mean-variance portfolios within a general incomplete-market economy, and recover a simple structure that also inherits several conventional properties of static models. We also identify a probability measure that incorporates intertemporal hedging demands and facilitates much tractability in ...

  19. The Distance Standard Deviation

    OpenAIRE

    Edelmann, Dominic; Richards, Donald; Vogel, Daniel

    2017-01-01

    The distance standard deviation, which arises in distance correlation analysis of multivariate data, is studied as a measure of spread. New representations for the distance standard deviation are obtained in terms of Gini's mean difference and in terms of the moments of spacings of order statistics. Inequalities for the distance variance are derived, proving that the distance standard deviation is bounded above by the classical standard deviation and by Gini's mean difference. Further, it is ...

  20. Calculation of the minimum critical mass of fissile nuclides

    International Nuclear Information System (INIS)

    Wright, R.Q.; Hopper, Calvin Mitchell

    2008-01-01

    The OB-1 method for the calculation of the minimum critical mass of fissile actinides in metal/water systems was described in a previous paper. A fit to the calculated minimum critical mass data using the extended criticality parameter is the basis of the revised method. The solution density (grams/liter) for the minimum critical mass is also obtained by a fit to calculated values. Input to the calculation consists of the Maxwellian averaged fission and absorption cross sections and the thermal values of nubar. The revised method gives more accurate values than the original method does for both the minimum critical mass and the solution densities. The OB-1 method has been extended to calculate the uncertainties in the minimum critical mass for 12 different fissile nuclides. The uncertainties for the fission and capture cross sections and the estimated nubar uncertainties are used to determine the uncertainties in the minimum critical mass, either in percent or grams. Results have been obtained for U-233, U-235, Pu-236, Pu-239, Pu-241, Am-242m, Cm-243, Cm-245, Cf-249, Cf-251, Cf-253, and Es-254. Eight of these 12 nuclides are included in the ANS-8.15 standard.

  1. The Variance Composition of Firm Growth Rates

    Directory of Open Access Journals (Sweden)

    Luiz Artur Ledur Brito

    2009-04-01

    Firms exhibit a wide variability in growth rates. This can be seen as another manifestation of the fact that firms are different from one another in several respects. This study investigated this variability using the variance components technique previously used to decompose the variance of financial performance. The main source of variation in growth rates, responsible for more than 40% of total variance, corresponds to individual, idiosyncratic firm aspects and not to industry, country, or macroeconomic conditions prevailing in specific years. Firm growth, similar to financial performance, is mostly unique to specific firms and not an industry or country related phenomenon. This finding also justifies using growth as an alternative outcome of superior firm resources and as a complementary dimension of competitive advantage. This also links this research with the resource-based view of strategy. Country was the second source of variation with around 10% of total variance. The analysis was done using the Compustat Global database with 80,320 observations, comprising 13,221 companies in 47 countries, covering the years of 1994 to 2002. It also compared the variance structure of growth to the variance structure of financial performance in the same sample.

  2. Uncertainty and minimum detectable concentrations using relative, absolute and k0-IAEA standardization for the INAA laboratory of the ETRR-2

    International Nuclear Information System (INIS)

    Khalil, M. Y.

    2006-01-01

    The Instrumental Neutron Activation Analysis (INAA) laboratory of the Egypt Second Training and Research Reactor (ETRR-2) is increasingly requested to perform multi-element analysis on large numbers of samples of different origins. The INAA laboratory has to demonstrate competence by conforming to appropriate internationally and nationally accepted standards. The objective of this work is to determine the uncertainty budget and the sensitivity of the INAA laboratory measurements. Concentrations of nine elements (Mn, Na, K, Ca, Co, Cr, Fe, Rb, and Cs) were measured against a certified test sample. Relative, absolute, and k0-IAEA standardization methods were employed and the results compared. The flux was monitored using the cadmium-covered gold method and the multifoil (gold, nickel and zirconium) method. The combined and expanded uncertainties were estimated. The uncertainty of the concentrations ranged between 2 and 21% depending on the standardization method used. The relative method, giving the lowest uncertainty, produced an uncertainty budget between 2 and 11%. The minimum detectable concentration was lowest for Cs, ranging between 0.36 and 0.59 ppb, and highest for K, in the range of 0.32 to 8.64 ppm.

  3. MSEBAG: a dynamic classifier ensemble generation based on `minimum-sufficient ensemble' and bagging

    Science.gov (United States)

    Chen, Lei; Kamel, Mohamed S.

    2016-01-01

    In this paper, we propose a dynamic classifier system, MSEBAG, which is characterised by searching for the 'minimum-sufficient ensemble' and bagging at the ensemble level. It adopts an 'over-generation and selection' strategy and aims to achieve a good bias-variance trade-off. In the training phase, MSEBAG first searches for the 'minimum-sufficient ensemble', which maximises the in-sample fitness with the minimal number of base classifiers. Then, starting from the 'minimum-sufficient ensemble', a backward stepwise algorithm is employed to generate a collection of ensembles. The objective is to create a collection of ensembles with a descending fitness on the data, as well as a descending complexity in the structure. MSEBAG dynamically selects the ensembles from the collection for the decision aggregation. The extended adaptive aggregation (EAA) approach, a bagging-style algorithm performed at the ensemble level, is employed for this task. EAA searches for the competent ensembles using a score function, which takes into consideration both the in-sample fitness and the confidence of the statistical inference, and averages the decisions of the selected ensembles to label the test pattern. The experimental results show that the proposed MSEBAG outperforms the benchmarks on average.
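
    The 'over-generation and selection' idea can be caricatured in a few lines. The sketch below (our reading of the abstract, not the authors' code: a bagged pool of depth-limited trees, pruned backwards while in-sample majority-vote accuracy does not drop) illustrates the search for a small ensemble with undiminished fitness:

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(5)
    X, y = make_classification(n_samples=400, random_state=5)

    pool = []                                            # over-generation by bagging
    for _ in range(25):
        idx = rng.integers(0, len(y), len(y))            # bootstrap sample
        pool.append(DecisionTreeClassifier(max_depth=3).fit(X[idx], y[idx]))

    def fitness(members):                                # in-sample majority-vote accuracy
        votes = np.mean([m.predict(X) for m in members], axis=0) >= 0.5
        return float(np.mean(votes == y))

    ensemble = list(pool)
    while len(ensemble) > 1:                             # backward stepwise pruning
        scores = [fitness(ensemble[:i] + ensemble[i + 1:]) for i in range(len(ensemble))]
        best = int(np.argmax(scores))
        if scores[best] < fitness(ensemble):
            break                                        # any removal would hurt fitness
        ensemble.pop(best)

    print("pool size:", len(pool), "-> minimum-sufficient size:", len(ensemble))
    ```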

  4. Aligning Event Logs to Task-Time Matrix Clinical Pathways in BPMN for Variance Analysis.

    Science.gov (United States)

    Yan, Hui; Van Gorp, Pieter; Kaymak, Uzay; Lu, Xudong; Ji, Lei; Chiau, Choo Chiap; Korsten, Hendrikus H M; Duan, Huilong

    2018-03-01

    Clinical pathways (CPs) are popular healthcare management tools used to standardize care and ensure quality. Analyzing CP compliance levels and variances is known to be useful for training and CP redesign purposes. The flexible semantics of the Business Process Model and Notation (BPMN) language have been shown to be useful for the modeling and analysis of complex protocols. However, in practical cases one may want to exploit the fact that CPs often have the form of task-time matrices. This paper presents a new method that parses complex BPMN models and aligns traces to the models heuristically. A case study on variance analysis is undertaken, using a CP from practice and two large sets of patient data from an electronic medical record (EMR) database. The results demonstrate that automated variance analysis between BPMN task-time models and real-life EMR data is feasible, whereas that was not the case for existing analysis techniques. We also provide meaningful insights for further improvement.

  5. 36 CFR 223.61 - Establishing minimum stumpage rates.

    Science.gov (United States)

    2010-07-01

    ... AGRICULTURE SALE AND DISPOSAL OF NATIONAL FOREST SYSTEM TIMBER Timber Sale Contracts Appraisal and Pricing.... No timber may be sold or cut under timber sale contracts for less than minimum stumpage rates except... amounts of material not meeting utilization standards of the timber sale contract. For any timber sale...

  6. Estimate of Cost-Effective Potential for Minimum Efficiency Performance Standards in 13 Major World Economies Energy Savings, Environmental and Financial Impacts

    Energy Technology Data Exchange (ETDEWEB)

    Letschert, Virginie E. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Bojda, Nicholas [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ke, Jing [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); McNeil, Michael A. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2012-07-01

    This study analyzes the financial impacts on consumers of minimum efficiency performance standards (MEPS) for appliances that could be implemented in 13 major economies around the world. We use the Bottom-Up Energy Analysis System (BUENAS), developed at Lawrence Berkeley National Laboratory (LBNL), to analyze various appliance efficiency target levels to estimate the net present value (NPV) of policies designed to provide maximum energy savings while not penalizing consumers financially. These policies constitute what we call the “cost-effective potential” (CEP) scenario. The CEP scenario is designed to answer the question: How high can we raise the efficiency bar in mandatory programs while still saving consumers money?

  7. Estimating the encounter rate variance in distance sampling

    Science.gov (United States)

    Fewster, R.M.; Buckland, S.T.; Burnham, K.P.; Borchers, D.L.; Jupp, P.E.; Laake, J.L.; Thomas, L.

    2009-01-01

    The dominant source of variance in line transect sampling is usually the encounter rate variance. Systematic survey designs are often used to reduce the true variability among different realizations of the design, but estimating the variance is difficult and estimators typically approximate the variance by treating the design as a simple random sample of lines. We explore the properties of different encounter rate variance estimators under random and systematic designs. We show that a design-based variance estimator improves upon the model-based estimator of Buckland et al. (2001, Introduction to Distance Sampling. Oxford: Oxford University Press, p. 79) when transects are positioned at random. However, if populations exhibit strong spatial trends, both estimators can have substantial positive bias under systematic designs. We show that poststratification is effective in reducing this bias. © 2008, The International Biometric Society.
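
    For concreteness, the standard design-based estimator that treats the K lines as a simple random sample (the 'R2' form in Fewster et al.'s notation, as we read it; the counts and line lengths below are invented) can be written as:

    ```python
    import numpy as np

    def encounter_rate_var(counts, lengths):
        """Var(n/L) treating the K lines as a simple random sample of lines."""
        counts, lengths = np.asarray(counts, float), np.asarray(lengths, float)
        K, L, n = counts.size, lengths.sum(), counts.sum()
        r = n / L                                        # overall encounter rate
        return K / (L ** 2 * (K - 1)) * np.sum(lengths ** 2 * (counts / lengths - r) ** 2)

    counts = np.array([3, 5, 2, 8, 4, 6])                # detections per line (invented)
    lengths = np.array([1.0, 1.2, 0.8, 1.5, 1.0, 1.1])   # line lengths
    print("encounter rate variance:", round(encounter_rate_var(counts, lengths), 4))
    ```

    Roughly speaking, poststratification applies such an estimator within small groups of neighbouring lines and sums the contributions, which is what reduces the bias under systematic designs.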

  8. Towards the ultimate variance-conserving convection scheme

    International Nuclear Information System (INIS)

    Os, J.J.A.M. van; Uittenbogaard, R.E.

    2004-01-01

    In the past various arguments have been used for applying kinetic energy-conserving advection schemes in numerical simulations of incompressible fluid flows. One argument is obeying the programmed dissipation by viscous stresses or by sub-grid stresses in Direct Numerical Simulation and Large Eddy Simulation, see e.g. [Phys. Fluids A 3 (7) (1991) 1766]. Another argument is that, according to e.g. [J. Comput. Phys. 6 (1970) 392; 1 (1966) 119], energy-conserving convection schemes are more stable, i.e. they prohibit a spurious blow-up of the volume-integrated energy in a closed volume without external energy sources. In the above-mentioned references it is stated that nonlinear instability is due to spatial truncation rather than to time truncation, and therefore these papers are mainly concerned with the spatial integration. In this paper we demonstrate that discretized temporal integration of a spatially variance-conserving convection scheme can induce non-energy-conserving solutions. Here the conservation of the variance of a scalar property is taken as a simple model for the conservation of kinetic energy. In addition, the derivation and testing of a variance-conserving scheme allows for a clear definition of kinetic energy-conserving advection schemes for solving the Navier-Stokes equations. Consequently, we first derive and test a strictly variance-conserving space-time discretization for the convection term in the convection-diffusion equation. Our starting point is the variance-conserving spatial discretization of the convection operator presented by Piacsek and Williams [J. Comput. Phys. 6 (1970) 392]. In terms of its conservation properties, our variance-conserving scheme is compared to other spatially variance-conserving schemes as well as to the non-variance-conserving schemes applied in our shallow-water solver, see e.g. [Direct and Large-eddy Simulation Workshop IV, ERCOFTAC Series, Kluwer Academic Publishers, 2001, pp. 409-287 …
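
    The paper's central point, that time discretization can break a spatially variance-conserving scheme, is easy to reproduce. In the toy sketch below (our construction: constant advection velocity on a periodic grid, a Piacsek-Williams-type skew-symmetric central difference, forward Euler in time), the spatial operator conserves the variance exactly while the time-stepped solution drifts:

    ```python
    import numpy as np

    n, u = 64, 1.0
    dx, dt = 1.0 / 64, 0.2 / 64             # CFL number 0.2
    x = np.linspace(0.0, 1.0, n, endpoint=False)
    c = np.exp(-100.0 * (x - 0.5) ** 2)

    def rhs(c):
        # skew-symmetric central difference for -u dc/dx on a periodic grid
        return -u * (np.roll(c, -1) - np.roll(c, 1)) / (2.0 * dx)

    # Spatially, variance is conserved: d/dt sum(c^2) = 2 c . rhs(c) = 0 exactly.
    print("c . rhs(c) =", c @ rhs(c))

    var0 = np.sum(c ** 2)
    for _ in range(500):
        c = c + dt * rhs(c)                 # forward Euler time stepping
    print("relative variance drift:", np.sum(c ** 2) / var0 - 1.0)
    ```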

  9. 29 CFR 520.200 - What is the legal authority for payment of wages lower than the minimum wage required by section...

    Science.gov (United States)

    2010-07-01

    ... the minimum wage required by section 6(a) of the Fair Labor Standards Act? 520.200 Section 520.200... lower than the minimum wage required by section 6(a) of the Fair Labor Standards Act? Section 14(a) of..., for the payment of special minimum wage rates to workers employed as messengers, learners (including...

  10. CONSEQUENCES OF INCREASING THE MINIMUM WAGE IN UKRAINE TWICE

    Directory of Open Access Journals (Sweden)

    Volodymyr Boreiko

    2017-03-01

    In this article, scientists' views on the role of the incomes of the poorest people in the economic development of a country, and the consequences of doubling the minimum wage in Ukraine, are investigated; the dynamics of the Ukrainian minimum wage over a decade are analyzed, and the decline in living standards of the population during this period is shown; measures that should be taken to achieve non-inflationary growth in the incomes of the population are grounded; it is argued that such measures should include unifying the personal income tax with the single social contribution and restoring progressive taxation of the incomes of the working population. Key words: minimum wage, household income, the poorest part of the population, the economy of country, tax system.

  11. Is the minimum enough? Affordability of a nutritious diet for minimum wage earners in Nova Scotia (2002-2012).

    Science.gov (United States)

    Newell, Felicia D; Williams, Patricia L; Watt, Cynthia G

    2014-05-09

    This paper aims to assess the affordability of a nutritious diet for households earning minimum wage in Nova Scotia (NS) from 2002 to 2012 using an economic simulation that includes food costing and secondary data. The cost of the National Nutritious Food Basket (NNFB) was assessed with a stratified, random sample of grocery stores in NS during six time periods: 2002, 2004/2005, 2007, 2008, 2010 and 2012. The NNFB's cost was factored into affordability scenarios for three different household types relying on minimum wage earnings: a household of four; a lone mother with three children; and a lone man. Essential monthly living expenses were deducted from monthly net incomes using methods that were standardized from 2002 to 2012 to determine whether adequate funds remained to purchase a basic nutritious diet across the six time periods. A 79% increase to the minimum wage in NS has resulted in a decrease in the potential deficit faced by each household scenario in the period examined. However, the household of four and the lone mother with three children would still face monthly deficits ($44.89 and $496.77, respectively, in 2012) if they were to purchase a nutritiously sufficient diet. As a social determinant of health, risk of food insecurity is a critical public health issue for low wage earners. While it is essential to increase the minimum wage in the short term, adequately addressing income adequacy in NS and elsewhere requires a shift in thinking from a focus on minimum wage towards more comprehensive policies ensuring an adequate livable income for everyone.

  12. 77 FR 71494 - Standard Instrument Approach Procedures, and Takeoff Minimums and Obstacle Departure Procedures...

    Science.gov (United States)

    2012-12-03

    ... Minimums and ODP copies may be obtained from: 1. FAA Public Inquiry Center (APA-200), FAA Headquarters..., and the need for a special format make their verbatim publication in the Federal Register expensive...

  13. 78 FR 50326 - Standard Instrument Approach Procedures, and Takeoff Minimums and Obstacle Departure Procedures...

    Science.gov (United States)

    2013-08-19

    ... Minimums and ODP copies may be obtained from: 1. FAA Public Inquiry Center (APA-200), FAA Headquarters..., and the need for a special format make their verbatim publication in the Federal Register expensive...

  14. 78 FR 56829 - Standard Instrument Approach Procedures, and Takeoff Minimums and Obstacle Departure Procedures...

    Science.gov (United States)

    2013-09-16

    ... Minimums and ODP copies may be obtained from: 1. FAA Public Inquiry Center (APA-200), FAA Headquarters..., and the need for a special format make their verbatim publication in the Federal Register expensive...

  15. 78 FR 68704 - Standard Instrument Approach Procedures, and Takeoff Minimums and Obstacle Departure Procedures...

    Science.gov (United States)

    2013-11-15

    ... Minimums and ODP copies may be obtained from: 1. FAA Public Inquiry Center (APA-200), FAA Headquarters..., and the need for a special format make their verbatim publication in the Federal Register expensive...

  16. 77 FR 56762 - Standard Instrument Approach Procedures, and Takeoff Minimums and Obstacle Departure Procedures...

    Science.gov (United States)

    2012-09-14

    ... Minimums and ODP copies may be obtained from: 1. FAA Public Inquiry Center (APA-200), FAA Headquarters..., and the need for a special format make their verbatim publication in the Federal Register expensive...

  17. Ant colony method to control variance reduction techniques in the Monte Carlo simulation of clinical electron linear accelerators

    International Nuclear Information System (INIS)

    Garcia-Pareja, S.; Vilches, M.; Lallena, A.M.

    2007-01-01

    The ant colony method is used to control the application of variance reduction techniques to the simulation of clinical electron linear accelerators used in cancer therapy. In particular, splitting and Russian roulette, two standard variance reduction methods, are considered. The approach can be applied to any accelerator in a straightforward way and, in addition, permits investigation of the 'hot' regions of the accelerator, information which is essential for developing a source model for this therapy tool.
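
    For orientation, the sketch below shows how the two standard techniques named here act on particle statistical weights; the importance ratio and survival probability are illustrative assumptions, not the paper's ant-colony control scheme.

```python
import random

# Minimal sketch of splitting (in important regions) and Russian roulette
# (in unimportant ones), the two variance reduction techniques named above.
# The importance ratio is an illustrative assumption.

def split_or_roulette(weight: float, importance_ratio: float):
    """Return a list of (possibly zero) surviving particle weights."""
    if importance_ratio > 1.0:
        # Splitting: clone the particle, dividing its weight among the copies
        # so the expected total weight is conserved.
        n = int(importance_ratio)
        return [weight / n] * n
    # Russian roulette: kill with probability 1 - p, boost survivors' weight.
    p = importance_ratio
    if random.random() < p:
        return [weight / p]
    return []

print(split_or_roulette(1.0, 3.0))   # entering a 'hot' region: 3 clones
print(split_or_roulette(1.0, 0.25))  # unimportant region: survives 25% of the time
```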

  18. 75 FR 76626 - Standard Instrument Approach Procedures, and Takeoff Minimums and Obstacle Departure Procedures...

    Science.gov (United States)

    2010-12-09

    ..., Norman County Ada/Twin Valley, Takeoff Minimums and Obstacle DP, Orig Buffalo, MN, Buffalo Muni, Takeoff... 10 Bowling Green, OH, Wood County, RNAV (GPS) RWY 10, Orig-B Bowling Green, OH, Wood County, RNAV...

  19. 75 FR 22215 - Standard Instrument Approach Procedures, and Takeoff Minimums and Obstacle Departure Procedures...

    Science.gov (United States)

    2010-04-28

    ... and Obstacle DP, Amdt 1 Pauls Valley, OK, Pauls Valley Muni, RNAV (GPS) RWY 17, Orig Madras, OR..., Amdt 1 Honesdale, PA, Cherry Ridge, Takeoff Minimums and Obstacle DP, Amdt 4 Pickens, SC, Pickens...

  20. 77 FR 5694 - Standard Instrument Approach Procedures, and Takeoff Minimums and Obstacle Departure Procedures...

    Science.gov (United States)

    2012-02-06

    ... Minimums and ODP copies may be obtained from: 1. FAA Public Inquiry Center (APA-200), FAA Headquarters..., their complex nature, and the need for a special format make their verbatim publication in the Federal...

  1. 76 FR 47988 - Standard Instrument Approach Procedures, and Takeoff Minimums and Obstacle Departure Procedures...

    Science.gov (United States)

    2011-08-08

    ... Minimums and ODP copies may be obtained from: 1.FAA Public Inquiry Center (APA-200), FAA Headquarters..., their complex nature, and the need for a special format make their verbatim publication in the Federal...

  2. 75 FR 25760 - Standard Instrument Approach Procedures, and Takeoff Minimums and Obstacle Departure Procedures...

    Science.gov (United States)

    2010-05-10

    ..., ME, Wiscasset, Takeoff Minimums and Obstacle DP, Amdt 2 Alpena, MI, Alpena County Rgnl, RNAV (GPS) RWY 19, Orig Alpena, MI, Alpena County Rgnl, VOR RWY 19, Amdt 15 Oscoda, MI, Oscoda-Wurtsmith, RNAV...

  3. 76 FR 23208 - Alternative to Minimum Days Off Requirements

    Science.gov (United States)

    2011-04-26

    ... Language X. Voluntary Consensus Standards XI. Finding of No Significant Environmental Impact XII. Paperwork... the Current Fitness for Duty Requirements On September 3, 2010, the Nuclear Energy Institute (NEI... to the minimum days off requirements considered the collective advantages and disadvantages of having...

  4. Genotypic-specific variance in Caenorhabditis elegans lifetime fecundity.

    Science.gov (United States)

    Diaz, S Anaid; Viney, Mark

    2014-06-01

    Organisms live in heterogeneous environments, so strategies that maximize fitness in such environments will evolve. Variation in traits is important because it is the raw material on which natural selection acts during evolution. Phenotypic variation is usually thought to be due to genetic variation and/or environmentally induced effects. Therefore, genetically identical individuals in a constant environment should have invariant traits. Clearly, genetically identical individuals do differ phenotypically, usually thought to be due to stochastic processes. It is now becoming clear, especially from studies of unicellular species, that phenotypic variance among genetically identical individuals in a constant environment can be genetically controlled and that, therefore, in principle, this can be subject to selection. However, there has been little investigation of these phenomena in multicellular species. Here, we have studied the mean lifetime fecundity (thus a trait likely to be relevant to reproductive success), and variance in lifetime fecundity, in recent wild isolates of the model nematode Caenorhabditis elegans. We found that these genotypes differed in their variance in lifetime fecundity: some had high variance in fecundity, others very low variance. We find that this variance in lifetime fecundity was negatively related to the mean lifetime fecundity of the lines, and that the variance of the lines was positively correlated between environments. We suggest that the variance in lifetime fecundity may be a bet-hedging strategy used by this species.

  5. 78 FR 78714 - Standard Instrument Approach Procedures, and Takeoff Minimums and Obstacle Departure Procedures...

    Science.gov (United States)

    2013-12-27

    ..., individual SIAP and Takeoff Minimums and ODP copies may be obtained from: 1. FAA Public Inquiry Center (APA... number of SIAPs, their complex nature, and the need for a special format make their verbatim publication...

  6. 76 FR 52237 - Standard Instrument Approach Procedures, and Takeoff Minimums and Obstacle Departure Procedures...

    Science.gov (United States)

    2011-08-22

    ..., individual SIAP and Takeoff Minimums and ODP copies may be obtained from: 1. FAA Public Inquiry Center (APA... SIAPs, their complex nature, and the need for a special format make their verbatim publication in the...

  7. 78 FR 64168 - Standard Instrument Approach Procedures, and Takeoff Minimums and Obstacle Departure Procedures...

    Science.gov (United States)

    2013-10-28

    ..., individual SIAP and Takeoff Minimums and ODP copies may be obtained from: 1. FAA Public Inquiry Center (APA... number of SIAPs, their complex nature, and the need for a special format make their verbatim publication...

  8. 78 FR 64170 - Standard Instrument Approach Procedures, and Takeoff Minimums and Obstacle Departure Procedures...

    Science.gov (United States)

    2013-10-28

    ..., individual SIAP and Takeoff Minimums and ODP copies may be obtained from: 1. FAA Public Inquiry Center (APA... number of SIAPs, their complex nature, and the need for a special format make their verbatim publication...

  9. 77 FR 37799 - Standard Instrument Approach Procedures, and Takeoff Minimums and Obstacle Departure Procedures...

    Science.gov (United States)

    2012-06-25

    ..., individual SIAP and Takeoff Minimums and ODP copies may be obtained from: 1. FAA Public Inquiry Center (APA... number of SIAPs, their complex nature, and the need for a special format make their verbatim publication...

  10. 77 FR 51896 - Standard Instrument Approach Procedures, and Takeoff Minimums and Obstacle Departure Procedures...

    Science.gov (United States)

    2012-08-28

    ..., individual SIAP and Takeoff Minimums and ODP copies may be obtained from: 1. FAA Public Inquiry Center (APA... number of SIAPs, their complex nature, and the need for a special format make their verbatim publication...

  11. Discrete and continuous time dynamic mean-variance analysis

    OpenAIRE

    Reiss, Ariane

    1999-01-01

    Contrary to static mean-variance analysis, very few papers have dealt with dynamic mean-variance analysis. Here, the mean-variance efficient self-financing portfolio strategy is derived for n risky assets in discrete and continuous time. In the discrete setting, the resulting portfolio is mean-variance efficient in a dynamic sense. It is shown that the optimal strategy for n risky assets may be dominated if the expected terminal wealth is constrained to exactly attain a certain goal instead o...

  12. Nonlinear Epigenetic Variance: Review and Simulations

    Science.gov (United States)

    Kan, Kees-Jan; Ploeger, Annemie; Raijmakers, Maartje E. J.; Dolan, Conor V.; van Der Maas, Han L. J.

    2010-01-01

    We present a review of empirical evidence that suggests that a substantial portion of phenotypic variance is due to nonlinear (epigenetic) processes during ontogenesis. The role of such processes as a source of phenotypic variance in human behaviour genetic studies is not fully appreciated. In addition to our review, we present simulation studies…

  13. Revision: Variance Inflation in Regression

    Directory of Open Access Journals (Sweden)

    D. R. Jensen

    2013-01-01

    the intercept; and (iv) variance deflation may occur, where ill-conditioned data yield smaller variances than their orthogonal surrogates. Conventional VIFs have all regressors linked, or none, which is often untenable in practice. Beyond these, our models enable the unlinking of regressors that can be unlinked, while preserving dependence among those intrinsically linked. Moreover, known collinearity indices are extended to encompass angles between subspaces of regressors. To reassess ill-conditioned data, we consider case studies ranging from elementary examples to data from the literature.

  14. Adjoint-based global variance reduction approach for reactor analysis problems

    International Nuclear Information System (INIS)

    Zhang, Qiong; Abdel-Khalik, Hany S.

    2011-01-01

    A new variant of a hybrid Monte Carlo-Deterministic approach for simulating particle transport problems is presented and compared to the SCALE FW-CADIS approach. The new approach, denoted by the Subspace approach, optimizes the selection of the weight windows for reactor analysis problems where detailed properties of all fuel assemblies are required everywhere in the reactor core. Like the FW-CADIS approach, the Subspace approach utilizes importance maps obtained from deterministic adjoint models to derive automatic weight-window biasing. In contrast to FW-CADIS, the Subspace approach identifies the correlations between weight window maps to minimize the computational time required for global variance reduction, i.e., when the solution is required everywhere in the phase space. The correlations are employed to reduce the number of maps required to achieve the same level of variance reduction that would be obtained with single-response maps. Numerical experiments, serving as proof of principle, are presented to compare the Subspace and FW-CADIS approaches in terms of the global reduction in standard deviation. (author)
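
    For orientation, a minimal sketch of the general principle behind adjoint-based weight windows (window centers inversely proportional to an adjoint importance map), not the Subspace or FW-CADIS algorithms themselves; the map, normalization and window ratio are all assumed values.

```python
import numpy as np

# Minimal sketch of the shared idea behind adjoint-based weight windows:
# window centers are taken inversely proportional to an adjoint importance
# estimate, so histories drifting toward important regions are split and
# those in unimportant regions are rouletted. All numbers are assumed.

adjoint_importance = np.array([0.2, 1.0, 5.0, 25.0])  # hypothetical cell-wise map

k = 1.0                                  # normalization constant (assumed)
ww_center = k / adjoint_importance       # weight-window centers per cell
ratio = 5.0                              # assumed upper/lower bound ratio
ww_lower = 2.0 * ww_center / (1.0 + ratio)
ww_upper = ratio * ww_lower

for c, lo, hi in zip(ww_center, ww_lower, ww_upper):
    print(f"center={c:7.3f}  window=[{lo:.3f}, {hi:.3f}]")
```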

  15. 77 FR 5693 - Standard Instrument Approach Procedures, and Takeoff Minimums and Obstacle Departure Procedures...

    Science.gov (United States)

    2012-02-06

    ... Takeoff Minimums and ODPs are available online free of charge. Visit http://www.nfdc.faa.gov to register... Houston, TX, Ellington Field, TACAN RWY 35L, Orig Effective 8 MAR 2012 Wilmington, DE, New Castle, ILS OR...

  16. 6 CFR 27.204 - Minimum concentration by security issue.

    Science.gov (United States)

    2010-01-01

    ... Section 27.204 Domestic Security DEPARTMENT OF HOMELAND SECURITY, OFFICE OF THE SECRETARY CHEMICAL FACILITY ANTI-TERRORISM STANDARDS Chemical Facility Security Program § 27.204 Minimum concentration by security issue. (a) Release Chemicals—(1) Release-Toxic Chemicals. If a release-toxic chemical of interest...

  17. Ant colony method to control variance reduction techniques in the Monte Carlo simulation of clinical electron linear accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Garcia-Pareja, S. [Servicio de Radiofisica Hospitalaria, Hospital Regional Universitario 'Carlos Haya', Avda. Carlos Haya, s/n, E-29010 Malaga (Spain)], E-mail: garciapareja@gmail.com; Vilches, M. [Servicio de Fisica y Proteccion Radiologica, Hospital Regional Universitario 'Virgen de las Nieves', Avda. de las Fuerzas Armadas, 2, E-18014 Granada (Spain); Lallena, A.M. [Departamento de Fisica Atomica, Molecular y Nuclear, Universidad de Granada, E-18071 Granada (Spain)

    2007-09-21

    The ant colony method is used to control the application of variance reduction techniques to the simulation of clinical electron linear accelerators used in cancer therapy. In particular, splitting and Russian roulette, two standard variance reduction methods, are considered. The approach can be applied to any accelerator in a straightforward way and, in addition, permits investigation of the 'hot' regions of the accelerator, information which is essential for developing a source model for this therapy tool.

  18. Variance estimation for generalized Cavalieri estimators

    OpenAIRE

    Johanna Ziegel; Eva B. Vedel Jensen; Karl-Anton Dorph-Petersen

    2011-01-01

    The precision of stereological estimators based on systematic sampling is of great practical importance. This paper presents methods of data-based variance estimation for generalized Cavalieri estimators where errors in sampling positions may occur. Variance estimators are derived under perturbed systematic sampling, systematic sampling with cumulative errors and systematic sampling with random dropouts. Copyright 2011, Oxford University Press.

  19. Influence of Family Structure on Variance Decomposition

    DEFF Research Database (Denmark)

    Edwards, Stefan McKinnon; Sarup, Pernille Merete; Sørensen, Peter

    Partitioning genetic variance by sets of randomly sampled genes for complex traits in D. melanogaster and B. taurus has revealed that population structure can affect variance decomposition. In fruit flies, we found that a high likelihood ratio is correlated with a high proportion of explained ge...... capturing pure noise. Therefore it is necessary to use both criteria, a high likelihood ratio in favor of a more complex genetic model and the proportion of genetic variance explained, to identify biologically important gene groups...

  20. Multiperiod Mean-Variance Portfolio Optimization via Market Cloning

    International Nuclear Information System (INIS)

    Ankirchner, Stefan; Dermoune, Azzouz

    2011-01-01

    The problem of finding the mean variance optimal portfolio in a multiperiod model cannot be solved directly by means of dynamic programming. In order to find a solution, we therefore first introduce independent market clones having the same distributional properties as the original market, and we replace the portfolio mean and variance by their empirical counterparts. We then use dynamic programming to derive portfolios maximizing a weighted sum of the empirical mean and variance. By letting the number of market clones converge to infinity, we are able to solve the original mean variance problem.
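
    Schematically, the weighted sum mentioned here is the standard dynamic mean-variance criterion, written in generic notation not taken from the paper (γ > 0 is a risk-aversion weight and W_T^π the terminal wealth under strategy π):

```latex
\max_{\pi}\; \mathbb{E}\left[W_T^{\pi}\right] \;-\; \gamma\,\operatorname{Var}\left[W_T^{\pi}\right]
```

    Because the variance term is not additive across periods, dynamic programming cannot be applied to it directly, which is what the market-cloning construction works around.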

  1. Multiperiod Mean-Variance Portfolio Optimization via Market Cloning

    Energy Technology Data Exchange (ETDEWEB)

    Ankirchner, Stefan, E-mail: ankirchner@hcm.uni-bonn.de [Rheinische Friedrich-Wilhelms-Universitaet Bonn, Institut fuer Angewandte Mathematik, Hausdorff Center for Mathematics (Germany); Dermoune, Azzouz, E-mail: Azzouz.Dermoune@math.univ-lille1.fr [Universite des Sciences et Technologies de Lille, Laboratoire Paul Painleve UMR CNRS 8524 (France)

    2011-08-15

    The problem of finding the mean variance optimal portfolio in a multiperiod model cannot be solved directly by means of dynamic programming. In order to find a solution, we therefore first introduce independent market clones having the same distributional properties as the original market, and we replace the portfolio mean and variance by their empirical counterparts. We then use dynamic programming to derive portfolios maximizing a weighted sum of the empirical mean and variance. By letting the number of market clones converge to infinity, we are able to solve the original mean variance problem.

  2. Autonomous estimation of Allan variance coefficients of onboard fiber optic gyro

    Energy Technology Data Exchange (ETDEWEB)

    Song Ningfang; Yuan Rui; Jin Jing, E-mail: rayleing@139.com [School of Instrumentation Science and Opto-electronics Engineering, Beihang University, Beijing 100191 (China)

    2011-09-15

    Satellite motion included in the gyro output disturbs the estimation of the Allan variance coefficients of an onboard fiber optic gyro. Moreover, as a standard method for noise analysis of fiber optic gyros, the Allan variance requires too much offline computation and data storage to be applied to online estimation. In addition, with the development of deep space exploration, satellites require more autonomy, including autonomous fault diagnosis and reconfiguration. To overcome these barriers and meet the demands of satellite autonomy, we present a new autonomous method for estimating the Allan variance coefficients, including the rate ramp, rate random walk, bias instability, angular random walk and quantization noise coefficients. In the method, we calculate differences between the angle increments of the star sensor and the gyro to remove satellite motion from the gyro output, and propose a state-space model using a nonlinear adaptive filter technique for quantities previously obtained from offline data techniques such as the Allan variance method. Simulations show that the method correctly estimates the Allan variance coefficients, R = 2.7965x10^-4 °/h^2, K = 1.1714x10^-3 °/h^1.5, B = 1.3185x10^-3 °/h, N = 5.982x10^-4 °/h^0.5 and Q = 5.197x10^-7 °, in real time, and tracks the degradation of gyro performance due to gamma radiation in space from initial values, R = 0.651 °/h^2, K = 0.801 °/h^1.5, B = 0.385 °/h, N = 0.0874 °/h^0.5 and Q = 8.085x10^-5 °, to final estimates, R = 9.548 °/h^2, K = 9.524 °/h^1.5, B = 2.234 °/h, N = 0.5594 °/h^0.5 and Q = 5.113x10^-4 °. The technique proposed here effectively isolates satellite motion and requires no data storage or support from the ground.
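
    A minimal sketch of the standard (non-overlapping) Allan variance computation that the paper replaces with an online filter; the sampling interval and the synthetic gyro record are assumptions.

```python
import numpy as np

# Minimal sketch of the non-overlapping Allan variance of a gyro rate series.
# The noise coefficients (N, B, K, ...) are conventionally read off the slopes
# of the log-log Allan deviation curve; tau0 and the data here are assumed.

def allan_variance(rate: np.ndarray, m: int) -> float:
    """Allan variance at averaging time tau = m * tau0 (non-overlapping)."""
    n_clusters = len(rate) // m
    means = rate[: n_clusters * m].reshape(n_clusters, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(means) ** 2)

rng = np.random.default_rng(0)
rate = rng.normal(0.0, 0.01, 100_000)          # hypothetical gyro output, deg/h
cluster_sizes = [2**k for k in range(1, 10)]
avar = [allan_variance(rate, m) for m in cluster_sizes]
print(np.sqrt(avar))                            # Allan deviation vs averaging time
```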

  3. 77 FR 9169 - Standard Instrument Approach Procedures, and Takeoff Minimums and Obstacle Departure Procedures...

    Science.gov (United States)

    2012-02-16

    ... sections and specifies the types of SIAPs and the effective dates of the associated Takeoff Minimums and... Dallas, TX, Collin County Rgnl at McKinney, ILS OR LOC RWY 17, Amdt 3A Dallas, TX, Dallas Love Field, ILS...

  4. 78 FR 28135 - Standard Instrument Approach Procedures, and Takeoff Minimums and Obstacle Departure Procedures...

    Science.gov (United States)

    2013-05-14

    ... the affected CFR sections and specifies the types of SIAPs and the effective dates of the, associated..., Takeoff Minimums and Obstacle DP, Amdt 2 Dallas, TX, Dallas Love Field, ILS OR LOC RWY 31R, ILS RWY 31R...

  5. Simplified propagation of standard uncertainties

    International Nuclear Information System (INIS)

    Shull, A.H.

    1997-01-01

    An essential part of any measurement control program is adequate knowledge of the uncertainties of the measurement system standards. Only with an estimate of the standards' uncertainties can one determine whether a standard is adequate for its intended use and calculate the total uncertainty of the measurement process. Purchased standards usually have estimates of uncertainty on their certificates. However, when standards are prepared and characterized by a laboratory, variance propagation is required to estimate the uncertainty of the standard. Traditional variance propagation typically involves tedious use of partial derivatives, unfriendly software and the availability of statistical expertise. As a result, the uncertainty of prepared standards is often not determined or is determined incorrectly. For situations meeting stated assumptions, easier shortcut methods of estimation are now available which eliminate the need for partial derivatives and require only a spreadsheet or calculator. A system of simplifying the calculations by dividing them into subgroups of absolute and relative uncertainties is utilized. These methods also incorporate the International Organization for Standardization (ISO) concepts for combining systematic and random uncertainties as published in its Guide to the Expression of Uncertainty in Measurement. Details of the simplified methods and examples of their use are included in the paper
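
    A minimal sketch of the kind of shortcut described here: relative components combine in quadrature on a relative scale, absolute components on an absolute scale, and the two subgroups are then combined for the prepared standard. All component values are illustrative assumptions, not from the paper.

```python
import math

# Minimal sketch of shortcut variance propagation by subgroups: relative
# uncertainty components (e.g. balance calibration, purity) and absolute
# components (e.g. blank correction) are each combined in quadrature, then
# merged. All values are illustrative assumptions.

def combine(components):
    """Quadrature (root-sum-square) combination of uncertainty components."""
    return math.sqrt(sum(c**2 for c in components))

value = 10.0                        # hypothetical prepared standard, g of analyte
u_rel = combine([0.0010, 0.0005])   # relative components (dimensionless)
u_abs = combine([0.002, 0.001])     # absolute components, g

u_total = math.sqrt((value * u_rel) ** 2 + u_abs**2)
print(f"combined standard uncertainty: {u_total:.4f} g")
```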

  6. Minimizing the Standard Deviation of Spatially Averaged Surface Cross-Sectional Data from the Dual-Frequency Precipitation Radar

    Science.gov (United States)

    Meneghini, Robert; Kim, Hyokyung

    2016-01-01

    For an airborne or spaceborne radar, the precipitation-induced path attenuation can be estimated from the measurements of the normalized surface cross section, sigma 0, in the presence and absence of precipitation. In one implementation, the mean rain-free estimate and its variability are found from a lookup table (LUT) derived from previously measured data. For the dual-frequency precipitation radar aboard the global precipitation measurement satellite, the nominal table consists of the statistics of the rain-free sigma 0 over a 0.5 deg x 0.5 deg latitude-longitude grid using a three-month set of input data. However, a problem with the LUT is an insufficient number of samples in many cells. An alternative table is constructed by a stepwise procedure that begins with the statistics over a 0.25 deg x 0.25 deg grid. If the number of samples at a cell is too few, the area is expanded, cell by cell, choosing at each step that cell that minimizes the variance of the data. The question arises, however, as to whether the selected region corresponds to the smallest variance. To address this question, a second type of variable-averaging grid is constructed using all possible spatial configurations and computing the variance of the data within each region. Comparisons of the standard deviations for the fixed and variable-averaged grids are given as a function of incidence angle and surface type using a three-month set of data. The advantage of variable spatial averaging is that the average standard deviation can be reduced relative to the fixed grid while satisfying the minimum sample requirement.
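
    A greedy sketch of the stepwise expansion described above: starting from one cell, repeatedly add the candidate cell that minimizes the pooled variance until a minimum sample count is reached. Grid adjacency is simplified to a flat candidate list, and the threshold and data are assumed.

```python
import numpy as np

# Greedy sketch of variance-minimizing region growing: each step pools the
# candidate cell whose inclusion gives the smallest variance of the combined
# sigma-0 samples. Adjacency handling is simplified; data are synthetic.

def grow_region(cells: list, start: int, min_samples: int) -> list:
    region = [start]
    pooled = cells[start]
    while len(pooled) < min_samples:
        candidates = [i for i in range(len(cells)) if i not in region]
        if not candidates:
            break
        best = min(candidates,
                   key=lambda i: np.var(np.concatenate([pooled, cells[i]])))
        region.append(best)
        pooled = np.concatenate([pooled, cells[best]])
    return region

rng = np.random.default_rng(1)
# Hypothetical per-cell sigma-0 samples; one cell has a very different mean.
cells = [rng.normal(mu, 1.0, rng.integers(3, 30)) for mu in [0.0, 0.1, 2.0, 0.05]]
print(grow_region(cells, start=0, min_samples=40))
```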

  7. Compliance with minimum information guidelines in public metabolomics repositories.

    Science.gov (United States)

    Spicer, Rachel A; Salek, Reza; Steinbeck, Christoph

    2017-09-26

    The Metabolomics Standards Initiative (MSI) guidelines were first published in 2007. These guidelines provided reporting standards for all stages of metabolomics analysis: experimental design, biological context, chemical analysis and data processing. Since 2012, a series of public metabolomics databases and repositories, which accept the deposition of metabolomic datasets, have arisen. In this study, the compliance of 399 public data sets, from four major metabolomics data repositories, with the biological context MSI reporting standards was evaluated. None of the reporting standards were complied with in every publicly available study, although adherence rates varied greatly, from 0 to 97%. The plant minimum reporting standards were the most complied with and the microbial and in vitro were the least. Our results indicate the need for reassessment and revision of the existing MSI reporting standards.

  8. The minimum yield in channeling

    International Nuclear Information System (INIS)

    Uguzzoni, A.; Gaertner, K.; Lulli, G.; Andersen, J.U.

    2000-01-01

    A first estimate of the minimum yield was obtained from Lindhard's theory, with the assumption of a statistical equilibrium in the transverse phase-space of channeled particles guided by a continuum axial potential. However, computer simulations have shown that this estimate should be corrected by a fairly large factor, C (approximately equal to 2.5), called the Barrett factor. We have shown earlier that the concept of a statistical equilibrium can be applied to understand this result, with the introduction of a constraint in phase-space due to planar channeling of axially channeled particles. Here we present an extended test of these ideas on the basis of computer simulation of the trajectories of 2 MeV α particles in Si. In particular, the gradual trend towards a full statistical equilibrium is studied. We also discuss the introduction of this modification of standard channeling theory into descriptions of the multiple scattering of channeled particles (dechanneling) by a master equation and show that the calculated minimum yields are in very good agreement with the results of a full computer simulation

  9. Integrating Variances into an Analytical Database

    Science.gov (United States)

    Sanchez, Carlos

    2010-01-01

    For this project, I enrolled in numerous SATERN courses that taught the basics of database programming. These include: Basic Access 2007 Forms, Introduction to Database Systems, Overview of Database Design, and others. My main job was to create an analytical database that can handle many stored forms and make it easy to interpret and organize. Additionally, I helped improve an existing database and populate it with information. These databases were designed to be used with data from Safety Variances and DCR forms. The research consisted of analyzing the database and comparing the data to find out which entries were repeated the most. If an entry happened to be repeated several times in the database, that would mean that the rule or requirement targeted by that variance has been bypassed many times already, and so the requirement may not really be needed, but rather should be changed to allow the variance's conditions permanently. This project was not restricted to the design and development of the database system; it also involved exporting the data from the database to a different format (e.g. Excel or Word) so it could be analyzed in a simpler fashion. Thanks to the change in format, the data was organized in a spreadsheet that made it possible to sort the data by categories or types and helped speed up searches. Once my work with the database was done, the records of variances could be arranged so that they were displayed in numerical order, or one could search for a specific document targeted by the variances and restrict the search to only include variances that modified a specific requirement. A great part that contributed to my learning was SATERN, NASA's resource for education. Thanks to the SATERN online courses I took over the summer, I was able to learn many new things about computers and databases and also go more in depth into topics I already knew about.

  10. Measuring kinetics of complex single ion channel data using mean-variance histograms.

    Science.gov (United States)

    Patlak, J B

    1993-07-01

    The measurement of single ion channel kinetics is difficult when those channels exhibit subconductance events. When the kinetics are fast, and when the current magnitudes are small, as is the case for Na+, Ca2+, and some K+ channels, these difficulties can lead to serious errors in the estimation of channel kinetics. I present here a method, based on the construction and analysis of mean-variance histograms, that can overcome these problems. A mean-variance histogram is constructed by calculating the mean current and the current variance within a brief "window" (a set of N consecutive data samples) superimposed on the digitized raw channel data. Systematic movement of this window over the data produces large numbers of mean-variance pairs which can be assembled into a two-dimensional histogram. Defined current levels (open, closed, or sublevel) appear in such plots as low variance regions. The total number of events in such low variance regions is estimated by curve fitting and plotted as a function of window width. This function decreases with the same time constants as the original dwell time probability distribution for each of the regions. The method can therefore be used: 1) to present a qualitative summary of the single channel data from which the signal-to-noise ratio, open channel noise, steadiness of the baseline, and number of conductance levels can be quickly determined; 2) to quantify the dwell time distribution in each of the levels exhibited. In this paper I present the analysis of a Na+ channel recording that had a number of complexities. The signal-to-noise ratio was only about 8 for the main open state, open channel noise, and fast flickers to other states were present, as were a substantial number of subconductance states. "Standard" half-amplitude threshold analysis of these data produce open and closed time histograms that were well fitted by the sum of two exponentials, but with apparently erroneous time constants, whereas the mean-variance
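
    A minimal sketch of the histogram construction described above, using a synthetic three-level trace as a stand-in for digitized channel data (the levels, noise and window width are assumptions):

```python
import numpy as np

# Minimal sketch of a mean-variance histogram: slide a window of N samples
# along the current trace, compute the mean and variance within each window,
# and accumulate the pairs in a 2-D histogram. Defined conductance levels
# appear as low-variance clusters. The synthetic trace is an assumption.

def mean_variance_pairs(trace: np.ndarray, window: int):
    """Windowed means and variances over a 1-D current trace."""
    kernel = np.ones(window) / window
    means = np.convolve(trace, kernel, mode="valid")
    sq_means = np.convolve(trace**2, kernel, mode="valid")
    return means, sq_means - means**2

rng = np.random.default_rng(2)
levels = rng.choice([0.0, 0.5, 1.0], size=5000, p=[0.5, 0.2, 0.3])  # closed/sub/open
trace = levels + rng.normal(0.0, 0.05, size=5000)                   # added noise
m, v = mean_variance_pairs(trace, window=10)
hist, m_edges, v_edges = np.histogram2d(m, v, bins=50)
print(hist.shape)
```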

  11. Regional sensitivity analysis using revised mean and variance ratio functions

    International Nuclear Information System (INIS)

    Wei, Pengfei; Lu, Zhenzhou; Ruan, Wenbin; Song, Jingwen

    2014-01-01

    The variance ratio function, derived from the contribution to sample variance (CSV) plot, is a regional sensitivity index for studying how much the output deviates from the original mean of the model output when the distribution range of one input is reduced, and for measuring the contribution of different distribution ranges of each input to the variance of the model output. In this paper, the revised mean and variance ratio functions are developed for quantifying the actual change of the model output mean and variance, respectively, when one reduces the range of one input. The connection between the revised variance ratio function and the original one is derived and discussed. It is shown that, compared with the classical variance ratio function, the revised one is more suitable for evaluating the model output variance due to reduced ranges of model inputs. A Monte Carlo procedure, which needs only a single set of samples, is developed for efficiently computing the revised mean and variance ratio functions. The revised mean and variance ratio functions are compared with the classical ones by using the Ishigami function. Finally, they are applied to a planar 10-bar structure
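
    A minimal sketch of the single-sample-set Monte Carlo idea: reuse one set of samples and mask it to estimate the conditional mean and variance of the output under a reduced input range, relative to the unconditional values. The model and reduction factor are assumptions, not the Ishigami test case of the paper.

```python
import numpy as np

# Minimal sketch of estimating mean and variance ratio function values from
# a single Monte Carlo sample set by masking. Model and reduction assumed.

rng = np.random.default_rng(3)
x1, x2 = rng.uniform(-1, 1, 100_000), rng.uniform(-1, 1, 100_000)
y = x1**2 + 0.5 * x2 + 1.0               # hypothetical model

full_mean, full_var = y.mean(), y.var()
mask = np.abs(x1) < 0.5                  # reduce x1's range to half its width
mean_ratio = y[mask].mean() / full_mean  # change of the output mean
var_ratio = y[mask].var() / full_var     # change of the output variance
print(f"mean ratio: {mean_ratio:.3f}, variance ratio: {var_ratio:.3f}")
```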

  12. Wavelet-based multiscale analysis of minimum toe clearance variability in the young and elderly during walking.

    Science.gov (United States)

    Khandoker, Ahsan H; Karmakar, Chandan K; Begg, Rezaul K; Palaniswami, Marimuthu

    2007-01-01

    As humans age or are influenced by pathology of the neuromuscular system, gait patterns are known to adjust, accommodating for reduced function in the balance control system. The aim of this study was to investigate the effectiveness of a wavelet-based multiscale analysis of a gait variable [minimum toe clearance (MTC)] in deriving indexes for understanding age-related declines in gait performance and screening of balance impairments in the elderly. MTC during walking on a treadmill for 30 healthy young, 27 healthy elderly and 10 falls-risk elderly subjects with a history of tripping falls was analyzed. The MTC signal from each subject was decomposed into eight detailed signals at different wavelet scales by using the discrete wavelet transform. The variances of the detailed signals at scales 8 to 1 were calculated. The multiscale exponent (beta) was then estimated from the slope of the variance progression at successive scales. The variance at scale 5 was significantly (ppathological conditions. Early detection of gait pattern changes due to ageing and balance impairments using wavelet-based multiscale analysis might provide the opportunity to initiate preemptive measures to avoid injurious falls.
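
    A minimal sketch of the multiscale variance computation using PyWavelets; the wavelet choice, decomposition depth and the synthetic stand-in for an MTC series are assumptions.

```python
import numpy as np
import pywt  # PyWavelets

# Minimal sketch of wavelet-based multiscale variance analysis: decompose a
# gait-like series into detail signals at successive scales, compute the
# variance at each scale, and estimate the multiscale exponent from the slope
# of the variance progression. Wavelet and synthetic signal are assumptions.

rng = np.random.default_rng(4)
signal = np.cumsum(rng.normal(0, 1, 4096))             # hypothetical MTC-like series

coeffs = pywt.wavedec(signal, "db4", level=8)          # [cA8, cD8, ..., cD1]
detail_vars = [np.var(d) for d in coeffs[1:]][::-1]    # variances at scales 1..8

scales = np.arange(1, 9)
beta, _ = np.polyfit(scales, np.log2(detail_vars), 1)  # multiscale exponent
print(f"variance at scale 5: {detail_vars[4]:.3f}, beta: {beta:.3f}")
```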

  13. Are There Long-Run Effects of the Minimum Wage?

    Science.gov (United States)

    Sorkin, Isaac

    2015-04-01

    An empirical consensus suggests that there are small employment effects of minimum wage increases. This paper argues that these are short-run elasticities. Long-run elasticities, which may differ from short-run elasticities, are policy relevant. This paper develops a dynamic industry equilibrium model of labor demand. The model makes two points. First, long-run regressions have been misinterpreted because even if the short- and long-run employment elasticities differ, standard methods would not detect a difference using US variation. Second, the model offers a reconciliation of the small estimated short-run employment effects with the commonly found pass-through of minimum wage increases to product prices.

  14. The genotype-environment interaction variance in rice-seed protein determination

    International Nuclear Information System (INIS)

    Ismachin, M.

    1976-01-01

    Many environmental factors influence the protein content of cereal seed. This fact creates difficulties in breeding for protein. Yield is another trait influenced by many environmental factors. The length of time required by the plant to reach maturity is also affected by environmental factors, even though their effect is less decisive. In this investigation, the genotypic variance and the genotype-environment interaction variance, which contribute to the total or phenotypic variance, were analysed, with the purpose of giving the breeder an idea of how selection should be made. It was found that the genotype-environment interaction variance contributes more than the genotypic variance to the total variance of seed protein content and of yield. In the analysis of the time required to reach maturity, it was found that the genotypic variance is larger than the genotype-environment interaction variance. It is therefore clear why selection for time to maturity is much easier than selection for protein or yield. Protein selected in one location may differ from that selected in other locations. (author)
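
    In generic quantitative-genetics notation (a textbook identity, not the paper's exact model), the decomposition analysed here is

```latex
\sigma^2_{P} \;=\; \sigma^2_{G} \;+\; \sigma^2_{GE} \;+\; \sigma^2_{e}
```

    Selection is easier for traits where the genotypic variance σ²_G dominates the phenotypic variance σ²_P, as found here for time to maturity, and harder where the interaction term σ²_GE dominates, as for protein content and yield.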

  15. Setting a minimum age for juvenile justice jurisdiction in California.

    Science.gov (United States)

    Barnert, Elizabeth S.; Abrams, Laura S.; Maxson, Cheryl; Gase, Lauren; Soung, Patricia; Carroll, Paul; Bath, Eraka

    2017-03-13

    Purpose Despite the existence of minimum age laws for juvenile justice jurisdiction in 18 US states, California has no explicit law that protects children (i.e. youth less than 12 years old) from being processed in the juvenile justice system. In the absence of a minimum age law, California lags behind other states and international practice and standards. The paper aims to discuss these issues. Design/methodology/approach In this policy brief, academics across the University of California campuses examine current evidence, theory, and policy related to the minimum age of juvenile justice jurisdiction. Findings Existing evidence suggests that children lack the cognitive maturity to comprehend or benefit from formal juvenile justice processing, and diverting children from the system altogether is likely to be more beneficial for the child and for public safety. Research limitations/implications Based on current evidence and theory, the authors argue that minimum age legislation that protects children from contact with the juvenile justice system and treats them as children in need of services and support, rather than as delinquents or criminals, is an important policy goal for California and for other national and international jurisdictions lacking a minimum age law. Originality/value California has no law specifying a minimum age for juvenile justice jurisdiction, meaning that young children of any age can be processed in the juvenile justice system. This policy brief provides a rationale for a minimum age law in California and other states and jurisdictions without one.

  16. Minimum information about a single amplified genome (MISAG) and a metagenome-assembled genome (MIMAG) of bacteria and archaea

    Energy Technology Data Exchange (ETDEWEB)

    Bowers, Robert M.; Kyrpides, Nikos C.; Stepanauskas, Ramunas; Harmon-Smith, Miranda; Doud, Devin; Reddy, T. B. K.; Schulz, Frederik; Jarett, Jessica; Rivers, Adam R.; Eloe-Fadrosh, Emiley A.; Tringe, Susannah G.; Ivanova, Natalia N.; Copeland, Alex; Clum, Alicia; Becraft, Eric D.; Malmstrom, Rex R.; Birren, Bruce; Podar, Mircea; Bork, Peer; Weinstock, George M.; Garrity, George M.; Dodsworth, Jeremy A.; Yooseph, Shibu; Sutton, Granger; Glöckner, Frank O.; Gilbert, Jack A.; Nelson, William C.; Hallam, Steven J.; Jungbluth, Sean P.; Ettema, Thijs J. G.; Tighe, Scott; Konstantinidis, Konstantinos T.; Liu, Wen-Tso; Baker, Brett J.; Rattei, Thomas; Eisen, Jonathan A.; Hedlund, Brian; McMahon, Katherine D.; Fierer, Noah; Knight, Rob; Finn, Rob; Cochrane, Guy; Karsch-Mizrachi, Ilene; Tyson, Gene W.; Rinke, Christian; Kyrpides, Nikos C.; Schriml, Lynn; Garrity, George M.; Hugenholtz, Philip; Sutton, Granger; Yilmaz, Pelin; Meyer, Folker; Glöckner, Frank O.; Gilbert, Jack A.; Knight, Rob; Finn, Rob; Cochrane, Guy; Karsch-Mizrachi, Ilene; Lapidus, Alla; Meyer, Folker; Yilmaz, Pelin; Parks, Donovan H.; Eren, A. M.; Schriml, Lynn; Banfield, Jillian F.; Hugenholtz, Philip; Woyke, Tanja

    2017-08-08

    The number of genomes from uncultivated microbes will soon surpass the number of isolate genomes in public databases (Hugenholtz, Skarshewski, & Parks, 2016). Technological advancements in high-throughput sequencing and assembly, including single-cell genomics and the computational extraction of genomes from metagenomes (GFMs), are largely responsible. Here we propose community standards for reporting the Minimum Information about a Single-Cell Genome (MIxS-SCG) and Minimum Information about Genomes extracted From Metagenomes (MIxS-GFM) specific for Bacteria and Archaea. The standards have been developed in the context of the International Genomics Standards Consortium (GSC) community (Field et al., 2014) and can be viewed as a supplement to other GSC checklists including the Minimum Information about a Genome Sequence (MIGS), Minimum information about a Metagenomic Sequence(s) (MIMS) (Field et al., 2008) and Minimum Information about a Marker Gene Sequence (MIMARKS) (P. Yilmaz et al., 2011). Community-wide acceptance of MIxS-SCG and MIxS-GFM for Bacteria and Archaea will enable broad comparative analyses of genomes from the majority of taxa that remain uncultivated, improving our understanding of microbial function, ecology, and evolution.

  17. Linear Estimation of Standard Deviation of Logistic Distribution ...

    African Journals Online (AJOL)

    The paper presents a theoretical method based on order statistics and a FORTRAN program for computing the variance and relative efficiencies of the standard deviation of the logistic population with respect to the Cramer-Rao lower variance bound and the best linear unbiased estimators (BLUEs) when the mean is ...

  18. Estimation of measurement variances

    International Nuclear Information System (INIS)

    Jaech, J.L.

    1984-01-01

    The estimation of measurement error parameters in safeguards systems is discussed. Both systematic and random errors are considered. A simple analysis of variances to characterize the measurement error structure with biases varying over time is presented

  19. 77 FR 50014 - Standard Instrument Approach Procedures, and Takeoff Minimums and Obstacle Departure Procedures...

    Science.gov (United States)

    2012-08-20

    ... DEPARTMENT OF TRANSPORTATION Federal Aviation Administration 14 CFR Part 97 [Docket No. 30856... advantages of incorporation by reference are realized and publication of the complete description of each.... Intl. 20-Sep-12 TX Houston Sugar Land Rgnl.... 2/8058 7/19/12 TAKEOFF MINIMUMS AND (OBSTACLE) DP, Amdt...

  20. 29 CFR 1905.5 - Effect of variances.

    Science.gov (United States)

    2010-07-01

    ...-STEIGER OCCUPATIONAL SAFETY AND HEALTH ACT OF 1970 General § 1905.5 Effect of variances. All variances... Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR... concerning a proposed penalty or period of abatement is pending before the Occupational Safety and Health...

  1. Realized range-based estimation of integrated variance

    DEFF Research Database (Denmark)

    Christensen, Kim; Podolskij, Mark

    2007-01-01

    We provide a set of probabilistic laws for estimating the quadratic variation of continuous semimartingales with the realized range-based variance, a statistic that replaces every squared return of the realized variance with a normalized squared range. If the entire sample path of the process is a...
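
    Schematically, and as the statistic is commonly written rather than as the paper's exact statement, each squared return is replaced by the squared range of the log-price over the sampling interval, normalized by the second moment of the range of a standard Brownian motion:

```latex
RRV \;=\; \frac{1}{\lambda_2}\sum_{i=1}^{n} s_i^{2},
\qquad
s_i \;=\; \max_{t_{i-1}\le s,\,t\le t_i}\bigl(\ln p_t - \ln p_s\bigr),
\qquad
\lambda_2 \;=\; \mathbb{E}\bigl[r^2\bigr] \;=\; 4\ln 2,
```

    where r is the range of a standard Brownian motion over a unit interval.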

  2. Variance Function Partially Linear Single-Index Models.

    Science.gov (United States)

    Lian, Heng; Liang, Hua; Carroll, Raymond J

    2015-01-01

    We consider heteroscedastic regression models where the mean function is a partially linear single index model and the variance function depends upon a generalized partially linear single index model. We do not insist that the variance function depend only upon the mean function, as happens in the classical generalized partially linear single index model. We develop efficient and practical estimation methods for the variance function and for the mean function. Asymptotic theory for the parametric and nonparametric parts of the model is developed. Simulations illustrate the results. An empirical example involving ozone levels is used to further illustrate the results, and is shown to be a case where the variance function does not depend upon the mean function.

  3. Sampling Variances and Covariances of Parameter Estimates in Item Response Theory.

    Science.gov (United States)

    1982-08-01

    substituting (15) into (16) and solving for k and K, where the coefficients are means for m and r items, respectively. To find the variance... C5, and C12 were treated as known. We find that the standard errors of B1 to B5 are increased drastically by ignorance of C1 to C5...

  4. 76 FR 25232 - Standard Instrument Approach Procedures, and Takeoff Minimums and Obstacle Departure Procedures...

    Science.gov (United States)

    2011-05-04

    ... a Flight Data Center (FDC) Notice to Airmen (NOTAM) as an emergency action of immediate flight... existing or anticipated at the affected airports. Because of the close and immediate relationship between... Anchorage, AK, Merrill Field, Takeoff Minimums and Obstacle DP, Amdt 1 Big Lake, AK, Big Lake, RNAV (GPS) RWY...

  5. 78 FR 59808 - Standard Instrument Approach Procedures, and Takeoff Minimums and Obstacle Departure Procedures...

    Science.gov (United States)

    2013-09-30

    ... SIAPs, Takeoff Minimums or ODPs, but instead refer to their depiction on charts printed by publishers of aeronautical materials. The advantages of incorporation by reference are realized and publication of the... Administration (NARA). For information on the availability of this material at NARA, call 202-741-6030, or go to...

  6. 77 FR 71495 - Standard Instrument Approach Procedures, and Takeoff Minimums and Obstacle Departure Procedures...

    Science.gov (United States)

    2012-12-03

    ... SIAPs, Takeoff Minimums or ODPs, but instead refer to their depiction on charts printed by publishers of aeronautical materials. The advantages of incorporation by reference are realized and publication of the... Administration (NARA). For information on the availability of this material at NARA, call 202-741-6030, or go to...

  7. 78 FR 43782 - Standard Instrument Approach Procedures, and Takeoff Minimums and Obstacle Departure Procedures...

    Science.gov (United States)

    2013-07-22

    ..., CANCELED Kalamazoo, MI, Kalamazoo/Battle Creek Intl, RNAV (GPS) RWY 17, Amdt 1 Traverse City, MI, Cherry Capitol, GPS RWY 36, Orig-B, CANCELED Traverse City, MI, Cherry Capitol, RNAV (GPS) RWY 36, Orig Dodge..., Cassville Muni, Takeoff Minimums and Obstacle DP, Amdt 1 Fredericktown, MO, A. Paul Vance Fredericktown Rgnl...

  8. 76 FR 55233 - Standard Instrument Approach Procedures, and Takeoff Minimums and Obstacle Departure Procedures...

    Science.gov (United States)

    2011-09-07

    ... as the anticipated impact is so minimal. For the same reason, the FAA certifies that this amendment will not have a significant economic impact on a substantial number of small entities under the... Rgnl, Takeoff Minimums and Obstacle DP, Orig Beaver Falls, PA, Beaver County, LOC RWY 10, Amdt 4...

  9. Discrete time and continuous time dynamic mean-variance analysis

    OpenAIRE

    Reiss, Ariane

    1999-01-01

    Contrary to static mean-variance analysis, very few papers have dealt with dynamic mean-variance analysis. Here, the mean-variance efficient self-financing portfolio strategy is derived for n risky assets in discrete and continuous time. In the discrete setting, the resulting portfolio is mean-variance efficient in a dynamic sense. It is shown that the optimal strategy for n risky assets may be dominated if the expected terminal wealth is constrained to exactly attain a certain goal instead o...

  10. The Minimum Core for Language and Literacy Audit and Test

    CERN Document Server

    Machin, Lynn

    2007-01-01

    This book supports trainee teachers in the Lifelong Learning Sector in the assessment of their literacy knowledge. A self-audit section is included to help trainees understand their level of competence and confidence in literacy and will help them identify any gaps in their knowledge and skills. This is followed by exercises and activities to support and enhance learning. The book covers all the content of the LLUK standards for the minimum core for literacy. Coverage and assessment of the minimum core have to be embedded in all Certificate and Diploma courses leading to QTLS and ATLS status.

  11. Dominance genetic variance for traits under directional selection in Drosophila serrata.

    Science.gov (United States)

    Sztepanacz, Jacqueline L; Blows, Mark W

    2015-05-01

    In contrast to our growing understanding of patterns of additive genetic variance in single- and multi-trait combinations, the relative contribution of nonadditive genetic variance, particularly dominance variance, to multivariate phenotypes is largely unknown. While mechanisms for the evolution of dominance genetic variance have been, and to some degree remain, subject to debate, the pervasiveness of dominance is widely recognized and may play a key role in several evolutionary processes. Theoretical and empirical evidence suggests that the contribution of dominance variance to phenotypic variance may increase with the correlation between a trait and fitness; however, direct tests of this hypothesis are few. Using a multigenerational breeding design in an unmanipulated population of Drosophila serrata, we estimated additive and dominance genetic covariance matrices for multivariate wing-shape phenotypes, together with a comprehensive measure of fitness, to determine whether there is an association between directional selection and dominance variance. Fitness, a trait unequivocally under directional selection, had no detectable additive genetic variance, but significant dominance genetic variance contributing 32% of the phenotypic variance. For single and multivariate morphological traits, however, no relationship was observed between trait-fitness correlations and dominance variance. A similar proportion of additive and dominance variance was found to contribute to phenotypic variance for single traits, and double the amount of additive compared to dominance variance was found for the multivariate trait combination under directional selection. These data suggest that for many fitness components a positive association between directional selection and dominance genetic variance may not be expected. Copyright © 2015 by the Genetics Society of America.

  12. CMB-S4 and the hemispherical variance anomaly

    Science.gov (United States)

    O'Dwyer, Márcio; Copi, Craig J.; Knox, Lloyd; Starkman, Glenn D.

    2017-09-01

    Cosmic microwave background (CMB) full-sky temperature data show a hemispherical asymmetry in power nearly aligned with the Ecliptic. In real space, this anomaly can be quantified by the temperature variance in the Northern and Southern Ecliptic hemispheres, with the Northern hemisphere displaying an anomalously low variance while the Southern hemisphere appears unremarkable [consistent with expectations from the best-fitting theory, Lambda Cold Dark Matter (ΛCDM)]. While this is a well-established result in temperature, the low signal-to-noise ratio in current polarization data prevents a similar comparison. This will change with a proposed ground-based CMB experiment, CMB-S4. With that in mind, we generate realizations of polarization maps constrained by the temperature data and predict the distribution of the hemispherical variance in polarization considering two different sky coverage scenarios possible in CMB-S4: full Ecliptic north coverage and just the portion of the North that can be observed from a ground-based telescope at the high Chilean Atacama plateau. We find that even in the set of realizations constrained by the temperature data, the low Northern hemisphere variance observed in temperature is not expected in polarization. Therefore, observing an anomalously low variance in polarization would make the hypothesis that the temperature anomaly is simply a statistical fluke more unlikely and thus increase the motivation for physical explanations. We show, within ΛCDM, how variance measurements in both sky coverage scenarios are related. We find that the variance makes for a good statistic in cases where the sky coverage is limited; however, full northern coverage is still preferable.
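
    A minimal sketch of the hemispherical variance statistic on a toy map; a real analysis would use HEALPix maps, masks and noise treatment, and the pixel values and Ecliptic latitudes here are assumed.

```python
import numpy as np

# Minimal sketch of the hemispherical variance statistic: split map pixels by
# the sign of their Ecliptic latitude and compare the variance of the two
# halves. Pixel values and latitudes are synthetic assumptions.

rng = np.random.default_rng(5)
n_pix = 49_152                                              # e.g. HEALPix nside=64
ecl_lat = np.degrees(np.arcsin(rng.uniform(-1, 1, n_pix)))  # uniform on the sphere
temp = rng.normal(0.0, 100.0, n_pix)                        # hypothetical map values

north = temp[ecl_lat > 0].var()
south = temp[ecl_lat <= 0].var()
print(f"N/S hemispherical variance ratio: {north / south:.3f}")
```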

  13. 75 FR 69331 - Standard Instrument Approach Procedures, and Takeoff Minimums and Obstacle Departure Procedures...

    Science.gov (United States)

    2010-11-12

    ...;Prices of new books are listed in the first FEDERAL REGISTER issue of each #0;week. #0; #0; #0; #0;#0..., individual SIAP and Takeoff Minimums and ODP copies may be obtained from: 1. FAA Public Inquiry Center (APA... special format make publication in the Federal Register expensive and impractical. Furthermore, airmen do...

  14. 75 FR 45047 - Standard Instrument Approach Procedures, and Takeoff Minimums and Obstacle Departure Procedures...

    Science.gov (United States)

    2010-08-02

    ...;Prices of new books are listed in the first FEDERAL REGISTER issue of each #0;week. #0; #0; #0; #0;#0..., individual SIAP and Takeoff Minimums and ODP copies may be obtained from: 1. FAA Public Inquiry Center (APA... for a special format make publication in the Federal Register expensive and impractical. Furthermore...

  15. Allowable variance set on left ventricular function parameter

    International Nuclear Information System (INIS)

    Zhou Li'na; Qi Zhongzhi; Zeng Yu; Ou Xiaohong; Li Lin

    2010-01-01

    Purpose: To evaluate the influence of allowable Variance settings on left ventricular function parameter of the arrhythmia patients during gated myocardial perfusion imaging. Method: 42 patients with evident arrhythmia underwent myocardial perfusion SPECT, 3 different allowable variance with 20%, 60%, 100% would be set before acquisition for every patients,and they will be acquired simultaneously. After reconstruction by Astonish, end-diastole volume(EDV) and end-systolic volume (ESV) and left ventricular ejection fraction (LVEF) would be computed with Quantitative Gated SPECT(QGS). Using SPSS software EDV, ESV, EF values of analysis of variance. Result: there is no statistical difference between three groups. Conclusion: arrhythmia patients undergo Gated myocardial perfusion imaging, Allowable Variance settings on EDV, ESV, EF value does not have a statistical meaning. (authors)

  16. 40 CFR 262.104 - What are the minimum performance criteria?

    Science.gov (United States)

    2010-07-01

    ... words “laboratory waste” or with the chemical name of the contents. If the container is too small to...) SOLID WASTES (CONTINUED) STANDARDS APPLICABLE TO GENERATORS OF HAZARDOUS WASTE University Laboratories... criteria? The Minimum Performance Criteria that each University must meet in managing its Laboratory Waste...

  17. Direct encoding of orientation variance in the visual system.

    Science.gov (United States)

    Norman, Liam J; Heywood, Charles A; Kentridge, Robert W

    2015-01-01

    Our perception of regional irregularity, an example of which is orientation variance, seems effortless when we view two patches of texture that differ in this attribute. Little is understood, however, of how the visual system encodes a regional statistic like orientation variance, but there is some evidence to suggest that it is directly encoded by populations of neurons tuned broadly to high or low levels. The present study shows that selective adaptation to low or high levels of variance results in a perceptual aftereffect that shifts the perceived level of variance of a subsequently viewed texture in the direction away from that of the adapting stimulus (Experiments 1 and 2). Importantly, the effect is durable across changes in mean orientation, suggesting that the encoding of orientation variance is independent of global first moment orientation statistics (i.e., mean orientation). In Experiment 3 it was shown that the variance-specific aftereffect did not show signs of being encoded in a spatiotopic reference frame, similar to the equivalent aftereffect of adaptation to the first moment orientation statistic (the tilt aftereffect), which is represented in the primary visual cortex and exists only in retinotopic coordinates. Experiment 4 shows that a neuropsychological patient with damage to ventral areas of the cortex but spared intact early areas retains sensitivity to orientation variance. Together these results suggest that orientation variance is encoded directly by the visual system and possibly at an early cortical stage.

  18. 78 FR 32088 - Standard Instrument Approach Procedures, and Takeoff Minimums and Obstacle Departure Procedures...

    Science.gov (United States)

    2013-05-29

    ... amendments may have been issued previously by the FAA in a Flight Data Center (FDC) Notice to Airmen (NOTAM... close and immediate relationship between these SIAPs, Takeoff Minimums and ODPs, and safety in air.../Intl, RNAV (RNP) Z RWY 21, Orig-A Big Spring, TX, Big Spring Mc Mahon-Wrinkle, RNAV (GPS) RWY 6, Orig...

  19. 76 FR 37265 - Standard Instrument Approach Procedures, and Takeoff Minimums and Obstacle Departure Procedures...

    Science.gov (United States)

    2011-06-27

    ..., AL, Richard Arthur Field, RNAV (GPS) RWY 36, Amdt 1 Gulf Shores, AL, Jack Edwards, VOR-A, Amdt 3, CANCELLED Crossett, AR, Z M Jack Stell Field, Takeoff Minimums and Obstacle DP, Orig Springerville, AZ..., RNAV (GPS) RWY 10, Amdt 1 Ashland, KY, Ashland Rgnl, RNAV (GPS) RWY 28, Amdt 1 Nantucket, MA, Nantucket...

  20. Network Structure and Biased Variance Estimation in Respondent Driven Sampling.

    Science.gov (United States)

    Verdery, Ashton M; Mouw, Ted; Bauldry, Shawn; Mucha, Peter J

    2015-01-01

    This paper explores bias in the estimation of sampling variance in Respondent Driven Sampling (RDS). Prior methodological work on RDS has focused on its problematic assumptions and the biases and inefficiencies of its estimators of the population mean. Nonetheless, researchers have given only slight attention to the topic of estimating sampling variance in RDS, despite the importance of variance estimation for the construction of confidence intervals and hypothesis tests. In this paper, we show that the estimators of RDS sampling variance rely on a critical assumption that the network is First Order Markov (FOM) with respect to the dependent variable of interest. We demonstrate, through intuitive examples, mathematical generalizations, and computational experiments that current RDS variance estimators will always underestimate the population sampling variance of RDS in empirical networks that do not conform to the FOM assumption. Analysis of 215 observed university and school networks from Facebook and Add Health indicates that the FOM assumption is violated in every empirical network we analyze, and that these violations lead to substantially biased RDS estimators of sampling variance. We propose and test two alternative variance estimators that show some promise for reducing biases, but which also illustrate the limits of estimating sampling variance with only partial information on the underlying population social network.

  1. Diagnosis of Bearing System using Minimum Variance Cepstrum

    International Nuclear Information System (INIS)

    Lee, Jeong Han; Choi, Young Chul; Park, Jin Ho; Lee, Won Hyung; Kim, Chan Joong

    2005-01-01

    Various bearings are commonly used in rotating machines. The noise and vibration signals that can be obtained from the machines often convey information about faults and their locations. Condition monitoring for bearings has received considerable attention for many years, because the majority of problems in rotating machines are caused by faulty bearings. Thus failure alarms for the bearing system are often based on the detection of the onset of localized faults. Many methods are available for detecting faults in the bearing system. The majority of these methods assume that faults in bearings produce impulses. Impulse events can be attributed to bearing faults in the system. McFadden and Smith used a bandpass filter to filter the noise signal and then obtained the envelope by using an envelope detector. D. Ho and R. B. Randall also tried the envelope spectrum to detect faults in the bearing system, but it is very difficult to find the resonant frequency in noisy environments. S.-K. Lee and P. R. White used improved ANC (adaptive noise cancellation) to find faults. The basic idea of this technique is to remove the noise from the measured vibration signal, but they were not able to show the theoretical foundation of the proposed algorithms. Y.-H. Kim et al. used a moving window. This algorithm is quite powerful in the early detection of faults in a ball bearing system, but it is difficult to decide the initial time and step size of the moving window. The early fault signal that is caused by microscopic cracks is commonly embedded in noise. Therefore, the success of detecting the fault signal is completely determined by a method's ability to distinguish signal from noise. In 1969, Capon coined maximum likelihood (ML) spectra, which estimate a mixed spectrum consisting of a line spectrum, corresponding to a deterministic random process, plus an arbitrary unknown continuous spectrum. The unique feature of these spectra is that they can detect a sinusoidal signal in noise. Our idea essentially comes from this method. In this paper, a technique which can detect impulses embedded in noise is introduced. The theory of this technique is derived and the improved ability to detect faults in a ball bearing system is demonstrated theoretically as well as experimentally.
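
    A minimal sketch of the Capon (minimum variance) spectral estimator on which a minimum variance cepstrum can be built, followed by an inverse transform of the log spectrum; the model order, diagonal loading and test signal are assumptions, not the authors' exact algorithm.

```python
import numpy as np

# Minimal sketch of the Capon (minimum variance) spectrum,
# P(f) = 1 / (a(f)^H R^{-1} a(f)), with R estimated from sliding snapshots,
# followed by the inverse FFT of the log spectrum (a cepstrum). Model order,
# diagonal loading and the test signal are illustrative assumptions.

def capon_spectrum(x: np.ndarray, order: int, freqs: np.ndarray) -> np.ndarray:
    snaps = np.lib.stride_tricks.sliding_window_view(x, order)
    R = (snaps.T @ snaps.conj()) / len(snaps)        # autocorrelation matrix
    R_inv = np.linalg.inv(R + 1e-9 * np.eye(order))  # small diagonal load
    p = np.empty(len(freqs))
    for k, f in enumerate(freqs):
        a = np.exp(2j * np.pi * f * np.arange(order))  # steering vector
        p[k] = 1.0 / np.real(a.conj() @ R_inv @ a)
    return p

rng = np.random.default_rng(6)
n = 2048
x = np.sin(2 * np.pi * 0.12 * np.arange(n)) + 0.5 * rng.normal(size=n)
freqs = np.linspace(0.0, 0.5, 256)                   # cycles per sample
spec = capon_spectrum(x, order=32, freqs=freqs)
cepstrum = np.fft.irfft(np.log(spec))                # minimum-variance cepstrum idea
print(freqs[np.argmax(spec)])                        # peak near 0.12
```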

  2. Calculation of Appropriate Minimum Size of Isolation Rooms based on Questionnaire Survey of Experts and Analysis on Conditions of Isolation Room Use

    Science.gov (United States)

    Won, An-Na; Song, Hae-Eun; Yang, Young-Kwon; Park, Jin-Chul; Hwang, Jung-Ha

    2017-07-01

    After the outbreak of the MERS (Middle East Respiratory Syndrome) epidemic, issues were raised regarding the response capabilities of medical institutions, including the lack of isolation rooms at hospitals. Since then, the government of Korea has been revising regulations to enforce medical laws in order to expand the operation of isolation rooms and to strengthen standards regarding their mandatory installation at hospitals. Among general and tertiary hospitals in Korea, a total of 159 are estimated to be required to install isolation rooms to meet minimum standards. For the purpose of contributing to hospital construction plans in the future, this study conducted a questionnaire survey of experts and analysed the environment and devices necessary in isolation rooms, to determine the appropriate minimum size to treat patients. The result of the analysis is as follows: First, hospital isolation rooms are required to have a minimum 3,300 mm minor axis and a minimum 5,000 mm major axis for the isolation room itself, and a minimum 1,800 mm minor axis for the antechamber where personal protective equipment is donned and removed. Second, the 15 ㎡-or-larger standard for the floor area of isolation rooms will have to be reviewed, and standards for the minimum width of isolation rooms will have to be established.

  3. Proposed English Standards Promote Aviation Safety.

    Science.gov (United States)

    Chatham, Robert L.; Thomas, Shelley

    2000-01-01

    Discusses the International Civil Aviation Organization's (ICAO) Air Navigation's Commission approval of a task to develop minimum skill level requirements in English for air traffic control. The ICAO collaborated with the Defense Language Institute English Language Center to propose a minimum standard for English proficiency for international…

  4. Setting a minimum age for juvenile justice jurisdiction in California

    Science.gov (United States)

    Barnert, Elizabeth S.; Abrams, Laura S.; Maxson, Cheryl; Gase, Lauren; Soung, Patricia; Carroll, Paul; Bath, Eraka

    2018-01-01

    Purpose: Despite the existence of minimum age laws for juvenile justice jurisdiction in 18 US states, California has no explicit law that protects children (i.e. youth less than 12 years old) from being processed in the juvenile justice system. In the absence of a minimum age law, California lags behind other states and international practice and standards. The paper aims to discuss these issues.
    Design/methodology/approach: In this policy brief, academics across the University of California campuses examine current evidence, theory, and policy related to the minimum age of juvenile justice jurisdiction.
    Findings: Existing evidence suggests that children lack the cognitive maturity to comprehend or benefit from formal juvenile justice processing, and diverting children from the system altogether is likely to be more beneficial for the child and for public safety.
    Research limitations/implications: Based on current evidence and theory, the authors argue that minimum age legislation that protects children from contact with the juvenile justice system and treats them as children in need of services and support, rather than as delinquents or criminals, is an important policy goal for California and for other national and international jurisdictions lacking a minimum age law.
    Originality/value: California has no law specifying a minimum age for juvenile justice jurisdiction, meaning that young children of any age can be processed in the juvenile justice system. This policy brief provides a rationale for a minimum age law in California and other states and jurisdictions without one.
    Paper type: Conceptual paper. PMID:28299968

  5. Local variances in biomonitoring

    International Nuclear Information System (INIS)

    Wolterbeek, H.Th; Verburg, T.G.

    2001-01-01

    The present study was undertaken to explore possibilities to judge survey quality on the basis of a limited and restricted number of a priori observations. Here, quality is defined as the ratio between survey and local variance (signal-to-noise ratio). The results indicate that the presented surveys do not permit such judgement; the discussion also suggests that the 5-fold local sampling strategies do not merit any sound judgement. As it stands, uncertainties in local determinations may largely obscure possibilities to judge survey quality. The results further imply that surveys will benefit from procedures, controls and approaches in sampling and sample handling that assess the average, the variance, and the nature of the distribution of elemental concentrations at local sites. This reasoning is compatible with the idea of the site as a basic homogeneous survey unit, which is implicitly and conceptually underlying any survey performed. (author)

  6. 78 FR 50324 - Standard Instrument Approach Procedures, and Takeoff Minimums and Obstacle Departure Procedures...

    Science.gov (United States)

    2013-08-19

    ... amendments may have been issued previously by the FAA in a Flight Data Center (FDC) Notice to Airmen (NOTAM... close and immediate relationship between these SIAPs, Takeoff Minimums and ODPs, and safety in air..., Melbourne Muni--John E Miller Field, RNAV (GPS) RWY 3, Amdt 1A Big Bear City, CA, Big Bear City, RNAV (GPS...

  7. 76 FR 78812 - Standard Instrument Approach Procedures, and Takeoff Minimums and Obstacle Departure Procedures...

    Science.gov (United States)

    2011-12-20

    ... amendments may have been issued previously by the FAA in a Flight Data Center (FDC) Notice to Airmen (NOTAM... close and immediate relationship between these SIAPs, Takeoff Minimums and ODPs, and safety in air..., Amdt 13A, CANCELLED Big Lake, AK, Big Lake, RNAV (GPS) RWY 7, Amdt 1 Big Lake, AK, Big Lake, RNAV (GPS...

  8. 76 FR 6053 - Standard Instrument Approach Procedures, and Takeoff Minimums and Obstacle Departure Procedures...

    Science.gov (United States)

    2011-02-03

    ... Minimums and Obstacle DP, Amdt 1 Hattiesburg, MS, Hattiesburg Bobby L Chain Muni, RNAV (GPS) Y RWY 13, Amdt 2 Hattiesburg, MS, Hattiesburg Bobby L Chain Muni, RNAV (GPS) Z RWY 13, Amdt 1 Wadesboro, NC, Anson.... Part 97 is amended to read as follows: Effective 10 MAR 2011 Hayward, CA, Hayward Executive, VOR OR GPS...

  9. 75 FR 4488 - Standard Instrument Approach Procedures, and Takeoff Minimums and Obstacle Departure Procedures...

    Science.gov (United States)

    2010-01-28

    ... (GPS) RWY 30L, Amdt 2 Wolf Point, MT, L.M.Clayton, Takeoff Minimums and Obstacle DP, Orig [[Page 4490..., Stockton Metropolitan, ILS OR LOC RWY 29R, Amdt 19 Avon Park, FL, Avon Park Executive, GPS RWY 4, Orig-A, CANCELLED Avon Park, FL, Avon Park Executive, GPS RWY 9, Orig-A, CANCELLED Avon Park, FL, Avon Park...

  10. 77 FR 41668 - Standard Instrument Approach Procedures, and Takeoff Minimums and Obstacle Departure Procedures...

    Science.gov (United States)

    2012-07-16

    ... Obstacle DP, Amdt 2 Swainsboro, GA, Emanuel County, VOR/DME-A, Amdt 3, CANCELED Tifton, GA, Henry Tift... Minimums and Obstacle DP, Amdt 1 Pensacola, FL, Pensacola Gulf Coast Rgnl, VOR RWY 8, Amdt 4 Augusta, GA, Augusta Rgnl at Bush Field, ILS OR LOC RWY 17, Amdt 9 Augusta, GA, Augusta Rgnl at Bush Field, ILS OR LOC...

  11. Analytical review of minimum critical mass values for selected uranium and plutonium materials

    International Nuclear Information System (INIS)

    Morman, J.A.; Henrikson, D.J.; Garcia, A.S.

    1997-01-01

    Current subcritical limits for a number of uranium and plutonium materials (metals and compounds) as given in the ANSI/ANS standards for criticality safety are based on evaluations performed in the late 1970s and early 1980s. This paper presents the results of an analytical study of the minimum critical mass values for a set of materials using current codes and standard cross section sets. This work is meant to produce a consistent set of minimum critical mass values that can form the basis for adding new materials to the single-parameter tables in ANSI/ANS-8.1. Minimum critical mass results are presented for bare and water-reflected full-density spheres, both dry and moist (1.5 wt-% water), as calculated with KENO-Va, MCNP4A and ONEDANT. Calculations were also performed for both dry and moist materials at one-half density. Some KENO calculations were repeated using several cross section sets to examine potential bias differences. The results of the calculations were compared to the currently accepted subcritical limits. The calculated minimum critical mass values are reasonably consistent for the three codes, and differences most likely reflect differences in the cross section sets. The results are also consistent with values given in ANSI/ANS-8.1. 3 refs., 2 tabs

  12. Some variance reduction methods for numerical stochastic homogenization.

    Science.gov (United States)

    Blanc, X; Le Bris, C; Legoll, F

    2016-04-28

    We give an overview of a series of recent studies devoted to variance reduction techniques for numerical stochastic homogenization. Numerical homogenization requires that a set of problems is solved at the microscale, the so-called corrector problems. In a random environment, these problems are stochastic and therefore need to be repeatedly solved, for several configurations of the medium considered. An empirical average over all configurations is then performed using the Monte Carlo approach, so as to approximate the effective coefficients necessary to determine the macroscopic behaviour. Variance severely affects the accuracy and the cost of such computations. Variance reduction approaches, borrowed from other contexts in the engineering sciences, can be useful. Some of these variance reduction techniques are presented, studied and tested here. © 2016 The Author(s).
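    The record does not detail which variance reduction techniques are tested; as a generic illustration of the idea (reusing randomness so that estimator fluctuations partially cancel), here is an antithetic-variates sketch on a stand-in scalar "corrector" quantity. The integrand is invented for the demonstration; the real corrector problems are PDE solves on random media.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def effective_coefficient(u):
        """Stand-in for a corrector-problem output on one random medium,
        parameterized by uniform variables u in [0, 1)."""
        return np.exp(np.sin(2 * np.pi * u)).mean(axis=-1)

    n, d = 5_000, 8
    u = rng.random((n, d))

    plain = effective_coefficient(u)                 # standard Monte Carlo
    anti = 0.5 * (effective_coefficient(u)
                  + effective_coefficient(1.0 - u))  # antithetic pairs

    print(plain.var(), anti.var())  # antithetic variance is typically smaller
    ```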

  13. variance components and genetic parameters for live weight

    African Journals Online (AJOL)

    admin

    Against this background the present study estimated the (co)variance .... Starting values for the (co)variance components of two-trait models were ..... Estimates of genetic parameters for weaning weight of beef accounting for direct-maternal.

  14. Restricted Variance Interaction Effects

    DEFF Research Database (Denmark)

    Cortina, Jose M.; Köhler, Tine; Keeler, Kathleen R.

    2018-01-01

    Although interaction hypotheses are increasingly common in our field, many recent articles point out that authors often have difficulty justifying them. The purpose of this article is to describe a particular type of interaction: the restricted variance (RV) interaction. The essence of the RV int...

  15. Variance Swaps in BM&F: Pricing and Viability of Hedge

    Directory of Open Access Journals (Sweden)

    Richard John Brostowicz Junior

    2010-07-01

    Full Text Available A variance swap can theoretically be priced with an infinite set of vanilla call and put options, considering that the realized variance follows a purely diffusive process with continuous monitoring. In this article we analyze the possible differences in pricing considering discrete monitoring of realized variance. We analyze the pricing of variance swaps with payoff in dollars, since there is an OTC market that works this way and that potentially serves as a hedge for the variance swaps traded in BM&F. Additionally, we test the feasibility of hedging variance swaps when there is liquidity in just a few exercise prices, as is the case of FX options traded in BM&F. Portfolios were assembled containing variance swaps and their replicating portfolios using the available exercise prices, as proposed in Demeterfi et al. (1999). With these portfolios, the effectiveness of the hedge was not robust in most of the tests conducted in this work.
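    The static replication referenced above prices a variance swap from a strip of out-of-the-money options. A minimal sketch of the standard discrete approximation to the model-free fair variance (the form popularized by Demeterfi et al., 1999, and used for the VIX) is below; the strikes, quotes, rate and maturity are invented inputs, not BM&F data, and the detail may differ from the paper's replication.

    ```python
    import numpy as np

    def variance_swap_strike(strikes, otm_prices, forward, rate, maturity):
        """Fair variance from a strip of out-of-the-money option prices.

        Discrete approximation: (2 e^{rT} / T) * sum(dK/K^2 * Q(K))
        minus a convexity correction at the strike nearest the forward.
        """
        strikes = np.asarray(strikes, dtype=float)
        otm_prices = np.asarray(otm_prices, dtype=float)
        dk = np.gradient(strikes)                 # central strike spacings
        k0 = strikes[strikes <= forward].max()    # first strike below forward
        disc = np.exp(rate * maturity)
        total = np.sum(dk / strikes**2 * otm_prices)
        return (2.0 / maturity) * disc * total - (forward / k0 - 1.0)**2 / maturity

    # Illustrative (made-up) inputs: puts below the forward, calls above.
    k = np.arange(80.0, 121.0, 5.0)
    q = np.array([0.5, 1.2, 2.6, 4.9, 5.1, 2.8, 1.4, 0.7, 0.4])
    print(variance_swap_strike(k, q, forward=100.0, rate=0.1, maturity=0.5))
    ```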

  16. Output Feedback Adaptive Control of Non-Minimum Phase Systems Using Optimal Control Modification

    Science.gov (United States)

    Nguyen, Nhan; Hashemi, Kelley E.; Yucelen, Tansel; Arabi, Ehsan

    2018-01-01

    This paper describes output feedback adaptive control approaches for non-minimum phase SISO systems with relative degree 1 and non-strictly positive real (SPR) MIMO systems with uniform relative degree 1 using the optimal control modification method. It is well-known that the standard model-reference adaptive control (MRAC) cannot be used to control non-SPR plants to track an ideal SPR reference model. Due to the ideal property of asymptotic tracking, MRAC attempts an unstable pole-zero cancellation which results in unbounded signals for non-minimum phase SISO systems. The optimal control modification can be used to prevent the unstable pole-zero cancellation which results in a stable adaptation of non-minimum phase SISO systems. However, the tracking performance using this approach could suffer if the unstable zero is located far away from the imaginary axis. The tracking performance can be recovered by using an observer-based output feedback adaptive control approach which uses a Luenberger observer design to estimate the state information of the plant. Instead of explicitly specifying an ideal SPR reference model, the reference model is established from the linear quadratic optimal control to account for the non-minimum phase behavior of the plant. With this non-minimum phase reference model, the observer-based output feedback adaptive control can maintain stability as well as tracking performance. However, in the presence of the mismatch between the SPR reference model and the non-minimum phase plant, the standard MRAC results in unbounded signals, whereas a stable adaptation can be achieved with the optimal control modification. An application of output feedback adaptive control for a flexible wing aircraft illustrates the approaches.

  17. Implementation Of The Local Minimum Wage In Malang City (A Case Study in Malang City 2014

    Directory of Open Access Journals (Sweden)

    Dhea Candra Dewi Candra Dewi

    2015-04-01

    Full Text Available The wage system is a framework for how wages are set and defined in order to improve the welfare of workers. The Indonesian government attempts to set a minimum wage in accordance with an acceptable standard of living. This study intends to analyze the policy of the Local Minimum Wage in Malang City in 2014, its implementation, and the factors constraining the Local Minimum Wage. The research uses the interactive model of analysis introduced by Miles and Hubermann [6], which consists of data collection, data reduction, data display, and conclusion. Constraining factors are seen in the responses given to the policy by relevant actors such as employer organizations, worker unions, wage councils, and local government. Firstly, companies as employer organizations do not use the wage scale system suggested by the policy. Secondly, there is a marked lack of a communication forum between companies and worker unions. Thirdly, small and large companies alike are unable to pay the minimum standard wages. Lastly, disagreement and differences of opinion about the applied wage scale between the local wage council, employer organizations and worker unions often occur in the tripartite communication forum. Keywords: Employers Organization, Local Minimum Wage, Local Wage Council, Policy Implementation, Tripartite Communication Forum, Workers Union.

  18. Statistical methodology for estimating the mean difference in a meta-analysis without study-specific variance information.

    Science.gov (United States)

    Sangnawakij, Patarawan; Böhning, Dankmar; Adams, Stephen; Stanton, Michael; Holling, Heinz

    2017-04-30

    Statistical inference for analyzing the results from several independent studies on the same quantity of interest has been investigated frequently in recent decades. Typically, any meta-analytic inference requires that the quantity of interest is available from each study together with an estimate of its variability. The current work is motivated by a meta-analysis on comparing two treatments (thoracoscopic and open) of congenital lung malformations in young children. Quantities of interest include continuous end-points such as length of operation or number of chest tube days. As studies only report mean values (and no standard errors or confidence intervals), the question arises how meta-analytic inference can be developed. We suggest two methods to estimate study-specific variances in such a meta-analysis, where only sample means and sample sizes are available in the treatment arms. A general likelihood ratio test is derived for testing equality of variances in two groups. By means of simulation studies, the bias and estimated standard error of the overall mean difference from both methodologies are evaluated and compared with two existing approaches: complete study analysis only and partial variance information. The performance of the test is evaluated in terms of type I error. Additionally, we illustrate these methods in the meta-analysis on comparing thoracoscopic and open surgery for congenital lung malformations and in a meta-analysis on the change in renal function after kidney donation. Copyright © 2017 John Wiley & Sons, Ltd.

  19. The minimum information about a genome sequence (MIGS) specification

    DEFF Research Database (Denmark)

    Field, D; Garrity, G; Gray, T

    2008-01-01

    With the quantity of genomic data increasing at an exponential rate, it is imperative that these data be captured electronically, in a standard format. Standardization activities must proceed within the auspices of open-access and international working bodies. To tackle the issues surrounding the development of better descriptions of genomic investigations, we have formed the Genomic Standards Consortium (GSC). Here, we introduce the minimum information about a genome sequence (MIGS) specification with the intent of promoting participation in its development and discussing the resources that will be required to develop improved mechanisms of metadata capture and exchange. As part of its wider goals, the GSC also supports improving the 'transparency' of the information contained in existing genomic databases.

  20. Capacity limitations to extract the mean emotion from multiple facial expressions depend on emotion variance.

    Science.gov (United States)

    Ji, Luyan; Pourtois, Gilles

    2018-04-20

    We examined the processing capacity and the role of emotion variance in ensemble representation for multiple facial expressions shown concurrently. A standard set size manipulation was used, whereby the sets consisted of 4, 8, or 16 morphed faces each uniquely varying along a happy-angry continuum (Experiment 1) or a neutral-happy/angry continuum (Experiments 2 & 3). Across the three experiments, we reduced the amount of emotion variance in the sets to explore the boundaries of this process. Participants judged the perceived average emotion from each set on a continuous scale. We computed and compared objective and subjective difference scores, using the morph units and post-experiment ratings, respectively. Results of the subjective scores were more consistent than the objective ones across the first two experiments where the variance was relatively large, and revealed each time that increasing set size led to a poorer averaging ability, suggesting capacity limitations in establishing ensemble representations for multiple facial expressions. However, when the emotion variance in the sets was reduced in Experiment 3, both subjective and objective scores remained unaffected by set size, suggesting that the emotion averaging process was unlimited in these conditions. Collectively, these results suggest that extracting mean emotion from a set composed of multiple faces depends on both structural (attentional) and stimulus-related effects. Copyright © 2018 Elsevier Ltd. All rights reserved.

  1. The problem of low variance voxels in statistical parametric mapping; a new hat avoids a 'haircut'.

    Science.gov (United States)

    Ridgway, Gerard R; Litvak, Vladimir; Flandin, Guillaume; Friston, Karl J; Penny, Will D

    2012-02-01

    Statistical parametric mapping (SPM) locates significant clusters based on a ratio of signal to noise (a 'contrast' of the parameters divided by its standard error), meaning that very low noise regions, for example outside the brain, can attain artefactually high statistical values. Similarly, the commonly applied preprocessing step of Gaussian spatial smoothing can shift the peak statistical significance away from the peak of the contrast and towards regions of lower variance. These problems have previously been identified in positron emission tomography (PET) (Reimold et al., 2006) and voxel-based morphometry (VBM) (Acosta-Cabronero et al., 2008), but can also appear in functional magnetic resonance imaging (fMRI) studies. Additionally, for source-reconstructed magneto- and electro-encephalography (M/EEG), the problems are particularly severe because sparsity-favouring priors constrain meaningfully large signal and variance to a small set of compactly supported regions within the brain. Acosta-Cabronero et al. (2008) suggested adding noise to background voxels (the 'haircut'), effectively increasing their noise variance, but at the cost of contaminating neighbouring regions with the added noise once smoothed. Following theory and simulations, we propose to modify, directly and solely, the noise variance estimate, and we investigate this solution on real imaging data from a range of modalities. Copyright © 2011 Elsevier Inc. All rights reserved.
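    The 'new hat' itself is specific to the paper, but the general idea of regularising low variance estimates before forming a t-like statistic can be sketched as follows. The ridge fraction `h` and the array names are illustrative assumptions, not the authors' estimator.

    ```python
    import numpy as np

    def regularized_t(contrast, var, h=0.1):
        """t-like statistic with a variance floor.

        Adds a fraction h of the global mean variance to each voxel's
        variance estimate, so near-zero-variance voxels (e.g. outside the
        brain) cannot produce artefactually large statistics.
        """
        var_reg = var + h * np.mean(var)
        return contrast / np.sqrt(var_reg)
    ```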

  2. Integrating mean and variance heterogeneities to identify differentially expressed genes.

    Science.gov (United States)

    Ouyang, Weiwei; An, Qiang; Zhao, Jinying; Qin, Huaizhen

    2016-12-06

    In functional genomics studies, tests on mean heterogeneity have been widely employed to identify differentially expressed genes with distinct mean expression levels under different experimental conditions. Variance heterogeneity (aka, the difference between condition-specific variances) of gene expression levels is simply neglected or calibrated for as an impediment. The mean heterogeneity in the expression level of a gene reflects one aspect of its distribution alteration, and variance heterogeneity induced by condition change may reflect another aspect. Change in condition may alter both the mean and some higher-order characteristics of the distributions of expression levels of susceptible genes. In this report, we put forth a conception of mean-variance differentially expressed (MVDE) genes, whose expression means and variances are sensitive to the change in experimental condition. We mathematically proved the null independence of existent mean heterogeneity tests and variance heterogeneity tests. Based on this independence, we proposed an integrative mean-variance test (IMVT) to combine gene-wise mean heterogeneity and variance heterogeneity induced by condition change. The IMVT outperformed its competitors under comprehensive simulations of normality and Laplace settings. For moderate samples, the IMVT well controlled type I error rates, as did the existent mean heterogeneity tests (i.e., the Welch t test (WT) and the moderated Welch t test (MWT)) and the procedure of separate tests on mean and variance heterogeneities (SMVT), but the likelihood ratio test (LRT) severely inflated type I error rates. In the presence of variance heterogeneity, the IMVT appeared noticeably more powerful than all the valid mean heterogeneity tests. Application to the gene profiles of peripheral circulating B cells raised solid evidence of informative variance heterogeneity. After adjusting for background data structure, the IMVT replicated previous discoveries and identified novel experiment
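    The abstract does not give the IMVT's exact form. As a rough illustration of combining independent mean and variance heterogeneity evidence, the sketch below fuses a Welch t-test p-value with a Levene test p-value via Fisher's method; the combination rule and function names are assumptions for illustration, not the authors' statistic.

    ```python
    import numpy as np
    from scipy import stats

    def mean_variance_test(x, y):
        """Combine mean- and variance-heterogeneity p-values (Fisher's method).

        Under the null the two tests are (approximately) independent, so
        -2*(log p1 + log p2) is chi-squared with 4 degrees of freedom.
        """
        p_mean = stats.ttest_ind(x, y, equal_var=False).pvalue  # Welch t
        p_var = stats.levene(x, y).pvalue                       # variance test
        chi2 = -2.0 * (np.log(p_mean) + np.log(p_var))
        return stats.chi2.sf(chi2, df=4)

    rng = np.random.default_rng(0)
    x = rng.normal(0.0, 1.0, 30)
    y = rng.normal(0.8, 2.0, 30)   # differs in both mean and variance
    print(mean_variance_test(x, y))
    ```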

  3. Simultaneous Monte Carlo zero-variance estimates of several correlated means

    International Nuclear Information System (INIS)

    Booth, T.E.

    1998-01-01

    Zero-variance biasing procedures are normally associated with estimating a single mean or tally. In particular, a zero-variance solution occurs when every sampling is made proportional to the product of the true probability multiplied by the expected score (importance) subsequent to the sampling; i.e., the zero-variance sampling is importance weighted. Because every tally has a different importance function, a zero-variance biasing for one tally cannot be a zero-variance biasing for another tally (unless the tallies are perfectly correlated). The way to optimize the situation when the required tallies have positive correlation is shown
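    As a toy illustration of the zero-variance principle stated above (sample proportional to the true probability times the expected score, then importance-weight), consider estimating a discrete mean; the distribution and scores below are invented for the demonstration.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    p = np.array([0.5, 0.3, 0.2])       # true probabilities (assumed)
    score = np.array([1.0, 4.0, 10.0])  # deterministic score per outcome
    mean = np.dot(p, score)

    # Zero-variance biasing: sample with q proportional to p * score,
    # then weight each sample by p/q.
    q = p * score / mean
    samples = rng.choice(3, size=10_000, p=q)
    estimates = (p[samples] / q[samples]) * score[samples]

    print(estimates.var())             # 0: every weighted sample equals the mean
    print(estimates.mean(), mean)
    ```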

  4. Variance estimation in the analysis of microarray data

    KAUST Repository

    Wang, Yuedong

    2009-04-01

    Microarrays are one of the most widely used high throughput technologies. One of the main problems in the area is that conventional estimates of the variances that are required in the t-statistic and other statistics are unreliable owing to the small number of replications. Various methods have been proposed in the literature to overcome this lack of degrees of freedom problem. In this context, it is commonly observed that the variance increases proportionally with the intensity level, which has led many researchers to assume that the variance is a function of the mean. Here we concentrate on estimation of the variance as a function of an unknown mean in two models: the constant coefficient of variation model and the quadratic variance-mean model. Because the means are unknown and estimated with few degrees of freedom, naive methods that use the sample mean in place of the true mean are generally biased because of the errors-in-variables phenomenon. We propose three methods for overcoming this bias. The first two are variations on the theme of the so-called heteroscedastic simulation-extrapolation estimator, modified to estimate the variance function consistently. The third class of estimators is entirely different, being based on semiparametric information calculations. Simulations show the power of our methods and their lack of bias compared with the naive method that ignores the measurement error. The methodology is illustrated by using microarray data from leukaemia patients.
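    For orientation, here is the naive quadratic variance-mean fit that the record warns about: regressing sample variances on noisy sample means is biased by errors-in-variables, which is precisely what the paper's estimators correct. The function name and model order follow the record's quadratic variance-mean model; everything else is an illustrative assumption.

    ```python
    import numpy as np

    def naive_quadratic_fit(means, variances):
        """Fit var ~ a + b*mu + c*mu^2 by least squares on sample moments.

        With few replicates, sample means are noisy stand-ins for the true
        means, so this naive fit is biased (errors-in-variables).
        """
        X = np.column_stack([np.ones_like(means), means, means**2])
        coef, *_ = np.linalg.lstsq(X, variances, rcond=None)
        return coef  # (a, b, c)
    ```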

  5. 76 FR 52239 - Standard Instrument Approach Procedures, and Takeoff Minimums and Obstacle Departure Procedures...

    Science.gov (United States)

    2011-08-22

    ..., Takeoff Minimums and Obstacle DP, Orig East Troy, WI, East Troy Muni, GPS RWY 8, Orig, CANCELLED East Troy, WI, East Troy Muni, GPS RWY 26, Orig, CANCELLED East Troy, WI, East Troy Muni, RNAV (GPS) RWY 8, Orig East Troy, WI, East Troy Muni, RNAV (GPS) RWY 26, Orig East Troy, WI, East Troy Muni, VOR/DME-A, Amdt 1...

  6. 75 FR 39152 - Standard Instrument Approach Procedures, and Takeoff Minimums and Obstacle Departure Procedures...

    Science.gov (United States)

    2010-07-08

    ... Jackson, AL, Jackson Muni, RNAV (GPS) RWY 19, Orig Troy, AL, Troy Muni, ILS OR LOC RWY 7, Amdt 9 Troy, AL, Troy Muni, RNAV (GPS) RWY 7, Amdt 1 Troy, AL, Troy Muni, RNAV (GPS) RWY 25, Amdt 1 Vernon, AL, Lamar..., Amdt 1 East Troy, WI, East Troy Muni, Takeoff Minimums and Obstacle DP, Orig On June 09, 2010 (75 FR...

  7. 78 FR 34559 - Standard Instrument Approach Procedures, and Takeoff Minimums and Obstacle Departure Procedures...

    Science.gov (United States)

    2013-06-10

    ... specifies the types of SIAPs and the effective dates of the, associated Takeoff Minimums and ODPs. This... I), ILS RWY 13R (SA CAT II), Amdt 9 Dallas, TX, Dallas Love Field, ILS OR LOC Y RWY 13L, Amdt 32 Dallas, TX, Dallas Love Field, RNAV (GPS) Y RWY 13L, Amdt 1 Dallas, TX, Dallas Love Field, RNAV (GPS) Z...

  8. 77 FR 31180 - Standard Instrument Approach Procedures, and Takeoff Minimums and Obstacle Departure Procedures...

    Science.gov (United States)

    2012-05-25

    ... sections and specifies the types of SIAPs and the effective dates of the associated Takeoff Minimums and... Co Rgnl, ILS OR LOC RWY 29, Amdt 1A Dallas, TX, Dallas Love Field, RNAV (GPS) RWY 31L, Amdt 1A Dallas, TX, Dallas Love Field, RNAV (GPS) RWY 31R, Amdt 1A Effective 26 JULY 2012 Talkeetna, AK, Talkeetna...

  9. 76 FR 78698 - Proposed Revocation of Permanent Variances

    Science.gov (United States)

    2011-12-19

    ... Administration (``OSHA'' or ``the Agency'') granted permanent variances to 24 companies engaged in the... DEPARTMENT OF LABOR Occupational Safety and Health Administration [Docket No. OSHA-2011-0054] Proposed Revocation of Permanent Variances AGENCY: Occupational Safety and Health Administration (OSHA...

  10. Reliability, standard error, and minimum detectable change of clinical pressure pain threshold testing in people with and without acute neck pain.

    Science.gov (United States)

    Walton, David M; Macdermid, Joy C; Nielson, Warren; Teasell, Robert W; Chiasson, Marco; Brown, Lauren

    2011-09-01

    Clinical measurement. To evaluate the intrarater, interrater, and test-retest reliability of an accessible digital algometer, and to determine the minimum detectable change in normal healthy individuals and a clinical population with neck pain. Pressure pain threshold testing may be a valuable assessment and prognostic indicator for people with neck pain. To date, most of this research has been completed using algometers that are too resource intensive for routine clinical use. Novice raters (physiotherapy students or clinical physiotherapists) were trained to perform algometry testing over 2 clinically relevant sites: the angle of the upper trapezius and the belly of the tibialis anterior. A convenience sample of normal healthy individuals and a clinical sample of people with neck pain were tested by 2 different raters (all participants) and on 2 different days (healthy participants only). Intraclass correlation coefficient (ICC), standard error of measurement, and minimum detectable change were calculated. A total of 60 healthy volunteers and 40 people with neck pain were recruited. Intrarater reliability was almost perfect (ICC = 0.94-0.97), interrater reliability was substantial to near perfect (ICC = 0.79-0.90), and test-retest reliability was substantial (ICC = 0.76-0.79). Smaller change was detectable in the trapezius compared to the tibialis anterior. This study provides evidence that novice raters can perform digital algometry with adequate reliability for research and clinical use in people with and without neck pain.
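    The reliability indices named in this record follow standard definitions; a minimal sketch, with illustrative numbers rather than the study's data, is given below.

    ```python
    import numpy as np

    def sem_and_mdc(sd, icc, confidence_z=1.96):
        """Standard error of measurement and minimum detectable change.

        SEM = SD * sqrt(1 - ICC); MDC = z * sqrt(2) * SEM, where sqrt(2)
        reflects the two measurements involved in a change score.
        """
        sem = sd * np.sqrt(1.0 - icc)
        mdc = confidence_z * np.sqrt(2.0) * sem
        return sem, mdc

    # Illustrative values only (not from the study):
    print(sem_and_mdc(sd=75.0, icc=0.94))
    ```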

  11. Variance reduction techniques for 14 MeV neutron streaming problem in rectangular annular bent duct

    Energy Technology Data Exchange (ETDEWEB)

    Ueki, Kotaro [Ship Research Inst., Mitaka, Tokyo (Japan)

    1998-03-01

    The Monte Carlo method is a powerful technique for solving a wide range of radiation transport problems. Its features are that it can solve the Boltzmann transport equation almost without approximation, and that the complexity of the systems to be treated rarely becomes a problem. However, a Monte Carlo calculation is always accompanied by statistical errors called variance. In shielding calculations, the standard deviation or the fractional standard deviation (FSD) is used frequently. The expression for the FSD is shown. Radiation shielding problems are roughly divided into deep-penetration transmission problems and streaming problems. In streaming problems, large differences in particle weights, depending on particle histories, worsen the FSD of the Monte Carlo calculation. The streaming experiment in a 14 MeV neutron rectangular annular bent duct, a typical streaming benchmark experiment carried out at the OKTAVIAN facility of Osaka University, was analyzed with MCNP 4B, and the reduction of the variance or FSD was attempted. The experimental system is shown. The MCNP 4B analysis model, the input data and the results of the analysis are reported, and a comparison with the experimental results is examined. (K.I.)
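    The record says the expression for the FSD is shown in the paper but does not reproduce it; for reference, the usual Monte Carlo definition (standard error of the mean divided by the mean) is:

    ```latex
    \mathrm{FSD} \;=\; \frac{\sigma_{\bar{x}}}{\bar{x}}
    \;=\; \frac{1}{\bar{x}}
    \sqrt{\frac{1}{N(N-1)} \sum_{i=1}^{N} \left( x_i - \bar{x} \right)^{2}}
    ```

    where x_i is the score of the i-th history and x̄ the sample mean over N histories; the paper's own notation is not reproduced in the record.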

  12. Diagnostic checking in linear processes with infinite variance

    OpenAIRE

    Krämer, Walter; Runde, Ralf

    1998-01-01

    We consider empirical autocorrelations of residuals from infinite variance autoregressive processes. Unlike the finite-variance case, it emerges that the limiting distribution, after suitable normalization, is not always more concentrated around zero when residuals rather than true innovations are employed.

  13. Simultaneous estimation of cross-validation errors in least squares collocation applied for statistical testing and evaluation of the noise variance components

    Science.gov (United States)

    Behnabian, Behzad; Mashhadi Hossainali, Masoud; Malekzadeh, Ahad

    2018-02-01

    The cross-validation technique is a popular method to assess and improve the quality of prediction by least squares collocation (LSC). We present a formula for direct estimation of the vector of cross-validation errors (CVEs) in LSC which is much faster than element-wise CVE computation. We show that a quadratic form of the CVEs follows a Chi-squared distribution. Furthermore, an a posteriori noise variance factor is derived from the quadratic form of the CVEs. In order to detect blunders in the observations, the estimated standardized CVE is proposed as the test statistic, which can be applied when noise variances are known or unknown. We use LSC together with the methods proposed in this research for interpolation of crustal subsidence in the northern coast of the Gulf of Mexico. The results show that after detecting and removing outliers, the root mean square (RMS) of the CVEs and the estimated noise standard deviation are reduced by about 51 and 59%, respectively. In addition, the RMS of the LSC prediction error at data points and the RMS of the estimated noise of observations are decreased by 39 and 67%, respectively. However, the RMS of the LSC prediction error on a regular grid of interpolation points covering the area is only reduced by about 4%, which is a consequence of the sparse distribution of data points for this case study. The influence of gross errors on LSC prediction results is also investigated by lower cutoff CVEs. It is indicated that after elimination of outliers, the RMS of this type of error is also reduced by 19.5% for a 5 km radius of vicinity. We propose a method using standardized CVEs for classification of the dataset into three groups with presumed different noise variances. The noise variance components for each of the groups are estimated using the restricted maximum-likelihood method via the Fisher scoring technique. Finally, LSC assessment measures were computed for the estimated heterogeneous noise variance model and compared with those of the homogeneous model. The advantage of the proposed method is the
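    The record does not reproduce the authors' closed-form CVE vector, but the analogous classical result for ordinary least squares conveys the idea: leave-one-out errors follow directly from the ordinary residuals and the hat matrix, with no refitting. The sketch below is that OLS analogue, not the LSC formula itself.

    ```python
    import numpy as np

    def loo_cv_errors(X, y):
        """Leave-one-out errors for least squares without refitting.

        e_cv[i] = residual[i] / (1 - H[i, i]),  H = X (X^T X)^{-1} X^T.
        """
        H = X @ np.linalg.solve(X.T @ X, X.T)
        residuals = y - H @ y
        return residuals / (1.0 - np.diag(H))
    ```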

  14. Reduction of treatment delivery variances with a computer-controlled treatment delivery system

    International Nuclear Information System (INIS)

    Fraass, B.A.; Lash, K.L.; Matrone, G.M.; Lichter, A.S.

    1997-01-01

    Purpose: To analyze treatment delivery variances for 3-D conformal therapy performed at various levels of treatment delivery automation, ranging from manual field setup to virtually complete computer-controlled treatment delivery using a computer-controlled conformal radiotherapy system. Materials and Methods: All external beam treatments performed in our department during six months of 1996 were analyzed to study treatment delivery variances versus treatment complexity. Treatments for 505 patients (40,641 individual treatment ports) on four treatment machines were studied. All treatment variances noted by treatment therapists or quality assurance reviews (39 in all) were analyzed. Machines 'M1' (Clinac 6/100) and 'M2' (Clinac 1800) were operated in a standard manual setup mode, with no record and verify (R/V) system. Machines 'M3' (Clinac 2100CD/MLC) and 'M4' (MM50 racetrack microtron system with MLC) treated patients under the control of a computer-controlled conformal radiotherapy system (CCRS) which 1) downloads the treatment delivery plan from the planning system, 2) performs some (or all) of the machine set-up and treatment delivery for each field, 3) monitors treatment delivery, 4) records all treatment parameters, and 5) notes exceptions to the electronically-prescribed plan. Complete external computer control is not available on M3, so it uses as many CCRS features as possible, while M4 operates completely under CCRS control and performs semi-automated and automated multi-segment intensity modulated treatments. Analysis of treatment complexity was based on numbers of fields, individual segments (ports), non-axial and non-coplanar plans, multi-segment intensity modulation, and pseudo-isocentric treatments (and other plans with computer-controlled table motions). Treatment delivery time was obtained from the computerized scheduling system (for manual treatments) or from CCRS system logs. Treatment therapists rotate among the machines, so this analysis

  15. Do Minimum Wages Fight Poverty?

    OpenAIRE

    David Neumark; William Wascher

    1997-01-01

    The primary goal of a national minimum wage floor is to raise the incomes of poor or near-poor families with members in the work force. However, estimates of the employment effects of minimum wages tell us little about whether minimum wages can achieve this goal; even if the disemployment effects of minimum wages are modest, minimum wage increases could result in net income losses for poor families. We present evidence on the effects of minimum wages on family incomes from matched March CPS s...

  16. RR-Interval variance of electrocardiogram for atrial fibrillation detection

    Science.gov (United States)

    Nuryani, N.; Solikhah, M.; Nugoho, A. S.; Afdala, A.; Anzihory, E.

    2016-11-01

    Atrial fibrillation is a serious heart problem originating in the upper chambers of the heart. The common indication of atrial fibrillation is irregularity of the R-peak-to-R-peak time interval, commonly called the RR interval. The irregularity can be represented using the variance, or spread, of RR intervals. This article presents a system to detect atrial fibrillation using variances. Using clinical data from patients with atrial fibrillation attacks, it is shown that the variances of electrocardiographic RR intervals are higher during atrial fibrillation than during normal rhythm. Utilizing a simple detection technique and the variances of RR intervals, we find good performance in atrial fibrillation detection.
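    A minimal sketch of the idea (flag windows whose RR-interval variance exceeds a threshold) is below; the window length and threshold are illustrative assumptions, not the values used in the study.

    ```python
    import numpy as np

    def detect_af(rr_intervals, window=20, threshold=0.01):
        """Flag atrial fibrillation when sliding-window RR variance is high.

        rr_intervals: RR intervals in seconds; returns one boolean per window.
        """
        rr = np.asarray(rr_intervals, dtype=float)
        n_windows = len(rr) - window + 1
        flags = np.empty(n_windows, dtype=bool)
        for i in range(n_windows):
            flags[i] = rr[i:i + window].var() > threshold
        return flags
    ```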

  17. Minimum airflow reset of single-duct VAV terminal boxes

    Science.gov (United States)

    Cho, Young-Hum

    Single duct Variable Air Volume (VAV) systems are currently the most widely used type of HVAC system in the United States. When installing such a system, it is critical to determine the minimum airflow set point of the terminal box, as an optimally selected set point will improve the level of thermal comfort and indoor air quality (IAQ) while at the same time lowering overall energy costs. In principle, this minimum rate should be calculated according to the minimum ventilation requirement based on ASHRAE Standard 62.1 and the maximum heating load of the zone. Several factors must be carefully considered when calculating this minimum rate. Terminal boxes with conventional control sequences may result in occupant discomfort and energy waste. If the minimum rate of airflow is set too high, the AHUs will consume excess fan power, and the terminal boxes may cause significant simultaneous room heating and cooling. At the same time, a rate that is too low will result in poor air circulation and indoor air quality in the air-conditioned space. Currently, many scholars are investigating how to change the algorithm of the advanced VAV terminal box controller without retrofitting. Some of these controllers have been found to effectively improve thermal comfort, indoor air quality, and energy efficiency. However, minimum airflow set points have not yet been identified, nor has controller performance been verified in confirmed studies. In this study, control algorithms were developed that automatically identify and reset terminal box minimum airflow set points, thereby improving indoor air quality and thermal comfort levels, and reducing the overall rate of energy consumption. A theoretical analysis of the optimal minimum airflow and discharge air temperature was performed to identify the potential energy benefits of resetting the terminal box minimum airflow set points. Applicable control algorithms for calculating the ideal values for the minimum airflow reset were developed and

  18. Continuous-Time Mean-Variance Portfolio Selection under the CEV Process

    OpenAIRE

    Ma, Hui-qiang

    2014-01-01

    We consider a continuous-time mean-variance portfolio selection model when stock price follows the constant elasticity of variance (CEV) process. The aim of this paper is to derive an optimal portfolio strategy and the efficient frontier. The mean-variance portfolio selection problem is formulated as a linearly constrained convex program problem. By employing the Lagrange multiplier method and stochastic optimal control theory, we obtain the optimal portfolio strategy and mean-variance effici...

  19. Variance based OFDM frame synchronization

    Directory of Open Access Journals (Sweden)

    Z. Fedra

    2012-04-01

    Full Text Available The paper deals with a new frame synchronization scheme for OFDM systems and calculates the complexity of this scheme. The scheme is based on computing the variance of the detection window. The variance is computed at two mutually delayed instants, so a modified early-late loop is used for frame position detection. The proposed algorithm handles different variants of OFDM parameters, including the guard interval and cyclic prefix, and is robust to the choice of its own parameters, which may be chosen within a wide range without strongly influencing system performance. The verification of the proposed algorithm's functionality has been performed in a development environment using universal software radio peripheral (USRP) hardware.
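    A minimal sketch of the scheme's general idea follows: compute the variance of the received samples in a sliding window, then track the difference between two mutually delayed variance values with an early-late rule. The window size, delay and input name are illustrative assumptions, not the paper's parameters.

    ```python
    import numpy as np

    def sliding_variance(x, window):
        """Variance of |x| over a sliding window (real or complex input)."""
        mag = np.abs(x)
        kernel = np.ones(window) / window
        mean1 = np.convolve(mag, kernel, mode="valid")
        mean2 = np.convolve(mag ** 2, kernel, mode="valid")
        return mean2 - mean1 ** 2

    def early_late_error(x, window, delay):
        """Early-late discriminator: variance at two mutually delayed points."""
        v = sliding_variance(x, window)
        return v[delay:] - v[:-delay]   # zero crossing marks the frame edge
    ```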

  20. Global-scale high-resolution (~1 km) modelling of mean, maximum and minimum annual streamflow

    Science.gov (United States)

    Barbarossa, Valerio; Huijbregts, Mark; Hendriks, Jan; Beusen, Arthur; Clavreul, Julie; King, Henry; Schipper, Aafke

    2017-04-01

    Quantifying mean, maximum and minimum annual flow (AF) of rivers at ungauged sites is essential for a number of applications, including assessments of global water supply, ecosystem integrity and water footprints. AF metrics can be quantified with spatially explicit process-based models, which might be overly time-consuming and data-intensive for this purpose, or with empirical regression models that predict AF metrics based on climate and catchment characteristics. Yet, so far, regression models have mostly been developed at a regional scale and the extent to which they can be extrapolated to other regions is not known. We developed global-scale regression models that quantify mean, maximum and minimum AF as a function of catchment area and catchment-averaged slope, elevation, and mean, maximum and minimum annual precipitation and air temperature. We then used these models to obtain global 30 arc-second (~1 km) maps of mean, maximum and minimum AF for each year from 1960 through 2015, based on a newly developed hydrologically conditioned digital elevation model. We calibrated our regression models based on observations of discharge and catchment characteristics from about 4,000 catchments worldwide, ranging from 10⁰ to 10⁶ km² in size, and validated them against independent measurements as well as the output of a number of process-based global hydrological models (GHMs). The variance explained by our regression models ranged up to 90% and the performance of the models compared well with the performance of existing GHMs. Yet, our AF maps provide a level of spatial detail that cannot yet be achieved by current GHMs.

  1. Means and Variances without Calculus

    Science.gov (United States)

    Kinney, John J.

    2005-01-01

    This article gives a method of finding discrete approximations to continuous probability density functions and shows examples of its use, allowing students without calculus access to the calculation of means and variances.

  2. COVAR: Computer Program for Multifactor Relative Risks and Tests of Hypotheses Using a Variance-Covariance Matrix from Linear and Log-Linear Regression

    Directory of Open Access Journals (Sweden)

    Leif E. Peterson

    1997-11-01

    Full Text Available A computer program for multifactor relative risks, confidence limits, and tests of hypotheses using regression coefficients and a variance-covariance matrix obtained from a previous additive or multiplicative regression analysis is described in detail. Data used by the program can be stored and input from an external disk-file or entered via the keyboard. The output contains a list of the input data, point estimates of single or joint effects, confidence intervals and tests of hypotheses based on a minimum modified chi-square statistic. Availability of the program is also discussed.
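    The point estimates and confidence limits the program reports follow the standard log-linear calculation: a relative risk for a linear combination of coefficients, with its standard error from the variance-covariance matrix. The sketch below is that textbook calculation, not COVAR's actual code; the function name and z value are assumptions.

    ```python
    import numpy as np

    def joint_relative_risk(beta, cov, c, z=1.96):
        """Relative risk and CI for a linear combination c of log-linear
        regression coefficients, using the variance-covariance matrix."""
        log_rr = c @ beta
        se = np.sqrt(c @ cov @ c)
        return np.exp(log_rr), np.exp(log_rr - z * se), np.exp(log_rr + z * se)
    ```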

  3. Solid phase stability of a double-minimum interaction potential system

    International Nuclear Information System (INIS)

    Suematsu, Ayumi; Yoshimori, Akira; Saiki, Masafumi; Matsui, Jun; Odagaki, Takashi

    2014-01-01

    We study the phase stability of a system with a double-minimum interaction potential over a wide range of parameters by a thermodynamic perturbation theory. The present double-minimum potential is the Lennard-Jones-Gauss potential, which has a Gaussian pocket as well as a standard Lennard-Jones minimum. As a function of the depth and position of the Gaussian pocket in the potential, we determine the coexistence pressure of the crystals (fcc and bcc). We show that the fcc phase crystallizes even at zero pressure when the position of the Gaussian pocket coincides with the first or third nearest neighbor site of the fcc crystal. The bcc crystal is more stable than the fcc crystal when the position of the Gaussian pocket coincides with the second nearest neighbor site of the bcc crystal. The stable crystal structure is determined by the position of the Gaussian pocket. These results show that we can control the stability of the solid phase by tuning the potential function.
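    The Lennard-Jones-Gauss potential is named but not written out in the record. A commonly used form is shown below; the parameter symbols are assumptions, since the paper's notation is not reproduced here.

    ```latex
    V(r) \;=\; \varepsilon \left[ \left( \frac{r_0}{r} \right)^{12}
    - 2 \left( \frac{r_0}{r} \right)^{6} \right]
    \;-\; \varepsilon_G \, \exp\!\left( - \frac{(r - r_G)^2}{2 \sigma^2} \right)
    ```

    Here ε_G and r_G set the depth and position of the Gaussian pocket, the two quantities varied in the study.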

  4. Beyond the Mean: Sensitivities of the Variance of Population Growth.

    Science.gov (United States)

    Trotter, Meredith V; Krishna-Kumar, Siddharth; Tuljapurkar, Shripad

    2013-03-01

    Populations in variable environments are described by both a mean growth rate and a variance of stochastic population growth. Increasing variance will increase the width of confidence bounds around estimates of population size, growth, and probability of and time to quasi-extinction. However, traditional sensitivity analyses of stochastic matrix models only consider the sensitivity of the mean growth rate. We derive an exact method for calculating the sensitivity of the variance in population growth to changes in demographic parameters. Sensitivities of the variance also allow a new sensitivity calculation for the cumulative probability of quasi-extinction. We apply this new analysis tool to an empirical dataset on at-risk polar bears to demonstrate its utility in conservation biology. We find that in many cases a change in life history parameters will increase both the mean and variance of population growth of polar bears. This counterintuitive behaviour of the variance complicates predictions about overall population impacts of management interventions. Sensitivity calculations for cumulative extinction risk factor in changes to both mean and variance, providing a highly useful quantitative tool for conservation management. The mean stochastic growth rate and its sensitivities do not fully describe the dynamics of population growth. The use of variance sensitivities gives a more complete understanding of population dynamics and facilitates the calculation of new sensitivities for extinction processes.

  5. The minimum information about a genome sequence (MIGS) specification

    Science.gov (United States)

    Field, Dawn; Garrity, George; Gray, Tanya; Morrison, Norman; Selengut, Jeremy; Sterk, Peter; Tatusova, Tatiana; Thomson, Nicholas; Allen, Michael J; Angiuoli, Samuel V; Ashburner, Michael; Axelrod, Nelson; Baldauf, Sandra; Ballard, Stuart; Boore, Jeffrey; Cochrane, Guy; Cole, James; Dawyndt, Peter; De Vos, Paul; dePamphilis, Claude; Edwards, Robert; Faruque, Nadeem; Feldman, Robert; Gilbert, Jack; Gilna, Paul; Glöckner, Frank Oliver; Goldstein, Philip; Guralnick, Robert; Haft, Dan; Hancock, David; Hermjakob, Henning; Hertz-Fowler, Christiane; Hugenholtz, Phil; Joint, Ian; Kagan, Leonid; Kane, Matthew; Kennedy, Jessie; Kowalchuk, George; Kottmann, Renzo; Kolker, Eugene; Kravitz, Saul; Kyrpides, Nikos; Leebens-Mack, Jim; Lewis, Suzanna E; Li, Kelvin; Lister, Allyson L; Lord, Phillip; Maltsev, Natalia; Markowitz, Victor; Martiny, Jennifer; Methe, Barbara; Mizrachi, Ilene; Moxon, Richard; Nelson, Karen; Parkhill, Julian; Proctor, Lita; White, Owen; Sansone, Susanna-Assunta; Spiers, Andrew; Stevens, Robert; Swift, Paul; Taylor, Chris; Tateno, Yoshio; Tett, Adrian; Turner, Sarah; Ussery, David; Vaughan, Bob; Ward, Naomi; Whetzel, Trish; Gil, Ingio San; Wilson, Gareth; Wipat, Anil

    2008-01-01

    With the quantity of genomic data increasing at an exponential rate, it is imperative that these data be captured electronically, in a standard format. Standardization activities must proceed within the auspices of open-access and international working bodies. To tackle the issues surrounding the development of better descriptions of genomic investigations, we have formed the Genomic Standards Consortium (GSC). Here, we introduce the minimum information about a genome sequence (MIGS) specification with the intent of promoting participation in its development and discussing the resources that will be required to develop improved mechanisms of metadata capture and exchange. As part of its wider goals, the GSC also supports improving the ‘transparency’ of the information contained in existing genomic databases. PMID:18464787

  6. Evaluation of Mean and Variance Integrals without Integration

    Science.gov (United States)

    Joarder, A. H.; Omar, M. H.

    2007-01-01

    The mean and variance of some continuous distributions, in particular the exponentially decreasing probability distribution and the normal distribution, are considered. Since these involve integration by parts, many students do not feel comfortable with them. In this note, a technique is demonstrated for deriving mean and variance through differential…

  7. Standardization of a broth microdilution susceptibility testing method to determine minimum inhibitory concentrations of aquatic bacteria

    DEFF Research Database (Denmark)

    Miller, R.A.; Walker, R.D.; Carson, J.

    2005-01-01

    (ampicillin, enrofloxacin, erythromycin, florfenicol, flumequine, gentamicin, ormetoprim/sulfadimethoxine, oxolinic acid, oxytetracycline and trimethoprim/sulfamethoxazole). Minimum inhibitory concentration (MIC) QC ranges were determined using dry- and frozen-form 96-well plates and cation-adjusted Mueller...

  8. Why We Should Establish a National System of Standards.

    Science.gov (United States)

    Hennen, Thomas J., Jr.

    2000-01-01

    Explains the need to establish a national system of standards for public libraries. Discusses local standards, state standards, and international standards, and suggests adopting a tiered approach including three levels: minimum standards; target standards; and benchmarking standards, as found in total quality management. (LRW)

  9. Approximate zero-variance Monte Carlo estimation of Markovian unreliability

    International Nuclear Information System (INIS)

    Delcoux, J.L.; Labeau, P.E.; Devooght, J.

    1997-01-01

    Monte Carlo simulation has become an important tool for the estimation of reliability characteristics, since conventional numerical methods are no longer efficient as the size of the system to solve increases. However, evaluating by simulation the probability of occurrence of very rare events means playing a very large number of histories of the system, which leads to unacceptable computation times. Acceleration and variance reduction techniques have to be worked out. We show in this paper how to write the equations of Markovian reliability as a transport problem, and how the well known zero-variance scheme can be adapted to this application. But such a method is always specific to the estimation of one quantity, while a Monte Carlo simulation allows one to perform estimations of diverse quantities simultaneously. Therefore, the estimation of one of them could be made more accurate while degrading at the same time the variance of other estimations. We propound here a method to reduce simultaneously the variance for several quantities, by using probability laws that would lead to zero variance in the estimation of a mean of these quantities. Just like the zero-variance one, the method we propound is impossible to perform exactly. However, we show that simple approximations of it may be very efficient. (author)

  10. Estimating the Standard Error of the Judging in a modified-Angoff Standards Setting Procedure

    Directory of Open Access Journals (Sweden)

    Robert G. MacCann

    2004-03-01

    Full Text Available For a modified Angoff standards setting procedure, two methods of calculating the standard error of the judging were compared. The Central Limit Theorem (CLT) method is easy to calculate and uses readily available data. It estimates the variance of mean cut scores as a function of the variance of cut scores within a judging group, based on the independent judgements at Stage 1 of the process. Its theoretical drawback is that it is unable to take account of the effects of collaboration among the judges at Stages 2 and 3. The second method, an application of equipercentile (EQP) equating, relies on the selection of very large stable candidatures and the standardisation of the raw score distributions to remove effects associated with test difficulty. The standard error estimates were then empirically obtained from the mean cut score variation observed over a five year period. For practical purposes, the two methods gave reasonable agreement, with the CLT method working well for the top band, the band that attracts most public attention. For some bands in English and Mathematics, the CLT standard error was smaller than the EQP estimate, suggesting the CLT method be used with caution as an approximate guide only.
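    The CLT method described above reduces to the ordinary standard error of a mean over the panel's independent Stage 1 judgements; a minimal sketch, with invented cut scores, is:

    ```python
    import numpy as np

    def clt_standard_error(cut_scores):
        """Standard error of the mean cut score from independent judgements."""
        cut_scores = np.asarray(cut_scores, dtype=float)
        return cut_scores.std(ddof=1) / np.sqrt(len(cut_scores))

    # Invented Stage 1 judgements from a panel of eight judges:
    print(clt_standard_error([62, 58, 65, 60, 59, 63, 61, 64]))
    ```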

  11. Continuous-Time Mean-Variance Portfolio Selection under the CEV Process

    Directory of Open Access Journals (Sweden)

    Hui-qiang Ma

    2014-01-01

    Full Text Available We consider a continuous-time mean-variance portfolio selection model when stock price follows the constant elasticity of variance (CEV process. The aim of this paper is to derive an optimal portfolio strategy and the efficient frontier. The mean-variance portfolio selection problem is formulated as a linearly constrained convex program problem. By employing the Lagrange multiplier method and stochastic optimal control theory, we obtain the optimal portfolio strategy and mean-variance efficient frontier analytically. The results show that the mean-variance efficient frontier is still a parabola in the mean-variance plane, and the optimal strategies depend not only on the total wealth but also on the stock price. Moreover, some numerical examples are given to analyze the sensitivity of the efficient frontier with respect to the elasticity parameter and to illustrate the results presented in this paper. The numerical results show that the price of risk decreases as the elasticity coefficient increases.

  12. 75 FR 55961 - Standard Instrument Approach Procedures, and Takeoff Minimums and Obstacle Departure Procedures...

    Science.gov (United States)

    2010-09-15

    .../1/10 LOC RWY 8, AMDT 5A. 21-Oct-10 AZ FORT HUACHUCA/ SIERRA VISTA MUNI- 0/5486 8/30/10 TAKEOFF MINIMUMS AND OBSTACLE DP, AMDT 2. SIERRA VISTA. LIBBY AAF. 21-Oct-10 LA LAKE CHARLES..... LAKE CHARLES RGNL... LOC/DME RWY 14, AMDT 8. 21-Oct-10 LA SLIDELL SLIDELL 0/7403 8/30/10 NDB RWY 36, ORIG-D. 21-Oct-10 FL...

  13. Variance in binary stellar population synthesis

    Science.gov (United States)

    Breivik, Katelyn; Larson, Shane L.

    2016-03-01

    In the years preceding LISA, Milky Way compact binary population simulations can be used to inform the science capabilities of the mission. Galactic population simulation efforts generally focus on high fidelity models that require extensive computational power to produce a single simulated population for each model. Each simulated population represents an incomplete sample of the functions governing compact binary evolution, thus introducing variance from one simulation to another. We present a rapid Monte Carlo population simulation technique that can simulate thousands of populations in less than a week, thus allowing a full exploration of the variance associated with a binary stellar evolution model.

  14. 25 CFR 256.22 - How can I be sure that the work that is being done on my dwelling meets minimum construction...

    Science.gov (United States)

    2010-04-01

    ... 25 Indians 1 2010-04-01 2010-04-01 false How can I be sure that the work that is being done on my dwelling meets minimum construction standards? 256.22 Section 256.22 Indians BUREAU OF INDIAN AFFAIRS... is being done on my dwelling meets minimum construction standards? (a) At various stages of...

  15. A Mean variance analysis of arbitrage portfolios

    Science.gov (United States)

    Fang, Shuhong

    2007-03-01

    Based on a careful analysis of the definition of an arbitrage portfolio and its return, the author presents a mean-variance analysis of the return of arbitrage portfolios, which implies that Korkie and Turtle's results (B. Korkie, H.J. Turtle, A mean-variance analysis of self-financing portfolios, Manage. Sci. 48 (2002) 427-443) are misleading. A practical example is given to show the difference between the arbitrage portfolio frontier and the usual portfolio frontier.

  16. Mean-Variance Optimization in Markov Decision Processes

    OpenAIRE

    Mannor, Shie; Tsitsiklis, John N.

    2011-01-01

    We consider finite horizon Markov decision processes under performance measures that involve both the mean and the variance of the cumulative reward. We show that either randomized or history-based policies can improve performance. We prove that the complexity of computing a policy that maximizes the mean reward under a variance constraint is NP-hard for some cases, and strongly NP-hard for others. We finally offer pseudo-polynomial exact and approximation algorithms.

  17. Capturing Option Anomalies with a Variance-Dependent Pricing Kernel

    DEFF Research Database (Denmark)

    Christoffersen, Peter; Heston, Steven; Jacobs, Kris

    2013-01-01

    We develop a GARCH option model with a new pricing kernel allowing for a variance premium. While the pricing kernel is monotonic in the stock return and in variance, its projection onto the stock return is nonmonotonic. A negative variance premium makes it U shaped. We present new semiparametric evidence to confirm this U-shaped relationship between the risk-neutral and physical probability densities. The new pricing kernel substantially improves our ability to reconcile the time-series properties of stock returns with the cross-section of option prices. It provides a unified explanation for the implied volatility puzzle, the overreaction of long-term options to changes in short-term variance, and the fat tails of the risk-neutral return distribution relative to the physical distribution.

  18. Gender Variance and Educational Psychology: Implications for Practice

    Science.gov (United States)

    Yavuz, Carrie

    2016-01-01

    Gender variance appears to be increasingly visible in both the media and everyday life, yet within the educational psychology literature it remains underrepresented. The positioning of educational psychologists working across the three levels of child and family, school or establishment, and education authority/council, means that they are…

  19. The minimum sit-to-stand height test: reliability, responsiveness and relationship to leg muscle strength.

    Science.gov (United States)

    Schurr, Karl; Sherrington, Catherine; Wallbank, Geraldine; Pamphlett, Patricia; Olivetti, Lynette

    2012-07-01

    To determine the reliability of the minimum sit-to-stand height test, its responsiveness and its relationship to leg muscle strength among rehabilitation unit inpatients and outpatients. Reliability study using two measurers and two test occasions. Secondary analysis of data from two clinical trials. Inpatient and outpatient rehabilitation services in three public hospitals. Eighteen hospital patients and five others participated in the reliability study. Seventy-two rehabilitation unit inpatients and 80 outpatients participated in the clinical trials. The minimum sit-to-stand height test was assessed using a standard procedure. For the reliability study, a second tester repeated the minimum sit-to-stand height test on the same day. In the inpatient clinical trial the measures were repeated two weeks later. In the outpatient trial the measures were repeated five weeks later. Knee extensor muscle strength was assessed in the clinical trials using a hand-held dynamometer. The reliability for the minimum sit-to-stand height test was excellent (intraclass correlation coefficient (ICC) 0.91, 95% confidence interval (CI) 0.81-0.96). The standard error of measurement was 34 mm. Responsiveness was moderate in the inpatient trial (effect size: 0.53) but small in the outpatient trial (effect size: 0.16). A small proportion (8-17%) of variability in minimum sit-to-stand height test was explained by knee extensor muscle strength. The minimum sit-to-stand height test has excellent reliability and moderate responsiveness in an inpatient rehabilitation setting. Responsiveness in an outpatient rehabilitation setting requires further investigation. Performance is influenced by factors other than knee extensor muscle strength.

  20. Variance-in-Mean Effects of the Long Forward-Rate Slope

    DEFF Research Database (Denmark)

    Christiansen, Charlotte

    2005-01-01

    This paper contains an empirical analysis of the dependence of the long forward-rate slope on the long-rate variance. The long forward-rate slope and the long rate are described by a bivariate GARCH-in-mean model. In accordance with theory, a negative long-rate variance-in-mean effect for the long...... forward-rate slope is documented. Thus, the greater the long-rate variance, the steeper the long forward-rate curve slopes downward (the long forward-rate slope is negative). The variance-in-mean effect is both statistically and economically significant....

  1. Removing an intersubject variance component in a general linear model improves multiway factoring of event-related spectral perturbations in group EEG studies.

    Science.gov (United States)

    Spence, Jeffrey S; Brier, Matthew R; Hart, John; Ferree, Thomas C

    2013-03-01

    Linear statistical models are used very effectively to assess task-related differences in EEG power spectral analyses. Mixed models, in particular, accommodate more than one variance component in a multisubject study, where many trials of each condition of interest are measured on each subject. Generally, intra- and intersubject variances are both important to determine correct standard errors for inference on functions of model parameters, but it is often assumed that intersubject variance is the most important consideration in a group study. In this article, we show that, under common assumptions, estimates of some functions of model parameters, including estimates of task-related differences, are properly tested relative to the intrasubject variance component only. A substantial gain in statistical power can arise from the proper separation of variance components when there is more than one source of variability. We first develop this result analytically, then show how it benefits a multiway factoring of spectral, spatial, and temporal components from EEG data acquired in a group of healthy subjects performing a well-studied response inhibition task. Copyright © 2011 Wiley Periodicals, Inc.
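
    A minimal sketch of the variance-component separation described here, using a random-intercept mixed model (statsmodels' MixedLM; the data frame, column names and effect sizes are all hypothetical, and this is not the authors' EEG pipeline):

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(2)

        # Simulated group study: many trials per subject, two task conditions.
        n_subj, n_trial = 20, 50
        subj = np.repeat(np.arange(n_subj), 2 * n_trial)
        cond = np.tile(np.repeat([0, 1], n_trial), n_subj)
        u = rng.normal(0.0, 1.0, n_subj)                    # intersubject variability
        power = 10 + 0.5 * cond + u[subj] + rng.normal(0.0, 2.0, len(subj))

        df = pd.DataFrame({"power": power, "cond": cond, "subj": subj})

        # A random intercept per subject separates inter- from intrasubject
        # variance; the within-subject task effect is then tested against the
        # intrasubject component only, as the abstract argues.
        fit = smf.mixedlm("power ~ cond", df, groups=df["subj"]).fit()
        print(fit.summary())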

  2. The variance of length of stay and the optimal DRG outlier payments.

    Science.gov (United States)

    Felder, Stefan

    2009-09-01

    Prospective payment schemes in health care often include supply-side insurance for cost outliers. In hospital reimbursement, prospective payments for patient discharges, based on their classification into diagnosis related groups (DRGs), are complemented by outlier payments for long stay patients. The outlier scheme fixes the length of stay (LOS) threshold, constraining the profit risk of the hospitals. In most DRG systems, this threshold increases with the standard deviation of the LOS distribution. The present paper addresses the adequacy of this DRG outlier threshold rule for risk-averse hospitals with preferences depending on the expected value and the variance of profits. It first shows that the optimal threshold solves the hospital's tradeoff between higher profit risk and lower premium loading payments. It then demonstrates, for normally distributed truncated LOS, that the optimal outlier threshold indeed decreases with an increase in the standard deviation.

  3. 78 FR 59810 - Standard Instrument Approach Procedures, and Takeoff Minimums and Obstacle Departure Procedures...

    Science.gov (United States)

    2013-09-30

    ... Minimums and (Obstacle) DP, Amdt 5 10/17/13 FL Orlando Orlando Intl........ 3/7948 9/10/13 ILS RWY 17R (CAT II), Amdt 5B 10/17/13 FL Orlando Orlando Intl........ 3/7952 9/10/13 ILS RWY 17L (CAT II), Amdt 1B 10/17/13 FL Orlando Orlando Intl........ 3/7956 9/10/13 RNAV (GPS) RWY 17L, Orig-A 10/17/13 FL Orlando...

  4. Variance-based sensitivity indices for models with dependent inputs

    International Nuclear Information System (INIS)

    Mara, Thierry A.; Tarantola, Stefano

    2012-01-01

    Computational models are intensively used in engineering for risk analysis and the prediction of future outcomes, and uncertainty and sensitivity analyses are of great help for these purposes. Although several methods exist to perform variance-based sensitivity analysis of model output with independent inputs, only a few have been proposed in the literature for the case of dependent inputs. This is explained by the fact that the theoretical framework for the independent case is settled and a univocal set of variance-based sensitivity indices is defined. In the present work, we propose a set of variance-based sensitivity indices to perform sensitivity analysis of models with dependent inputs. These measures allow us to distinguish between the mutual dependent contribution and the independent contribution of an input to the model response variance. Their definition relies on a specific orthogonalisation of the inputs and on ANOVA representations of the model output. In the applications, we show the usefulness of the new sensitivity indices in a model-simplification setting.
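
    For the independent-input baseline the abstract refers to, first-order Sobol' indices can be estimated by the classical pick-and-freeze scheme. The sketch below uses the Ishigami test function and is only a baseline illustration; it does not implement the authors' dependent-input indices:

        import numpy as np

        rng = np.random.default_rng(3)

        def model(x):
            # Ishigami test function (a standard sensitivity-analysis benchmark).
            return (np.sin(x[:, 0]) + 7 * np.sin(x[:, 1])**2
                    + 0.1 * x[:, 2]**4 * np.sin(x[:, 0]))

        n, d = 100_000, 3
        A = rng.uniform(-np.pi, np.pi, (n, d))
        B = rng.uniform(-np.pi, np.pi, (n, d))
        yA, yB = model(A), model(B)
        var = yA.var()

        # First-order index S_i: replace column i of B with column i of A
        # ("pick-and-freeze") and correlate the outputs.
        for i in range(d):
            ABi = B.copy()
            ABi[:, i] = A[:, i]
            S_i = np.mean(yA * (model(ABi) - yB)) / var
            print(f"S_{i + 1} ~ {S_i:.3f}")      # roughly 0.31, 0.44, 0.00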

  5. Simultaneous Monte Carlo zero-variance estimates of several correlated means

    International Nuclear Information System (INIS)

    Booth, T.E.

    1997-08-01

    Zero-variance procedures have been in existence since the dawn of Monte Carlo. Previous works all treat the problem of zero-variance solutions for a single tally. One often wants to get low-variance solutions to more than one tally. When the sets of random walks needed for two tallies are similar, it is more efficient to do zero-variance biasing for both tallies in the same Monte Carlo run, instead of in two separate runs. The theory presented here correlates the random walks of particles by the similarity of their tallies. Particles with dissimilar tallies rapidly become uncorrelated, whereas particles with similar tallies stay correlated through most of their random walk. The theory herein should allow practitioners to make efficient use of zero-variance biasing procedures in practical problems.

  6. Variance swap payoffs, risk premia and extreme market conditions

    DEFF Research Database (Denmark)

    Rombouts, Jeroen V.K.; Stentoft, Lars; Violante, Francesco

    This paper estimates the Variance Risk Premium (VRP) directly from synthetic variance swap payoffs. Since variance swap payoffs are highly volatile, we extract the VRP by using signal extraction techniques based on a state-space representation of our model in combination with a simple economic ... The latter variables and the VRP generate different return predictability on the major US indices. A factor model is proposed to extract a market VRP, which turns out to be priced when considering Fama and French portfolios.

  7. [The influence of "hygienic minimum" course on quality of catering establishments].

    Science.gov (United States)

    Venus, Miroslav; Petrovcić, Darija

    2010-01-01

    The aim of this article was to assess the quality of catering establishments in Virovitica-Podravina county before and after the "hygienic minimum" course. The research was carried out through interviews and the assessment of microbiological swabs in the same catering establishments before and after the course. All procedures were performed according to the Regulations on standard specifications for microbiological cleanness and the methods of determining it. Twenty-five catering establishments from a group of restaurants and bars were analyzed. In all of them we found improvement in most of the examined parameters, so implementation of the course within the existing program has proven to be justified.

  8. Estimating quadratic variation using realized variance

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Shephard, N.

    2002-01-01

    ... with a rather general SV model - which is a special case of the semimartingale model. Then QV is integrated variance and we can derive the asymptotic distribution of the RV and its rate of convergence. These results do not require us to specify a model for either the drift or volatility functions, although we have to impose some weak regularity assumptions. We illustrate the use of the limit theory on some exchange rate data and some stock data. We show that even with large values of M the RV is sometimes a quite noisy estimator of integrated variance. Copyright © 2002 John Wiley & Sons, Ltd.
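
    A small numerical sketch of the estimator discussed here, on a simulated volatility path rather than the paper's data (the deterministic volatility function below is an assumption made for brevity):

        import numpy as np

        rng = np.random.default_rng(4)

        steps, M = 7_800, 78                 # fine simulation grid; M intraday returns
        t = np.linspace(0.0, 1.0, steps)
        sigma = 0.2 * (1 + 0.5 * np.sin(2 * np.pi * t))   # illustrative vol path
        dX = sigma * rng.normal(0.0, np.sqrt(1.0 / steps), steps)
        X = np.concatenate([[0.0], np.cumsum(dX)])

        iv = np.sum(sigma**2) / steps        # integrated variance (quadrature)
        r = np.diff(X[::steps // M])         # M equally spaced intraday returns
        rv = np.sum(r**2)                    # realized variance

        print("integrated variance:", round(iv, 6))
        print("realized variance  :", round(rv, 6))   # noisy even for sizeable M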

  9. Dynamics of Variance Risk Premia, Investors' Sentiment and Return Predictability

    DEFF Research Database (Denmark)

    Rombouts, Jerome V.K.; Stentoft, Lars; Violante, Francesco

    We develop a joint framework linking the physical variance and its risk-neutral expectation, implying variance risk premia that are persistent, appropriately reacting to changes in the level and variability of the variance, and naturally satisfying the sign constraint. Using option market data and real ... events and only marginally by the premium associated with normal price fluctuations.

  10. Rising above the Minimum Wage.

    Science.gov (United States)

    Even, William; Macpherson, David

    An in-depth analysis was made of how quickly most people move up the wage scale from minimum wage, what factors influence their progress, and how minimum wage increases affect wage growth above the minimum. Very few workers remain at the minimum wage over the long run, according to this study of data drawn from the 1977-78 May Current Population…

  11. Use experiences of MCNP in nuclear energy study. 2. Review of variance reduction techniques

    Energy Technology Data Exchange (ETDEWEB)

    Sakurai, Kiyoshi; Yamamoto, Toshihiro [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; eds.

    1998-03-01

    'MCNP Use Experience' Working Group was established in 1996 under the Special Committee on Nuclear Code Evaluation. This year's main activity of the working group has been focused on the review of variance reduction techniques of Monte Carlo calculations. This working group dealt with the variance reduction techniques of (1) neutron and gamma ray transport calculation of fusion reactor system, (2) concept design of nuclear transmutation system using accelerator, (3) JMTR core calculation, (4) calculation of prompt neutron decay constant, (5) neutron and gamma ray transport calculation for exposure evaluation, (6) neutron and gamma ray transport calculation of shielding system, etc. Furthermore, this working group started an activity to compile a 'Guideline of Monte Carlo Calculation', which will become a standard in the future. The appendices of this report include this 'Guideline', the use experience of MCNP 4B and examples of Monte Carlo calculations of high energy charged particles. The 11 papers are indexed individually. (J.P.N.)

  12. Use experiences of MCNP in nuclear energy study. 2. Review of variance reduction techniques

    International Nuclear Information System (INIS)

    Sakurai, Kiyoshi; Yamamoto, Toshihiro

    1998-03-01

    'MCNP Use Experience' Working Group was established in 1996 under the Special Committee on Nuclear Code Evaluation. This year's main activity of the working group has been focused on the review of variance reduction techniques of Monte Carlo calculations. This working group dealt with the variance reduction techniques of (1) neutron and gamma ray transport calculation of fusion reactor system, (2) concept design of nuclear transmutation system using accelerator, (3) JMTR core calculation, (4) calculation of prompt neutron decay constant, (5) neutron and gamma ray transport calculation for exposure evaluation, (6) neutron and gamma ray transport calculation of shielding system, etc. Furthermore, this working group started an activity to compile a 'Guideline of Monte Carlo Calculation', which will become a standard in the future. The appendices of this report include this 'Guideline', the use experience of MCNP 4B and examples of Monte Carlo calculations of high energy charged particles. The 11 papers are indexed individually. (J.P.N.)

  13. Estimating High-Frequency Based (Co-) Variances: A Unified Approach

    DEFF Research Database (Denmark)

    Voev, Valeri; Nolte, Ingmar

    We propose a unified framework for estimating integrated variances and covariances based on simple OLS regressions, allowing for a general market microstructure noise specification. We show that our estimators can outperform, in terms of the root mean squared error criterion, the most recent and commonly applied estimators, such as the realized kernels of Barndorff-Nielsen, Hansen, Lunde & Shephard (2006), the two-scales realized variance of Zhang, Mykland & Aït-Sahalia (2005), the Hayashi & Yoshida (2005) covariance estimator, and the realized variance and covariance with the optimal sampling ...

  14. The Genealogical Consequences of Fecundity Variance Polymorphism

    Science.gov (United States)

    Taylor, Jesse E.

    2009-01-01

    The genealogical consequences of within-generation fecundity variance polymorphism are studied using coalescent processes structured by genetic backgrounds. I show that these processes have three distinctive features. The first is that the coalescent rates within backgrounds are not jointly proportional to the infinitesimal variance, but instead depend only on the frequencies and traits of genotypes containing each allele. Second, the coalescent processes at unlinked loci are correlated with the genealogy at the selected locus; i.e., fecundity variance polymorphism has a genomewide impact on genealogies. Third, in diploid models, there are infinitely many combinations of fecundity distributions that have the same diffusion approximation but distinct coalescent processes; i.e., in this class of models, ancestral processes and allele frequency dynamics are not in one-to-one correspondence. Similar properties are expected to hold in models that allow for heritable variation in other traits that affect the coalescent effective population size, such as sex ratio or fecundity and survival schedules. PMID:19433628

  15. Genetic and environmental variances of bone microarchitecture and bone remodeling markers: a twin study.

    Science.gov (United States)

    Bjørnerem, Åshild; Bui, Minh; Wang, Xiaofang; Ghasem-Zadeh, Ali; Hopper, John L; Zebaze, Roger; Seeman, Ego

    2015-03-01

    All genetic and environmental factors contributing to differences in bone structure between individuals mediate their effects through the final common cellular pathway of bone modeling and remodeling. We hypothesized that genetic factors account for most of the population variance of cortical and trabecular microstructure, in particular intracortical porosity and medullary size - void volumes (porosity), which establish the internal bone surface areas or interfaces upon which modeling and remodeling deposit or remove bone to configure bone microarchitecture. Microarchitecture of the distal tibia and distal radius and remodeling markers were measured for 95 monozygotic (MZ) and 66 dizygotic (DZ) white female twin pairs aged 40 to 61 years. Images obtained using high-resolution peripheral quantitative computed tomography were analyzed using StrAx1.0, a nonthreshold-based software that quantifies cortical matrix and porosity. Genetic and environmental components of variance were estimated under the assumptions of the classic twin model. The data were consistent with the proportion of variance accounted for by genetic factors being: 72% to 81% (standard errors ∼18%) for the distal tibial total, cortical, and medullary cross-sectional area (CSA); 67% and 61% for total cortical porosity, before and after adjusting for total CSA, respectively; 51% for trabecular volumetric bone mineral density (vBMD; all p ...). ... accounted for 47% to 68% of the variance (all p ≤ 0.001). Cross-twin cross-trait correlations between tibial cortical porosity and medullary CSA were higher for MZ (rMZ = 0.49) than DZ (rDZ = 0.27) pairs before (p = 0.024), but not after (p = 0.258), adjusting for total CSA. For the remodeling markers, the data were consistent with genetic factors accounting for 55% to 62% of the variance. We infer that middle-aged women differ in their bone microarchitecture and remodeling markers more because of differences in their genetic factors than their environmental factors.

  16. Employment effects of minimum wages

    OpenAIRE

    Neumark, David

    2014-01-01

    The potential benefits of higher minimum wages come from the higher wages for affected workers, some of whom are in low-income families. The potential downside is that a higher minimum wage may discourage employers from using the low-wage, low-skill workers that minimum wages are intended to help. Research findings are not unanimous, but evidence from many countries suggests that minimum wages reduce the jobs available to low-skill workers.

  17. On Mean-Variance Analysis

    OpenAIRE

    Li, Yang; Pirvu, Traian A

    2011-01-01

    This paper considers the mean-variance portfolio management problem. We examine portfolios which contain both primary and derivative securities. The challenge in this context stems from the portfolio's nonlinearities, which the delta-gamma approximation is employed to overcome; the optimization problem is thus reduced to a well-posed quadratic program. The methodology developed in this paper can also be applied to pricing and hedging in incomplete markets.

  18. Variance heterogeneity in Saccharomyces cerevisiae expression data: trans-regulation and epistasis.

    Science.gov (United States)

    Nelson, Ronald M; Pettersson, Mats E; Li, Xidan; Carlborg, Örjan

    2013-01-01

    Here, we describe the results from the first variance heterogeneity Genome Wide Association Study (VGWAS) on yeast expression data. Using this forward genetics approach, we show that the genetic regulation of gene-expression in the budding yeast, Saccharomyces cerevisiae, includes mechanisms that can lead to variance heterogeneity in the expression between genotypes. Additionally, we performed a mean effect association study (GWAS). Comparing the mean and variance heterogeneity analyses, we find that the mean expression level is under genetic regulation from a larger absolute number of loci but that a higher proportion of the variance controlling loci were trans-regulated. Both mean and variance regulating loci cluster in regulatory hotspots that affect a large number of phenotypes; a single variance-controlling locus, mapping close to DIA2, was found to be involved in more than 10% of the significant associations. It has been suggested in the literature that variance-heterogeneity between the genotypes might be due to genetic interactions. We therefore screened the multi-locus genotype-phenotype maps for several traits where multiple associations were found, for indications of epistasis. Several examples of two and three locus genetic interactions were found to involve variance-controlling loci, with reports from the literature corroborating the functional connections between the loci. By using a new analytical approach to re-analyze a powerful existing dataset, we are thus able to both provide novel insights to the genetic mechanisms involved in the regulation of gene-expression in budding yeast and experimentally validate epistasis as an important mechanism underlying genetic variance-heterogeneity between genotypes.
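
    One common statistic for such variance-heterogeneity screens is the Brown-Forsythe (median-centered Levene) test; the sketch below contrasts it with a mean-effect test at a single hypothetical locus (simulated data, not necessarily the authors' exact VGWAS statistic):

        import numpy as np
        from scipy.stats import f_oneway, levene

        rng = np.random.default_rng(5)

        # One hypothetical biallelic locus: genotype 1 inflates the variance only.
        geno = rng.integers(0, 2, 500)
        expr = rng.normal(0.0, 1.0 + 0.8 * geno)

        groups = [expr[geno == g] for g in (0, 1)]
        print("mean test (GWAS-like)  p =", f_oneway(*groups).pvalue)
        print("variance test (VGWAS)  p =", levene(*groups, center="median").pvalue)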

  19. A comparison of approximation techniques for variance-based sensitivity analysis of biochemical reaction systems

    Directory of Open Access Journals (Sweden)

    Goutsias John

    2010-05-01

    Full Text Available Background: Sensitivity analysis is an indispensable tool for the analysis of complex systems. In a recent paper, we have introduced a thermodynamically consistent variance-based sensitivity analysis approach for studying the robustness and fragility properties of biochemical reaction systems under uncertainty in the standard chemical potentials of the activated complexes of the reactions and the standard chemical potentials of the molecular species. In that approach, key sensitivity indices were estimated by Monte Carlo sampling, which is computationally very demanding and impractical for large biochemical reaction systems. Computationally efficient algorithms are needed to make variance-based sensitivity analysis applicable to realistic cellular networks, modeled by biochemical reaction systems that consist of a large number of reactions and molecular species. Results: We present four techniques, derivative approximation (DA), polynomial approximation (PA), Gauss-Hermite integration (GHI), and orthonormal Hermite approximation (OHA), for analytically approximating the variance-based sensitivity indices associated with a biochemical reaction system. By using a well-known model of the mitogen-activated protein kinase signaling cascade as a case study, we numerically compare the approximation quality of these techniques against traditional Monte Carlo sampling. Our results indicate that, although DA is computationally the most attractive technique, special care should be exercised when using it for sensitivity analysis, since it may only be accurate at low levels of uncertainty. On the other hand, PA, GHI, and OHA are computationally more demanding than DA but can work well at high levels of uncertainty. GHI results in a slightly better accuracy than PA, but it is more difficult to implement. OHA produces the most accurate approximation results and can be implemented in a straightforward manner. It turns out that the computational cost of the ...

  20. Genetic Variance in Homophobia: Evidence from Self- and Peer Reports.

    Science.gov (United States)

    Zapko-Willmes, Alexandra; Kandler, Christian

    2018-01-01

    The present twin study combined self- and peer assessments of twins' general homophobia targeting gay men in order to replicate previous behavior genetic findings across different rater perspectives and to disentangle self-rater-specific variance from common variance in self- and peer-reported homophobia (i.e., rater-consistent variance). We hypothesized rater-consistent variance in homophobia to be attributable to genetic and nonshared environmental effects, and self-rater-specific variance to be partially accounted for by genetic influences. A sample of 869 twins and 1329 peer raters completed a seven item scale containing cognitive, affective, and discriminatory homophobic tendencies. After correction for age and sex differences, we found most of the genetic contributions (62%) and significant nonshared environmental contributions (16%) to individual differences in self-reports on homophobia to be also reflected in peer-reported homophobia. A significant genetic component, however, was self-report-specific (38%), suggesting that self-assessments alone produce inflated heritability estimates to some degree. Different explanations are discussed.

  1. Predicting minimum uncertainties in the inversion of ocean color geophysical parameters based on Cramer-Rao bounds.

    Science.gov (United States)

    Jay, Sylvain; Guillaume, Mireille; Chami, Malik; Minghelli, Audrey; Deville, Yannick; Lafrance, Bruno; Serfaty, Véronique

    2018-01-22

    We present an analytical approach based on Cramer-Rao Bounds (CRBs) to investigate the uncertainties in estimated ocean color parameters resulting from the propagation of uncertainties in the bio-optical reflectance modeling through the inversion process. Based on given bio-optical and noise probabilistic models, CRBs can be computed efficiently for any set of ocean color parameters and any sensor configuration, directly providing the minimum estimation variance that can possibly be attained by any unbiased estimator of any targeted parameter. Here, CRBs are explicitly developed using (1) two water reflectance models, corresponding to deep and shallow waters respectively, and (2) four probabilistic models describing the environmental noises observed within four Sentinel-2 MSI, HICO, Sentinel-3 OLCI and MODIS images, respectively. For both deep and shallow waters, CRBs are shown to be consistent with the experimental estimation variances obtained using two published remote-sensing methods, while not requiring one to perform any inversion. CRBs are also used to investigate to what extent perfect a priori knowledge of one or several geophysical parameters can improve the estimation of the remaining unknown parameters. For example, using pre-existing knowledge of bathymetry (e.g., derived from LiDAR) within the inversion is shown to greatly improve the retrieval of bottom cover for shallow waters. Finally, CRBs are shown to provide valuable information on the best estimation performances that may be achieved with the MSI, HICO, OLCI and MODIS configurations for a variety of oceanic, coastal and inland waters. CRBs are thus demonstrated to be an informative and efficient tool to characterize minimum uncertainties in inverted ocean color geophysical parameters.
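
    The core idea, that the CRB is the minimum variance attainable by any unbiased estimator, can be checked on a toy problem (Gaussian mean estimation; nothing below involves the paper's bio-optical models):

        import numpy as np

        rng = np.random.default_rng(6)

        # Estimate mu from n observations y = mu + noise, noise ~ N(0, sigma^2).
        # The Fisher information is n / sigma^2, so the CRB equals sigma^2 / n.
        mu, sigma, n = 3.0, 0.5, 25
        crb = sigma**2 / n

        # Empirical variance of the sample-mean estimator over many repetitions:
        est = rng.normal(mu, sigma, size=(20_000, n)).mean(axis=1)
        print("CRB                :", crb)
        print("empirical variance :", est.var(ddof=1))   # the sample mean attains the bound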

  2. Directly measuring mean and variance of infinite-spectrum observables such as the photon orbital angular momentum.

    Science.gov (United States)

    Piccirillo, Bruno; Slussarenko, Sergei; Marrucci, Lorenzo; Santamato, Enrico

    2015-10-19

    The standard method for experimentally determining the probability distribution of an observable in quantum mechanics is the measurement of the observable spectrum. However, for infinite-dimensional degrees of freedom, this approach would require ideally infinite or, more realistically, a very large number of measurements. Here we consider an alternative method which can yield the mean and variance of an observable of an infinite-dimensional system by measuring only a two-dimensional pointer weakly coupled with the system. In our demonstrative implementation, we determine both the mean and the variance of the orbital angular momentum of a light beam without acquiring the entire spectrum, but measuring the Stokes parameters of the optical polarization (acting as pointer), after the beam has suffered a suitable spin-orbit weak interaction. This example can provide a paradigm for a new class of useful weak quantum measurements.

  3. Decomposition of Variance for Spatial Cox Processes.

    Science.gov (United States)

    Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus

    2013-03-01

    Spatial Cox point processes is a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models with additive or log linear random intensity functions. We moreover consider a new and flexible class of pair correlation function models given in terms of normal variance mixture covariance functions. The proposed methodology is applied to point pattern data sets of locations of tropical rain forest trees.

  4. Searching for concentric low variance circles in the cosmic microwave background

    Energy Technology Data Exchange (ETDEWEB)

    DeAbreu, Adam [Department of Physics, Simon Fraser University, Burnaby, BC, V5A 1S6 Canada (Canada); Contreras, Dagoberto; Scott, Douglas, E-mail: adeabreu@sfu.ca, E-mail: dagocont@phas.ubc.ca, E-mail: dscott@phas.ubc.ca [Department of Physics and Astronomy, University of British Columbia, Vancouver, BC, V6T 1Z1 Canada (Canada)

    2015-12-01

    In a recent paper, Gurzadyan and Penrose claim to have found directions in the sky around which there are multiple concentric sets of annuli with anomalously low variance in the cosmic microwave background (CMB). These features are presented as evidence for a particular theory of the pre-Big Bang Universe. We are able to reproduce the analysis these authors presented for data from the WMAP satellite and we confirm the existence of these apparently special directions in the newer Planck data. However, we also find that these features are present at the same level of abundance in simulated Gaussian CMB skies, i.e., they are entirely consistent with the predictions of the standard cosmological model.

  5. Core Outcome Set-STAndards for Development: The COS-STAD recommendations.

    Directory of Open Access Journals (Sweden)

    Jamie J Kirkham

    2017-11-01

    Full Text Available The use of core outcome sets (COS) ensures that researchers measure and report those outcomes that are most likely to be relevant to users of their research. Several hundred COS projects have been systematically identified to date, but there has been no formal quality assessment of these studies. The Core Outcome Set-STAndards for Development (COS-STAD) project aimed to identify minimum standards for the design of a COS study agreed upon by an international group, while other specific guidance exists for the final reporting of COS development studies (Core Outcome Set-STAndards for Reporting [COS-STAR]). An international group of experienced COS developers, methodologists, journal editors, potential users of COS (clinical trialists, systematic reviewers, and clinical guideline developers), and patient representatives produced the COS-STAD recommendations to help improve the quality of COS development and support the assessment of whether a COS had been developed using a reasonable approach. An open survey of experts generated an initial list of items, which was refined by a 2-round Delphi survey involving nearly 250 participants representing key stakeholder groups. Participants assigned importance ratings for each item using a 1-9 scale. Consensus that an item should be included in the set of minimum standards was defined as at least 70% of the voting participants from each stakeholder group providing a score between 7 and 9. The Delphi survey was followed by a consensus discussion with the study management group representing multiple stakeholder groups. COS-STAD contains 11 minimum standards that are the minimum design recommendations for all COS development projects. The recommendations focus on 3 key domains: the scope, the stakeholders, and the consensus process. The COS-STAD project has established 11 minimum standards to be followed by COS developers when planning their projects and by users when deciding whether a COS has been developed using ...

  6. Core Outcome Set-STAndards for Development: The COS-STAD recommendations.

    Science.gov (United States)

    Kirkham, Jamie J; Davis, Katherine; Altman, Douglas G; Blazeby, Jane M; Clarke, Mike; Tunis, Sean; Williamson, Paula R

    2017-11-01

    The use of core outcome sets (COS) ensures that researchers measure and report those outcomes that are most likely to be relevant to users of their research. Several hundred COS projects have been systematically identified to date, but there has been no formal quality assessment of these studies. The Core Outcome Set-STAndards for Development (COS-STAD) project aimed to identify minimum standards for the design of a COS study agreed upon by an international group, while other specific guidance exists for the final reporting of COS development studies (Core Outcome Set-STAndards for Reporting [COS-STAR]). An international group of experienced COS developers, methodologists, journal editors, potential users of COS (clinical trialists, systematic reviewers, and clinical guideline developers), and patient representatives produced the COS-STAD recommendations to help improve the quality of COS development and support the assessment of whether a COS had been developed using a reasonable approach. An open survey of experts generated an initial list of items, which was refined by a 2-round Delphi survey involving nearly 250 participants representing key stakeholder groups. Participants assigned importance ratings for each item using a 1-9 scale. Consensus that an item should be included in the set of minimum standards was defined as at least 70% of the voting participants from each stakeholder group providing a score between 7 and 9. The Delphi survey was followed by a consensus discussion with the study management group representing multiple stakeholder groups. COS-STAD contains 11 minimum standards that are the minimum design recommendations for all COS development projects. The recommendations focus on 3 key domains: the scope, the stakeholders, and the consensus process. The COS-STAD project has established 11 minimum standards to be followed by COS developers when planning their projects and by users when deciding whether a COS has been developed using reasonable

  7. Grammatical and lexical variance in English

    CERN Document Server

    Quirk, Randolph

    2014-01-01

    Written by one of Britain's most distinguished linguists, this book is concerned with the phenomenon of variance in English grammar and vocabulary across regional, social, stylistic and temporal space.

  8. Hidden temporal order unveiled in stock market volatility variance

    Directory of Open Access Journals (Sweden)

    Y. Shapira

    2011-06-01

    Full Text Available When analyzed by standard statistical methods, the time series of the daily returns of financial indices appear to behave as Markov random series with no apparent temporal order or memory. This empirical result seems counterintuitive, since investors are influenced by both short- and long-term past market behavior. Consequently, much effort has been devoted to unveiling hidden temporal order in market dynamics. Here we show that temporal order is hidden in the series of the variance of the stocks' volatility. First we show that the correlation between the variances and the means of segments of these time series is very large, and thus cannot be the output of a random series unless it contains some temporal order. Next we show that the temporal order does not show up in the series of daily returns, but rather in the variation of the corresponding volatility series. More specifically, we found that the behavior of the shuffled time series is equivalent to that of a random time series, while the original time series shows large deviations from the expected random behavior, which are the result of temporal structure. We found the same generic behavior in 10 different stock markets from 7 different countries. We also present an analysis of specially constructed sequences in order to better understand the origin of the observed temporal order in the market sequences. Each sequence was constructed from segments with an equal number of elements taken from algebraic distributions with three different slopes.

  9. Variance-based Salt Body Reconstruction

    KAUST Repository

    Ovcharenko, Oleg

    2017-05-26

    Seismic inversions of salt bodies are challenging when updating velocity models based on Born approximation-inspired gradient methods. We propose a variance-based method for velocity model reconstruction in regions complicated by massive salt bodies. The novel idea lies in retrieving useful information from simultaneous updates corresponding to different single frequencies. Instead of the commonly used averaging of single-iteration monofrequency gradients, our algorithm iteratively reconstructs salt bodies in an outer loop based on updates from a set of multiple frequencies after a few iterations of full-waveform inversion. The variance among these updates is used to identify areas where considerable cycle-skipping occurs. In such areas, we update velocities by interpolating maximum velocities within a certain region. The result of several recursive interpolations is later used as a new starting model to improve the results of conventional full-waveform inversion. An application to part of the BP 2004 model highlights the evolution of the proposed approach and demonstrates its effectiveness.

  10. Discussion on the Standardization of Shielding Materials — Sensitivity Analysis of Material Compositions

    Directory of Open Access Journals (Sweden)

    Ogata Tomohiro

    2017-01-01

    Full Text Available An overview of standardization activities for shielding materials is given. We propose a basic approach for standardizing the material compositions used in radiation shielding design for nuclear and accelerator facilities. We have collected concrete composition data from actual concrete samples to derive a representative composition and its variance. A sensitivity analysis of the composition variance was then performed through a simple 1-D dose calculation. Recent findings from the analysis are summarized.

  11. Host nutrition alters the variance in parasite transmission potential.

    Science.gov (United States)

    Vale, Pedro F; Choisy, Marc; Little, Tom J

    2013-04-23

    The environmental conditions experienced by hosts are known to affect their mean parasite transmission potential. How different conditions may affect the variance of transmission potential has received less attention, but is an important question for disease management, especially if specific ecological contexts are more likely to foster a few extremely infectious hosts. Using the obligate-killing bacterium Pasteuria ramosa and its crustacean host Daphnia magna, we analysed how host nutrition affected the variance of individual parasite loads, and, therefore, transmission potential. Under low food, individual parasite loads showed similar mean and variance, following a Poisson distribution. By contrast, among well-nourished hosts, parasite loads were right-skewed and overdispersed, following a negative binomial distribution. Abundant food may, therefore, yield individuals causing potentially more transmission than the population average. Measuring both the mean and variance of individual parasite loads in controlled experimental infections may offer a useful way of revealing risk factors for potential highly infectious hosts.
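
    The distributional contrast reported here is easy to reproduce with the dispersion index (variance-to-mean ratio): roughly 1 for a Poisson load and well above 1 for a negative binomial one. The counts below are invented for illustration:

        import numpy as np

        rng = np.random.default_rng(7)

        # Hypothetical spore counts per host under two feeding regimes.
        low_food = rng.poisson(lam=20, size=200)                    # mean ~ variance
        high_food = rng.negative_binomial(n=2, p=2 / 42, size=200)  # mean 40, overdispersed

        for name, counts in [("low food", low_food), ("high food", high_food)]:
            mean, var = counts.mean(), counts.var(ddof=1)
            print(f"{name}: mean={mean:.1f}, variance={var:.1f}, "
                  f"dispersion index={var / mean:.2f}")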

  12. Minimum Wage and Maximum Hours Standards Under the Fair Labor Standards Act. Economic Effects Studies.

    Science.gov (United States)

    Wage and Labor Standards Administration (DOL), Washington, DC.

    This report describes the 1966 amendments to the Fair Labor Standards Act and summarizes the findings of three 1969 studies of the economic effects of these amendments. The studies found that economic growth continued through the third phase of the amendments, beginning February 1, 1969, despite increased wage and hours restrictions for recently…

  13. Exploring variance in residential electricity consumption: Household features and building properties

    International Nuclear Information System (INIS)

    Bartusch, Cajsa; Odlare, Monica; Wallin, Fredrik; Wester, Lars

    2012-01-01

    Improved means of controlling electricity consumption play an important part in boosting energy efficiency in the Swedish power market. Developing policy instruments to that end requires more in-depth statistics on electricity use in the residential sector, among other things. The aim of the study has accordingly been to assess the extent of variance in annual electricity consumption in single-family homes as well as to estimate the impact of household features and building properties in this respect, using independent samples t-tests and one-way as well as univariate independent samples analyses of variance. Statistically significant variances associated with geographic area, heating system, number of family members, family composition, year of construction, electric water heater and electric underfloor heating have been established. The overall result of the analyses is nevertheless that variance in residential electricity consumption cannot be fully explained by independent variables related to household and building characteristics alone; household behavior has a profound impact on individual electricity use. As for the methodological approach, the results further suggest that methods for statistical analysis of variance are of considerable value in identifying key indicators for policy update and development.
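
    A minimal sketch of the kind of one-way analysis of variance used in the study (the consumption figures and group structure are invented):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(8)

        # Hypothetical annual electricity use (kWh) of single-family homes,
        # grouped by heating system.
        heat_pump = rng.normal(9_000, 2_000, 60)
        direct_electric = rng.normal(14_000, 2_500, 60)
        district_heating = rng.normal(6_000, 1_500, 60)

        f, p = stats.f_oneway(heat_pump, direct_electric, district_heating)
        print(f"one-way ANOVA: F = {f:.1f}, p = {p:.2g}")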

  14. 75 FR 6151 - Minimum Capital

    Science.gov (United States)

    2010-02-08

    ... capital and reserve requirements to be issued by order or regulation with respect to a product or activity... minimum capital requirements. Section 1362(a) establishes a minimum capital level for the Enterprises... entities required under this section.\\6\\ \\3\\ The Bank Act's current minimum capital requirements apply to...

  15. A Pareto-Improving Minimum Wage

    OpenAIRE

    Eliav Danziger; Leif Danziger

    2014-01-01

    This paper shows that a graduated minimum wage, in contrast to a constant minimum wage, can provide a strict Pareto improvement over what can be achieved with an optimal income tax. The reason is that a graduated minimum wage requires high-productivity workers to work more to earn the same income as low-productivity workers, which makes it more difficult for the former to mimic the latter. In effect, a graduated minimum wage allows the low-productivity workers to benefit from second-degree pr...

  16. Determining the global minimum of Higgs potentials via Groebner bases - applied to the NMSSM

    International Nuclear Information System (INIS)

    Maniatis, M.; Manteuffel, A. von; Nachtmann, O.

    2007-01-01

    Determining the global minimum of Higgs potentials with several Higgs fields like the next-to-minimal supersymmetric extension of the standard model (NMSSM) is a non-trivial task already at the tree level. The global minimum of a Higgs potential can be found from the set of all its stationary points defined by a multivariate polynomial system of equations. We introduce here the algebraic Groebner basis approach to solve this system of equations. We apply the method to the NMSSM with CP-conserving as well as CP-violating parameters. The results reveal an interesting stationary-point structure of the potential. Requiring the global minimum to give the electroweak symmetry breaking observed in Nature excludes large parts of the parameter space. (orig.)
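
    The Groebner-basis strategy can be demonstrated on a toy two-field potential with SymPy (the potential below is illustrative only and has nothing to do with the actual NMSSM parameters):

        import sympy as sp

        x, y = sp.symbols("x y")

        # Toy two-field potential standing in for a multi-Higgs potential:
        V = x**4 + y**4 - 2 * x**2 - y**2 + sp.Rational(1, 2) * x * y

        # All stationary points solve the polynomial system dV/dx = dV/dy = 0.
        grad = [sp.diff(V, v) for v in (x, y)]
        G = sp.groebner(grad, x, y, order="lex")    # triangularizes the system
        sols = sp.solve(list(G.exprs), [x, y], dict=True)

        real = [s for s in sols if all(v.is_real for v in s.values())]
        best = min(real, key=lambda s: V.subs(s).evalf())
        print(f"{len(sols)} stationary points, {len(real)} real")
        print("global minimum at", {k: v.evalf(4) for k, v in best.items()},
              "with V =", V.subs(best).evalf(4))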

  17. Determining the global minimum of Higgs potentials via Groebner bases - applied to the NMSSM

    Energy Technology Data Exchange (ETDEWEB)

    Maniatis, M.; Manteuffel, A. von; Nachtmann, O. [Institut fuer Theoretische Physik, Heidelberg (Germany)

    2007-03-15

    Determining the global minimum of Higgs potentials with several Higgs fields like the next-to-minimal supersymmetric extension of the standard model (NMSSM) is a non-trivial task already at the tree level. The global minimum of a Higgs potential can be found from the set of all its stationary points defined by a multivariate polynomial system of equations. We introduce here the algebraic Groebner basis approach to solve this system of equations. We apply the method to the NMSSM with CP-conserving as well as CP-violating parameters. The results reveal an interesting stationary-point structure of the potential. Requiring the global minimum to give the electroweak symmetry breaking observed in Nature excludes large parts of the parameter space. (orig.)

  18. Capturing option anomalies with a variance-dependent pricing kernel

    NARCIS (Netherlands)

    Christoffersen, P.; Heston, S.; Jacobs, K.

    2013-01-01

    We develop a GARCH option model with a variance premium by combining the Heston-Nandi (2000) dynamic with a new pricing kernel that nests Rubinstein (1976) and Brennan (1979). While the pricing kernel is monotonic in the stock return and in variance, its projection onto the stock return is

  19. 29 CFR 1904.38 - Variances from the recordkeeping rule.

    Science.gov (United States)

    2010-07-01

    ..., DEPARTMENT OF LABOR RECORDING AND REPORTING OCCUPATIONAL INJURIES AND ILLNESSES Other OSHA Injury and Illness... he or she finds appropriate. (iv) If the Assistant Secretary grants your variance petition, OSHA will... Secretary is reviewing your variance petition. (4) If I have already been cited by OSHA for not following...

  20. Minimum Variance Beamforming for High Frame-Rate Ultrasound Imaging

    DEFF Research Database (Denmark)

    Holfort, Iben Kraglund; Gran, Fredrik; Jensen, Jørgen Arendt

    2007-01-01

    ... a 7 MHz, 128-element, phased array transducer with lambda/2 spacing was used. Data is obtained using a single element as the transmitting aperture and all 128 elements as the receiving aperture. A full SA sequence consisting of 128 emissions was simulated by gliding the active transmitting element ... weights for each frequency sub-band. As opposed to the conventional Delay and Sum (DS) beamformer, this approach is dependent on the specific data. The performance of the proposed MV beamformer is tested on simulated synthetic aperture (SA) ultrasound data, obtained using Field II. For the simulations ... across the array. Data for 13 point targets and a circular cyst with a radius of 5 mm were simulated. The performance of the MV beamformer is compared to DS using boxcar weights and Hanning weights, and is quantified by the Full Width at Half Maximum (FWHM) and the peak side-lobe level (PSL). Single ...
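
    For reference, the narrowband minimum variance (Capon) weights have the closed form w = R^{-1}a / (a^H R^{-1}a); the sketch below applies them with diagonal loading on a tiny array (a generic narrowband example, not the authors' per-sub-band broadband beamformer):

        import numpy as np

        def mv_weights(R, a, eps=1e-3):
            # Minimum variance weights with diagonal loading for robustness.
            M = len(a)
            Rl = R + eps * np.trace(R).real / M * np.eye(M)
            Ria = np.linalg.solve(Rl, a)
            return Ria / (a.conj() @ Ria)

        # 8-element array, half-wavelength spacing assumed: broadside target
        # plus an interferer at 30 degrees.
        M = 8
        n = np.arange(M)
        a = np.exp(1j * np.pi * n * np.sin(0.0))
        i = np.exp(1j * np.pi * n * np.sin(np.deg2rad(30.0)))
        R = np.outer(i, i.conj()) + 0.1 * np.eye(M)    # interference + noise

        w = mv_weights(R, a)
        print("gain toward target     :", abs(w.conj() @ a))   # unity by construction
        print("gain toward interferer :", abs(w.conj() @ i))   # strongly suppressed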

  1. minimum variance estimation of yield parameters of rubber tree

    African Journals Online (AJOL)

    2013-03-01

    It is our opinion that the Kalman filter is a robust estimator of the ... Keywords: Kalman filter, parameter estimation, rubber clones, Chow failure test, autocorrelation, STAMP, data ...
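
    Although the record is fragmentary, the underlying tool is clear: the Kalman filter is the minimum-variance linear unbiased state estimator. A minimal local-level example follows (all numbers invented, unrelated to the rubber-yield data):

        import numpy as np

        rng = np.random.default_rng(9)

        # Local-level model: x_t = x_{t-1} + w_t,  y_t = x_t + v_t.
        q, r, T = 0.05, 1.0, 200
        x = 50.0 + np.cumsum(rng.normal(0.0, np.sqrt(q), T))   # latent level
        y = x + rng.normal(0.0, np.sqrt(r), T)                 # noisy observations

        m, P = y[0], 1.0
        est = [m]
        for obs in y[1:]:
            P += q                       # predict
            K = P / (P + r)              # Kalman gain
            m += K * (obs - m)           # update
            P *= 1 - K
            est.append(m)

        print("RMSE raw observations:", np.sqrt(np.mean((y - x)**2)))
        print("RMSE Kalman estimates:", np.sqrt(np.mean((np.array(est) - x)**2)))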

  2. Decomposition of variance in terms of conditional means

    Directory of Open Access Journals (Sweden)

    Alessandro Figà Talamanca

    2013-05-01

    Full Text Available Two different sets of data are used to test an apparently new approach to the analysis of the variance of a numerical variable which depends on qualitative variables. We suggest that this approach be used to complement other existing techniques to study the interdependence of the variables involved. According to our method, the variance is expressed as a sum of orthogonal components, obtained as differences of conditional means with respect to the qualitative characters. The resulting expression for the variance depends on the ordering in which the characters are considered. We suggest an algorithm which leads to an ordering which is deemed natural. The first set of data concerns the score achieved by a population of students on an entrance examination based on a multiple-choice test with 30 questions. In this case the qualitative characters are dyadic and correspond to correct or incorrect answers to each question. The second set of data concerns the delay in obtaining the degree for a population of graduates of Italian universities. The variance in this case is analyzed with respect to a set of seven specific qualitative characters of the population studied (gender, previous education, working condition, parents' educational level, field of study, etc.).
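
    One step of the decomposition rests on the law of total variance, Var(Y) = Var(E[Y|X]) + E[Var(Y|X)]; the sketch below carries it out with a groupby on invented graduation-delay data (not the authors' data sets):

        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(10)

        n = 5_000
        df = pd.DataFrame({"working": rng.choice(["yes", "no"], n)})
        df["delay"] = (rng.normal(1.0, 0.8, n)
                       + 0.6 * (df["working"] == "yes"))

        g = df.groupby("working")["delay"]
        sizes = g.size()
        between = np.average((g.mean() - df["delay"].mean())**2, weights=sizes)
        within = np.average(g.var(ddof=0), weights=sizes)

        print("total   :", round(df["delay"].var(ddof=0), 4))
        print("between :", round(between, 4))   # Var(E[Y | working])
        print("within  :", round(within, 4))    # E[Var(Y | working)]

    Conditioning on further characters splits the within term again, and the resulting sum depends on the ordering of the characters, as the abstract notes.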

  3. 25 CFR 36.41 - Standard XIV-Textbooks.

    Science.gov (United States)

    2010-04-01

    ... 25 Indians 1 2010-04-01 2010-04-01 false Standard XIV-Textbooks. 36.41 Section 36.41 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR EDUCATION MINIMUM ACADEMIC STANDARDS FOR THE BASIC EDUCATION OF INDIAN CHILDREN AND NATIONAL CRITERIA FOR DORMITORY SITUATIONS Instructional Support § 36.41 Standard XIV—Textbooks. (a) Each school shal...

  4. 34 CFR 366.63 - What evidence must a center present to demonstrate that it is in minimum compliance with the...

    Science.gov (United States)

    2010-07-01

    ... compliance with the evaluation standards? (a) Compliance indicator 1—Philosophy—(1) Consumer control. (i) The... it is in minimum compliance with the evaluation standards? 366.63 Section 366.63 Education... REHABILITATIVE SERVICES, DEPARTMENT OF EDUCATION CENTERS FOR INDEPENDENT LIVING Evaluation Standards and...

  5. 42 CFR 456.522 - Content of request for variance.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Content of request for variance. 456.522 Section 456.522 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN... perform UR within the time requirements for which the variance is requested and its good faith efforts to...

  6. On the Endogeneity of the Mean-Variance Efficient Frontier.

    Science.gov (United States)

    Somerville, R. A.; O'Connell, Paul G. J.

    2002-01-01

    Explains that the endogeneity of the efficient frontier in the mean-variance model of portfolio selection is commonly obscured in portfolio selection literature and in widely used textbooks. Demonstrates endogeneity and discusses the impact of parameter changes on the mean-variance efficient frontier and on the beta coefficients of individual…

  7. Genetic control of residual variance of yearling weight in Nellore beef cattle.

    Science.gov (United States)

    Iung, L H S; Neves, H H R; Mulder, H A; Carvalheiro, R

    2017-04-01

    There is evidence for genetic variability in residual variance of livestock traits, which offers the potential for selection for increased uniformity of production. Different statistical approaches have been employed to study this topic; however, little is known about the concordance between them. The aim of our study was to investigate the genetic heterogeneity of residual variance on yearling weight (YW; 291.15 ± 46.67) in a Nellore beef cattle population; to compare the results of the statistical approaches, the two-step approach and the double hierarchical generalized linear model (DHGLM); and to evaluate the effectiveness of power transformation to accommodate scale differences. The comparison was based on genetic parameters, accuracy of EBV for residual variance, and cross-validation to assess predictive performance of both approaches. A total of 194,628 yearling weight records from 625 sires were used in the analysis. The results supported the hypothesis of genetic heterogeneity of residual variance on YW in Nellore beef cattle and the opportunity of selection, measured through the genetic coefficient of variation of residual variance (0.10 to 0.12 for the two-step approach and 0.17 for DHGLM, using an untransformed data set). However, low estimates of genetic variance associated with positive genetic correlations between mean and residual variance (about 0.20 for two-step and 0.76 for DHGLM for an untransformed data set) limit the genetic response to selection for uniformity of production while simultaneously increasing YW itself. Moreover, large sire families are needed to obtain accurate estimates of genetic merit for residual variance, as indicated by the low heritability estimates ... Box-Cox transformation was able to decrease the dependence of the variance on the mean and decreased the estimates of genetic parameters for residual variance. The transformation reduced but did not eliminate all the genetic heterogeneity of residual variance, highlighting ...

  8. A versatile omnibus test for detecting mean and variance heterogeneity.

    Science.gov (United States)

    Cao, Ying; Wei, Peng; Bailey, Matthew; Kauwe, John S K; Maxwell, Taylor J

    2014-01-01

Recent research has revealed loci that display variance heterogeneity through various means such as biological disruption, linkage disequilibrium (LD), gene-by-gene (G × G), or gene-by-environment interaction. We propose a versatile likelihood ratio test that allows joint testing for mean and variance heterogeneity (LRT(MV)) or either effect alone (LRT(M) or LRT(V)) in the presence of covariates. Using extensive simulations for our method and others, we found that all parametric tests were sensitive to nonnormality regardless of any trait transformations. Coupling our test with the parametric bootstrap solves this issue. Using simulations and empirical data from a known mean-only functional variant, we demonstrate how LD can produce variance-heterogeneity loci (vQTL) in a predictable fashion based on differential allele frequencies, high D', and relatively low r² values. We propose that a joint test for mean and variance heterogeneity is more powerful than a variance-only test for detecting vQTL. This takes advantage of loci that also have mean effects without sacrificing much power to detect variance-only effects. We discuss using vQTL as an approach to detect G × G interactions, how vQTL are related to relationship loci, and how both can create prior hypotheses for each other and reveal the relationships between traits and possibly between components of a composite trait.
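
    A minimal sketch of the joint likelihood-ratio idea, assuming a single marker splitting subjects into two groups and normal errors; the data, group split, and effect sizes are invented for illustration, and this is not the authors' exact LRT(MV) implementation:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Illustrative data: a binary genotype group with a small shift in both
    # mean and variance for carriers (g = 1).
    g = rng.integers(0, 2, size=500)
    y = rng.normal(loc=0.2 * g, scale=1.0 + 0.3 * g)

    def loglik(x):
        # Gaussian log likelihood evaluated at the MLEs (mean, ML std. dev.)
        return stats.norm.logpdf(x, loc=x.mean(), scale=x.std()).sum()

    ll_null = loglik(y)                             # common mean and variance
    ll_alt = loglik(y[g == 0]) + loglik(y[g == 1])  # group-specific mean and variance

    lrt_mv = 2.0 * (ll_alt - ll_null)
    p_value = stats.chi2.sf(lrt_mv, df=2)  # 2 extra parameters; asymptotic reference
    print(f"LRT_MV = {lrt_mv:.2f}, p = {p_value:.3g}")
    ```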

  9. Global Variance Risk Premium and Forex Return Predictability

    OpenAIRE

    Aloosh, Arash

    2014-01-01

    In a long-run risk model with stochastic volatility and frictionless markets, I express expected forex returns as a function of consumption growth variances and stock variance risk premiums (VRPs)—the difference between the risk-neutral and statistical expectations of market return variation. This provides a motivation for using the forward-looking information available in stock market volatility indices to predict forex returns. Empirically, I find that stock VRPs predict forex returns at a ...

  10. Data converters for wireless standards

    CERN Document Server

    Shi, Chunlei

    2002-01-01

Wireless communication is witnessing tremendous growth with the proliferation of different standards covering wide, local and personal area networks (WAN, LAN and PAN). The trends call for designs that allow 1) smooth migration to future generations of wireless standards with higher data rates for multimedia applications, 2) convergence of wireless services allowing access to different standards from the same wireless device, and 3) inter-continental roaming. This requires designs that work across multiple wireless standards, can easily be reused, and achieve maximum hardware sharing at minimum power consumption, particularly for mobile battery-operated devices.

  11. Variance components for body weight in Japanese quails (Coturnix japonica)

    Directory of Open Access Journals (Sweden)

    RO Resende

    2005-03-01

Full Text Available The objective of this study was to estimate the variance components for body weight in Japanese quails by Bayesian procedures. The body weight at hatch (BWH) and at 7 (BW07), 14 (BW14), 21 (BW21) and 28 days of age (BW28) of 3,520 quails was recorded from August 2001 to June 2002. A multiple-trait animal model with additive genetic, maternal environment and residual effects was implemented by Gibbs sampling methodology. A single Gibbs sampling chain with 80,000 rounds was generated by the program MTGSAM (Multiple Trait Gibbs Sampling in Animal Model). Normal and inverted Wishart distributions were used as prior distributions for the random effects and the variance components, respectively. Variance components were estimated based on the 500 samples that were left after elimination of 30,000 rounds in the burn-in period and 100 rounds of each thinning interval. The posterior means of additive genetic variance components were 0.15; 4.18; 14.62; 27.18 and 32.68; the posterior means of maternal environment variance components were 0.23; 1.29; 2.76; 4.12 and 5.16; and the posterior means of residual variance components were 0.084; 6.43; 22.66; 31.21 and 30.85, at hatch, 7, 14, 21 and 28 days old, respectively. The posterior means of heritability were 0.33; 0.35; 0.36; 0.43 and 0.47 at hatch, 7, 14, 21 and 28 days old, respectively. These results indicate that heritability increased with age. On the other hand, after hatch there was a marked reduction in the maternal environment variance proportion of the phenotypic variance, whose estimates were 0.50; 0.11; 0.07; 0.07 and 0.08 for BWH, BW07, BW14, BW21 and BW28, respectively. The genetic correlation between weights at different ages was high, except for those estimates between BWH and weight at other ages. Changes in body weight of quails can be efficiently achieved by selection.
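
    The reported heritabilities and maternal-environment proportions follow directly from the posterior means of the variance components quoted above; a short arithmetic check (values copied from the record, heritability computed as the additive share of phenotypic variance):

    ```python
    ages = ["BWH", "BW07", "BW14", "BW21", "BW28"]
    va = [0.15, 4.18, 14.62, 27.18, 32.68]   # posterior mean additive genetic variances
    vm = [0.23, 1.29, 2.76, 4.12, 5.16]      # posterior mean maternal environment variances
    ve = [0.084, 6.43, 22.66, 31.21, 30.85]  # posterior mean residual variances

    for age, a, m, e in zip(ages, va, vm, ve):
        vp = a + m + e                        # phenotypic variance
        print(f"{age}: h2 = {a / vp:.2f}, maternal proportion = {m / vp:.2f}")
    # Reproduces the reported 0.33 ... 0.47 heritabilities and 0.50 ... 0.08 maternal shares.
    ```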

  12. 29 CFR 1920.2 - Variances.

    Science.gov (United States)

    2010-07-01

    ...) PROCEDURE FOR VARIATIONS FROM SAFETY AND HEALTH REGULATIONS UNDER THE LONGSHOREMEN'S AND HARBOR WORKERS...) or 6(d) of the Williams-Steiger Occupational Safety and Health Act of 1970 (29 U.S.C. 655). The... under the Williams-Steiger Occupational Safety and Health Act of 1970, and any variance from §§ 1910.13...

  13. Zero-intelligence realized variance estimation

    NARCIS (Netherlands)

    Gatheral, J.; Oomen, R.C.A.

    2010-01-01

    Given a time series of intra-day tick-by-tick price data, how can realized variance be estimated? The obvious estimator—the sum of squared returns between trades—is biased by microstructure effects such as bid-ask bounce and so in the past, practitioners were advised to drop most of the data and
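
    A small simulation of the bias the record describes, assuming iid bid-ask bounce noise; all magnitudes are invented. The tick-level estimator is inflated by the noise, while the classical "drop most of the data" fix (sparse sampling) largely removes it:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 23400                                    # one tick per second over a 6.5 h session
    efficient = np.cumsum(rng.normal(0.0, 0.02 / np.sqrt(n), n))  # daily vol ~ 2%
    noise = 1e-4 * rng.choice([-1.0, 1.0], n)                     # bid-ask bounce
    logprice = efficient + noise

    def realized_variance(p, step=1):
        r = np.diff(p[::step])
        return np.sum(r ** 2)

    print("RV every tick :", realized_variance(logprice))           # biased upward by noise
    print("RV every 5 min:", realized_variance(logprice, step=300)) # classic sparse fix
    print("true IV       :", 0.02 ** 2)
    ```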

  14. Genetic Gain Increases by Applying the Usefulness Criterion with Improved Variance Prediction in Selection of Crosses.

    Science.gov (United States)

    Lehermeier, Christina; Teyssèdre, Simon; Schön, Chris-Carolin

    2017-12-01

    A crucial step in plant breeding is the selection and combination of parents to form new crosses. Genome-based prediction guides the selection of high-performing parental lines in many crop breeding programs which ensures a high mean performance of progeny. To warrant maximum selection progress, a new cross should also provide a large progeny variance. The usefulness concept as measure of the gain that can be obtained from a specific cross accounts for variation in progeny variance. Here, it is shown that genetic gain can be considerably increased when crosses are selected based on their genomic usefulness criterion compared to selection based on mean genomic estimated breeding values. An efficient and improved method to predict the genetic variance of a cross based on Markov chain Monte Carlo samples of marker effects from a whole-genome regression model is suggested. In simulations representing selection procedures in crop breeding programs, the performance of this novel approach is compared with existing methods, like selection based on mean genomic estimated breeding values and optimal haploid values. In all cases, higher genetic gain was obtained compared with previously suggested methods. When 1% of progenies per cross were selected, the genetic gain based on the estimated usefulness criterion increased by 0.14 genetic standard deviation compared to a selection based on mean genomic estimated breeding values. Analytical derivations of the progeny genotypic variance-covariance matrix based on parental genotypes and genetic map information make simulations of progeny dispensable, and allow fast implementation in large-scale breeding programs. Copyright © 2017 by the Genetics Society of America.
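
    A toy numeric illustration of the usefulness criterion UC = mu + i * sigma that the record builds on; the selection intensity, cross means and progeny standard deviations below are invented, and this ignores the MCMC-based variance prediction the paper actually proposes:

    ```python
    # A cross with a lower mean but a larger progeny variance can be preferred
    # once selection intensity within the cross is accounted for.
    i_sel = 2.67  # approx. selection intensity when keeping the top 1% of progeny

    crosses = {
        # cross: (predicted progeny mean, predicted progeny genetic std. dev.)
        "A x B": (100.0, 4.0),
        "C x D": (102.0, 1.0),
    }
    for name, (mu, sigma) in crosses.items():
        print(f"{name}: mean = {mu:.1f}, UC = {mu + i_sel * sigma:.1f}")
    # A x B wins on UC (110.7 vs 104.7) despite its lower mean.
    ```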

  15. Poisson pre-processing of nonstationary photonic signals: Signals with equality between mean and variance.

    Science.gov (United States)

    Poplová, Michaela; Sovka, Pavel; Cifra, Michal

    2017-01-01

    Photonic signals are broadly exploited in communication and sensing and they typically exhibit Poisson-like statistics. In a common scenario where the intensity of the photonic signals is low and one needs to remove a nonstationary trend of the signals for any further analysis, one faces an obstacle: due to the dependence between the mean and variance typical for a Poisson-like process, information about the trend remains in the variance even after the trend has been subtracted, possibly yielding artifactual results in further analyses. Commonly available detrending or normalizing methods cannot cope with this issue. To alleviate this issue we developed a suitable pre-processing method for the signals that originate from a Poisson-like process. In this paper, a Poisson pre-processing method for nonstationary time series with Poisson distribution is developed and tested on computer-generated model data and experimental data of chemiluminescence from human neutrophils and mung seeds. The presented method transforms a nonstationary Poisson signal into a stationary signal with a Poisson distribution while preserving the type of photocount distribution and phase-space structure of the signal. The importance of the suggested pre-processing method is shown in Fano factor and Hurst exponent analysis of both computer-generated model signals and experimental photonic signals. It is demonstrated that our pre-processing method is superior to standard detrending-based methods whenever further signal analysis is sensitive to variance of the signal.
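
    A minimal sketch of the Fano factor analysis mentioned above (the variance-to-mean ratio of counts aggregated over windows), assuming an already stationary Poisson photocount series; the rate and window sizes are illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    counts = rng.poisson(lam=3.0, size=100_000)   # stationary Poisson photocounts

    def fano_factor(x, window):
        # variance-to-mean ratio of counts summed over non-overlapping windows
        m = len(x) // window
        w = x[: m * window].reshape(m, window).sum(axis=1)
        return w.var() / w.mean()

    for window in (1, 10, 100, 1000):
        print(f"window {window:>4}: Fano = {fano_factor(counts, window):.3f}")
    # For a Poisson process the Fano factor stays near 1 at all window sizes;
    # an unremoved nonstationary trend would inflate it at large windows.
    ```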

  16. A Variance Distribution Model of Surface EMG Signals Based on Inverse Gamma Distribution.

    Science.gov (United States)

    Hayashi, Hideaki; Furui, Akira; Kurita, Yuichi; Tsuji, Toshio

    2017-11-01

Objective: This paper describes the formulation of a surface electromyogram (EMG) model capable of representing the variance distribution of EMG signals. Methods: In the model, EMG signals are handled based on a Gaussian white noise process with a mean of zero for each variance value. EMG signal variance is taken as a random variable that follows inverse gamma distribution, allowing the representation of noise superimposed onto this variance. Variance distribution estimation based on marginal likelihood maximization is also outlined in this paper. The procedure can be approximated using rectified and smoothed EMG signals, thereby allowing the determination of distribution parameters in real time at low computational cost. Results: A simulation experiment was performed to evaluate the accuracy of distribution estimation using artificially generated EMG signals, with results demonstrating that the proposed model's accuracy is higher than that of maximum-likelihood-based estimation. Analysis of variance distribution using real EMG data also suggested a relationship between variance distribution and signal-dependent noise. Conclusion: The study reported here was conducted to examine the performance of a proposed surface EMG model capable of representing variance distribution and a related distribution parameter estimation method. Experiments using artificial and real EMG data demonstrated the validity of the model. Significance: Variance distribution estimated using the proposed model exhibits potential in the estimation of muscle force.
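
    A generative sketch of the variance model described above, with invented inverse gamma parameters: each sample's variance is drawn from an inverse gamma distribution and the EMG amplitude is zero-mean Gaussian with that variance. Marginally this scale mixture is heavier-tailed than a single Gaussian, which is one way to see what the variance distribution adds:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    alpha, beta = 3.0, 2.0                 # illustrative shape and scale parameters
    n = 50_000

    # Draw a variance per sample from an inverse gamma distribution, then the
    # EMG amplitude as zero-mean Gaussian noise with that variance.
    variances = stats.invgamma.rvs(alpha, scale=beta, size=n, random_state=rng)
    emg = rng.normal(0.0, np.sqrt(variances))

    print("excess kurtosis:", stats.kurtosis(emg))  # > 0 for this Gaussian scale mixture
    ```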

  17. Gini estimation under infinite variance

    NARCIS (Netherlands)

    A. Fontanari (Andrea); N.N. Taleb (Nassim Nicholas); P. Cirillo (Pasquale)

    2018-01-01

We study the problems related to the estimation of the Gini index in the presence of a fat-tailed data generating process, i.e. one in the stable distribution class with finite mean but infinite variance (i.e. with tail index α∈(1,2)). We show that, in such a case, the Gini coefficient
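
    For reference, a standard nonparametric Gini estimator applied to a Pareto sample with tail index alpha = 1.5 (finite mean, infinite variance), the regime the record studies; this is the plain estimator whose behaviour the paper analyzes, not the authors' proposed correction:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def gini(x):
        # Equivalent O(n log n) form of: mean absolute difference / (2 * mean),
        # i.e. sum_{i,j} |x_i - x_j| / (2 n^2 mean).
        x = np.sort(np.asarray(x, dtype=float))
        n = len(x)
        i = np.arange(1, n + 1)
        return np.sum((2 * i - n - 1) * x) / (n * np.sum(x))

    # Pareto sample with tail index alpha in (1, 2), via inverse-CDF sampling.
    alpha = 1.5
    x = (1.0 / rng.uniform(size=100_000)) ** (1.0 / alpha)
    print("Gini estimate:", gini(x))  # population value is 1 / (2*alpha - 1) = 0.5
    # In this infinite-variance regime the plain estimate tends to sit below the
    # true value, which is precisely the problem the paper addresses.
    ```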

  18. Statistics as Unbiased Estimators: Exploring the Teaching of Standard Deviation

    Science.gov (United States)

    Wasserman, Nicholas H.; Casey, Stephanie; Champion, Joe; Huey, Maryann

    2017-01-01

    This manuscript presents findings from a study about the knowledge for and planned teaching of standard deviation. We investigate how understanding variance as an unbiased (inferential) estimator--not just a descriptive statistic for the variation (spread) in data--is related to teachers' instruction regarding standard deviation, particularly…

  19. Phenotypic variance explained by local ancestry in admixed African Americans.

    Science.gov (United States)

    Shriner, Daniel; Bentley, Amy R; Doumatey, Ayo P; Chen, Guanjie; Zhou, Jie; Adeyemo, Adebowale; Rotimi, Charles N

    2015-01-01

    We surveyed 26 quantitative traits and disease outcomes to understand the proportion of phenotypic variance explained by local ancestry in admixed African Americans. After inferring local ancestry as the number of African-ancestry chromosomes at hundreds of thousands of genotyped loci across all autosomes, we used a linear mixed effects model to estimate the variance explained by local ancestry in two large independent samples of unrelated African Americans. We found that local ancestry at major and polygenic effect genes can explain up to 20 and 8% of phenotypic variance, respectively. These findings provide evidence that most but not all additive genetic variance is explained by genetic markers undifferentiated by ancestry. These results also inform the proportion of health disparities due to genetic risk factors and the magnitude of error in association studies not controlling for local ancestry.

  20. Replica approach to mean-variance portfolio optimization

    Science.gov (United States)

    Varga-Haszonits, Istvan; Caccioli, Fabio; Kondor, Imre

    2016-12-01

We consider the problem of mean-variance portfolio optimization for a generic covariance matrix subject to the budget constraint and the constraint for the expected return, with the application of the replica method borrowed from the statistical physics of disordered systems. We find that the replica symmetry of the solution does not need to be assumed, but emerges as the unique solution of the optimization problem. We also check the stability of this solution and find that the eigenvalues of the Hessian are positive for r = N/T < 1, where N is the dimension of the portfolio and T the length of the time series used to estimate the covariance matrix. The optimal in-sample variance is found to vanish at the critical point inversely proportional to the divergent estimation error.
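
    For reference, the in-sample optimum in the simplest case (minimum variance under the budget constraint only), computed from a sample covariance matrix with r = N/T well below the critical point; the returns are simulated, not real data:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    N, T = 10, 120                         # r = N/T well below the critical point r = 1
    returns = rng.normal(0.001, 0.02, size=(T, N))

    cov = np.cov(returns, rowvar=False)    # sample covariance estimate
    ones = np.ones(N)
    w = np.linalg.solve(cov, ones)
    w /= w.sum()                           # minimum-variance weights: inv(C)1 / (1'inv(C)1)

    in_sample_var = w @ cov @ w            # equals 1 / (1' inv(C) 1)
    print("weights:", np.round(w, 3))
    print("in-sample variance:", in_sample_var)
    ```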

  1. 31 CFR 904.4 - Minimum amount of referrals to the Department of Justice.

    Science.gov (United States)

    2010-07-01

    ... Department of Justice. 904.4 Section 904.4 Money and Finance: Treasury Regulations Relating to Money and Finance (Continued) FEDERAL CLAIMS COLLECTION STANDARDS (DEPARTMENT OF THE TREASURY-DEPARTMENT OF JUSTICE) REFERRALS TO THE DEPARTMENT OF JUSTICE § 904.4 Minimum amount of referrals to the Department of Justice. (a...

  2. Realized Variance and Market Microstructure Noise

    DEFF Research Database (Denmark)

    Hansen, Peter R.; Lunde, Asger

    2006-01-01

We study market microstructure noise in high-frequency data and analyze its implications for the realized variance (RV) under a general specification for the noise. We show that kernel-based estimators can unearth important characteristics of market microstructure noise and that a simple kernel-based estimator dominates the RV for the estimation of integrated variance (IV). An empirical analysis of the Dow Jones Industrial Average stocks reveals that market microstructure noise is time-dependent and correlated with increments in the efficient price. This has important implications for volatility estimation based on high-frequency data. Finally, we apply cointegration techniques to decompose transaction prices and bid-ask quotes into an estimate of the efficient price and noise. This framework enables us to study the dynamic effects on transaction prices and quotes caused by changes in the efficient price.
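
    A minimal sketch of a kernel-based estimator in the spirit described above, using Bartlett weights on realized autocovariances; this illustrates the idea rather than the paper's exact estimator, and the bandwidth H and noise level are invented:

    ```python
    import numpy as np

    def bartlett_realized_kernel(logprice, H=30):
        # gamma_h = realized autocovariance of returns at lag h;
        # RK = gamma_0 + sum_h (1 - h/(H+1)) * 2 * gamma_h (Bartlett weights).
        r = np.diff(logprice)
        rk = np.sum(r ** 2)
        for h in range(1, H + 1):
            gamma_h = np.sum(r[h:] * r[:-h])
            rk += 2.0 * (1.0 - h / (H + 1.0)) * gamma_h
        return rk

    rng = np.random.default_rng(6)
    n = 23400
    p = np.cumsum(rng.normal(0, 0.02 / np.sqrt(n), n)) + 1e-4 * rng.choice([-1.0, 1.0], n)
    # The lag-1 autocovariance induced by iid noise offsets most of the noise bias.
    print(bartlett_realized_kernel(p), "vs true IV", 0.02 ** 2)
    ```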

  3. Spot Variance Path Estimation and its Application to High Frequency Jump Testing

    NARCIS (Netherlands)

    Bos, C.S.; Janus, P.; Koopman, S.J.

    2012-01-01

    This paper considers spot variance path estimation from datasets of intraday high-frequency asset prices in the presence of diurnal variance patterns, jumps, leverage effects, and microstructure noise. We rely on parametric and nonparametric methods. The estimated spot variance path can be used to

  4. Resampled efficient frontier portfolio analysis based on mean-variance optimization; Analisis portofolio resampled efficient frontier berdasarkan optimasi mean-variance

    OpenAIRE

    Abdurakhman, Abdurakhman

    2008-01-01

The right asset allocation decision in portfolio investment can maximize returns and/or minimize risk. The method most often used in portfolio optimization is the Markowitz mean-variance method. In practice, this method has the weakness of not being very stable: small changes in the estimated input parameters cause large changes in the portfolio composition. For this reason, a portfolio optimization method was developed that can overcome the instability of the mean-variance method ...

  5. Mean-variance Optimal Reinsurance-investment Strategy in Continuous Time

    OpenAIRE

    Daheng Peng; Fang Zhang

    2017-01-01

    In this paper, Lagrange method is used to solve the continuous-time mean-variance reinsurance-investment problem. Proportional reinsurance, multiple risky assets and risk-free asset are considered synthetically in the optimal strategy for insurers. By solving the backward stochastic differential equation for the Lagrange multiplier, we get the mean-variance optimal reinsurance-investment strategy and its effective frontier in explicit forms.

  6. The asymptotic variance of departures in critically loaded queues

    NARCIS (Netherlands)

    Al Hanbali, Ahmad; Mandjes, M.R.H.; Nazarathy, Y.; Whitt, W.

    2011-01-01

We consider the asymptotic variance of the departure counting process D(t) of the GI/G/1 queue; D(t) denotes the number of departures up to time t. We focus on the case where the system load ϱ equals 1, and prove that the asymptotic variance rate satisfies lim_{t→∞} var D(t)/t = λ(1 - 2/π)(c_a² + c_s²), where λ is the arrival rate and c_a², c_s² are the squared coefficients of variation of the interarrival and service times.

  7. Coupled bias-variance tradeoff for cross-pose face recognition.

    Science.gov (United States)

    Li, Annan; Shan, Shiguang; Gao, Wen

    2012-01-01

Subspace-based face representation can be viewed as a regression problem. From this viewpoint, we first revisit the problem of recognizing faces across pose differences, which is a bottleneck in face recognition. Then, we propose a new approach to cross-pose face recognition using a regressor with a coupled bias-variance tradeoff. We found that striking a coupled balance between bias and variance in regression for different poses could improve the regressor-based cross-pose face representation, i.e., the regressor can be made more stable against a pose difference. With this basic idea, ridge regression and lasso regression are explored. Experimental results on the CMU PIE, FERET, and Multi-PIE face databases show that the proposed bias-variance tradeoff can achieve considerable reinforcement in recognition performance.

  8. Visualizing the Sample Standard Deviation

    Science.gov (United States)

    Sarkar, Jyotirmoy; Rashid, Mamunur

    2017-01-01

    The standard deviation (SD) of a random sample is defined as the square-root of the sample variance, which is the "mean" squared deviation of the sample observations from the sample mean. Here, we interpret the sample SD as the square-root of twice the mean square of all pairwise half deviations between any two sample observations. This…
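
    The interpretation can be checked numerically in a few lines; here the comparison is against the usual sample SD with the n - 1 divisor:

    ```python
    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(7)
    x = rng.normal(size=200)

    # Mean square of all pairwise half deviations (x_i - x_j)/2 over n(n-1)/2 pairs.
    half_dev_sq = [((a - b) / 2.0) ** 2 for a, b in combinations(x, 2)]
    sd_pairwise = np.sqrt(2.0 * np.mean(half_dev_sq))

    print(sd_pairwise, np.std(x, ddof=1))  # the two agree: the identity is exact
    ```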

  9. Minimal variance hedging of natural gas derivatives in exponential Lévy models: Theory and empirical performance

    International Nuclear Information System (INIS)

    Ewald, Christian-Oliver; Nawar, Roy; Siu, Tak Kuen

    2013-01-01

We consider the problem of hedging European options written on natural gas futures, in a market where prices of traded assets exhibit jumps, by trading in the underlying asset. We provide a general expression for the hedging strategy which minimizes the variance of the terminal hedging error, in terms of stochastic integral representations of the payoffs of the options involved. This formula is then applied to compute hedge ratios for common options in various models with jumps, leading to easily computable expressions. As a benchmark we take the standard Black–Scholes and Merton delta hedges. We show that in natural gas option markets minimal variance hedging with the underlying consistently outperforms the benchmarks by quite a margin. - Highlights: ► We derive hedging strategies for European type options written on natural gas futures. ► These are tested empirically using Henry Hub natural gas futures and options data. ► We find that our hedges systematically outperform classical benchmarks
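
    In the static special case, the variance-minimizing hedge ratio reduces to the textbook covariance ratio; a sketch on simulated increments (the joint distribution of option-value changes dV and futures-price changes dF is invented, not Henry Hub data):

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    dF = rng.normal(0.0, 1.0, 5000)               # futures price changes
    dV = 0.6 * dF + rng.normal(0.0, 0.5, 5000)    # option value changes

    # Variance-minimizing hedge ratio: h* = Cov(dV, dF) / Var(dF)
    h = np.cov(dV, dF)[0, 1] / np.var(dF)
    residual = dV - h * dF
    print(f"h* = {h:.3f}, unhedged var = {np.var(dV):.3f}, hedged var = {np.var(residual):.3f}")
    ```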

  10. 20 CFR 703.204 - Decision on insurance carrier's application; minimum amount of deposit.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Decision on insurance carrier's application; minimum amount of deposit. 703.204 Section 703.204 Employees' Benefits EMPLOYMENT STANDARDS ADMINISTRATION... on the Internet at http://www.dol.gov/esa/owcp/dlhwc/lstable.htm for both the current rating year and...

  11. Explicit formulas for the variance of discounted life-cycle cost

    International Nuclear Information System (INIS)

    Noortwijk, Jan M. van

    2003-01-01

    In life-cycle costing analyses, optimal design is usually achieved by minimising the expected value of the discounted costs. As well as the expected value, the corresponding variance may be useful for estimating, for example, the uncertainty bounds of the calculated discounted costs. However, general explicit formulas for calculating the variance of the discounted costs over an unbounded time horizon are not yet available. In this paper, explicit formulas for this variance are presented. They can be easily implemented in software to optimise structural design and maintenance management. The use of the mathematical results is illustrated with some examples
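
    A Monte Carlo cross-check of the kind of quantity the paper treats analytically, assuming Poisson renewal times, a fixed cost per renewal and continuous discounting; for this special case Campbell's theorem gives E = cλ/r and Var = c²λ/(2r), which the simulation should approximate. All rates and costs are invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(9)
    rate, cost, r = 0.2, 100.0, 0.05     # renewals/yr, cost per renewal, discount rate
    horizon, n_sim = 400.0, 10_000       # long horizon approximates the unbounded case

    totals = np.empty(n_sim)
    for k in range(n_sim):
        t, total = 0.0, 0.0
        while True:
            t += rng.exponential(1.0 / rate)  # Poisson renewal process
            if t > horizon:
                break
            total += cost * np.exp(-r * t)    # continuously discounted cost
        totals[k] = total

    # Analytic values here: E = 100*0.2/0.05 = 400, Var = 100^2*0.2/(2*0.05) = 20000.
    print("mean:", totals.mean(), "variance:", totals.var())
    ```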

  12. How does variance in fertility change over the demographic transition?

    Science.gov (United States)

    Hruschka, Daniel J; Burger, Oskar

    2016-04-19

    Most work on the human fertility transition has focused on declines in mean fertility. However, understanding changes in the variance of reproductive outcomes can be equally important for evolutionary questions about the heritability of fertility, individual determinants of fertility and changing patterns of reproductive skew. Here, we document how variance in completed fertility among women (45-49 years) differs across 200 surveys in 72 low- to middle-income countries where fertility transitions are currently in progress at various stages. Nearly all (91%) of samples exhibit variance consistent with a Poisson process of fertility, which places systematic, and often severe, theoretical upper bounds on the proportion of variance that can be attributed to individual differences. In contrast to the pattern of total variance, these upper bounds increase from high- to mid-fertility samples, then decline again as samples move from mid to low fertility. Notably, the lowest fertility samples often deviate from a Poisson process. This suggests that as populations move to low fertility their reproduction shifts from a rate-based process to a focus on an ideal number of children. We discuss the implications of these findings for predicting completed fertility from individual-level variables. © 2016 The Author(s).

  13. Minimum critical crack depths in pressure vessels: guidelines for nondestructive testing

    International Nuclear Information System (INIS)

    Crossley, M.R.; Townley, C.H.A.

    1983-09-01

    Estimates of the minimum critical depths which can be expected in high quality vessels designed to certain British and American Code rules are given. A simple means of allowing for fatigue crack growth in service is included. The data which are presented can be used to decide what sensitivity and what reporting levels should be employed during an ultrasonic inspection of a pressure vessel. It is emphasised that the minimum crack depths are those which would be relevant to a vessel in which the material is stressed to its maximum permitted value during operation. Stresses may, in practice, be significantly less than this. Less restrictive inspection standards may be established, if it were considered worthwhile to carry out a detailed stress analysis of the particular vessel under examination. (author)

  14. Mean-variance Optimal Reinsurance-investment Strategy in Continuous Time

    Directory of Open Access Journals (Sweden)

    Daheng Peng

    2017-10-01

    Full Text Available In this paper, Lagrange method is used to solve the continuous-time mean-variance reinsurance-investment problem. Proportional reinsurance, multiple risky assets and risk-free asset are considered synthetically in the optimal strategy for insurers. By solving the backward stochastic differential equation for the Lagrange multiplier, we get the mean-variance optimal reinsurance-investment strategy and its effective frontier in explicit forms.

  15. 36 CFR 292.13 - Standards.

    Science.gov (United States)

    2010-07-01

    ... operation, physical structures, or waste byproducts would not have significant adverse impacts on... include, but are not limited to, cement production, gravel extraction operations involving more than one... will conform with the following minimum standards: (1) Commercial development. (i) Stores, restaurants...

  16. Using variance structure to quantify responses to perturbation in fish catches

    Science.gov (United States)

    Vidal, Tiffany E.; Irwin, Brian J.; Wagner, Tyler; Rudstam, Lars G.; Jackson, James R.; Bence, James R.

    2017-01-01

    We present a case study evaluation of gill-net catches of Walleye Sander vitreus to assess potential effects of large-scale changes in Oneida Lake, New York, including the disruption of trophic interactions by double-crested cormorants Phalacrocorax auritus and invasive dreissenid mussels. We used the empirical long-term gill-net time series and a negative binomial linear mixed model to partition the variability in catches into spatial and coherent temporal variance components, hypothesizing that variance partitioning can help quantify spatiotemporal variability and determine whether variance structure differs before and after large-scale perturbations. We found that the mean catch and the total variability of catches decreased following perturbation but that not all sampling locations responded in a consistent manner. There was also evidence of some spatial homogenization concurrent with a restructuring of the relative productivity of individual sites. Specifically, offshore sites generally became more productive following the estimated break point in the gill-net time series. These results provide support for the idea that variance structure is responsive to large-scale perturbations; therefore, variance components have potential utility as statistical indicators of response to a changing environment more broadly. The modeling approach described herein is flexible and would be transferable to other systems and metrics. For example, variance partitioning could be used to examine responses to alternative management regimes, to compare variability across physiographic regions, and to describe differences among climate zones. Understanding how individual variance components respond to perturbation may yield finer-scale insights into ecological shifts than focusing on patterns in the mean responses or total variability alone.

  17. Global variance in female population height: the influence of education, income, human development, life expectancy, mortality and gender inequality in 96 nations.

    Science.gov (United States)

    Mark, Quentin J

    2014-01-01

Human height is a heritable trait that is known to be influenced by environmental factors and general standard of living. Individual and population stature is correlated with health, education and economic achievement. Strong sexual selection pressures for stature have been observed in multiple diverse populations; however, there is significant global variance in gender equality and prohibitions on female mate selection. This paper explores the contribution of general standard of living and gender inequality to the variance in global female population heights. Female population heights of 96 nations were culled from previously published sources and public access databases. Factor analysis was run with United Nations international data on education rates, life expectancy, incomes, maternal and childhood mortality rates, ratios of gender participation in education and politics, the Human Development Index (HDI) and the Gender Inequality Index (GII). Results indicate that population heights vary more closely with gender inequality than with population health, income or education.

  18. A mean–variance objective for robust production optimization in uncertain geological scenarios

    DEFF Research Database (Denmark)

    Capolei, Andrea; Suwartadi, Eka; Foss, Bjarne

    2014-01-01

In the mean–variance bi-criterion objective function, risk appears directly; the formulation also considers an ensemble of reservoir models, and has robust optimization as a special extreme case. The mean–variance objective is common for portfolio optimization problems in finance. The Markowitz portfolio optimization problem is the original and simplest example of a mean–variance criterion for mitigating risk. Risk is mitigated in oil production by including both the expected NPV (mean of NPV) and the risk (variance of NPV) for the ensemble of possible reservoir models. With the inclusion of the risk...
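
    The bi-criterion objective itself is compact; a sketch, with the risk-aversion weight and the per-realization NPVs invented for illustration:

    ```python
    import numpy as np

    def mean_variance_objective(npv_ensemble, risk_aversion=0.5):
        # Bi-criterion objective over an ensemble of reservoir realizations:
        # reward the mean NPV, penalize its variance across the ensemble.
        npv = np.asarray(npv_ensemble, dtype=float)
        return npv.mean() - risk_aversion * npv.var()

    # Same control strategy evaluated on 5 geological realizations:
    print(mean_variance_objective([4.1, 3.8, 5.0, 2.9, 4.4]))
    ```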

  19. Minimum income protection in the Netherlands

    NARCIS (Netherlands)

    van Peijpe, T.

    2009-01-01

    This article offers an overview of the Dutch legal system of minimum income protection through collective bargaining, social security, and statutory minimum wages. In addition to collective agreements, the Dutch statutory minimum wage offers income protection to a small number of workers. Its

  20. Asymptotic variance of grey-scale surface area estimators

    DEFF Research Database (Denmark)

    Svane, Anne Marie

Grey-scale local algorithms have been suggested as a fast way of estimating surface area from grey-scale digital images. Their asymptotic mean has already been described. In this paper, the asymptotic behaviour of the variance is studied in isotropic and sufficiently smooth settings, resulting in a general asymptotic bound. For compact convex sets with nowhere vanishing Gaussian curvature, the asymptotics can be described more explicitly. As in the case of volume estimators, the variance is decomposed into a lattice sum and an oscillating term of at most the same magnitude.

  1. Kriging with Unknown Variance Components for Regional Ionospheric Reconstruction

    Directory of Open Access Journals (Sweden)

    Ling Huang

    2017-02-01

Full Text Available Ionospheric delay effect is a critical issue that limits the accuracy of precise Global Navigation Satellite System (GNSS) positioning and navigation for single-frequency users, especially in mid- and low-latitude regions where variations in the ionosphere are larger. Kriging spatial interpolation techniques have been recently introduced to model the spatial correlation and variability of the ionosphere; these intrinsically assume that the ionosphere field is stochastically stationary but do not take the random observational errors into account. In this paper, by treating the spatial statistical information on the ionosphere as prior knowledge and based on Total Electron Content (TEC) semivariogram analysis, we use Kriging techniques to spatially interpolate TEC values. By assuming that the stochastic models of both the ionospheric signals and measurement errors are only known up to some unknown factors, we propose a new Kriging spatial interpolation method with unknown variance components for both the signals of the ionosphere and the TEC measurements. Variance component estimation has been integrated with Kriging to reconstruct regional ionospheric delays. The method has been applied to data from the Crustal Movement Observation Network of China (CMONOC) and compared with ordinary Kriging and polynomial interpolations with spherical cap harmonic functions, polynomial functions and low-degree spherical harmonic functions. The statistics of the results indicate that the daily ionospheric variations during the experimental period characterized by the proposed approach are in good agreement with the other methods, ranging from 10 to 80 TEC units (TECU; 1 TECU = 1 × 10¹⁶ electrons/m²), with an overall mean of 28.2 TECU. The proposed method can produce more appropriate estimations whose general TEC level is as smooth as that of ordinary Kriging but with a smaller standard deviation, around 3 TECU, than the others. The residual results show that the interpolation precision of the
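
    As a baseline for what the paper extends, a plain ordinary-kriging interpolator with an assumed exponential semivariogram; the station layout, TEC values and semivariogram parameters are all invented, and the variance-component estimation that is the paper's contribution is not included:

    ```python
    import numpy as np

    def gamma_exp(h, nugget=2.0, sill=30.0, rng_km=1500.0):
        # Assumed exponential semivariogram (TECU^2); parameters are illustrative.
        return nugget + (sill - nugget) * (1.0 - np.exp(-3.0 * h / rng_km))

    def ordinary_kriging(xy, z, xy0):
        n = len(z)
        d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
        A = np.ones((n + 1, n + 1))
        A[:n, :n] = gamma_exp(d)
        np.fill_diagonal(A, 0.0)           # gamma(0) = 0; corner entry stays 0 as required
        b = np.ones(n + 1)
        b[:n] = gamma_exp(np.linalg.norm(xy - xy0, axis=1))
        w = np.linalg.solve(A, b)          # last entry is the Lagrange multiplier
        return w[:n] @ z, w @ b            # estimate and kriging variance

    stations = np.array([[0.0, 0.0], [800.0, 100.0], [300.0, 900.0], [1200.0, 700.0]])
    tec = np.array([25.0, 31.0, 28.0, 35.0])          # TECU at the stations (invented)
    est, kvar = ordinary_kriging(stations, tec, np.array([600.0, 500.0]))
    print(f"TEC estimate = {est:.1f} TECU, kriging std = {np.sqrt(kvar):.1f} TECU")
    ```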

  2. Prediction-error variance in Bayesian model updating: a comparative study

    Science.gov (United States)

    Asadollahi, Parisa; Li, Jian; Huang, Yong

    2017-04-01

In Bayesian model updating, the likelihood function is commonly formulated by stochastic embedding, in which the maximum information entropy probability model of the prediction error variances plays an important role; it is the Gaussian distribution subject to the first two moments as constraints. The selection of prediction error variances can be formulated as a model class selection problem, which automatically involves a trade-off between the average data-fit of the model class and the information it extracts from the data. Therefore, it is critical for the robustness of the updating of the structural model, especially in the presence of modeling errors. To date, three ways of considering prediction error variances have been seen in the literature: 1) setting constant values empirically, 2) estimating them based on the goodness-of-fit of the measured data, and 3) updating them as uncertain parameters by applying Bayes' Theorem at the model class level. In this paper, the effect of different strategies to deal with the prediction error variances on the model updating performance is investigated explicitly. A six-story shear building model with six uncertain stiffness parameters is employed as an illustrative example. Transitional Markov Chain Monte Carlo is used to draw samples of the posterior probability density function of the structural model parameters as well as the uncertain prediction variances. The different levels of modeling uncertainty and complexity are modeled through three FE models, including a true model, a model with more complexity, and a model with modeling error. Bayesian updating is performed for the three FE models considering the three aforementioned treatments of the prediction error variances. The effect of the number of measurements on the model updating performance is also examined in the study. The results are compared based on model class assessment and indicate that updating the prediction error variances as uncertain parameters at the model

  3. [Storage of plant protection products in farms: minimum safety requirements].

    Science.gov (United States)

    Dutto, Moreno; Alfonzo, Santo; Rubbiani, Maristella

    2012-01-01

Failure to comply with requirements for proper storage and use of pesticides in farms can be extremely hazardous, and the risk of accidents involving farm workers, other persons and even animals is high. There are still wide differences in the interpretation of the concept of "securing or making safe" by workers in this sector. One of the critical points detected, particularly in the fruit sector, is the establishment of an adequate storage site for plant protection products. The definition of "safe storage of pesticides" is still unclear despite the recent enactment of Legislative Decree 81/2008 regulating health and work safety in Italy. In addition, there are no national guidelines setting clear minimum criteria for the storage of plant protection products in farms. The authors, on the basis of their professional experience and through analysis of recent legislation, establish certain minimum safety standards for the storage of pesticides in farms.

  4. Estimation of noise-free variance to measure heterogeneity.

    Directory of Open Access Journals (Sweden)

    Tilo Winkler

Full Text Available Variance is a statistical parameter used to characterize heterogeneity or variability in data sets. However, measurements commonly include noise, as random errors superimposed to the actual value, which may substantially increase the variance compared to a noise-free data set. Our aim was to develop and validate a method to estimate noise-free spatial heterogeneity of pulmonary perfusion using dynamic positron emission tomography (PET) scans. On theoretical grounds, we demonstrate a linear relationship between the total variance of a data set derived from averages of n multiple measurements, and the reciprocal of n. Using multiple measurements with varying n yields estimates of the linear relationship including the noise-free variance as the constant parameter. In PET images, n is proportional to the number of registered decay events, and the variance of the image is typically normalized by the square of its mean value, yielding a coefficient of variation squared (CV²). The method was evaluated with a Jaszczak phantom as reference spatial heterogeneity (CVr²) for comparison with our estimate of noise-free or 'true' heterogeneity (CVt²). We found that CVt² was only 5.4% higher than CVr². Additional evaluations were conducted on 38 PET scans of pulmonary perfusion using ¹³NN-saline injection. The mean CVt² was 0.10 (range: 0.03-0.30), while the mean CV² including noise was 0.24 (range: 0.10-0.59). CVt² was on average 41.5% of the CV² measured including noise (range: 17.8-71.2%). The reproducibility of CVt² was evaluated using three repeated PET scans from five subjects. Individual CVt² were within 16% of each subject's mean and paired t-tests revealed no difference among the results from the three consecutive PET scans. In conclusion, our method provides reliable noise-free estimates of CVt² in PET scans, and may be useful for similar statistical problems in experimental data.
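
    The linear relationship at the heart of the method is easy to reproduce: simulate a fixed heterogeneous image, add noise whose variance scales as 1/n (as for averages of n replicates), and read the noise-free CV² off the intercept. All image and noise parameters below are invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(10)
    true_image = rng.gamma(shape=4.0, scale=1.0, size=5000)  # fixed heterogeneous field
    cv2_true = true_image.var() / true_image.mean() ** 2     # noise-free CV^2 (~0.25 here)

    ns, cv2s = [1, 2, 4, 8, 16], []
    for n in ns:
        # Averaging n noisy replicates leaves noise with variance sigma^2 / n.
        noisy = true_image + rng.normal(0.0, 2.0, size=(n, true_image.size)).mean(axis=0)
        cv2s.append(noisy.var() / noisy.mean() ** 2)

    # CV^2 is linear in 1/n; the intercept estimates the noise-free CVt^2.
    slope, intercept = np.polyfit(1.0 / np.array(ns), cv2s, 1)
    print(f"CVt^2 estimate = {intercept:.3f}, true value = {cv2_true:.3f}")
    ```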

  5. A characterization of optimal portfolios under the tail mean-variance criterion

    OpenAIRE

    Owadally, I.; Landsman, Z.

    2013-01-01

    The tail mean–variance model was recently introduced for use in risk management and portfolio choice; it involves a criterion that focuses on the risk of rare but large losses, which is particularly important when losses have heavy-tailed distributions. If returns or losses follow a multivariate elliptical distribution, the use of risk measures that satisfy certain well-known properties is equivalent to risk management in the classical mean–variance framework. The tail mean–variance criterion...

  6. Gender variance in childhood and sexual orientation in adulthood: a prospective study.

    Science.gov (United States)

    Steensma, Thomas D; van der Ende, Jan; Verhulst, Frank C; Cohen-Kettenis, Peggy T

    2013-11-01

    Several retrospective and prospective studies have reported on the association between childhood gender variance and sexual orientation and gender discomfort in adulthood. In most of the retrospective studies, samples were drawn from the general population. The samples in the prospective studies consisted of clinically referred children. In understanding the extent to which the association applies for the general population, prospective studies using random samples are needed. This prospective study examined the association between childhood gender variance, and sexual orientation and gender discomfort in adulthood in the general population. In 1983, we measured childhood gender variance, in 406 boys and 473 girls. In 2007, sexual orientation and gender discomfort were assessed. Childhood gender variance was measured with two items from the Child Behavior Checklist/4-18. Sexual orientation was measured for four parameters of sexual orientation (attraction, fantasy, behavior, and identity). Gender discomfort was assessed by four questions (unhappiness and/or uncertainty about one's gender, wish or desire to be of the other gender, and consideration of living in the role of the other gender). For both men and women, the presence of childhood gender variance was associated with homosexuality for all four parameters of sexual orientation, but not with bisexuality. The report of adulthood homosexuality was 8 to 15 times higher for participants with a history of gender variance (10.2% to 12.2%), compared to participants without a history of gender variance (1.2% to 1.7%). The presence of childhood gender variance was not significantly associated with gender discomfort in adulthood. This study clearly showed a significant association between childhood gender variance and a homosexual sexual orientation in adulthood in the general population. In contrast to the findings in clinically referred gender-variant children, the presence of a homosexual sexual orientation in

  7. Quantitative and Qualitative Responses to Topical Cold in Healthy Caucasians Show Variance between Individuals but High Test-Retest Reliability.

    Directory of Open Access Journals (Sweden)

    Penny Moss

Full Text Available Increased sensitivity to cold may be a predictor of persistent pain, but cold pain threshold is often viewed as unreliable. This study aimed to determine the within-subject reliability and between-subject variance of cold response, measured comprehensively as cold pain threshold plus pain intensity and sensation quality at threshold. A test-retest design was used over three sessions, one day apart. Response to cold was assessed at four sites (thenar eminence, volar forearm, tibialis anterior, plantar foot). Cold pain threshold was measured using a Medoc thermode and standard method of limits. Intensity of pain at threshold was rated using a 10 cm visual analogue scale. Quality of sensation at threshold was quantified with indices calculated from subjects' selection of descriptors from a standard McGill Pain Questionnaire. Within-subject reliability for each measure was calculated with intra-class correlation coefficients and between-subject variance was evaluated as group coefficient of variation percentage (CV%). Gender and site comparisons were also made. Forty-five healthy adults participated: 20 male, 25 female; mean age 29 (range 18-56) years. All measures at all four test sites showed high within-subject reliability: cold pain thresholds r = 0.92-0.95; pain rating r = 0.93-0.97; McGill pain quality indices r = 0.87-0.85. In contrast, all measures showed wide between-subject variance (CV% between 51.4% and 92.5%). Upper limb sites were consistently more sensitive than lower limb sites, but equally reliable. Females showed elevated cold pain thresholds, although similar pain intensity and quality to males. Females were also more reliable and showed lower variance for all measures. Thus, although there was clear population variation, response to cold for healthy individuals was found to be highly reliable, whether measured as pain threshold, pain intensity or sensation quality. A comprehensive approach to cold response testing therefore may add

  8. Quantitative and Qualitative Responses to Topical Cold in Healthy Caucasians Show Variance between Individuals but High Test-Retest Reliability.

    Science.gov (United States)

    Moss, Penny; Whitnell, Jasmine; Wright, Anthony

    2016-01-01

Increased sensitivity to cold may be a predictor of persistent pain, but cold pain threshold is often viewed as unreliable. This study aimed to determine the within-subject reliability and between-subject variance of cold response, measured comprehensively as cold pain threshold plus pain intensity and sensation quality at threshold. A test-retest design was used over three sessions, one day apart. Response to cold was assessed at four sites (thenar eminence, volar forearm, tibialis anterior, plantar foot). Cold pain threshold was measured using a Medoc thermode and standard method of limits. Intensity of pain at threshold was rated using a 10 cm visual analogue scale. Quality of sensation at threshold was quantified with indices calculated from subjects' selection of descriptors from a standard McGill Pain Questionnaire. Within-subject reliability for each measure was calculated with intra-class correlation coefficients and between-subject variance was evaluated as group coefficient of variation percentage (CV%). Gender and site comparisons were also made. Forty-five healthy adults participated: 20 male, 25 female; mean age 29 (range 18-56) years. All measures at all four test sites showed high within-subject reliability: cold pain thresholds r = 0.92-0.95; pain rating r = 0.93-0.97; McGill pain quality indices r = 0.87-0.85. In contrast, all measures showed wide between-subject variance (CV% between 51.4% and 92.5%). Upper limb sites were consistently more sensitive than lower limb sites, but equally reliable. Females showed elevated cold pain thresholds, although similar pain intensity and quality to males. Females were also more reliable and showed lower variance for all measures. Thus, although there was clear population variation, response to cold for healthy individuals was found to be highly reliable, whether measured as pain threshold, pain intensity or sensation quality. A comprehensive approach to cold response testing therefore may add validity and

  9. Allowing variance may enlarge the safe operating space for exploited ecosystems.

    Science.gov (United States)

    Carpenter, Stephen R; Brock, William A; Folke, Carl; van Nes, Egbert H; Scheffer, Marten

    2015-11-17

    Variable flows of food, water, or other ecosystem services complicate planning. Management strategies that decrease variability and increase predictability may therefore be preferred. However, actions to decrease variance over short timescales (2-4 y), when applied continuously, may lead to long-term ecosystem changes with adverse consequences. We investigated the effects of managing short-term variance in three well-understood models of ecosystem services: lake eutrophication, harvest of a wild population, and yield of domestic herbivores on a rangeland. In all cases, actions to decrease variance can increase the risk of crossing critical ecosystem thresholds, resulting in less desirable ecosystem states. Managing to decrease short-term variance creates ecosystem fragility by changing the boundaries of safe operating spaces, suppressing information needed for adaptive management, cancelling signals of declining resilience, and removing pressures that may build tolerance of stress. Thus, the management of variance interacts strongly and inseparably with the management of resilience. By allowing for variation, learning, and flexibility while observing change, managers can detect opportunities and problems as they develop while sustaining the capacity to deal with them.

  10. Temperature variance study in Monte-Carlo photon transport theory

    International Nuclear Information System (INIS)

    Giorla, J.

    1985-10-01

We study different Monte-Carlo methods for solving radiative transfer problems, and particularly Fleck's Monte-Carlo method. We first give the different time-discretization schemes and the corresponding stability criteria. Then we write the temperature variance as a function of the variances of temperature and absorbed energy at the previous time step. Finally we obtain some stability criteria for the Monte-Carlo method in the stationary case.

  11. Study of the variance of a Monte Carlo calculation. Application to weighting; Etude de la variance d'un calcul de Monte Carlo. Application a la ponderation

    Energy Technology Data Exchange (ETDEWEB)

    Lanore, Jeanne-Marie [Commissariat a l' Energie Atomique - CEA, Centre d' Etudes Nucleaires de Fontenay-aux-Roses, Direction des Piles Atomiques, Departement des Etudes de Piles, Service d' Etudes de Protections de Piles (France)

    1969-04-15

    One of the main difficulties in Monte Carlo computations is the estimation of the results variance. Generally, only an apparent variance can be observed over a few calculations, often very different from the actual variance. By studying a large number of short calculations, the authors have tried to evaluate the real variance, and then to apply the obtained results to the optimization of the computations. The program used is the Poker one-dimensional Monte Carlo program. Calculations are performed in two types of fictitious environments: a body with constant cross section, without absorption, where all shocks are elastic and isotropic; a body with variable cross section (presenting a very pronounced peak and hole), with an anisotropy for high energy elastic shocks, and with the possibility of inelastic shocks (this body presents all the features that can appear in a real case)

  12. Genetic selection for increased mean and reduced variance of twinning rate in Belclare ewes.

    Science.gov (United States)

    Cottle, D J; Gilmour, A R; Pabiou, T; Amer, P R; Fahey, A G

    2016-04-01

It is sometimes possible to breed for more uniform individuals by selecting animals with a greater tendency to be less variable, that is, those with a smaller environmental variance. This approach has been applied to reproduction traits in various animal species. We have evaluated fecundity in the Irish Belclare sheep breed by analyses of flocks with differing average litter size (number of lambs per ewe per year, NLB) and have estimated the genetic variance in environmental variance of lambing traits using double hierarchical generalized linear models (DHGLM). The data set comprised 9470 litter size records from 4407 ewes collected in 56 flocks. The percentage of pedigreed lambing ewes with singles, twins and triplets was 30, 54 and 14%, respectively, in 2013 and has been relatively constant for the last 15 years. The variance of NLB increases with the mean in this data; the correlation of mean and standard deviation across sires is 0.50. The breeding goal is to increase the mean NLB without unduly increasing the incidence of triplets and higher litter sizes. The heritability estimates for lambing traits were NLB, 0.09; triplet occurrence (TRI), 0.07; and twin occurrence (TWN), 0.02. The highest and lowest twinning flocks differed by 23% (75% versus 52%) in the proportion of ewes lambing twins. Fitting bivariate sire models to NLB and the residual from the NLB model using a double hierarchical generalized linear model (DHGLM) found a strong genetic correlation (0.88 ± 0.07) between the sire effect for the magnitude of the residual (VE) and sire effects for NLB, confirming the general observation that increased average litter size is associated with increased variability in litter size. We propose a threshold model that may help breeders with low litter size increase the percentage of twin bearers without unduly increasing the percentage of ewes bearing triplets in Belclare sheep. © 2015 Blackwell Verlag GmbH.

  13. Modified unscented Kalman filter using modified filter gain and variance scale factor for highly maneuvering target tracking

    Institute of Scientific and Technical Information of China (English)

    Changyun Liu; Penglang Shui; Gang Wei; Song Li

    2014-01-01

To improve the low tracking precision caused by lagged filter gain or imprecise state noise when the target highly maneuvers, a modified unscented Kalman filter algorithm based on an improved filter gain and an adaptive scale factor for the state noise is presented. In every filter step, the estimated scale factor is used to update the state noise covariance Qk, and the improved filter gain is obtained in the filter process of the unscented Kalman filter (UKF) via the predicted variance Pk|k-1, in a way similar to the standard Kalman filter. Simulation results show that the proposed algorithm provides better accuracy and ability to adapt to the highly maneuvering target compared with the standard UKF.
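
    The adaptive-Q idea can be sketched with an ordinary linear Kalman filter (the unscented machinery is omitted): a single filter step where the normalized innovation squared inflates the state noise covariance whenever the nominal model underestimates a maneuver. All matrices and the inflation rule below are invented for illustration:

    ```python
    import numpy as np

    # 1D constant-velocity model: state = [position, velocity].
    dt = 1.0
    F = np.array([[1.0, dt], [0.0, 1.0]])
    H = np.array([[1.0, 0.0]])
    Q0 = np.array([[dt ** 4 / 4, dt ** 3 / 2], [dt ** 3 / 2, dt ** 2]]) * 0.01
    R = np.array([[25.0]])

    def adaptive_kf_step(x, P, z):
        x_pred = F @ x
        P_pred = F @ P @ F.T + Q0
        S = H @ P_pred @ H.T + R                    # innovation covariance
        nu = z - H @ x_pred                         # innovation
        # Scale factor: inflate Q when the normalized innovation squared
        # exceeds 1, signalling a maneuver the nominal noise underestimates.
        alpha = max(1.0, (nu.T @ np.linalg.inv(S) @ nu).item())
        P_pred = F @ P @ F.T + alpha * Q0
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)         # filter gain
        x_new = x_pred + (K @ nu).ravel()
        P_new = (np.eye(2) - K @ H) @ P_pred
        return x_new, P_new

    x, P = np.array([0.0, 1.0]), np.eye(2) * 10.0   # illustrative state and covariance
    x, P = adaptive_kf_step(x, P, np.array([[3.0]]))
    print(x)
    ```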

  14. Adjustment of heterogenous variances and a calving year effect in ...

    African Journals Online (AJOL)

Data at the beginning and at the end of the lactation period have higher variances than tests in the middle of the lactation. Furthermore, first lactations have lower means and variances compared to second and third lactations. This is a deviation from the basic assumptions required for the application of repeatability models.

  15. Estimating Predictive Variance for Statistical Gas Distribution Modelling

    International Nuclear Information System (INIS)

    Lilienthal, Achim J.; Asadi, Sahar; Reggente, Matteo

    2009-01-01

Recent publications in statistical gas distribution modelling have proposed algorithms that model the mean and variance of a distribution. This paper argues that estimating the predictive concentration variance is not merely a gradual improvement but rather a significant step to advance the field. This is, first, because the models much better fit the particular structure of gas distributions, which exhibit strong fluctuations with considerable spatial variations as a result of the intermittent character of gas dispersal. Second, because estimating the predictive variance allows one to evaluate the model quality in terms of the data likelihood. This offers a solution to the problem of ground truth evaluation, which has always been a critical issue for gas distribution modelling. It also enables solid comparisons of different modelling approaches, and provides the means to learn meta parameters of the model, to determine when the model should be updated or re-initialised, or to suggest new measurement locations based on the current model. We also point out directions of related ongoing or potential future research work.

  16. Estimating integrated variance in the presence of microstructure noise using linear regression

    Science.gov (United States)

    Holý, Vladimír

    2017-07-01

Using financial high-frequency data for the estimation of the integrated variance of asset prices is beneficial, but as the number of observations increases, so-called microstructure noise occurs. This noise can significantly bias the realized variance estimator. We propose a method for estimation of the integrated variance robust to microstructure noise, as well as for testing the presence of the noise. Our method utilizes linear regression in which realized variances estimated from different data subsamples act as the dependent variable while the number of observations acts as the explanatory variable. We compare the proposed estimator with other methods on simulated data for several microstructure noise structures.
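
    A sketch of the regression idea, assuming iid microstructure noise so that E[RV_m] = IV + 2mω² is linear in the number of returns m; the prices are simulated and the sampling grid is arbitrary:

    ```python
    import numpy as np

    rng = np.random.default_rng(11)
    n = 23400
    p = np.cumsum(rng.normal(0.0, 0.02 / np.sqrt(n), n)) + 1e-4 * rng.choice([-1.0, 1.0], n)

    # Realized variances on subsamples of different coarseness.
    ms, rvs = [], []
    for step in (300, 120, 60, 30, 10, 5, 1):
        r = np.diff(p[::step])
        ms.append(len(r))
        rvs.append(np.sum(r ** 2))

    # Regress RV on the number of observations: intercept estimates IV,
    # half the slope estimates the noise variance omega^2.
    slope, intercept = np.polyfit(ms, rvs, 1)
    print(f"IV estimate (intercept) = {intercept:.6f}  vs true 0.000400")
    print(f"noise variance estimate = {slope / 2:.2e}  vs true 1.0e-08")
    ```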

  17. Individual and collective bodies: using measures of variance and association in contextual epidemiology.

    Science.gov (United States)

    Merlo, J; Ohlsson, H; Lynch, K F; Chaix, B; Subramanian, S V

    2009-12-01

    Social epidemiology investigates both individuals and their collectives. Although the limits that define the individual bodies are very apparent, the collective body's geographical or cultural limits (eg "neighbourhood") are more difficult to discern. Also, epidemiologists normally investigate causation as changes in group means. However, many variables of interest in epidemiology may cause a change in the variance of the distribution of the dependent variable. In spite of that, variance is normally considered a measure of uncertainty or a nuisance rather than a source of substantive information. This reasoning is also true in many multilevel investigations, whereas understanding the distribution of variance across levels should be fundamental. This means-centric reductionism is mostly concerned with risk factors and creates a paradoxical situation, as social medicine is not only interested in increasing the (mean) health of the population, but also in understanding and decreasing inappropriate health and health care inequalities (variance). Critical essay and literature review. The present study promotes (a) the application of measures of variance and clustering to evaluate the boundaries one uses in defining collective levels of analysis (eg neighbourhoods), (b) the combined use of measures of variance and means-centric measures of association, and (c) the investigation of causes of health variation (variance-altering causation). Both measures of variance and means-centric measures of association need to be included when performing contextual analyses. The variance approach, a new aspect of contextual analysis that cannot be interpreted in means-centric terms, allows perspectives to be expanded.

  18. Assessing the impacts of Saskatchewan's minimum alcohol pricing regulations on alcohol-related crime.

    Science.gov (United States)

    Stockwell, Tim; Zhao, Jinhui; Sherk, Adam; Callaghan, Russell C; Macdonald, Scott; Gatley, Jodi

    2017-07-01

    Saskatchewan's introduction in April 2010 of minimum prices graded by alcohol strength led to an average minimum price increase of 9.1% per Canadian standard drink (=13.45 g ethanol). This increase was shown to be associated with reduced consumption and switching to lower-alcohol-content beverages. Police also informally reported marked reductions in night-time alcohol-related crime. This study aims to assess the impacts of changes to Saskatchewan's minimum alcohol-pricing regulations between 2008 and 2012 on selected crime events often related to alcohol use. Data were obtained from Canada's Uniform Crime Reporting Survey. Auto-regressive integrated moving average (ARIMA) time series models were used to test immediate and lagged associations between minimum price increases and rates of night-time and police-identified alcohol-related crimes. Controls were included for simultaneous crime rates in the neighbouring province of Alberta, economic variables, linear trend, seasonality and autoregressive and/or moving-average effects. The introduction of increased minimum alcohol prices was associated with an abrupt decrease in night-time alcohol-related traffic offences for men (-8.0%). Increased minimum prices may contribute to reductions in alcohol-related traffic offences and violent crimes perpetrated by men. Observed lagged effects for violent incidents may be due to a delay in bars passing on increased prices to their customers, perhaps because of inventory stockpiling. [Stockwell T, Zhao J, Sherk A, Callaghan RC, Macdonald S, Gatley J. Assessing the impacts of Saskatchewan's minimum alcohol pricing regulations on alcohol-related crime. Drug Alcohol Rev 2017;36:492-501]. © 2016 Australasian Professional Society on Alcohol and other Drugs.
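
    A schematic of this kind of interrupted time-series analysis can be written with statsmodels' SARIMAX, using a step dummy for the April 2010 policy change as an exogenous regressor. The simulated series, dates and effect size below are illustrative assumptions, not the study's data or exact model.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(3)
n = 60                                        # monthly log crime-rate series
t = pd.date_range("2008-01-01", periods=n, freq="MS")
step = (t >= "2010-04-01").astype(float)      # minimum-price change, April 2010

# Simulated series: level + seasonal pattern + AR(1) noise - 8% level shift.
month = np.asarray(t.month)
season = 0.05 * np.sin(2.0 * np.pi * month / 12.0)
e = np.zeros(n)
for i in range(1, n):
    e[i] = 0.5 * e[i - 1] + rng.normal(0.0, 0.03)
y = 3.0 + season + e - 0.08 * step

model = SARIMAX(y, exog=step[:, None], order=(1, 0, 0),
                seasonal_order=(1, 0, 0, 12))
res = model.fit(disp=False)
print(res.params)   # the exog coefficient estimates the immediate step effect
```

    Lagged effects of the kind reported for violent incidents could be probed by adding delayed copies of the step dummy as extra exogenous columns.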

  19. The derivative based variance sensitivity analysis for the distribution parameters and its computation

    International Nuclear Information System (INIS)

    Wang, Pan; Lu, Zhenzhou; Ren, Bo; Cheng, Lei

    2013-01-01

    The output variance is an important measure of the performance of a structural system, and it is always influenced by the distribution parameters of the inputs. In order to identify the influential distribution parameters and to clarify how those parameters influence the output variance, this work presents a derivative-based variance sensitivity decomposition according to Sobol′s variance decomposition, and proposes derivative-based main and total sensitivity indices. By transforming the derivatives of the variance contributions of various orders into expectations via a kernel function, the proposed main and total sensitivity indices can be obtained as a "by-product" of Sobol′s variance-based sensitivity analysis without any additional output evaluations. Since Sobol′s variance-based sensitivity indices can be computed efficiently by the sparse grid integration method, this work also employs sparse grid integration to compute the derivative-based main and total sensitivity indices. Several examples are used to demonstrate the rationality of the proposed sensitivity indices and the accuracy of the applied method.
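
    The following sketch conveys what a derivative-based variance sensitivity measures, using plain finite differences with common random numbers on a toy model rather than the paper's kernel-function and sparse-grid machinery. The model, parameters and analytic check are illustrative assumptions.

```python
import numpy as np

def output(x1, x2):
    """Toy structural response."""
    return x1 ** 2 + 3.0 * x2

def output_variance(mu1, sigma1, n=200_000, seed=4):
    """Monte Carlo estimate of Var[Y] with X1 ~ N(mu1, sigma1^2), X2 ~ N(0, 1).
    A fixed seed gives common random numbers across parameter perturbations,
    which keeps the finite-difference derivative stable."""
    rng = np.random.default_rng(seed)
    z1, x2 = rng.standard_normal(n), rng.standard_normal(n)
    return output(mu1 + sigma1 * z1, x2).var()

# Derivative of the output variance w.r.t. the mean of X1, central differences.
mu1, sigma1, h = 1.0, 0.5, 1e-3
d_var = (output_variance(mu1 + h, sigma1)
         - output_variance(mu1 - h, sigma1)) / (2.0 * h)
# Analytic check: Var[Y] = 4*mu1^2*sigma1^2 + 2*sigma1^4 + 9, so the
# derivative w.r.t. mu1 is 8*mu1*sigma1^2.
print(d_var, 8.0 * mu1 * sigma1 ** 2)
```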

  20. A Mean-Variance Criterion for Economic Model Predictive Control of Stochastic Linear Systems

    DEFF Research Database (Denmark)

    Sokoler, Leo Emil; Dammann, Bernd; Madsen, Henrik

    2014-01-01

    … the tractability of the resulting optimal control problem is addressed. We use a power management case study to compare different variations of the mean-variance strategy with EMPC based on the certainty equivalence principle. The certainty equivalence strategy is much more computationally efficient than the mean-variance strategies, but it does not account for the variance of the uncertain parameters. Open-loop simulations suggest that a single-stage mean-variance approach yields a significantly lower operating cost than the certainty equivalence strategy. In closed-loop, the single-stage formulation is overly conservative … be modified to perform almost as well as the two-stage mean-variance formulation. Nevertheless, we argue that the mean-variance approach can be used both as a strategy for evaluating less computationally demanding methods such as the certainty equivalence method, and as an individual control strategy when …
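
    A toy single-stage illustration of the mean-variance trade-off against certainty equivalence: a one-step purchasing decision under price uncertainty, evaluated over scenarios. The scenario model, the cost function and the risk weight lam are illustrative assumptions, not the paper's EMPC formulation.

```python
import numpy as np

rng = np.random.default_rng(5)
price = rng.lognormal(mean=-0.2, sigma=0.5, size=1000)  # uncertain future price
demand = 1.0

def cost(u, p):
    """Buy u now at unit cost 1; cover any shortfall later at scenario price p."""
    return u + p * np.maximum(demand - u, 0.0)

u_grid = np.linspace(0.0, demand, 201)
costs = np.array([cost(u, price) for u in u_grid])      # scenarios per decision

# Certainty equivalence: optimize against the mean price only.
u_ce = u_grid[np.argmin(cost(u_grid, price.mean()))]

# Mean-variance: trade expected cost against its variance with weight lam.
lam = 0.5
j_mv = costs.mean(axis=1) + lam * costs.var(axis=1)
u_mv = u_grid[np.argmin(j_mv)]
print(f"certainty equivalence: u = {u_ce:.2f}; mean-variance: u = {u_mv:.2f}")
```

    With the mean price below the immediate unit cost, the certainty-equivalence decision defers the entire purchase, while the variance penalty pushes the mean-variance decision to hedge part of the demand up front.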