WorldWideScience

Sample records for advanced variance reduction

  1. Advanced Variance Reduction for Global k-Eigenvalue Simulations in MCNP

    Energy Technology Data Exchange (ETDEWEB)

    Edward W. Larsen

    2008-06-01

    The "criticality" or k-eigenvalue of a nuclear system determines whether the system is critical (k=1), or the extent to which it is subcritical (k<1) or supercritical (k>1). Calculations of k are frequently performed at nuclear facilities to determine the criticality of nuclear reactor cores, spent nuclear fuel storage casks, and other fissile systems. These calculations can be expensive, and current Monte Carlo methods have certain well-known deficiencies. In this project, we have developed and tested a new "functional Monte Carlo" (FMC) method that overcomes several of these deficiencies. The current state-of-the-art Monte Carlo k-eigenvalue method estimates the fission source for a sequence of fission generations (cycles), during each of which M particles per cycle are processed. After a series of "inactive" cycles during which the fission source "converges," a series of "active" cycles are performed. For each active cycle, the eigenvalue and eigenfunction are estimated; after N >> 1 active cycles are performed, the results are averaged to obtain estimates of the eigenvalue and eigenfunction and their standard deviations. This method has several disadvantages: (i) the estimate of k depends on the number M of particles per cycle, (iii) for optically thick systems, the eigenfunction estimate may not converge due to undersampling of the fission source, and (iii) since the fission source in any cycle depends on the estimated fission source from the previous cycle (the fission sources in different cycles are correlated), the estimated variance in k is smaller than the real variance. For an acceptably large number M of particles per cycle, the estimate of k is nearly independent of M; this essentially takes care of item (i). Item (ii) can be addressed by taking M sufficiently large, but for optically thick systems a sufficiently large M can easily be unrealistic. Item (iii) cannot be accounted for by taking M or N sufficiently large; it is an inherent deficiency due

  2. Variance Reduction Techniques in Monte Carlo Methods

    NARCIS (Netherlands)

    Kleijnen, Jack P.C.; Ridder, A.A.N.; Rubinstein, R.Y.

    2010-01-01

    Monte Carlo methods are simulation algorithms to estimate a numerical quantity in a statistical model of a real system. These algorithms are executed by computer programs. Variance reduction techniques (VRT) are needed, even though computer speed has been increasing dramatically, ever since the introduction …

  3. Discussion on variance reduction technique for shielding

    Energy Technology Data Exchange (ETDEWEB)

    Maekawa, Fujio [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment]

    1998-03-01

    As part of the engineering design activity for the International Thermonuclear Experimental Reactor (ITER), a shielding experiment on type 316 stainless steel (SS316) and on the combined SS316/water system was carried out using the D-T neutron source of FNS at the Japan Atomic Energy Research Institute. In these analyses, however, enormous working time and computing time were required to determine the Weight Window parameters, and the variance reduction by the Weight Window method of the MCNP code proved limited and cumbersome. To avoid this difficulty, the effectiveness of variance reduction by the cell importance method was investigated. The calculation conditions for all cases are given. As results, the distribution of the fractional standard deviation (FSD) of the neutron and gamma-ray fluxes along the shield depth is reported. There is an optimal importance assignment: when the importance is increased at the same rate as the attenuation of the neutron or gamma-ray flux, optimal variance reduction is achieved. (K.I.)

  4. Markov bridges, bisection and variance reduction

    DEFF Research Database (Denmark)

    Asmussen, Søren; Hobolth, Asger

    Time-continuous Markov jump processes are a popular modelling tool in disciplines ranging from computational finance and operations research to human genetics and genomics. The data are often sampled at discrete points in time, and it can be useful to simulate sample paths between the datapoints. In this paper we first consider the problem of generating sample paths from a continuous-time Markov chain conditioned on the endpoints, using a new algorithm based on the idea of bisection. Secondly, we study the potential of the bisection algorithm for variance reduction. In particular, examples are presented where the methods of stratification, importance sampling and quasi-Monte Carlo are investigated.
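
    A minimal sketch of the bisection idea (our illustration, not the authors' code): conditional on the endpoints, the state at the midpoint of the interval is distributed proportionally to the product of two transition probabilities, and each half can then be refined recursively:

    ```python
    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(0)

    def bridge(Q, a, b, t, depth):
        """Sample a CTMC bridge X(0)=a, X(t)=b at dyadic time points by bisection."""
        if depth == 0:
            return {0.0: a, t: b}
        P = expm(Q * (t / 2.0))              # transition matrix over half the interval
        w = P[a, :] * P[:, b]                # P(X_{t/2}=k | X_0=a, X_t=b), unnormalized
        k = rng.choice(len(w), p=w / w.sum())
        left = bridge(Q, a, k, t / 2.0, depth - 1)
        right = bridge(Q, k, b, t / 2.0, depth - 1)
        return {**left, **{t / 2.0 + s: x for s, x in right.items()}}

    # Example: a 3-state rate matrix (rows sum to zero)
    Q = np.array([[-1.0, 0.7, 0.3],
                  [0.4, -0.9, 0.5],
                  [0.2, 0.8, -1.0]])
    print(sorted(bridge(Q, a=0, b=2, t=1.0, depth=3).items()))
    ```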

  5. Dimension reduction based on weighted variance estimate

    Institute of Scientific and Technical Information of China (English)

    ZHAO JunLong; XU XingZhong

    2009-01-01

    In this paper, we propose a new estimate for dimension reduction, called the weighted variance estimate (WVE), which includes the Sliced Average Variance Estimate (SAVE) as a special case. A bootstrap method is used to select the best estimate from the WVE and to estimate the structure dimension, and this selected best estimate usually performs better than existing methods such as Sliced Inverse Regression (SIR) and SAVE. Many methods, such as SIR and SAVE, put the same weight on each observation when estimating the central subspace (CS). By introducing a weight function, the WVE puts different weights on different observations according to their distance from the CS. The weight function gives the WVE very good performance in general and in complicated situations, for example when the distribution of the regressor deviates severely from the elliptical distribution on which many methods, such as SIR, are based. Compared with many existing methods, the WVE is insensitive to the distribution of the regressor. The consistency of the WVE is established. Simulations comparing the performance of the WVE with other existing methods confirm its advantage.
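
    For reference, the SAVE special case mentioned above is simple to sketch (an equal-weight illustration; the WVE itself additionally reweights observations by their distance from the CS):

    ```python
    import numpy as np

    def save_directions(X, y, n_slices=5, d=1):
        """Sliced Average Variance Estimation: the equal-weight special case of WVE."""
        n, p = X.shape
        A = np.linalg.inv(np.linalg.cholesky(np.cov(X, rowvar=False))).T  # whitening
        Z = (X - X.mean(axis=0)) @ A
        M = np.zeros((p, p))
        for idx in np.array_split(np.argsort(y), n_slices):   # slice on the response
            D = np.eye(p) - np.cov(Z[idx], rowvar=False)
            M += (len(idx) / n) * D @ D                       # sum of (I - V_s)^2
        vals, vecs = np.linalg.eigh(M)
        return A @ vecs[:, ::-1][:, :d]         # top-d directions, back in X scale

    # Toy check: y depends on X only through the first coordinate
    rng = np.random.default_rng(0)
    X = rng.standard_normal((2000, 5))
    y = X[:, 0] ** 2 + 0.1 * rng.standard_normal(2000)
    print(save_directions(X, y).ravel())        # roughly proportional to e_1
    ```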

  6. Delivery Time Variance Reduction in the Military Supply Chain

    Science.gov (United States)

    2010-03-01

    Thesis presented to the Faculty, Department of Operational Sciences, Graduate School of Engineering and Management, Air Force Institute of Technology, March 2010. Report no. AFIT-OR-MS-ENS-10-02. Approved for public release; distribution unlimited.

  7. Some variance reduction methods for numerical stochastic homogenization.

    Science.gov (United States)

    Blanc, X; Le Bris, C; Legoll, F

    2016-04-28

    We give an overview of a series of recent studies devoted to variance reduction techniques for numerical stochastic homogenization. Numerical homogenization requires that a set of problems is solved at the microscale, the so-called corrector problems. In a random environment, these problems are stochastic and therefore need to be repeatedly solved, for several configurations of the medium considered. An empirical average over all configurations is then performed using the Monte Carlo approach, so as to approximate the effective coefficients necessary to determine the macroscopic behaviour. Variance severely affects the accuracy and the cost of such computations. Variance reduction approaches, borrowed from other contexts in the engineering sciences, can be useful. Some of these variance reduction techniques are presented, studied and tested here.
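
    One classical technique borrowed in this way is the method of antithetic variables; a generic sketch (toy integrand, not the corrector problems of the paper):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def f(u):
        """Stand-in for an expensive output driven by random input u in [0,1)^d."""
        return np.exp(u).mean(axis=-1)

    N, d = 50_000, 8
    U = rng.random((N, d))
    crude = f(U)                          # crude Monte Carlo samples
    anti = 0.5 * (f(U) + f(1.0 - U))      # antithetic pairs reuse the same draws
    # For a monotone integrand the pair members are negatively correlated, so
    # the variance drops even after accounting for the two evaluations per pair.
    print(crude.mean(), crude.var(ddof=1) / N)
    print(anti.mean(), anti.var(ddof=1) / N)
    ```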

  8. Variance reduction methods applied to deep-penetration problems

    Energy Technology Data Exchange (ETDEWEB)

    Cramer, S.N.

    1984-01-01

    All deep-penetration Monte Carlo calculations require variance reduction methods. Before beginning with a detailed approach to these methods, several general comments concerning deep-penetration calculations by Monte Carlo, the associated variance reduction, and the similarities and differences of these with regard to non-deep-penetration problems will be addressed. The experienced practitioner of Monte Carlo methods will easily find exceptions to any of these generalities, but it is felt that these comments will aid the novice in understanding some of the basic ideas and nomenclature. Also, from a practical point of view, the discussions and developments presented are oriented toward use of the computer codes which are presented in segments of this Monte Carlo course.

  9. Automated Variance Reduction Applied to Nuclear Well-Logging Problems

    Energy Technology Data Exchange (ETDEWEB)

    Wagner, John C. [ORNL]; Peplow, Douglas E. [ORNL]; Evans, Thomas M. [ORNL]

    2009-01-01

    The Monte Carlo method enables detailed, explicit geometric, energy and angular representations, and hence is considered to be the most accurate method available for solving complex radiation transport problems. Because of its associated accuracy, the Monte Carlo method is widely used in the petroleum exploration industry to design, benchmark, and simulate nuclear well-logging tools. Nuclear well-logging tools, which contain neutron and/or gamma sources and two or more detectors, are placed in boreholes that contain water (and possibly other fluids) and that are typically surrounded by a formation (e.g., limestone, sandstone, calcites, or a combination). The response of the detectors to radiation returning from the surrounding formation is used to infer information about the material porosity, density, composition, and associated characteristics. Accurate computer simulation is a key aspect of this exploratory technique. However, because this technique involves calculating highly precise responses (at two or more detectors) based on radiation that has interacted with the surrounding formation, the transport simulations are computationally intensive, requiring significant use of variance reduction techniques, parallel computing, or both. Because of the challenging nature of these problems, nuclear well-logging problems have frequently been used to evaluate the effectiveness of variance reduction techniques (e.g., Refs. 1-4). The primary focus of these works has been on improving the computational efficiency associated with calculating the response at the most challenging detector location, which is typically the detector furthest from the source. Although the objective of nuclear well-logging simulations is to calculate the response at multiple detector locations, until recently none of the numerous variance reduction methods/techniques has been well-suited to simultaneous optimization of multiple detector (tally) regions. Therefore, a separate calculation is

  10. AVATAR -- Automatic variance reduction in Monte Carlo calculations

    Energy Technology Data Exchange (ETDEWEB)

    Van Riper, K.A.; Urbatsch, T.J.; Soran, P.D. [and others]

    1997-05-01

    AVATAR™ (Automatic Variance And Time of Analysis Reduction), accessed through the graphical user interface application, Justine™, is a superset of MCNP™ that automatically invokes THREEDANT™ for a three-dimensional deterministic adjoint calculation on a mesh independent of the Monte Carlo geometry, calculates weight windows, and runs MCNP. Computational efficiency increases by a factor of 2 to 5 for a three-detector oil well logging tool model. Human efficiency increases dramatically, since AVATAR eliminates the need for deep intuition and hours of tedious handwork.
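
    The heart of such schemes is the mapping from a deterministic adjoint ("importance") solution to weight windows; a hedged sketch of the usual recipe (illustrative normalization and window width, not AVATAR's exact prescription):

    ```python
    import numpy as np

    def weight_windows(phi_adj, source_cell=0, width=5.0):
        """Window centers inversely proportional to the adjoint flux, normalized
        so that a unit-weight source particle is born at its window center."""
        phi = np.asarray(phi_adj, dtype=float)
        centers = phi[source_cell] / phi
        return centers / np.sqrt(width), centers * np.sqrt(width)  # lower, upper

    # Importance falls off through a shield, so windows widen with depth and
    # particles are split as they advance toward the detector region.
    lo, up = weight_windows([1.0, 0.3, 0.08, 0.02, 0.005])
    print(np.round(lo, 3), np.round(up, 1))
    ```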

  11. Deflation as a Method of Variance Reduction for Estimating the Trace of a Matrix Inverse

    CERN Document Server

    Gambhir, Arjun Singh; Orginos, Kostas

    2016-01-01

    Many fields require computing the trace of the inverse of a large, sparse matrix. The typical method used for such computations is the Hutchinson method, which is a Monte Carlo (MC) averaging over matrix quadratures. To improve its convergence, several variance reduction techniques have been proposed. In this paper, we study the effects of deflating the near-null singular value space. We make two main contributions. First, we analyze the variance of the Hutchinson method as a function of the deflated singular values and vectors. Although this provides good intuition in general, by assuming additionally that the singular vectors are random unitary matrices, we arrive at concise formulas for the deflated variance that include only the variance and mean of the singular values. We make the remarkable observation that deflation may increase variance for Hermitian matrices but not for non-Hermitian ones. This is a rare, if not unique, property where non-Hermitian matrices outperform Hermitian ones. The theory can b...
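
    The idea is concrete enough to sketch with dense linear algebra (a small-scale illustration only; in practice the matrix is sparse, A⁻¹z comes from a linear solver, and the deflation space from an iterative singular value solver):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, k, N = 200, 10, 400
    A = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # toy non-Hermitian matrix

    U, s, Vt = np.linalg.svd(A)
    Ud, sd, Vd = U[:, -k:], s[-k:], Vt[-k:, :].T        # k smallest singular triplets
    Ainv = np.linalg.inv(A)                             # stand-in for solving A x = z

    # Exact trace of the deflated rank-k piece of A^{-1}: sum_i (u_i . v_i) / s_i
    t_defl = np.sum(np.sum(Ud * Vd, axis=0) / sd)

    plain, rest = [], []
    for _ in range(N):
        z = rng.choice((-1.0, 1.0), size=n)             # Rademacher probe vector
        q = z @ Ainv @ z                                # Hutchinson sample of tr(A^{-1})
        plain.append(q)
        rest.append(q - np.sum((z @ Ud) * (z @ Vd) / sd))  # remove deflated part

    print("true    :", np.trace(Ainv))
    print("plain   :", np.mean(plain), "+/-", np.std(plain) / np.sqrt(N))
    print("deflated:", t_defl + np.mean(rest), "+/-", np.std(rest) / np.sqrt(N))
    ```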

  12. Application of variance reduction techniques in Monte Carlo simulation of clinical electron linear accelerator

    Science.gov (United States)

    Zoubair, M.; El Bardouni, T.; El Gonnouni, L.; Boulaich, Y.; El Bakkari, B.; El Younoussi, C.

    2012-01-01

    Computation time is an important and problematic parameter in Monte Carlo simulations, since it is inversely related to the statistical error; hence the idea of using variance reduction techniques. These techniques play an important role in reducing uncertainties and improving the statistical results. Several variance reduction techniques have been developed. The best known are transport cutoffs, interaction forcing, bremsstrahlung splitting and Russian roulette. The use of a phase space file is also appropriate to reduce the computing time enormously. In this work, we applied these techniques to a linear accelerator (LINAC) using the MCNPX Monte Carlo code, which offers a rich palette of variance reduction techniques. In this study we investigated the various cards related to the variance reduction techniques provided by MCNPX. The parameter values found in this study can be used efficiently in the MCNPX code. Final calculations are performed in two steps that are linked by a phase space file. Results show that, compared to direct simulation (with neither variance reduction nor a phase space file), the adopted method improves the simulation efficiency by a factor greater than 700.
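
    Two of the techniques named above are simple enough to show generically: weight-window splitting and Russian roulette preserve the expected weight by construction (illustrative bounds and survival weight; the real parameters come from the MCNPX cards):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def window_check(w, w_low=0.25, w_up=4.0):
        """Return the list of particle weights to track after splitting/roulette."""
        if w > w_up:                        # split: n copies, total weight preserved
            n = int(np.ceil(w / w_up))
            return [w / n] * n
        if w < w_low:                       # roulette: survive with prob w / w_surv
            w_surv = np.sqrt(w_low * w_up)  # one common choice of survival weight
            return [w_surv] if rng.random() < w / w_surv else []
        return [w]                          # inside the window: leave unchanged

    # Unbiasedness check: the mean total weight equals the input weight
    for w in (0.01, 0.5, 40.0):
        print(w, "->", round(np.mean([sum(window_check(w)) for _ in range(200_000)]), 3))
    ```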

  13. Reduction of variance in measurements of average metabolite concentration in anatomically-defined brain regions

    Science.gov (United States)

    Larsen, Ryan J.; Newman, Michael; Nikolaidis, Aki

    2016-11-01

    Multiple methods have been proposed for using Magnetic Resonance Spectroscopy Imaging (MRSI) to measure representative metabolite concentrations of anatomically-defined brain regions. Generally these methods require spectral analysis, quantitation of the signal, and reconciliation with anatomical brain regions. However, to simplify processing pipelines, it is practical to only include those corrections that significantly improve data quality. Of particular importance for cross-sectional studies is knowledge about how much each correction lowers the inter-subject variance of the measurement, thereby increasing statistical power. Here we use a data set of 72 subjects to calculate the reduction in inter-subject variance produced by several corrections that are commonly used to process MRSI data. Our results demonstrate that significant reductions of variance can be achieved by performing water scaling, accounting for tissue type, and integrating MRSI data over anatomical regions rather than simply assigning MRSI voxels with anatomical region labels.

  14. Implementation of variance-reduction techniques for Monte Carlo nuclear logging calculations with neutron sources

    NARCIS (Netherlands)

    Maucec, M

    2005-01-01

    Monte Carlo simulations for nuclear logging applications are considered to be highly demanding transport problems. In this paper, the implementation of weight-window variance reduction schemes in a 'manual' fashion to improve the efficiency of calculations for a neutron logging tool is presented. Th

  15. Automatic variance reduction for Monte Carlo simulations via the local importance function transform

    Energy Technology Data Exchange (ETDEWEB)

    Turner, S.A.

    1996-02-01

    The author derives a transformed transport problem that can be solved theoretically by analog Monte Carlo with zero variance. However, the Monte Carlo simulation of this transformed problem cannot be implemented in practice, so he develops a method for approximating it. The approximation to the zero-variance method consists of replacing the continuous adjoint transport solution in the transformed transport problem by a piecewise continuous approximation containing local biasing parameters obtained from a deterministic calculation. He uses the transport and collision processes of the transformed problem to bias distance-to-collision and selection of post-collision energy groups and trajectories in a traditional Monte Carlo simulation of "real" particles. He refers to the resulting variance reduction method as the Local Importance Function Transform (LIFT) method. He demonstrates the efficiency of the LIFT method for several 3-D, linearly anisotropic scattering, one-group, and multigroup problems. In these problems the LIFT method is shown to be more efficient than the AVATAR scheme, which is one of the best variance reduction techniques currently available in a state-of-the-art Monte Carlo code. For most of the problems considered, the LIFT method produces higher figures of merit than AVATAR, even when the LIFT method is used as a "black box". There are some problems that cause trouble for most variance reduction techniques, and the LIFT method is no exception. For example, the author demonstrates that problems with voids, or low density regions, can cause a reduction in the efficiency of the LIFT method. However, the LIFT method still performs better than survival biasing and AVATAR in these difficult cases.
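
    For context, the underlying zero-variance scheme (standard in the literature; our notation, a hedged reconstruction rather than a quotation from the report) biases the source q and the transition kernel k by the adjoint flux ψ†:

    ```latex
    \[
      \hat{q}(P) = \frac{q(P)\,\psi^{\dagger}(P)}{\int q(P')\,\psi^{\dagger}(P')\,dP'},
      \qquad
      \hat{k}(P \to P') = \frac{k(P \to P')\,\psi^{\dagger}(P')}
                               {\int k(P \to P'')\,\psi^{\dagger}(P'')\,dP''},
    \]
    ```

    with particle weights multiplied by the reciprocal of each biasing ratio. If ψ† were the exact adjoint solution, the weighted estimator would have zero variance; the LIFT method replaces ψ† by the piecewise continuous deterministic approximation described above.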

  16. Discrete velocity computations with stochastic variance reduction of the Boltzmann equation for gas mixtures

    Energy Technology Data Exchange (ETDEWEB)

    Clarke, Peter; Varghese, Philip; Goldstein, David [ASE-EM Department, UT Austin, 210 East 24th St, C0600, Austin, TX 78712 (United States)]

    2014-12-09

    We extend a variance reduced discrete velocity method developed at UT Austin [1, 2] to gas mixtures with large mass ratios and flows with trace species. The mixture is stored as a collection of independent velocity distribution functions, each with a unique grid in velocity space. Different collision types (A-A, A-B, B-B, etc.) are treated independently, and the variance reduction scheme is formulated with different equilibrium functions for each separate collision type. The individual treatment of species enables increased focus on species important to the physics of the flow, even if the important species are present in trace amounts. The method is verified through comparisons to Direct Simulation Monte Carlo computations and the computational workload per time step is investigated for the variance reduced method.

  17. Use experiences of MCNP in nuclear energy study. 2. Review of variance reduction techniques

    Energy Technology Data Exchange (ETDEWEB)

    Sakurai, Kiyoshi; Yamamoto, Toshihiro [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment] [eds.]

    1998-03-01

    The 'MCNP Use Experience' Working Group was established in 1996 under the Special Committee on Nuclear Code Evaluation. This year's main activity of the working group has been focused on the review of variance reduction techniques of Monte Carlo calculations. This working group dealt with the variance reduction techniques of (1) neutron and gamma ray transport calculation of fusion reactor system, (2) concept design of nuclear transmutation system using accelerator, (3) JMTR core calculation, (4) calculation of prompt neutron decay constant, (5) neutron and gamma ray transport calculation for exposure evaluation, (6) neutron and gamma ray transport calculation of shielding system, etc. Furthermore, this working group started an activity to compile a 'Guideline of Monte Carlo Calculation' which will be a standard in the future. The appendices of this report include this 'Guideline', the use experience of MCNP 4B and examples of Monte Carlo calculations of high energy charged particles. The 11 papers are indexed individually. (J.P.N.)

  18. A model and variance reduction method for computing statistical outputs of stochastic elliptic partial differential equations

    Energy Technology Data Exchange (ETDEWEB)

    Vidal-Codina, F., E-mail: fvidal@mit.edu [Department of Aeronautics and Astronautics, Massachusetts Institute of Technology, Cambridge, MA 02139 (United States)]; Nguyen, N.C., E-mail: cuongng@mit.edu [Department of Aeronautics and Astronautics, Massachusetts Institute of Technology, Cambridge, MA 02139 (United States)]; Giles, M.B., E-mail: mike.giles@maths.ox.ac.uk [Mathematical Institute, University of Oxford, Oxford (United Kingdom)]; Peraire, J., E-mail: peraire@mit.edu [Department of Aeronautics and Astronautics, Massachusetts Institute of Technology, Cambridge, MA 02139 (United States)]

    2015-09-15

    We present a model and variance reduction method for the fast and reliable computation of statistical outputs of stochastic elliptic partial differential equations. Our method consists of three main ingredients: (1) the hybridizable discontinuous Galerkin (HDG) discretization of elliptic partial differential equations (PDEs), which allows us to obtain high-order accurate solutions of the governing PDE; (2) the reduced basis method for a new HDG discretization of the underlying PDE to enable real-time solution of the parameterized PDE in the presence of stochastic parameters; and (3) a multilevel variance reduction method that exploits the statistical correlation among the different reduced basis approximations and the high-fidelity HDG discretization to accelerate the convergence of the Monte Carlo simulations. The multilevel variance reduction method provides efficient computation of the statistical outputs by shifting most of the computational burden from the high-fidelity HDG approximation to the reduced basis approximations. Furthermore, we develop a posteriori error estimates for our approximations of the statistical outputs. Based on these error estimates, we propose an algorithm for optimally choosing both the dimensions of the reduced basis approximations and the sizes of Monte Carlo samples to achieve a given error tolerance. We provide numerical examples to demonstrate the performance of the proposed method.
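
    The multilevel idea can be sketched in a few lines: write the expensive output as a cheap output plus a correction, and estimate each term with a sample size matched to its cost and variance (toy models below; in the paper the cheap levels are reduced basis approximations of increasing fidelity):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def s_hi(x):                   # stand-in for the high-fidelity HDG output
        return np.sin(x) + 0.02 * x**2

    def s_lo(x):                   # stand-in for a cheap reduced basis output
        return np.sin(x)

    # E[s_hi] = E[s_lo] + E[s_hi - s_lo]: many cheap samples, few coupled costly ones
    x_lo = rng.standard_normal(200_000)
    x_hi = rng.standard_normal(1_000)                # the same draws feed both models
    two_level = s_lo(x_lo).mean() + (s_hi(x_hi) - s_lo(x_hi)).mean()
    crude = s_hi(rng.standard_normal(1_000)).mean()  # same high-fidelity budget
    print("two-level:", two_level, " crude MC:", crude)
    ```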

  1. An Investigation of the Sequential Sampling Method for Crossdocking Simulation Output Variance Reduction

    CERN Document Server

    Adewunmi, Adrian; Byrne, Mike

    2008-01-01

    This paper investigates the reduction of variance associated with a simulation output performance measure, using the Sequential Sampling method while applying minimum simulation replications, for a class of JIT (Just in Time) warehousing system called crossdocking. We initially used the Sequential Sampling method to attain a desired 95% confidence interval half-width of ±0.5 for our chosen performance measure (total usage cost, given a mean maximum level of 157,000 pounds and a mean minimum level of 149,000 pounds). From our results, we achieved a 95% confidence interval half-width of ±2.8 for this performance measure (total usage cost, with an average mean value of 115,000 pounds). However, the Sequential Sampling method requires a huge number of simulation replications to reduce the variance of our simulation output value to the target level. Arena (version 11) simulation software was used to conduct this study.
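
    A generic sketch of the sequential procedure (our illustration; Arena implements its own version): keep adding replications until the t-based confidence-interval half-width meets the target:

    ```python
    import numpy as np
    from scipy import stats

    def sequential_mean(draw, target_hw, conf=0.95, n0=10, n_max=100_000):
        """Run replications until the CI half-width is at most target_hw."""
        xs = [draw() for _ in range(n0)]
        while True:
            n = len(xs)
            hw = stats.t.ppf(0.5 + conf / 2.0, n - 1) * np.std(xs, ddof=1) / np.sqrt(n)
            if hw <= target_hw or n >= n_max:
                return np.mean(xs), hw, n
            xs.append(draw())

    rng = np.random.default_rng(0)
    mean, hw, n = sequential_mean(lambda: rng.normal(115_000, 4_000), target_hw=500.0)
    print(f"mean={mean:.0f}, half-width={hw:.0f}, replications={n}")
    ```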

  2. Enhancement of high-energy distribution tail in Monte Carlo semiconductor simulations using a Variance Reduction Scheme

    Directory of Open Access Journals (Sweden)

    Vincenza Di Stefano

    2009-11-01

    The Multicomb variance reduction technique has been introduced in the Direct Monte Carlo Simulation for submicrometric semiconductor devices. The method has been implemented in bulk silicon. The simulations show that the statistical variance of hot electrons is reduced with some computational cost. The method is efficient and easy to implement in existing device simulators.

  3. Ant colony method to control variance reduction techniques in the Monte Carlo simulation of clinical electron linear accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Garcia-Pareja, S. [Servicio de Radiofisica Hospitalaria, Hospital Regional Universitario 'Carlos Haya', Avda. Carlos Haya, s/n, E-29010 Malaga (Spain)], E-mail: garciapareja@gmail.com; Vilches, M. [Servicio de Fisica y Proteccion Radiologica, Hospital Regional Universitario 'Virgen de las Nieves', Avda. de las Fuerzas Armadas, 2, E-18014 Granada (Spain)]; Lallena, A.M. [Departamento de Fisica Atomica, Molecular y Nuclear, Universidad de Granada, E-18071 Granada (Spain)]

    2007-09-21

    The ant colony method is used to control the application of variance reduction techniques to the simulation of clinical electron linear accelerators of use in cancer therapy. In particular, splitting and Russian roulette, two standard variance reduction methods, are considered. The approach can be applied to any accelerator in a straightforward way and, in addition, permits investigation of the 'hot' regions of the accelerator, information which is essential for developing a source model for this therapy tool.

  4. Variance reduction techniques for 14 MeV neutron streaming problem in rectangular annular bent duct

    Energy Technology Data Exchange (ETDEWEB)

    Ueki, Kotaro [Ship Research Inst., Mitaka, Tokyo (Japan)]

    1998-03-01

    The Monte Carlo method is a powerful technique for solving a wide range of radiation transport problems. Its features are that it can solve the Boltzmann transport equation almost without approximation, and that the complexity of the systems to be treated rarely becomes a problem. However, Monte Carlo calculations are always accompanied by statistical errors called variance. In shielding calculations, the standard deviation or fractional standard deviation (FSD) is used frequently; the expression of the FSD is shown below. Radiation shielding problems are roughly divided into transmission through deep layers and streaming problems. In streaming problems, the large differences in particle weights, which depend on the particle histories, worsen the FSD of the Monte Carlo calculation. The streaming experiment in the 14 MeV neutron rectangular annular bent duct, a typical streaming benchmark experiment carried out at the OKTAVIAN facility of Osaka University, was analyzed with MCNP 4B, and the reduction of the variance or FSD was attempted. The experimental system is shown. The analysis model for MCNP 4B, the input data and the results of the analysis are reported, and the comparison with the experimental results is examined. (K.I.)
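
    The FSD expression referred to above (lost from this extract; the standard definition is reconstructed here) is the relative standard error of the sample mean over N histories:

    ```latex
    \[
      \bar{x} = \frac{1}{N}\sum_{i=1}^{N} x_i,
      \qquad
      \mathrm{FSD} = \frac{1}{\bar{x}}
      \sqrt{\frac{1}{N(N-1)} \sum_{i=1}^{N} \left(x_i - \bar{x}\right)^2}.
    \]
    ```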

  5. Application of variance reduction technique to nuclear transmutation system driven by accelerator

    Energy Technology Data Exchange (ETDEWEB)

    Sasa, Toshinobu [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment]

    1998-03-01

    In Japan, the basic policy is to dispose of the high-level radioactive waste arising from spent nuclear fuel in stable deep strata after glass solidification. If the useful elements in the waste can be separated and utilized, resources are used effectively, and high economic efficiency and safety in geological disposal can be expected. The Japan Atomic Energy Research Institute has proposed a hybrid-type transmutation system, in which a high-intensity proton accelerator is combined with a subcritical fast core, or a nuclear reactor optimized exclusively for transmutation. The tungsten target, minor actinide nitride fuel transmutation system and the molten minor actinide chloride salt target fuel transmutation system are outlined, and conceptual figures of both systems are shown. For the analysis, version 2.70 of the LAHET Code System, developed by Los Alamos National Laboratory in the USA, was adopted. When analyzing an accelerator-driven subcritical core in the energy range below 20 MeV, variance reduction techniques must be applied. (K.I.)

  6. Variance reduction techniques for a quantitative understanding of the ΔI = 1/2 rule

    CERN Document Server

    Endress, Eric

    2012-01-01

    The role of the charm quark in the dynamics underlying the ΔI = 1/2 rule for kaon decays can be understood by studying the dependence of kaon decay amplitudes on the charm quark mass using an effective ΔS = 1 weak Hamiltonian in which the charm is kept as an active degree of freedom. Overlap fermions are employed in order to avoid renormalization problems, as well as to allow access to the deep chiral regime. Quenched results in the GIM limit have shown that a significant part of the enhancement is purely due to low-energy QCD effects; variance reduction techniques based on low-mode averaging were instrumental in determining the relevant effective weak low-energy couplings in this case. Moving away from the GIM limit requires the computation of diagrams containing closed quark loops. We report on our progress to employ a combination of low-mode averaging and stochastic volume sources in order to control these contributions. Results showing a significant improvement in the statistical signal are pre...

  7. Advanced digital signal processing and noise reduction

    CERN Document Server

    Vaseghi, Saeed V

    2008-01-01

    Digital signal processing plays a central role in the development of modern communication and information processing systems. The theory and application of signal processing is concerned with the identification, modelling and utilisation of patterns and structures in a signal process. The observation signals are often distorted, incomplete and noisy and therefore noise reduction, the removal of channel distortion, and replacement of lost samples are important parts of a signal processing system. The fourth edition of Advanced Digital Signal Processing and Noise Reduction updates an

  8. Advanced Data Reduction Techniques for MUSE

    CERN Document Server

    Weilbacher, Peter M; Roth, Martin M; Boehm, Petra; Pecontal-Rousset, Arlette

    2009-01-01

    MUSE, a 2nd generation VLT instrument, will become the world's largest integral field spectrograph. It will be an AO assisted instrument which, in a single exposure, covers the wavelength range from 465 to 930 nm with an average resolution of 3000 over a field of view of 1'x1' with 0.2'' spatial sampling. Both the complexity and the rate of the data are a challenge for the data processing of this instrument. We will give an overview of the data processing scheme that has been designed for MUSE. Specifically, we will use only a single resampling step from the raw data to the reduced data product. This allows us to improve data quality, accurately propagate variance, and minimize spreading of artifacts and correlated noise. This approach necessitates changes to the standard way in which reduction steps like wavelength calibration and sky subtraction are carried out, but can be expanded to include combination of multiple exposures.

  9. A novel hybrid scattering order-dependent variance reduction method for Monte Carlo simulations of radiative transfer in cloudy atmosphere

    Science.gov (United States)

    Wang, Zhen; Cui, Shengcheng; Yang, Jun; Gao, Haiyang; Liu, Chao; Zhang, Zhibo

    2017-03-01

    We present a novel hybrid scattering order-dependent variance reduction method to accelerate the convergence rate in both forward and backward Monte Carlo radiative transfer simulations involving highly forward-peaked scattering phase function. This method is built upon a newly developed theoretical framework that not only unifies both forward and backward radiative transfer in scattering-order-dependent integral equation, but also generalizes the variance reduction formalism in a wide range of simulation scenarios. In previous studies, variance reduction is achieved either by using the scattering phase function forward truncation technique or the target directional importance sampling technique. Our method combines both of them. A novel feature of our method is that all the tuning parameters used for phase function truncation and importance sampling techniques at each order of scattering are automatically optimized by the scattering order-dependent numerical evaluation experiments. To make such experiments feasible, we present a new scattering order sampling algorithm by remodeling integral radiative transfer kernel for the phase function truncation method. The presented method has been implemented in our Multiple-Scaling-based Cloudy Atmospheric Radiative Transfer (MSCART) model for validation and evaluation. The main advantage of the method is that it greatly improves the trade-off between numerical efficiency and accuracy order by order.
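
    The truncation ingredient can be sketched for a Henyey-Greenstein phase function (a fixed-cutoff delta-truncation illustration; the paper tunes such parameters automatically at each scattering order):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    g, mu_cut = 0.9, 0.99     # asymmetry factor and forward-peak cutoff (illustrative)

    def hg_cdf(mu, g):
        """CDF of the Henyey-Greenstein phase function in mu = cos(theta)."""
        return (1 - g**2) / (2 * g) * (1 / np.sqrt(1 + g**2 - 2 * g * mu) - 1 / (1 + g))

    def hg_inv(u, g):
        """Standard inverse-CDF sampling formula for Henyey-Greenstein."""
        t = (1 - g**2) / (1 - g + 2 * g * u)
        return (1 + g**2 - t**2) / (2 * g)

    F_cut = hg_cdf(mu_cut, g)  # probability of scattering outside the forward peak

    def sample_mu():
        """Delta-truncation: treat the forward peak as undeflected transport;
        otherwise draw from the renormalized truncated phase function."""
        if rng.random() > F_cut:
            return 1.0                           # inside the peak: no deflection
        return hg_inv(rng.random() * F_cut, g)   # invert CDF restricted to mu <= mu_cut

    mus = np.array([sample_mu() for _ in range(100_000)])
    print("peak fraction:", 1 - F_cut, " sampled:", np.mean(mus == 1.0))
    ```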

  10. Noise reduction method for nonlinear signal based on maximum variance unfolding and its application to fault diagnosis

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    A new noise reduction method for nonlinear signals based on maximum variance unfolding (MVU) is proposed. The noisy signal is first embedded into a high-dimensional phase space based on phase space reconstruction theory, and then the manifold learning algorithm MVU is used to perform nonlinear dimensionality reduction on the phase-space data in order to separate the low-dimensional manifold representing the attractor from the noise subspace. Finally, the noise-reduced signal is obtained by reconstructing the low-dimensional manifold. The simulation results for the Lorenz system show that the proposed MVU-based noise reduction method outperforms the KPCA-based method and has the advantages of simple parameter estimation and low parameter sensitivity. The proposed method is applied to fault detection of a vibration signal from the rotor-stator of an aero engine with a slight rubbing fault. The denoised results show that the slight rubbing features overwhelmed by noise can be effectively extracted by the proposed noise reduction method.

  11. Fast patient-specific Monte Carlo brachytherapy dose calculations via the correlated sampling variance reduction technique

    Energy Technology Data Exchange (ETDEWEB)

    Sampson, Andrew; Le Yi; Williamson, Jeffrey F. [Department of Radiation Oncology, Virginia Commonwealth University, Richmond, Virginia 23298 (United States)

    2012-02-15

    … heterogeneous doses. On an AMD 1090T processor, computing times of 38 and 21 sec were required to achieve an average statistical uncertainty of 2% within the prostate (1 × 1 × 1 mm³) and breast (0.67 × 0.67 × 0.8 mm³) CTVs, respectively. Conclusions: CMC supports an additional average 38-60-fold improvement in average efficiency relative to conventional uncorrelated MC techniques, although some voxels experience no gain or even efficiency losses. However, for the two investigated case studies, the maximum variance within clinically significant structures was always reduced (on average by a factor of 6) in the therapeutic dose range generally. CMC takes only seconds to produce an accurate, high-resolution, low-uncertainty dose distribution for the low-energy PSB implants investigated in this study.

  12. Advances in the meta-analysis of heterogeneous clinical trials I: The inverse variance heterogeneity model.

    Science.gov (United States)

    Doi, Suhail A R; Barendregt, Jan J; Khan, Shahjahan; Thalib, Lukman; Williams, Gail M

    2015-11-01

    This article examines an improved alternative to the random effects (RE) model for meta-analysis of heterogeneous studies. It is shown that the known issues of underestimation of the statistical error and spuriously overconfident estimates with the RE model can be resolved by the use of an estimator under the fixed effect model assumption with a quasi-likelihood-based variance structure, the IVhet model. Extensive simulations confirm that this estimator retains a correct coverage probability and a lower observed variance than the RE model estimator, regardless of heterogeneity. When the proposed IVhet method is applied to the controversial meta-analysis of intravenous magnesium for the prevention of mortality after myocardial infarction, the pooled OR is 1.01 (95% CI 0.71-1.46), which not only favors the larger studies but also indicates more uncertainty around the point estimate. In comparison, under the RE model the pooled OR is 0.71 (95% CI 0.57-0.89) which, given the simulation results, reflects underestimation of the statistical error. Given the compelling evidence generated, we recommend that the IVhet model replace both the FE and RE models. To facilitate this, it has been implemented into free meta-analysis software called MetaXL, which can be downloaded from www.epigear.com.
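
    As we read the model, IVhet keeps the fixed-effect inverse-variance point estimate but inflates the variance of the pooled estimate with a quasi-likelihood heterogeneity term; a sketch (our implementation, not MetaXL's):

    ```python
    import numpy as np

    def ivhet(theta, v):
        """IVhet pooled estimate and standard error for effect sizes theta with
        within-study variances v (sketch following Doi et al., 2015)."""
        theta, v = np.asarray(theta, float), np.asarray(v, float)
        wi = 1.0 / v
        w = wi / wi.sum()                       # normalized fixed-effect weights
        est = np.sum(w * theta)                 # same point estimate as the FE model
        # DerSimonian-Laird estimate of the between-study variance tau^2
        Q = np.sum(wi * (theta - est) ** 2)
        tau2 = max(0.0, (Q - (len(theta) - 1)) / (wi.sum() - (wi**2).sum() / wi.sum()))
        se = np.sqrt(np.sum(w**2 * (v + tau2)))  # quasi-likelihood variance structure
        return est, se

    est, se = ivhet([0.2, -0.1, 0.4, 0.0], [0.02, 0.05, 0.04, 0.01])
    print(f"pooled = {est:.3f}, 95% CI = {est - 1.96*se:.3f} .. {est + 1.96*se:.3f}")
    ```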

  13. Monte Carlo simulation of X-ray imaging and spectroscopy experiments using quadric geometry and variance reduction techniques

    Science.gov (United States)

    Golosio, Bruno; Schoonjans, Tom; Brunetti, Antonio; Oliva, Piernicola; Masala, Giovanni Luca

    2014-03-01

    The simulation of X-ray imaging experiments is often performed using deterministic codes, which can be relatively fast and easy to use. However, such codes are generally not suitable for the simulation of even slightly more complex experimental conditions, involving, for instance, first-order or higher-order scattering, X-ray fluorescence emissions, or more complex geometries, particularly for experiments that combine spatial resolution with spectral information. In such cases, simulations are often performed using codes based on the Monte Carlo method. In a simple Monte Carlo approach, the interaction position of an X-ray photon and the state of the photon after an interaction are obtained simply according to the theoretical probability distributions. This approach may be quite inefficient because the final channels of interest may include only a limited region of space or photons produced by a rare interaction, e.g., fluorescent emission from elements with very low concentrations. In the field of X-ray fluorescence spectroscopy, this problem has been solved by combining the Monte Carlo method with variance reduction techniques, which can reduce the computation time by several orders of magnitude. In this work, we present a C++ code for the general simulation of X-ray imaging and spectroscopy experiments, based on the application of the Monte Carlo method in combination with variance reduction techniques, with a description of sample geometry based on quadric surfaces. We describe the benefits of the object-oriented approach in terms of code maintenance, the flexibility of the program for the simulation of different experimental conditions and the possibility of easily adding new modules. Sample applications in the fields of X-ray imaging and X-ray spectroscopy are discussed. Catalogue identifier: AERO_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AERO_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland.

  14. Directional variance adjustment: bias reduction in covariance matrices based on factor analysis with an application to portfolio optimization.

    Directory of Open Access Journals (Sweden)

    Daniel Bartz

    Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes the systematic error. In a thorough empirical study for the US, European, and Hong Kong stock markets we show that our proposed method leads to improved portfolio allocation.

  15. Fluid Mechanics, Drag Reduction and Advanced Configuration Aeronautics

    Science.gov (United States)

    Bushnell, Dennis M.

    2000-01-01

    This paper discusses Advanced Aircraft configurational approaches across the speed range, which are either enabled, or greatly enhanced, by clever Flow Control. Configurations considered include Channel Wings with circulation control for VTOL (but non-hovering) operation with high cruise speed, strut-braced CTOL transports with wingtip engines and extensive ('natural') laminar flow control, a midwing double fuselage CTOL approach utilizing several synergistic methods for drag-due-to-lift reduction, a supersonic strut-braced configuration with order of twice the L/D of current approaches and a very advanced, highly engine flow-path-integrated hypersonic cruise machine. This paper indicates both the promise of synergistic flow control approaches as enablers for 'Revolutions' in aircraft performance and fluid mechanic 'areas of ignorance' which impede their realization and provide 'target-rich' opportunities for Fluids Research.

  16. Cycle update: advanced fuels and technologies for emissions reduction

    Energy Technology Data Exchange (ETDEWEB)

    Smallwood, G. [National Research Council of Canada, Ottawa, ON (Canada)]

    2009-07-01

    This paper provided a summary of key achievements of the Program of Energy Research and Development advanced fuels and technologies for emissions reduction (AFTER) program over the funding cycle from fiscal year 2005/2006 to 2008/2009. The purpose of the paper was to inform interested parties of recent advances in knowledge and in science and technology capacities in a concise manner. The paper discussed the high level research and development themes of the AFTER program through the following 4 overarching questions: how could advanced fuels and internal combustion engine designs influence emissions; how could emissions be reduced through the use of engine hardware including aftertreatment devices; how do real-world duty cycles and advanced technology vehicles operating on Canadian fuels compare with existing technologies, models and estimates; and what are the health risks associated with transportation-related emissions. It was concluded that the main issues regarding the use of biodiesel blends in current technology diesel engines are the lack of consistency in product quality; shorter shelf life of biodiesel due to poorer oxidative stability; and a need to develop characterization methods for the final oxygenated product because most standard methods are developed for hydrocarbons and are therefore inadequate. 2 tabs., 13 figs.

  17. Recent advances in the kinetics of oxygen reduction

    Energy Technology Data Exchange (ETDEWEB)

    Adzic, R.

    1996-07-01

    Oxygen reduction is considered an important electrocatalytic reaction; the most notable needs remain improvement of the catalytic activity of existing metal electrocatalysts and development of new ones. A review is given of new advances in the understanding of reaction kinetics and improvements of the electrocatalytic properties of some surfaces, with a focus on recent studies of the relationship of surface properties to activity and reaction kinetics. The urgent need is to improve the catalytic activity of Pt and to synthesize new, possibly non-noble metal catalysts. New experimental techniques for obtaining a new level of information include various in situ spectroscopies and scanning probes, some involving synchrotron radiation. 138 refs, 18 figs, 2 tabs.

  18. ADVANCED MMIS TOWARD SUBSTANTIAL REDUCTION IN HUMAN ERRORS IN NPPS

    Directory of Open Access Journals (Sweden)

    POONG HYUN SEONG

    2013-04-01

    This paper aims to give an overview of the methods to inherently prevent human errors and to effectively mitigate the consequences of such errors by securing defense-in-depth during plant management through the advanced man-machine interface system (MMIS). It is needless to stress the significance of human error reduction during an accident in nuclear power plants (NPPs). Unexpected shutdowns caused by human errors not only threaten nuclear safety but also severely lower public acceptance of nuclear power. We must recognize that human errors are always possible, since humans are not perfect, particularly under stressful conditions. However, we have the opportunity to improve such situations through advanced information and communication technologies on the basis of lessons learned from our experiences. As important lessons, the authors explain key issues associated with automation, man-machine interfaces, operator support systems, and procedures. Based on this investigation, we outline the concept and technical factors needed to develop advanced automation, operation and maintenance support systems, and computer-based procedures using wired/wireless technology. It should be noted that the ultimate responsibility for nuclear safety obviously belongs to humans, not to machines. Therefore, safety culture, including education and training, which is a kind of organizational factor, should be emphasized as well. In regard to safety culture for human error reduction, several issues that we are facing these days are described. We expect the ideas of the advanced MMIS proposed in this paper to guide the future direction of related research and ultimately enhance the safety of NPPs.

  19. Active Vibration Reduction of the Advanced Stirling Convertor

    Science.gov (United States)

    Wilson, Scott D.; Metscher, Jonathan F.; Schifer, Nicholas A.

    2016-01-01

    Stirling Radioisotope Power Systems (RPS) are being developed as an option to provide power on future space science missions where robotic spacecraft will orbit, flyby, land or rove. A Stirling Radioisotope Generator (SRG) could offer space missions a more efficient power system that uses one fourth of the nuclear fuel and decreases the thermal footprint compared to the current state of the art. The Stirling Cycle Technology Development (SCTD) Project is funded by the RPS Program to develop Stirling-based subsystems, including convertor and controller maturation efforts that have resulted in high-fidelity hardware like the Advanced Stirling Radioisotope Generator (ASRG), Advanced Stirling Convertor (ASC), and ASC Controller Unit (ACU). The SCTD Project also performs research to develop less mature technologies with a wide variety of objectives, including increasing temperature capability to enable new environments, improving system reliability or fault tolerance, reducing mass or size, and developing advanced concepts that are mission enabling. Active vibration reduction systems (AVRS), or "balancers", have historically been developed and characterized to provide fault tolerance for generator designs that incorporate dual-opposed Stirling convertors or to enable single-convertor, or small RPS, missions. Balancers reduce the dynamic disturbance forces created by the power piston and displacer internal moving components of a single operating convertor to meet spacecraft requirements for induced disturbance force. To improve fault tolerance for dual-opposed configurations and enable single-convertor configurations, a breadboard AVRS was implemented on the Advanced Stirling Convertor (ASC). The AVRS included a linear motor, a motor mount, and a closed-loop controller able to balance out the transmitted peak dynamic disturbance using acceleration feedback. Test objectives included quantifying power and mass penalty and reduction in transmitted force over a range of ASC …

  1. Low cost biological lung volume reduction therapy for advanced emphysema

    Directory of Open Access Journals (Sweden)

    Bakeer M

    2016-08-01

    Mostafa Bakeer,¹ Taha Taha Abdelgawad,¹ Raed El-Metwaly,¹ Ahmed El-Morsi,¹ Mohammad Khairy El-Badrawy,¹ Solafa El-Sharawy² (¹Chest Medicine Department, ²Clinical Pathology Department, Faculty of Medicine, Mansoura University, Mansoura, Egypt). Background: Bronchoscopic lung volume reduction (BLVR), using biological agents, is one of the new alternatives to lung volume reduction surgery. Objectives: To evaluate the efficacy and safety of biological BLVR using low-cost agents, namely autologous blood and fibrin glue. Methods: Enrolled patients were divided into two groups: group A (seven patients), in which autologous blood was used, and group B (eight patients), in which fibrin glue was used. The agents were injected through a triple-lumen balloon catheter via fiberoptic bronchoscope. Changes in high-resolution computerized tomography (HRCT) volumetry, pulmonary function tests, symptoms, and exercise capacity were evaluated at 12 weeks post-procedure, as well as complications. Results: In group A, at 12 weeks post-procedure, there was significant improvement in the mean values of HRCT volumetry and residual volume/total lung capacity (% predicted) (P-values: <0.001 and 0.038, respectively). In group B, there was significant improvement in the mean values of HRCT volumetry and residual volume/total lung capacity (% predicted) (P-values: 0.005 and 0.004, respectively). All patients tolerated the procedure, with no mortality. Conclusion: BLVR using autologous blood and locally prepared fibrin glue is a promising therapy for advanced emphysema in terms of efficacy and safety, as well as cost-effectiveness. Keywords: BLVR, bronchoscopy, COPD, interventional pulmonology

  2. Development of new source diagnostic methods and variance reduction techniques for Monte Carlo eigenvalue problems with a focus on high dominance ratio problems

    Science.gov (United States)

    Wenner, Michael T.

    Obtaining the solution to the linear Boltzmann equation is often a daunting task. The time-independent form is an equation of six independent variables which cannot be solved analytically in all but some special problems. Instead, numerical approaches have been devised. This work focuses on improving Monte Carlo methods for its solution in eigenvalue form. First, a statistical method of stationarity detection called the KPSS test was adapted as a Monte Carlo eigenvalue source convergence test. The KPSS test analyzes the source center-of-mass series, which was chosen since it should be indicative of overall source behavior and is physically easy to understand; a source center-of-mass plot alone serves as a good visual source convergence diagnostic. The KPSS test and three different information-theoretic diagnostics were implemented into the well-known KENO V.a code inside the SCALE (version 5) code package from Oak Ridge National Laboratory and compared through analysis of a simple problem and several difficult source convergence benchmarks. Results showed that the KPSS test can add to overall confidence by identifying more problematic simulations than would be identified without it, and the source center-of-mass information visually aids in understanding the problem physics. The second major focus of this dissertation concerned variance reduction methodologies for Monte Carlo eigenvalue problems. The CADIS methodology, based on importance sampling, was adapted to eigenvalue problems. It was shown that the straightforward adaptation of importance sampling can provide a significant variance reduction in the determination of keff (up to 30% in the cases studied). A modified version of this methodology was developed which utilizes independent deterministic importance simulations. In this new methodology, each particle is simulated multiple times, once for every other discretized source region, utilizing the importance for that region only. Since each particle …
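
    The center-of-mass diagnostic is easy to reproduce with a stock KPSS test (a sketch of the idea using statsmodels, not the KENO V.a implementation):

    ```python
    import numpy as np
    from statsmodels.tsa.stattools import kpss

    def source_converged(sites_per_cycle, alpha=0.05):
        """KPSS stationarity test on the fission-source center of mass.
        sites_per_cycle: list of (n_i, 3) arrays of fission-site coordinates."""
        com = np.array([s.mean(axis=0) for s in sites_per_cycle])  # COM per cycle
        # The KPSS null hypothesis is stationarity: a large p-value is consistent
        # with a converged source, while drift in the COM rejects the null.
        stat, p, _, _ = kpss(com[:, 0], regression="c", nlags="auto")
        return p > alpha

    rng = np.random.default_rng(0)
    drifting = [rng.normal(0.01 * c, 1.0, size=(500, 3)) for c in range(200)]
    print(source_converged(drifting))   # False: the source COM is still drifting
    ```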

  3. Advanced Acoustic Blankets for Improved Aircraft Interior Noise Reduction Project

    Data.gov (United States)

    National Aeronautics and Space Administration — In this project advanced acoustic blankets for improved low frequency interior noise control in aircraft will be developed and demonstrated. The improved performance...

  4. Advanced Acoustic Blankets for Improved Aircraft Interior Noise Reduction Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The objective of the proposed Phase II research effort is to develop heterogeneous (HG) blankets for improved sound reduction in aircraft structures. Phase I...

  5. Targeted reduction of advanced glycation improves renal function in obesity

    DEFF Research Database (Denmark)

    Harcourt, Brooke E; Sourris, Karly C; Coughlan, Melinda T

    2011-01-01

    function and an inflammatory profile (monocyte chemoattractant protein-1 (MCP-1) and macrophage migration inhibitory factor (MIF)) were improved following the low-AGE diet. Mechanisms of advanced glycation-related renal damage were investigated in a mouse model of obesity using the AGE...

  6. Recent advancements in mechanical reduction methods: particulate systems.

    Science.gov (United States)

    Leleux, Jardin; Williams, Robert O

    2014-03-01

    The screening of new active pharmaceutical ingredients (APIs) has become more streamlined, and as a result the number of new drugs in the pipeline is steadily increasing. However, a major limiting factor for new API approval and market introduction is the low solubility associated with a large percentage of these new drugs. While many modification strategies, such as salt formation and the addition of cosolvents, have been studied to improve solubility, most provide only marginal success and have severe disadvantages. One of the most successful methods to date is the mechanical reduction of drug particle size, which inherently increases the surface area of the particles and, as described by the Noyes-Whitney equation, the dissolution rate. Drug micronization has been the gold standard for achieving these improvements; however, the extremely low solubility of some new chemical entities is not significantly affected by size reduction in this range. A reduction in size to the nanometric scale is necessary. Bottom-up and top-down techniques are utilized to produce drug crystals in this size range; however, as discussed in this review, top-down approaches have provided greater enhancements in drug usability on the industrial scale. The six FDA-approved products, all of which exploit top-down approaches, confirm this. In this review, the advantages and disadvantages of both approaches are discussed, in addition to specific top-down techniques and the improvements they contribute to the pharmaceutical field.
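
    For reference, the dissolution-rate relation invoked above is the Noyes-Whitney equation; a standard textbook form (the symbol definitions below are conventional, not quoted from the review) is

        \frac{dm}{dt} = \frac{D\,A\,(C_s - C)}{h}

    where dm/dt is the dissolution rate, D the diffusion coefficient, A the particle surface area, C_s the saturation solubility, C the bulk drug concentration, and h the diffusion-layer thickness. For a fixed drug mass, A grows as particle size shrinks, which is why micronization, and nanonization even more so, raises the dissolution rate.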

  7. Recent Advances in Electrical Resistance Preheating of Aluminum Reduction Cells

    Science.gov (United States)

    Ali, Mohamed Mahmoud; Kvande, Halvor

    2017-02-01

    There are two main preheating methods that are used nowadays for aluminum reduction cells. One is based on electrical resistance preheating with a thin bed of small coke and/or graphite particles between the anodes and the cathode carbon blocks. The other is flame preheating, where two or more gas or oil burners are used. Electrical resistance preheating is the oldest method, but is still frequently used by different aluminum producers. Many improvements have been made to this method by different companies over the last decade. In this paper, important points pertaining to the preparation and preheating of these cells, as well as measurements made during the preheating process and evaluation of the preheating performance, are illustrated. The preheating times of these cells were found to be between 36 h and 96 h for cell currents between 176 kA and 406 kA, while the resistance bed thickness was between 13 mm and 60 mm. The average cathode surface temperature at the end of the preheating was usually between 800°C and 950°C. The effect of the preheating methods on cell life is unclear and no quantifiable conclusions can be drawn. Some work carried out in the area of mathematical modeling is also discussed. It is concluded that more studies of preheated cells in real situations, based on actual measurements, are needed. The expected development of electrical resistance preheating of aluminum reduction cells is also summarized.

  8. Advanced methods of analysis variance on scenarios of nuclear prospective; Metodos avanzados de analisis de varianza en escenarios de prospectiva nuclear

    Energy Technology Data Exchange (ETDEWEB)

    Blazquez, J.; Montalvo, C.; Balbas, M.; Garcia-Berrocal, A.

    2011-07-01

    Traditional techniques for the propagation of variance are not very reliable when relative uncertainties approach 100%; in such cases, less conventional methods are used instead, such as the Beta distribution, fuzzy logic, and the Monte Carlo method.
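
    As a rough illustration of the Monte Carlo alternative mentioned above, the sketch below propagates large input uncertainties through a toy model; the model, distributions, and parameters are invented placeholders, not the scenario variables of the paper.

        # Minimal Monte Carlo propagation of uncertainty (toy model).
        import numpy as np

        rng = np.random.default_rng(1)
        n = 100_000
        share = rng.beta(2.0, 5.0, n)                        # Beta-distributed input (hypothetical)
        demand = rng.lognormal(mean=6.0, sigma=0.5, size=n)  # second uncertain input

        output = share * demand                              # nonlinear combination
        print(f"mean = {output.mean():.1f}, std = {output.std(ddof=1):.1f}")
        print("95% interval:", np.percentile(output, [2.5, 97.5]).round(1))
        # With relative uncertainties near 100%, these sampled statistics are more
        # trustworthy than a first-order (linearized) propagation-of-variance formula.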

  9. Advanced Exploration Systems (AES) Logistics Reduction and Repurposing Project: Advanced Clothing Ground Study Final Report

    Science.gov (United States)

    Byrne, Vicky; Orndoff, Evelyne; Poritz, Darwin; Schlesinger, Thilini

    2013-01-01

    All human space missions require significant logistical mass and volume that will become an excessive burden for long duration missions beyond low Earth orbit. The goal of the Advanced Exploration Systems (AES) Logistics Reduction & Repurposing (LRR) project is to bring new ideas and technologies that will enable human presence in farther regions of space. The LRR project has five tasks: 1) Advanced Clothing System (ACS) to reduce clothing mass and volume, 2) Logistics to Living (L2L) to repurpose existing cargo, 3) Heat Melt Compactor (HMC) to reprocess materials in space, 4) Trash to Gas (TTG) to extract useful gases from trash, and 5) Systems Engineering and Integration (SE&I) to integrate these logistical components. The current International Space Station (ISS) crew wardrobe has already evolved not only to reduce some of the logistical burden but also to address crew preference. The ACS task is to find ways to further reduce this logistical burden while examining human response to different types of clothes. The ACS task has been broken into a series of studies on length of wear of various garments: 1) three small studies conducted through other NASA projects (MMSEV, DSH, HI-SEAS) focusing on length of wear of garments treated with an antimicrobial finish; 2) a ground study, which is the subject of this report, addressing both length of wear and subject perception of various types of garments worn during aerobic exercise; and 3) an ISS study replicating the ground study, and including every day clothing to collect information on perception in reduced gravity in which humans experience physiological changes. The goal of the ground study is first to measure how long people can wear the same exercise garment, depending on the type of fabric and the presence of antimicrobial treatment, and second to learn why. Human factors considerations included in the study consist of the Institutional Review Board approval, test protocol and participants' training, and a web

  10. Application of FastSLAM based on simulated annealing variance reduction in navigation and localization of AUV%模拟退火方差缩减的FastSLAM算法在AUV导航定位中的应用

    Institute of Scientific and Technical Information of China (English)

    王宏健; 王晶; 曲丽萍; 刘振业

    2013-01-01

    A FastSLAM algorithm based on reducing the variance of the particle weights is presented to address the loss of estimation accuracy in AUV (autonomous underwater vehicle) localization caused by particle degeneracy and by the sample impoverishment that results from resampling in standard FastSLAM. The variance of the particle weights is decreased by an adaptive exponential fading factor, derived from the cooling function used in simulated annealing, which increases the effective particle number; this step replaces the resampling of standard FastSLAM. The kinematic model of the AUV, the feature model, and the sensor measurement models were established, and features were extracted with the Hough transform. A simultaneous localization and mapping experiment using the simulated-annealing variance-reduction FastSLAM was carried out on sea-trial data. The results indicate that the method maintains particle diversity while weakening degeneracy, and improves the accuracy and stability of the AUV navigation and localization system.
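
    The abstract does not give the exact update rule, so the sketch below only illustrates the general mechanism it describes: temper particle weights with an annealing-style exponential fading factor so that their variance drops and the effective particle number rises, in place of resampling. All names and parameters are hypothetical.

        # Weight tempering with an annealing-style fading factor (illustrative).
        import numpy as np

        def effective_n(w):
            return 1.0 / np.sum(w ** 2)

        def temper_weights(w, k, tau=20.0, alpha_min=0.2):
            # "Cooling": the exponent alpha decays with iteration k, flattening
            # the weight distribution and so reducing its variance.
            alpha = max(alpha_min, float(np.exp(-k / tau)))
            w = w ** alpha
            return w / w.sum()

        rng = np.random.default_rng(2)
        w = rng.random(100) ** 8          # severely degenerate weights
        w /= w.sum()
        print(f"N_eff before: {effective_n(w):.1f}")
        print(f"N_eff after:  {effective_n(temper_weights(w, k=5)):.1f}")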

  11. Experiment and mechanism investigation on advanced reburning for NOx reduction: influence of CO and temperature

    Institute of Scientific and Technical Information of China (English)

    WANG Zhi-hua; ZHOU Jun-hu; ZHANG Yan-wei; LU Zhi-min; FAN Jian-ren; CEN Ke-fa

    2005-01-01

    Pulverized coal reburning, ammonia injection, and advanced reburning were investigated in a pilot-scale drop tube furnace. A premix of petroleum gas, air, and NH3 was burned in a porous gas burner to generate the needed flue gas. Four kinds of pulverized coal were fed as reburning fuel at a constant rate of 1 g/min. The coal reburning process parameters, including 15%~25% reburn heat input, a temperature range from 1100 ℃ to 1400 ℃, carbon in fly ash, coal fineness, and reburn zone stoichiometric ratio, were investigated. At 25% reburn heat input, a maximum of 47% NO reduction was obtained with Yanzhou coal by pure coal reburning. The optimal temperature for reburning is about 1300 ℃, and a fuel-rich stoichiometric ratio is essential; greater coal fineness can slightly enhance the reburning ability. The temperature window for ammonia injection is about 700 ℃~1100 ℃. CO can improve the effectiveness of NH3 at lower temperatures. During advanced reburning, 72.9% NO reduction was measured. To achieve more than 70% NO reduction, Selective Non-catalytic NOx Reduction (SNCR) would require an NH3/NO stoichiometric ratio larger than 5, while advanced reburning uses only the common dose of ammonia of conventional SNCR technology. Mechanism studies show that the oxidation of CO improves the decomposition of H2O, which enriches the radical pools that ignite the overall reactions at lower temperatures.

  12. Chemical oxygen demand reduction in coffee wastewater through chemical flocculation and advanced oxidation processes

    Institute of Scientific and Technical Information of China (English)

    ZAYAS Pérez Teresa; GEISSLER Gunther; HERNANDEZ Fernando

    2007-01-01

    The removal of the natural organic matter present in coffee processing wastewater through chemical coagulation-flocculation and advanced oxidation processes (AOP) has been studied. The effectiveness of the removal of natural organic matter using commercial flocculants and the UV/H2O2, UV/O3, and UV/H2O2/O3 processes was determined under acidic conditions. For each of these processes, different operational conditions were explored to optimize the treatment efficiency of the coffee wastewater. Coffee wastewater is characterized by a high chemical oxygen demand (COD) and low total suspended solids. The outcomes of coffee wastewater treatment using coagulation-flocculation and photodegradation processes were assessed in terms of the reduction of COD, color, and turbidity. It was found that a reduction in COD of 67% could be realized when the coffee wastewater was treated by chemical coagulation-flocculation with lime and coagulant T-1. When coffee wastewater was treated by coagulation-flocculation in combination with UV/H2O2, a COD reduction of 86% was achieved, although only after prolonged UV irradiation. Of the three advanced oxidation processes considered, UV/H2O2, UV/O3, and UV/H2O2/O3, we found that treatment with UV/H2O2/O3 was the most effective, with an efficiency of color, turbidity, and further COD removal of 87% when applied to the flocculated coffee wastewater.

  13. Removal of PCBs in contaminated soils by means of chemical reduction and advanced oxidation processes.

    Science.gov (United States)

    Rybnikova, V; Usman, M; Hanna, K

    2016-09-01

    Although chemical reduction and advanced oxidation processes have been widely used individually, very few studies have assessed the combined reduction/oxidation approach for soil remediation. In the present study, experiments were performed in spiked sand and historically contaminated soil by using four synthetic nanoparticles (Fe(0), Fe/Ni, Fe3O4, Fe3−xNixO4). These nanoparticles were tested first for reductive transformation of polychlorinated biphenyls (PCBs) and then employed as catalysts to promote chemical oxidation reactions (H2O2 or persulfate). The results indicated that bimetallic Fe/Ni nanoparticles showed the highest efficiency in reduction of PCB28 and PCB118 in spiked sand (97 and 79%, respectively), whereas magnetite (Fe3O4) exhibited high catalytic stability during the combined reduction/oxidation approach. In chemical oxidation, persulfate showed a higher PCB degradation extent than hydrogen peroxide. As expected, the degradation efficiency was found to be limited in historically contaminated soil, where only Fe(0) and Fe/Ni particles exhibited reductive capability towards PCBs (13 and 18%). In the oxidation step, the highest degradation extents were obtained in the presence of Fe(0) and Fe/Ni (18-19%). Increases in particle and oxidant doses improved the efficiency of treatment, but overall degradation extents did not exceed 30%, suggesting that only a small part of the PCBs in soil was available for reaction with catalyst and/or oxidant. The use of organic solvent or cyclodextrin to improve PCB availability in soil did not enhance degradation efficiency, underscoring the strong impact of the soil matrix. Moreover, better PCB degradation was observed in sand spiked with extractable organic matter separated from contaminated soil. In contrast to fractions with higher particle size (250-500 and oxidation reactions in soils and understand the impact of soil properties on remediation performance.

  14. Conversations across Meaning Variance

    Science.gov (United States)

    Cordero, Alberto

    2013-01-01

    Progressive interpretations of scientific theories have long been denounced as naive, because of the inescapability of meaning variance. The charge reportedly applies to recent realist moves that focus on theory-parts rather than whole theories. This paper considers the question of what "theory-parts" of epistemic significance (if any) relevantly…

  15. Tumor Volume Reduction Rate After Preoperative Chemoradiotherapy as a Prognostic Factor in Locally Advanced Rectal Cancer

    Energy Technology Data Exchange (ETDEWEB)

    Yeo, Seung-Gu [Center for Colorectal Cancer, Research Institute and Hospital, National Cancer Center, Goyang (Korea, Republic of); Department of Radiation Oncology, Soonchunhyang University College of Medicine, Cheonan (Korea, Republic of); Kim, Dae Yong, E-mail: radiopiakim@hanmail.net [Center for Colorectal Cancer, Research Institute and Hospital, National Cancer Center, Goyang (Korea, Republic of); Park, Ji Won; Oh, Jae Hwan; Kim, Sun Young; Chang, Hee Jin; Kim, Tae Hyun; Kim, Byung Chang; Sohn, Dae Kyung; Kim, Min Ju [Center for Colorectal Cancer, Research Institute and Hospital, National Cancer Center, Goyang (Korea, Republic of)

    2012-02-01

    Purpose: To investigate the prognostic significance of tumor volume reduction rate (TVRR) after preoperative chemoradiotherapy (CRT) in locally advanced rectal cancer (LARC). Methods and Materials: In total, 430 primary LARC (cT3-4) patients who were treated with preoperative CRT and curative radical surgery between May 2002 and March 2008 were analyzed retrospectively. Pre- and post-CRT tumor volumes were measured using three-dimensional region-of-interest MR volumetry. Tumor volume reduction rate was determined using the equation TVRR (%) = (pre-CRT tumor volume − post-CRT tumor volume) × 100/pre-CRT tumor volume. The median follow-up period was 64 months (range, 27-99 months) for survivors. Endpoints were disease-free survival (DFS) and overall survival (OS). Results: The median TVRR was 70.2% (mean, 64.7% ± 22.6%; range, 0-100%). Downstaging (ypT0-2N0M0) occurred in 183 patients (42.6%). The 5-year DFS and OS rates were 77.7% and 86.3%, respectively. In the analysis that included pre-CRT and post-CRT tumor volumes and TVRR as continuous variables, only TVRR was an independent prognostic factor. Tumor volume reduction rate was categorized according to a cutoff value of 45% and included with clinicopathologic factors in the multivariate analysis; ypN status, circumferential resection margin, and TVRR were significant prognostic factors for both DFS and OS. Conclusions: Tumor volume reduction rate was a significant prognostic factor in LARC patients receiving preoperative CRT. Tumor volume reduction rate data may be useful for tailoring surgery and postoperative adjuvant therapy after preoperative CRT.

  16. Update on Risk Reduction Activities for a Liquid Advanced Booster for NASA's Space Launch System

    Science.gov (United States)

    Crocker, Andrew M.; Doering, Kimberly B.; Meadows, Robert G.; Lariviere, Brian W.; Graham, Jerry B.

    2015-01-01

    The stated goals of NASA's Research Announcement for the Space Launch System (SLS) Advanced Booster Engineering Demonstration and/or Risk Reduction (ABEDRR) are to reduce risks leading to an affordable Advanced Booster that meets the evolved capabilities of SLS; and enable competition by mitigating targeted Advanced Booster risks to enhance SLS affordability. Dynetics, Inc. and Aerojet Rocketdyne (AR) formed a team to offer a wide-ranging set of risk reduction activities and full-scale, system-level demonstrations that support NASA's ABEDRR goals. For NASA's SLS ABEDRR procurement, Dynetics and AR formed a team to offer a series of full-scale risk mitigation hardware demonstrations for an affordable booster approach that meets the evolved capabilities of the SLS. To establish a basis for the risk reduction activities, the Dynetics Team developed a booster design that takes advantage of the flight-proven Apollo-Saturn F-1. Using NASA's vehicle assumptions for the SLS Block 2, a two-engine, F-1-based booster design delivers 150 mT (331 klbm) payload to LEO, 20 mT (44 klbm) above NASA's requirements. This enables a low-cost, robust approach to structural design. During the ABEDRR effort, the Dynetics Team has modified proven Apollo-Saturn components and subsystems to improve affordability and reliability (e.g., reduce parts counts, touch labor, or use lower cost manufacturing processes and materials). The team has built hardware to validate production costs and completed tests to demonstrate it can meet performance requirements. State-of-the-art manufacturing and processing techniques have been applied to the heritage F-1, resulting in a low recurring cost engine while retaining the benefits of Apollo-era experience. NASA test facilities have been used to perform low-cost risk-reduction engine testing. In early 2014, NASA and the Dynetics Team agreed to move additional large liquid oxygen/kerosene engine work under Dynetics' ABEDRR contract. Also led by AR, the

  17. Advancing Development and Greenhouse Gas Reductions in Vietnam's Wind Sector

    Energy Technology Data Exchange (ETDEWEB)

    Bilello, D.; Katz, J.; Esterly, S.; Ogonowski, M.

    2014-09-01

    Clean energy development is a key component of Vietnam's Green Growth Strategy, which establishes a target to reduce greenhouse gas (GHG) emissions from domestic energy activities by 20-30 percent by 2030 relative to a business-as-usual scenario. Vietnam has significant wind energy resources, which, if developed, could help the country reach this target while providing ancillary economic, social, and environmental benefits. Given Vietnam's ambitious clean energy goals and the relatively nascent state of wind energy development in the country, this paper seeks to fulfill two primary objectives: to distill timely and useful information to provincial-level planners, analysts, and project developers as they evaluate opportunities to develop local wind resources; and, to provide insights to policymakers on how coordinated efforts may help advance large-scale wind development, deliver near-term GHG emission reductions, and promote national objectives in the context of a low emission development framework.

  18. DEMONSTRATION OF AN ADVANCED INTEGRATED CONTROL SYSTEM FOR SIMULTANEOUS EMISSIONS REDUCTION

    Energy Technology Data Exchange (ETDEWEB)

    Suzanne Shea; Randhir Sehgal; Ilga Celmins; Andrew Maxson

    2002-02-01

    The primary objective of the project titled "Demonstration of an Advanced Integrated Control System for Simultaneous Emissions Reduction" was to demonstrate at proof-of-concept scale the use of an online software package, the "Plant Environmental and Cost Optimization System" (PECOS), to optimize the operation of coal-fired power plants by economically controlling all emissions simultaneously. It combines physical models, neural networks, and fuzzy logic control to provide both optimal least-cost boiler setpoints to the boiler operators in the control room, as well as optimal coal blending recommendations designed to reduce fuel costs and fuel-related derates. The goal of the project was to demonstrate that use of PECOS would enable coal-fired power plants to make more economic use of U.S. coals while reducing emissions.

  19. Materials selection of surface coatings in an advanced size reduction facility. [For decommissioned stainless steel equipment

    Energy Technology Data Exchange (ETDEWEB)

    Briggs, J. L.; Younger, A. F.

    1980-06-02

    A materials selection test program was conducted to characterize optimum interior surface coatings for an advanced size reduction facility. The equipment to be processed by this facility consists of stainless steel apparatus (e.g., glove boxes, piping, and tanks) used for the chemical recovery of plutonium. Test results showed that a primary requirement for a satisfactory coating is ease of decontamination. A closely related concern is the resistance of paint films to nitric acid - plutonium environments. A vinyl copolymer base paint was the only coating, of eight paints tested, with properties that permitted satisfactory decontamination of plutonium and also performed equal to or better than the other paints in the chemical resistance, radiation stability, and impact tests.

  20. Treatment experience of using Liuwei Dihuang Wan variance in treating advanced primary liver cancer%六味地黄丸变方治疗中晚期肝癌经验

    Institute of Scientific and Technical Information of China (English)

    储真真; 陈历宏; 陈信义

    2013-01-01

    According to the different syndrome characteristics at different treatment stages of primary liver cancer (the intermediate and advanced stage, the stage after interventional therapy, and the stage after radiotherapy), the author flexibly applies variant formulas of Liuwei Dihuang Wan (Chaishao Dihuang Wan, Zhibai Dihuang Wan, Qiju Dihuang Wan, Qimai Dihuang Wan, Guishao Dihuang Wan, and others) to treat each stage. The paper expounds the syndrome characteristics and therapeutic methods of the different treatment stages and, through commentary notes attached to case records, explains the reasoning behind the treatment and medication at each stage. It introduces the application of, and clinical experience with, Liuwei Dihuang Wan variants in the treatment of intermediate and advanced primary liver cancer, and summarizes the key points of, and insights into, herb compatibility when treating liver cancer with Chinese medicine. The clinical application of the author's self-devised formula Qimai Dihuang Wan in treating intermediate and advanced primary liver cancer provides a new approach to the clinical treatment of this disease.

  1. Nominal analysis of "variance".

    Science.gov (United States)

    Weiss, David J

    2009-08-01

    Nominal responses are the natural way for people to report actions or opinions. Because nominal responses do not generate numerical data, they have been underutilized in behavioral research. On those occasions in which nominal responses are elicited, the responses are customarily aggregated over people or trials so that large-sample statistics can be employed. A new analysis is proposed that directly associates differences among responses with particular sources in factorial designs. A pair of nominal responses either matches or does not; when responses do not match, they vary. That analogue to variance is incorporated in the nominal analysis of "variance" (NANOVA) procedure, wherein the proportions of matches associated with sources play the same role as do sums of squares in an ANOVA. The NANOVA table is structured like an ANOVA table. The significance levels of the N ratios formed by comparing proportions are determined by resampling. Fictitious behavioral examples featuring independent groups and repeated measures designs are presented. A Windows program for the analysis is available.
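
    To make the procedure concrete, here is a minimal sketch of the core idea for two independent groups: the proportion of mismatching response pairs plays the role of variance, and significance comes from resampling. It illustrates the idea only and is not Weiss's Windows program.

        # NANOVA-style resampling test on nominal data (illustrative).
        import numpy as np

        def mismatch(labels):
            # Proportion of unordered pairs whose nominal responses differ.
            n = len(labels)
            same = sum(a == b for i, a in enumerate(labels) for b in labels[i + 1:])
            return 1.0 - same / (n * (n - 1) / 2)

        def nanova_p(g1, g2, n_resamples=2000, seed=0):
            rng = np.random.default_rng(seed)
            pooled = np.array(g1 + g2)
            within = lambda a, b: 0.5 * (mismatch(list(a)) + mismatch(list(b)))
            obs = mismatch(list(pooled)) - within(g1, g2)   # between-group effect
            hits = 0
            for _ in range(n_resamples):
                rng.shuffle(pooled)
                stat = mismatch(list(pooled)) - within(pooled[:len(g1)], pooled[len(g1):])
                if stat >= obs:
                    hits += 1
            return hits / n_resamples

        print(nanova_p(["yes"] * 8 + ["no"] * 2, ["no"] * 7 + ["maybe"] * 3))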

  2. Introduction to variance estimation

    CERN Document Server

    Wolter, Kirk M

    2007-01-01

    We live in the information age. Statistical surveys are used every day to determine or evaluate public policy and to make important business decisions. Correct methods for computing the precision of the survey data and for making inferences to the target population are absolutely essential to sound decision making. Now in its second edition, Introduction to Variance Estimation has for more than twenty years provided the definitive account of the theory and methods for correct precision calculations and inference, including examples of modern, complex surveys in which the methods have been used successfully. The book provides instruction on the methods that are vital to data-driven decision making in business, government, and academe. It will appeal to survey statisticians and other scientists engaged in the planning and conduct of survey research, and to those analyzing survey data and charged with extracting compelling information from such data. It will appeal to graduate students and university faculty who...

  3. Advanced Chemical Reduction of Reduced Graphene Oxide and Its Photocatalytic Activity in Degrading Reactive Black 5

    Directory of Open Access Journals (Sweden)

    Christelle Pau Ping Wong

    2015-10-01

    Full Text Available Textile industries consume large volumes of water for dye processing, leading to undesirable toxic dyes in water bodies. Dyestuffs are harmful to human health and aquatic life: they can cause such illnesses as cholera, dysentery, and hepatitis A, and they hinder the photosynthetic activity of aquatic plants. To overcome this environmental problem, the advanced oxidation process is a promising technique to mineralize a wide range of dyes in water systems. In this work, reduced graphene oxide (rGO) was prepared via an advanced chemical reduction route, and its photocatalytic activity was tested by photodegrading Reactive Black 5 (RB5) dye in aqueous solution. rGO was synthesized by dispersing graphite oxide into water to form a graphene oxide (GO) solution, followed by the addition of hydrazine. Graphite oxide was prepared using a modified Hummers' method with potassium permanganate and concentrated sulphuric acid. The resulting rGO nanoparticles were characterized using ultraviolet-visible spectrophotometry (UV-Vis), X-ray powder diffraction (XRD), Raman spectroscopy, and scanning electron microscopy (SEM) to further investigate their chemical properties. A characteristic rGO-48 h peak (275 nm) was observed in the UV spectrum. Further, the appearance of a broad (002) peak in XRD, centred at 2θ = 24.1°, showed that the graphene oxide was reduced to rGO. Based on our results, the resulting rGO-48 h nanoparticles achieved 49% photodecolorization of RB5 under UV irradiation at pH 3 in 60 min. This was attributed to the efficient electron transport between the aromatic regions of rGO and the RB5 molecules.

  4. Reduction of antibiotic resistance genes in municipal wastewater effluent by advanced oxidation processes.

    Science.gov (United States)

    Zhang, Yingying; Zhuang, Yao; Geng, Jinju; Ren, Hongqiang; Xu, Ke; Ding, Lili

    2016-04-15

    This study investigated the reduction of antibiotic resistance genes (ARGs), intI1, and 16S rRNA genes by advanced oxidation processes (AOPs), namely Fenton oxidation (Fe(2+)/H2O2) and the UV/H2O2 process. The ARGs include sul1, tetX, and tetG from municipal wastewater effluent. The results indicated that the Fenton oxidation and UV/H2O2 processes could reduce the selected ARGs effectively. Oxidation by the Fenton process was slightly better than that of the UV/H2O2 method. Particularly, for the Fenton oxidation, under the optimal condition wherein Fe(2+)/H2O2 had a molar ratio of 0.1 and a H2O2 concentration of 0.01 mol L(-1), with a pH of 3.0 and a reaction time of 2 h, 2.58-3.79 logs of target genes were removed. Under the initial effluent pH condition (pH = 7.0), the removal was 2.26-3.35 logs. For the UV/H2O2 process, when the pH was 3.5 with a H2O2 concentration of 0.01 mol L(-1) accompanied by 30 min of UV irradiation, all ARGs could achieve a reduction of 2.8-3.5 logs, and 1.55-2.32 logs at a pH of 7.0. The Fenton oxidation and UV/H2O2 processes followed the first-order reaction kinetic model. The removal of target genes was affected by many parameters, including the initial Fe(2+)/H2O2 molar ratio, H2O2 concentration, solution pH, and reaction time. Among these factors, reagent concentrations and pH values are the most important factors during AOPs.
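
    The first-order kinetic claim above amounts to log-linear removal in time; the sketch below fits the rate constant from log-removal data. The numbers are synthetic placeholders, not the paper's measurements.

        # Fit the first-order model C(t) = C0 * exp(-k t) from log10 removals.
        import numpy as np

        t = np.array([0.0, 0.5, 1.0, 1.5, 2.0])              # reaction time, h
        log10_removed = np.array([0.0, 1.0, 1.9, 2.9, 3.7])  # log10(N0/N), synthetic

        # ln(N0/N) = k t  =>  regress log10(N0/N) * ln(10) on t through the origin.
        k = np.linalg.lstsq(t[:, None], log10_removed * np.log(10), rcond=None)[0][0]
        print(f"first-order rate constant k ~ {k:.2f} 1/h")
        print(f"predicted log10 removal at 2 h: {k * 2 / np.log(10):.2f}")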

  5. Conceptual design study of advanced acoustic composite nacelle. [for achieving reductions in community noise and operating expense

    Science.gov (United States)

    Goodall, R. G.; Painter, G. W.

    1975-01-01

    Conceptual nacelle designs for wide-bodied and for advanced-technology transports were studied with the objective of achieving significant reductions in community noise with minimum penalties in airplane weight, cost, and operating expense, by the application of advanced composite materials to nacelle structure and sound suppression elements. Nacelle concepts using advanced liners, annular splitters, radial splitters, translating centerbody inlets, and mixed-flow nozzles were evaluated and a preferred concept selected. A preliminary design study of the selected concept, a mixed-flow nacelle with extended inlet and no splitters, was conducted, and the effects on noise, direct operating cost, and return on investment were determined.

  6. Reduction of antibiotic resistance genes in municipal wastewater effluent by advanced oxidation processes

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Yingying; Zhuang, Yao; Geng, Jinju, E-mail: jjgeng@nju.edu.cn; Ren, Hongqiang; Xu, Ke; Ding, Lili

    2016-04-15

    This study investigated the reduction of antibiotic resistance genes (ARGs), intI1, and 16S rRNA genes by advanced oxidation processes (AOPs), namely Fenton oxidation (Fe2+/H2O2) and the UV/H2O2 process. The ARGs include sul1, tetX, and tetG from municipal wastewater effluent. The results indicated that the Fenton oxidation and UV/H2O2 processes could reduce the selected ARGs effectively. Oxidation by the Fenton process was slightly better than that of the UV/H2O2 method. Particularly, for the Fenton oxidation, under the optimal condition wherein Fe2+/H2O2 had a molar ratio of 0.1 and a H2O2 concentration of 0.01 mol L−1, with a pH of 3.0 and a reaction time of 2 h, 2.58–3.79 logs of target genes were removed. Under the initial effluent pH condition (pH = 7.0), the removal was 2.26–3.35 logs. For the UV/H2O2 process, when the pH was 3.5 with a H2O2 concentration of 0.01 mol L−1 accompanied by 30 min of UV irradiation, all ARGs could achieve a reduction of 2.8–3.5 logs, and 1.55–2.32 logs at a pH of 7.0. The Fenton oxidation and UV/H2O2 processes followed the first-order reaction kinetic model. The removal of target genes was affected by many parameters, including the initial Fe2+/H2O2 molar ratio, H2O2 concentration, solution pH, and reaction time. Among these factors, reagent concentrations and pH values are the most important factors during AOPs. - Highlights: • AOPs including Fenton oxidation and the UV/H2O2 process could reduce ARGs effectively. • Fenton oxidation is slightly more effective than the UV/H2O2 process in ARG reduction. • Removal of ARGs by AOPs follows the first-order reaction kinetic model. • Selected ARGs and 16S rRNA genes exhibit similar change trends during AOPs.

  7. Reduction of wafer-edge overlay errors using advanced correction models, optimized for minimal metrology requirements

    Science.gov (United States)

    Kim, Min-Suk; Won, Hwa-Yeon; Jeong, Jong-Mun; Böcker, Paul; Vergaij-Huizer, Lydia; Kupers, Michiel; Jovanović, Milenko; Sochal, Inez; Ryan, Kevin; Sun, Kyu-Tae; Lim, Young-Wan; Byun, Jin-Moo; Kim, Gwang-Gon; Suh, Jung-Joon

    2016-03-01

    In order to optimize yield in DRAM semiconductor manufacturing for 2x nodes and beyond, the (processing-induced) overlay fingerprint towards the edge of the wafer needs to be reduced. Traditionally, this is achieved by acquiring denser overlay metrology at the edge of the wafer to feed field-by-field corrections. Although field-by-field corrections can be effective in reducing localized overlay errors, the requirement for dense metrology to determine the corrections can become a limiting factor due to a significant increase in metrology time and cost. In this study, a more cost-effective solution has been found in extending the regular correction model with an edge-specific component. This new overlay correction model can be driven by an optimized, sparser sampling, especially at the wafer edge area, and also allows for a reduction of noise propagation. Lithography correction potential has been maximized with significantly less metrology. Evaluations have been performed, demonstrating the benefit of edge models in terms of on-product overlay performance, as well as cell-based overlay performance derived from metrology-to-cell matching improvements. Performance can be increased compared to POR modeling and sampling, which can contribute to (overlay-based) yield improvement. Based on advanced modeling including edge components, metrology requirements have been optimized, enabling integrated metrology, which drives down overall metrology fab footprint and lithography cycle time.
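
    The paper's scanner correction model is not spelled out in the abstract, so the sketch below only illustrates the general shape of such a fit: an ordinary linear overlay model augmented with a radial term that activates near the wafer edge, solved by least squares. The radii, terms, and data are all assumptions.

        # Linear overlay model plus an edge-specific radial term (illustrative).
        import numpy as np

        def design_matrix(x, y, r_edge=135.0, r_wafer=150.0):
            r = np.hypot(x, y)
            edge = np.clip(r - r_edge, 0.0, None) / (r_wafer - r_edge)  # 0 inside, >0 at edge
            return np.column_stack([np.ones_like(x), x, y, edge * x / r, edge * y / r])

        rng = np.random.default_rng(3)
        x = rng.uniform(-150, 150, 600); y = rng.uniform(-150, 150, 600)
        keep = np.hypot(x, y) < 150
        x, y = x[keep], y[keep]
        r = np.hypot(x, y)
        # Hypothetical measured overlay-x: linear field plus an edge roll-off plus noise.
        dx = (0.5 + 1e-3 * x + 2.0 * np.clip(r - 135, 0, None) / 15 * x / r
              + 0.05 * rng.standard_normal(x.size))
        coef, *_ = np.linalg.lstsq(design_matrix(x, y), dx, rcond=None)
        print("fitted [offset, slope-x, slope-y, edge-x, edge-y]:", coef.round(3))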

  8. Maximum Variance Hashing via Column Generation

    Directory of Open Access Journals (Sweden)

    Lei Luo

    2013-01-01

    item search. Recently, a number of data-dependent methods have been developed, reflecting the great potential of learning for hashing. Inspired by the classic nonlinear dimensionality reduction algorithm—maximum variance unfolding, we propose a novel unsupervised hashing method, named maximum variance hashing, in this work. The idea is to maximize the total variance of the hash codes while preserving the local structure of the training data. To solve the derived optimization problem, we propose a column generation algorithm, which directly learns the binary-valued hash functions. We then extend it using anchor graphs to reduce the computational cost. Experiments on large-scale image datasets demonstrate that the proposed method outperforms state-of-the-art hashing methods in many cases.
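
    The column-generation learner itself is beyond an abstract-level sketch, but the objective is easy to show with the closest classical baseline: PCA-style hashing, which also maximizes the variance of the codes, though without the paper's local-structure term. This baseline stands in for, and is not, the proposed method.

        # PCA-style hashing: maximize code variance by thresholding the top
        # principal directions (a baseline, not the column-generation method).
        import numpy as np

        def pca_hash(X, n_bits):
            Xc = X - X.mean(axis=0)
            _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
            W = Vt[:n_bits].T                    # directions of maximal variance
            return (Xc @ W > 0).astype(np.uint8), W

        rng = np.random.default_rng(4)
        X = rng.standard_normal((1000, 32))
        codes, W = pca_hash(X, n_bits=8)
        print(codes[:2])
        print("bit balance:", codes.mean(axis=0).round(2))  # near 0.5 = high bit variance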

  9. Fixed effects analysis of variance

    CERN Document Server

    Fisher, Lloyd; Birnbaum, Z W; Lukacs, E

    1978-01-01

    Fixed Effects Analysis of Variance covers the mathematical theory of the fixed effects analysis of variance. The book discusses the theoretical ideas and some applications of the analysis of variance. The text then describes topics such as the t-test; two-sample t-test; the k-sample comparison of means (one-way analysis of variance); the balanced two-way factorial design without interaction; estimation and factorial designs; and the Latin square. Confidence sets, simultaneous confidence intervals, and multiple comparisons; orthogonal and nonorthologonal designs; and multiple regression analysi

  10. Bronchoscopic lung volume reduction by endobronchial valve in advanced emphysema: the first Asian report

    Directory of Open Access Journals (Sweden)

    Park TS

    2015-07-01

    Full Text Available Tai Sun Park,1 Yoonki Hong,2 Jae Seung Lee,1 Sang Young Oh,3 Sang Min Lee,3 Namkug Kim,3 Joon Beom Seo,3 Yeon-Mok Oh,1 Sang-Do Lee,1 Sei Won Lee1 1Department of Pulmonary and Critical Care Medicine and Clinical Research Center for Chronic Obstructive Airway Diseases, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea; 2Department of Internal Medicine, College of Medicine, Kangwon National University, Chuncheon, Korea; 3Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea Purpose: Endobronchial valve (EBV) therapy is increasingly being seen as a therapeutic option for advanced emphysema, but its clinical utility in Asian populations, who may have different phenotypes to other ethnic populations, has not been assessed. Patients and methods: This prospective open-label single-arm clinical trial examined the clinical efficacy and the safety of EBV in 43 consecutive patients (mean age 68.4±7.5, forced expiratory volume in 1 second [FEV1] 24.5%±10.7% predicted, residual volume 208.7%±47.9% predicted) with severe emphysema with complete fissure and no collateral ventilation in a tertiary referral hospital in Korea. Results: Compared to baseline, the patients exhibited significant improvements 6 months after EBV therapy in terms of FEV1 (from 0.68±0.26 L to 0.92±0.40 L; P<0.001), 6-minute walk distance (from 233.5±114.8 m to 299.6±87.5 m; P=0.012), modified Medical Research Council dyspnea scale (from 3.7±0.6 to 2.4±1.2; P<0.001), and St George's Respiratory Questionnaire (from 65.59±13.07 to 53.76±11.40; P=0.028). Nine patients (20.9%) had a tuberculosis scar, but these scars did not affect target lobe volume reduction or pneumothorax frequency. Thirteen patients had adverse events; ten (23.3%) developed pneumothorax, which included one death due to tension pneumothorax. Conclusion: EBV therapy was as effective and safe in Korean

  11. Advanced RF-KO slow-extraction method for the reduction of spill ripple

    CERN Document Server

    Noda, K; Shibuya, S; Uesugi, T; Muramatsu, M; Kanazawa, M; Takada, E; Yamada, S

    2002-01-01

    Two advanced RF-knockout (RF-KO) slow-extraction methods have been developed at HIMAC in order to reduce the spill ripple for accurate heavy-ion cancer therapy: the dual frequency modulation (FM) method and the separated function method. Simulations and experiments verified that the spill ripple can be considerably reduced using these advanced methods, compared with the ordinary RF-KO method. The dual FM method and the separated function method yield low spill ripples, with standard deviations of around 25% and 15%, respectively, during beam extraction over around 2 s, in good agreement with the simulation results.
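
    The dual FM idea can be pictured as driving the beam with two frequency-modulated components whose sweep periods differ; the toy waveform below is built that way. All frequencies and periods are placeholders, not HIMAC machine parameters.

        # Toy dual-FM RF-KO waveform: two FM components with different sweep periods.
        import numpy as np

        fs = 1.0e6                          # sample rate, Hz (placeholder)
        t = np.arange(0, 0.05, 1 / fs)      # 50 ms of signal
        f0, df = 60e3, 5e3                  # center frequency and sweep width (placeholders)

        def fm_wave(sweep_period):
            f_inst = f0 + df * ((t / sweep_period) % 1.0)   # sawtooth frequency sweep
            phase = 2 * np.pi * np.cumsum(f_inst) / fs      # integrate to get phase
            return np.cos(phase)

        signal = 0.5 * (fm_wave(1.0e-3) + fm_wave(2.3e-3))  # dual-FM sum
        print("samples:", signal.size, "rms:", round(float(np.sqrt((signal ** 2).mean())), 3))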

  12. 500 MW demonstration of advanced wall-fired combustion techniques for the reduction of nitrogen oxide emissions from coal-fired boilers

    Energy Technology Data Exchange (ETDEWEB)

    Sorge, J.N. [Southern Co. Services, Inc., Birmingham, AL (United States); Menzies, B. [Radian Corp., Austin, TX (United States); Smouse, S.M. [USDOE Pittsburgh Energy Technology Center, PA (United States); Stallings, J.W. [Electric Power Research Inst., Palo Alto, CA (United States)

    1995-09-01

    This paper reports on a technology project demonstrating advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NOx) emissions from coal-fired boilers. The primary objective of the demonstration is to determine the long-term NOx reduction performance of advanced overfire air (AOFA), low-NOx burners (LNB), and advanced digital control/optimization methodologies applied in a stepwise fashion to a 500 MW boiler. The focus of this paper is to report (1) on the installation of three on-line carbon-in-ash monitors and (2) the design and results to date of the advanced digital control/optimization phase of the project.

  13. Advanced experimental analysis of controls on microbial Fe(III) oxide reduction. First year progress report

    Energy Technology Data Exchange (ETDEWEB)

    Roden, E.E.; Urrutia, M.M.

    1997-07-01

    The authors have made considerable progress toward a number of project objectives during the first several months of activity on the project. An exhaustive analysis was made of the growth rate and biomass yield (both derived from measurements of cell protein production) of two representative strains of Fe(III)-reducing bacteria (Shewanella alga strain BrY and Geobacter metallireducens) growing with different forms of Fe(III) as an electron acceptor. These two fundamentally different types of Fe(III)-reducing bacteria (FeRB) showed comparable rates of Fe(III) reduction, cell growth, and biomass yield during reduction of soluble Fe(III)-citrate and solid-phase amorphous hydrous ferric oxide (HFO). Intrinsic growth rates of the two FeRB were strongly influenced by whether a soluble or a solid-phase source of Fe(III) was provided: growth rates on soluble Fe(III) were 10-20 times higher than those on solid-phase Fe(III) oxide. Intrinsic FeRB growth rates were comparable during reduction of HFO and a synthetic crystalline Fe(III) oxide (goethite). A distinct lag phase for protein production was observed during the first several days of incubation in solid-phase Fe(III) oxide medium, even though Fe(III) reduction proceeded without any lag. No such lag between protein production and Fe(III) reduction was observed during growth with soluble Fe(III). This result suggested that protein synthesis coupled to solid-phase Fe(III) oxide reduction in batch culture requires an initial investment of energy (generated by Fe(III) reduction), which is probably needed for synthesis of materials (e.g., extracellular polysaccharides) required for attachment of the cells to oxide surfaces. This phenomenon may have important implications for modeling the growth of FeRB in subsurface sedimentary environments, where attachment and continued adhesion to solid-phase materials will be required for maintenance of Fe(III) reduction activity. Despite considerable differences in the rate and

  14. Advanced airflow distribution methods for reduction of personal exposure to indoor pollutants

    DEFF Research Database (Denmark)

    Cao, Guangyu; Kosonen, Risto; Melikov, Arsen;

    2016-01-01

    The main objective of this study is to recognize possible airflow distribution methods to protect the occupants from exposure to various indoor pollutants. The fact of the increasing exposure of occupants to various indoor pollutants shows that there is an urgent need to develop advanced airflow ...

  15. Tungsten Contact and Line Resistance Reduction with Advanced Pulsed Nucleation Layer and Low Resistivity Tungsten Treatment

    Science.gov (United States)

    Chandrashekar, Anand; Chen, Feng; Lin, Jasmine; Humayun, Raashina; Wongsenakhum, Panya; Chang, Sean; Danek, Michal; Itou, Takamasa; Nakayama, Tomoo; Kariya, Atsushi; Kawaguchi, Masazumi; Hizume, Shunichi

    2010-09-01

    This paper describes electrical testing results of new tungsten chemical vapor deposition (CVD-W) process concepts that were developed to address the W contact and bitline scaling issues on 55 nm node devices. Contact resistance (Rc) measurements in complementary metal oxide semiconductor (CMOS) devices indicate that the new CVD-W process for sub-32 nm and beyond - consisting of an advanced pulsed nucleation layer (PNL) combined with low resistivity tungsten (LRW) initiation - produces a 20-30% drop in Rc for diffused NiSi contacts. From cross-sectional bright field and dark field transmission electron microscopy (TEM) analysis, such Rc improvement can be attributed to improved plugfill and larger in-feature W grain size with the advanced PNL+LRW process. More experiments that measured contact resistance for different feature sizes point to favorable Rc scaling with the advanced PNL+LRW process. Finally, 40% improvement in line resistance was observed with this process as tested on 55 nm embedded dynamic random access memory (DRAM) devices, confirming that the advanced PNL+LRW process can be an effective metallization solution for sub-32 nm devices.

  16. NMR Studies of Structure-Reactivity Relationships in Carbonyl Reduction: A Collaborative Advanced Laboratory Experiment

    Science.gov (United States)

    Marincean, Simona; Smith, Sheila R.; Fritz, Michael; Lee, Byung Joo; Rizk, Zeinab

    2012-01-01

    An upper-division laboratory project has been developed as a collaborative investigation of a reaction routinely taught in organic chemistry courses: the reduction of carbonyl compounds by borohydride reagents. Determination of several trends regarding structure-activity relationship was possible because each student contributed his or her results…

  17. Effect of Two Advanced Noise Reduction Technologies on the Aerodynamic Performance of an Ultra High Bypass Ratio Fan

    Science.gov (United States)

    Hughes, Christoper E.; Gazzaniga, John A.

    2013-01-01

    A wind tunnel experiment was conducted in the NASA Glenn Research Center anechoic 9- by 15-Foot Low-Speed Wind Tunnel to investigate two new advanced noise reduction technologies in support of the NASA Fundamental Aeronautics Program Subsonic Fixed Wing Project. The goal of the experiment was to demonstrate the noise reduction potential and effect on fan model performance of the two noise reduction technologies in a scale model Ultra-High Bypass turbofan at simulated takeoff and approach aircraft flight speeds. The two novel noise reduction technologies are called Over-the-Rotor acoustic treatment and Soft Vanes. Both technologies were aimed at modifying the local noise source mechanisms of the fan tip vortex/fan case interaction and the rotor wake-stator interaction. For the Over-the-Rotor acoustic treatment, two noise reduction configurations were investigated. The results showed that the two noise reduction technologies, Over-the-Rotor and Soft Vanes, were able to reduce the noise level of the fan model, but the Over-the-Rotor configurations had a significant negative impact on the fan aerodynamic performance; the loss in fan aerodynamic efficiency was between 2.75 to 8.75 percent, depending on configuration, compared to the conventional solid baseline fan case rubstrip also tested. Performance results with the Soft Vanes showed that there was no measurable change in the corrected fan thrust and a 1.8 percent loss in corrected stator vane thrust, which resulted in a total net thrust loss of approximately 0.5 percent compared with the baseline reference stator vane set.

  18. Energy Saving Melting and Revert Reduction Technology (Energy SMARRT): Manufacturing Advanced Engineered Components Using Lost Foam Casting Technology

    Energy Technology Data Exchange (ETDEWEB)

    Littleton, Harry; Griffin, John

    2011-07-31

    This project was a subtask of the Energy Saving Melting and Revert Reduction Technology (Energy SMARRT) Program. Through this project, technologies such as computer modeling, pattern quality control, casting quality control, and marketing tools were developed to advance the application of the lost foam casting process and provide greater energy savings. These technologies have improved (1) production efficiency, (2) mechanical properties, and (3) marketability of lost foam castings. All three reduce energy consumption in the metals casting industry. This report summarizes the work done on all tasks in the period of January 1, 2004 through June 30, 2011. The current (2011) annual energy saving estimate, based on commercial introduction in 2011 and a market penetration of 97% by 2020, is 5.02 trillion BTUs/year, rising to 6.46 trillion BTUs/year with 100% market penetration by 2023. Along with these energy savings, reduction of scrap and improvement in casting yield will reduce the environmental emissions associated with melting and pouring the metal. The average annual estimate of CO2 reduction per year through 2020 is 0.03 Million Metric Tons of Carbon Equivalent (MM TCE).

  19. Development of Head-end Pyrochemical Reduction Process for Advanced Oxide Fuels

    Energy Technology Data Exchange (ETDEWEB)

    Park, B. H.; Seo, C. S.; Hur, J. M.; Jeong, S. M.; Hong, S. S.; Choi, I. K.; Choung, W. M.; Kwon, K. C.; Lee, I. W. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2008-12-15

    The development of an electrolytic reduction technology for spent fuels in oxide form is essential for introducing LWR spent fuels into pyroprocessing. In this research, reactor scale-up was investigated, the electrochemical behaviors of fission products (FPs) were studied to understand the process, and reaction rate data for U3O8 were obtained with a bench-scale reactor. In a 20 kgHM/batch reactor, U3O8 and Simfuel were successfully reduced into metals. The electrochemical characteristics of LiBr, LiI, and Li2Se were measured in a bench-scale reactor, and an electrolytic reduction cell was modeled with a computational tool.

  20. Advances in projection of climate change impacts using supervised nonlinear dimensionality reduction techniques

    Science.gov (United States)

    Sarhadi, Ali; Burn, Donald H.; Yang, Ge; Ghodsi, Ali

    2017-02-01

    One of the main challenges in climate change studies is the accurate projection of global warming impacts on the probabilistic behaviour of hydro-climate processes. Due to the complexity of climate-associated processes, identification of predictor variables from high-dimensional atmospheric variables is considered a key factor for the improvement of climate change projections in statistical downscaling approaches. For this purpose, the present paper adopts a new supervised dimensionality reduction approach, called Supervised Principal Component Analysis (Supervised PCA), for regression-based statistical downscaling. This method is a generalization of PCA, extracting a sequence of principal components of the atmospheric variables that have maximal dependence on the response hydro-climate variable. To capture the nonlinear variability between hydro-climatic response variables and predictors, a kernelized version of Supervised PCA is also applied for nonlinear dimensionality reduction. The effectiveness of the Supervised PCA methods, in comparison with some state-of-the-art algorithms for dimensionality reduction, is evaluated in the statistical downscaling of precipitation at a specific site using two soft computing nonlinear machine learning methods, Support Vector Regression and the Relevance Vector Machine. The results demonstrate a significant improvement with the Supervised PCA methods in terms of performance accuracy.
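
    For readers unfamiliar with the method, one common formulation of Supervised PCA (the HSIC-based one due to Barshan et al.) picks the projection that maximizes dependence between the projected predictors and the response; the sketch below implements that formulation with a linear response kernel and synthetic data, and may differ in detail from the paper's setup.

        # Supervised PCA, HSIC-style: eigenvectors of X' H K H X, where K is a
        # kernel on the response. Linear kernel and synthetic data for brevity.
        import numpy as np

        def supervised_pca(X, y, k):
            n = X.shape[0]
            H = np.eye(n) - np.ones((n, n)) / n      # centering matrix
            yc = y - y.mean()
            K = np.outer(yc, yc)                     # linear kernel on response
            Q = X.T @ H @ K @ H @ X                  # response-weighted scatter
            w, V = np.linalg.eigh(Q)
            U = V[:, np.argsort(w)[::-1][:k]]        # top-k directions
            return X @ U, U

        rng = np.random.default_rng(5)
        X = rng.standard_normal((200, 30))           # stand-in atmospheric predictors
        y = X[:, 0] - 2 * X[:, 3] + 0.1 * rng.standard_normal(200)
        Z, U = supervised_pca(X, y, k=2)
        print(np.abs(U[:6, 0]).round(2))  # weight concentrates on informative columns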

  1. ADVANCEMENT OF NUCLEIC ACID-BASED TOOLS FOR MONITORING IN SITU REDUCTIVE DECHLORINATION

    Energy Technology Data Exchange (ETDEWEB)

    Vangelas, K; ELIZABETH EDWARDS, E; FRANK LOFFLER, F; Brian02 Looney, B

    2006-11-17

    Regulatory protocols generally recognize that destructive processes are the most effective mechanisms that support natural attenuation of chlorinated solvents. In many cases, these destructive processes will be biological and, for chlorinated compounds, will often be reductive processes that occur under anaerobic conditions. The existing EPA guidance (EPA, 1998) provides a list of parameters that offer indirect evidence of reductive dechlorination processes. In an effort to gather direct evidence of these processes, scientists have identified key microorganisms and are currently developing tools to measure the abundance and activity of these organisms in subsurface systems. Drs. Edwards and Loffler are two recognized leaders in this field. The research described herein continues their development efforts to provide a suite of tools that enable direct measurement of biological processes related to the reductive dechlorination of TCE and PCE. This study investigated the strengths and weaknesses of the 16S rRNA gene-based approach to characterizing the natural attenuation capabilities in samples. The results suggested that an approach based solely on 16S rRNA may not provide sufficient information to document the natural attenuation capabilities of a system, because it does not distinguish between strains of organisms that have different biodegradation capabilities. The investigations provided evidence that tools focusing on the enzymes relevant to the functionally desired characteristics may be useful adjuncts to the 16S rRNA methods.

  2. Revision: Variance Inflation in Regression

    Directory of Open Access Journals (Sweden)

    D. R. Jensen

    2013-01-01

    the intercept; and (iv) variance deflation may occur, where ill-conditioned data yield smaller variances than their orthogonal surrogates. Conventional VIFs have all regressors linked, or none, which is often untenable in practice. Beyond these, our models enable the unlinking of regressors that can be unlinked, while preserving dependence among those intrinsically linked. Moreover, known collinearity indices are extended to encompass angles between subspaces of regressors. To reassess ill-conditioned data, we consider case studies ranging from elementary examples to data from the literature.
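
    For a hands-on companion to the discussion, the snippet below computes conventional VIFs with statsmodels' stock routine on a deliberately collinear toy design; the data are invented, and the snippet does not reproduce the paper's extended indices.

        # Conventional VIFs on a nearly collinear design (toy data).
        import numpy as np
        from statsmodels.stats.outliers_influence import variance_inflation_factor

        rng = np.random.default_rng(6)
        x1 = rng.standard_normal(100)
        x2 = x1 + 0.05 * rng.standard_normal(100)        # nearly collinear with x1
        x3 = rng.standard_normal(100)
        X = np.column_stack([np.ones(100), x1, x2, x3])  # intercept included

        for j, name in [(1, "x1"), (2, "x2"), (3, "x3")]:
            print(name, round(variance_inflation_factor(X, j), 1))
        # VIF >> 10 for x1 and x2 flags their near-linear dependence.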

  3. Modelling volatility by variance decomposition

    DEFF Research Database (Denmark)

    Amado, Cristina; Teräsvirta, Timo

    on the multiplicative decomposition of the variance is developed. It is heavily dependent on Lagrange multiplier type misspecification tests. Finite-sample properties of the strategy and tests are examined by simulation. An empirical application to daily stock returns and another one to daily exchange rate returns...... illustrate the functioning and properties of our modelling strategy in practice. The results show that the long memory type behaviour of the sample autocorrelation functions of the absolute returns can also be explained by deterministic changes in the unconditional variance....

  4. Armor Possibilities and Radiographic Blur Reduction for The Advanced Hydrotest Facility

    Energy Technology Data Exchange (ETDEWEB)

    Hackett, M

    2001-09-01

    Currently at Lawrence Livermore National Laboratory (LLNL) a composite firing vessel is under development for the Advanced Hydrotest Facility (AHF) to study high explosives. This vessel requires a shrapnel mitigating layer to protect the vessel during experiments. The primary purpose of this layer is to protect the vessel, yet the material must be transparent to proton radiographs. Presented here are methods available to collect data needed before selection, along with a comparison tool developed to aid in choosing a material that offers the best of ballistic protection while allowing for clear radiographs.

  5. Advances of Model Order Reduction Research in Large-scale System Simulation

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Model Order Reduction (MOR) has recently played a more and more important role in complex system simulation, design, and control. For example, for large space structures, VLSI, and MEMS (Micro-Electro-Mechanical Systems), reduced-order models must be constructed in order to shorten development cost, increase system control accuracy, and reduce the complexity of controllers. Even in Virtual Reality (VR), where the simulation and display must run in real time, the model order must be reduced...
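
    One standard MOR recipe, given here as a minimal sketch on a synthetic stable system (not tied to any system from the article), is projection onto a proper orthogonal decomposition (POD) basis built from response snapshots.

        # POD-based model order reduction: project x' = A x + B u onto the
        # dominant left singular vectors of simulated snapshots.
        import numpy as np

        rng = np.random.default_rng(7)
        n, r, dt = 500, 10, 1e-2
        A = -np.eye(n) + 0.01 * rng.standard_normal((n, n))  # stable full-order system
        B = rng.standard_normal((n, 1))

        x, snaps = np.zeros((n, 1)), []
        for _ in range(300):                                  # forward-Euler snapshots
            x = x + dt * (A @ x + B * rng.standard_normal())
            snaps.append(x.ravel())
        V = np.linalg.svd(np.array(snaps).T, full_matrices=False)[0][:, :r]

        Ar, Br = V.T @ A @ V, V.T @ B                         # r x r reduced system
        print("full order:", A.shape, "-> reduced order:", Ar.shape)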

  6. Analysis of Variance: Variably Complex

    Science.gov (United States)

    Drummond, Gordon B.; Vowler, Sarah L.

    2012-01-01

    These authors have previously described how to use the "t" test to compare two groups. In this article, they describe the use of a different test, analysis of variance (ANOVA) to compare more than two groups. ANOVA is a test of group differences: do at least two of the means differ from each other? ANOVA assumes (1) normal distribution of…
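
    A minimal illustration of the test described above, using scipy's one-way ANOVA on made-up data (three groups, the third with a shifted mean); a small p-value says only that at least two means differ, so post hoc comparisons are still needed:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        g1 = rng.normal(5.0, 1.0, size=30)
        g2 = rng.normal(5.0, 1.0, size=30)
        g3 = rng.normal(6.0, 1.0, size=30)   # shifted mean

        f, p = stats.f_oneway(g1, g2, g3)
        print(f"F = {f:.2f}, p = {p:.4f}")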

  7. Spontaneous reduction of advanced twin embryos: its occurrence and clinical relevance in dairy cattle.

    Science.gov (United States)

    López-Gatius, F; Hunter, R H F

    2005-01-01

    Twin pregnancies represent a management problem in dairy cattle, since the risk of pregnancy loss increases and the profitability of the herd diminishes drastically as the frequency of twin births increases. The aim of this study was to monitor the development of 211 twin pregnancies in high-producing dairy cows in order to determine the best time for an embryo reduction approach. Pregnancy was diagnosed by transrectal ultrasonography between 36 and 42 days after insemination. Animals were then subjected to weekly ultrasound examination until Day 90 of gestation or until pregnancy loss. Viability was determined by monitoring the embryonic/fetal heartbeat until Day 50 of pregnancy, and thereafter by heartbeat or fetal movement detection. Eighty-six cows (40.8%) bore bilateral and 125 (59.2%) unilateral twin pregnancies. Death of one of the two embryos was registered in 35 cows (16.6%), 33 of them at pregnancy diagnosis. Pregnancy loss occurred in 22 of these cows between 1 and 4 weeks later. Thus, 13 cows (6.2% of the total), each carrying one dead embryo of the two, maintained gestation. Total pregnancy loss before Day 90 of pregnancy (mean 69 +/- 14 days) was registered in 51 (24.2%) cows: 7 (8%) of bilateral pregnancies and 44 (35.2%) of unilateral pregnancies, and it was higher (P = 0.0001) for both right (32.4%, 24/74) and left (39.2%, 20/51) unilateral than for bilateral (8.1%, 7/86) twin pregnancies. The single embryo death rate was significantly (P = 0.02) lower for cows with bilateral twins (9.3%, 8/86) than for cows with unilateral twins (21.6%, 27/125). By way of overall conclusion, embryo reduction can occur in dairy cattle, and the practical perspective remains that most embryonic mortality in twins (one of the two embryos) occurs around Days 35-40 of gestation, the period when pregnancy diagnosis is generally performed and when embryo reduction could be attempted.

  8. Comprehensive Study on the Estimation of the Variance Components of Traverse Nets

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    This paper advances a new simplified formula for estimating variance components, sums up the basic law for calculating the weights of observed values, describes a circulation method using the increments of weights when estimating the variance components of traverse nets, advances the characteristic-roots method for estimating the variance components of traverse nets, and presents a practical method for simultaneously transforming two real symmetric matrices into diagonal form.

  9. Gate Leakage Current Reduction With Advancement of Graded Barrier AlGaN/GaN HEMT

    Directory of Open Access Journals (Sweden)

    Palash Das

    2011-01-01

    Full Text Available This paper addresses the reduction of gate leakage current in AlGaN/GaN HEMT devices through compositional grading of the AlGaN barrier layer. The work also takes into account the critical thickness limitation of heterostructure material growth; hence, the calculation of the critical thickness of AlGaN over GaN has been given special attention. A 1D Schrodinger and Poisson solver was used to calculate the 2DEG concentration and its effective location for use in ATLAS device-simulator predictions. The proposed Al0.50Ga0.50N/Al0.35Ga0.65N/Al0.20Ga0.80N/GaN HEMT structure exhibits a leakage current on the order of 15 nA/mm at a gate voltage of 1 V.

  10. Cobalt diselenide nanoparticles embedded within porous carbon polyhedra as advanced electrocatalyst for oxygen reduction reaction

    Science.gov (United States)

    Wu, Renbing; Xue, Yanhong; Liu, Bo; Zhou, Kun; Wei, Jun; Chan, Siew Hwa

    2016-10-01

    Highly efficient and cost-effective electrocatalyst for the oxygen reduction reaction (ORR) is crucial for a variety of renewable energy applications. Herein, strongly coupled hybrid composites composed of cobalt diselenide (CoSe2) nanoparticles embedded within graphitic carbon polyhedra (GCP) as high-performance ORR catalyst have been rationally designed and synthesized. The catalyst is fabricated by a convenient method, which involves the simultaneous pyrolysis and selenization of preformed Co-based zeolitic imidazolate framework (ZIF-67). Benefiting from the unique structural features, the resulting CoSe2/GCP hybrid catalyst shows high stability and excellent electrocatalytic activity towards ORR (the onset and half-wave potentials are 0.935 and 0.806 V vs. RHE, respectively), which is superior to the state-of-the-art commercial Pt/C catalyst (0.912 and 0.781 V vs. RHE, respectively).

  11. Boundary layer drag reduction research hypotheses derived from bio-inspired surface and recent advanced applications.

    Science.gov (United States)

    Luo, Yuehao; Yuan, Lu; Li, Jianhua; Wang, Jianshe

    2015-12-01

    Nature has supplied inexhaustible resources for mankind and, at the same time, has progressively developed into a school for scientists and engineers. Through more than four billion years of rigorous and stringent evolution, different creatures in nature have gradually developed their own special and fascinating biological functional surfaces. For example, sharkskin has a potential drag-reducing effect in turbulence, the lotus leaf possesses self-cleaning and anti-fouling functions, gecko feet have controllable super-adhesion surfaces, and the flexible skin of dolphins can increase their swimming speed. Great benefits have so far been achieved by applying biological functional surfaces in daily life, industry, transportation and agriculture, and much attention from all over the world has been focused on this field. In this overview, the bio-inspired drag-reducing mechanism derived from sharkskin is explained and explored comprehensively from different aspects, and the main applications in different areas of fluid engineering are demonstrated in brief. This overview should improve comprehension of the drag reduction mechanism of sharkskin surfaces and of their recent applications in fluid engineering.

  12. 2014 U.S. Offshore Wind Market Report: Industry Trends, Technology Advancement, and Cost Reduction

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Aaron; Stehly, Tyler; Musial, Walter

    2015-09-29

    2015 has been an exciting year for the U.S. offshore wind market. After more than 15 years of development work, the U.S. has finally hit a crucial milestone; Deepwater Wind began construction on the 30 MW Block Island Wind Farm (BIWF) in April. A number of other promising projects, however, have run into economic, legal, and political headwinds, generating much speculation about the future of the industry. This slow, and somewhat painful, start to the industry is not without precedent; each country in northern Europe began with pilot-scale, proof-of-concept projects before eventually moving to larger commercial scale installations. Now, after more than a decade of commercial experience, the European industry is set to achieve a new deployment record, with more than 4 GW expected to be commissioned in 2015, with demonstrable progress towards industry-wide cost reduction goals. DWW is leveraging 25 years of European deployment experience; the BIWF combines state-of-the-art technologies such as the Alstom 6 MW turbine with U.S. fabrication and installation competencies. The successful deployment of the BIWF will provide a concrete showcase that will illustrate the potential of offshore wind to contribute to state, regional, and federal goals for clean, reliable power and lasting economic development. It is expected that this initial project will launch the U.S. industry into a phase of commercial development that will position offshore wind to contribute significantly to the electric systems in coastal states by 2030.

  13. Variance decomposition in stochastic simulators

    KAUST Repository

    Le Maître, O. P.

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
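
    The Sobol-Hoeffding decomposition on which this rests can be illustrated on a deterministic toy model. The sketch below is a standard pick-freeze Monte Carlo estimate of first-order variance-based sensitivities; it shows only the decomposition idea, not the paper's construction with standardized Poisson processes.

        import numpy as np

        def model(x):
            # Ishigami benchmark; first-order indices are roughly
            # 0.31, 0.44 and 0.0 for uniform inputs on [-pi, pi].
            return (np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1]) ** 2
                    + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0]))

        rng = np.random.default_rng(2)
        n, d = 100_000, 3
        a = rng.uniform(-np.pi, np.pi, size=(n, d))
        b = rng.uniform(-np.pi, np.pi, size=(n, d))
        y_a, y_b = model(a), model(b)

        for i in range(d):
            ba = b.copy()
            ba[:, i] = a[:, i]      # "freeze" coordinate i
            s_i = np.mean(y_a * (model(ba) - y_b)) / np.var(y_a)
            print(f"first-order index S_{i + 1} ~= {s_i:.3f}")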

  14. Variance decomposition in stochastic simulators

    Science.gov (United States)

    Le Maître, O. P.; Knio, O. M.; Moraes, A.

    2015-06-01

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  15. Variance decomposition in stochastic simulators.

    Science.gov (United States)

    Le Maître, O P; Knio, O M; Moraes, A

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  16. Variance based OFDM frame synchronization

    Directory of Open Access Journals (Sweden)

    Z. Fedra

    2012-04-01

    Full Text Available The paper deals with a new frame synchronization scheme for OFDM systems and calculates the complexity of this scheme. The scheme is based on computing the variance of the detection window. The variance is computed at two delayed time instants, so a modified Early-Late loop is used for frame position detection. The proposed algorithm handles different variants of OFDM parameters, including the guard interval and cyclic prefix, and has good properties regarding the choice of the algorithm's parameters, since they may be chosen within a wide range without strongly influencing system performance. The functionality of the proposed algorithm has been verified in a development environment using universal software radio peripheral (USRP) hardware.
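
    A simplified sketch of the detection principle, under the assumption that the frame is preceded by low-power idle samples so that a jump in sliding-window variance marks the frame start; the Early-Late refinement and threshold tuning of the paper are not reproduced here.

        import numpy as np

        rng = np.random.default_rng(3)
        idle = 0.05 * (rng.normal(size=400) + 1j * rng.normal(size=400))
        burst = (rng.normal(size=600) + 1j * rng.normal(size=600)) / np.sqrt(2)
        x = np.concatenate([idle, burst])    # burst starts at sample 400

        w = 64                               # detection window length
        kernel = np.ones(w) / w
        mean_w = np.convolve(x, kernel, mode="valid")
        var_w = np.convolve(np.abs(x) ** 2, kernel, mode="valid") - np.abs(mean_w) ** 2

        start = np.argmax(var_w > 0.5)       # first window above threshold
        print("estimated frame start near sample", start)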

  17. Vertical velocity variances and Reynolds stresses at Brookhaven

    DEFF Research Database (Denmark)

    Busch, Niels E.; Brown, R.M.; Frizzola, J.A.

    1970-01-01

    Results of wind tunnel tests of the Brookhaven annular bivane are presented. The energy transfer functions describing the instrument response and the numerical filter employed in the data reduction process have been used to obtain corrected values of the normalized variance of the vertical wind velocity component....

  18. Variance-based uncertainty relations

    CERN Document Server

    Huang, Yichen

    2010-01-01

    It is hard to overestimate the fundamental importance of uncertainty relations in quantum mechanics. In this work, I propose state-independent variance-based uncertainty relations for arbitrary observables in both finite and infinite dimensional spaces. We recover the Heisenberg uncertainty principle as a special case. By studying examples, we find that the lower bounds provided by our new uncertainty relations are optimal or near-optimal. I illustrate the uses of our new uncertainty relations by showing that they eliminate one common obstacle in a sequence of well-known works in entanglement detection, and thus make these works much easier to access in applications.
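
    As a small numerical companion, the following checks the textbook Robertson variance bound sigma_A * sigma_B >= |<[A, B]>| / 2 on a random qubit state; the paper's state-independent relations strengthen bounds of this general form.

        import numpy as np

        sx = np.array([[0, 1], [1, 0]], dtype=complex)
        sz = np.array([[1, 0], [0, -1]], dtype=complex)

        rng = np.random.default_rng(4)
        psi = rng.normal(size=2) + 1j * rng.normal(size=2)
        psi /= np.linalg.norm(psi)           # random pure state

        mean = lambda op: np.vdot(psi, op @ psi).real
        var = lambda op: mean(op @ op) - mean(op) ** 2

        lhs = np.sqrt(var(sx) * var(sz))
        rhs = abs(np.vdot(psi, (sx @ sz - sz @ sx) @ psi)) / 2
        print(f"{lhs:.4f} >= {rhs:.4f}: {lhs >= rhs - 1e-12}")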

  19. 500 MW demonstration of advanced wall-fired combustion techniques for the reduction of nitrogen oxide emissions from coal-fired boilers

    Energy Technology Data Exchange (ETDEWEB)

    Sorge, J.N.; Larrimore, C.L.; Slatsky, M.D.; Menzies, W.R.; Smouse, S.M.; Stallings, J.W.

    1997-12-31

    This paper discusses the technical progress of a US Department of Energy Innovative Clean Coal Technology project demonstrating advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NOx) emissions from coal-fired boilers. The primary objective of the demonstration is to determine the long-term NOx reduction performance of advanced overfire air (AOFA), low-NOx burners (LNB), and advanced digital control optimization methodologies applied in a stepwise fashion to a 500 MW boiler. The focus of this paper is to report (1) the installation of three on-line carbon-in-ash monitors and (2) the design and results to date of the advanced digital control/optimization phase of the project.

  20. Neutrino mass without cosmic variance

    CERN Document Server

    LoVerde, Marilena

    2016-01-01

    Measuring the absolute scale of the neutrino masses is one of the most exciting opportunities available with near-term cosmological datasets. Two quantities that are sensitive to neutrino mass, scale-dependent halo bias $b(k)$ and the linear growth parameter $f(k)$ inferred from redshift-space distortions, can be measured without cosmic variance. Unlike the amplitude of the matter power spectrum, which always has a finite error, the error on $b(k)$ and $f(k)$ continues to decrease as the number density of tracers increases. This paper presents forecasts for statistics of galaxy and lensing fields that are sensitive to neutrino mass via $b(k)$ and $f(k)$. The constraints on neutrino mass from the auto- and cross-power spectra of spectroscopic and photometric galaxy samples are weakened by scale-dependent bias unless a very high density of tracers is available. In the high density limit, using multiple tracers allows cosmic-variance to be beaten and the forecasted errors on neutrino mass shrink dramatically. In...

  1. Levine's guide to SPSS for analysis of variance

    CERN Document Server

    Braver, Sanford L; Page, Melanie

    2003-01-01

    A greatly expanded and heavily revised second edition, this popular guide provides instructions and clear examples for running analyses of variance (ANOVA) and several other related statistical tests of significance with SPSS. No other guide offers the program statements required for the more advanced tests in analysis of variance. All of the programs in the book can be run using any version of SPSS, including versions 11 and 11.5. A table at the end of the preface indicates where each type of analysis (e.g., simple comparisons) can be found for each type of design (e.g., mixed two-factor desi

  2. Warped functional analysis of variance.

    Science.gov (United States)

    Gervini, Daniel; Carter, Patrick A

    2014-09-01

    This article presents an Analysis of Variance model for functional data that explicitly incorporates phase variability through a time-warping component, allowing for a unified approach to estimation and inference in the presence of amplitude and time variability. The focus is on single-random-factor models, but the approach can be easily generalized to more complex ANOVA models. The behavior of the estimators is studied by simulation, and an application to the analysis of growth curves of flour beetles is presented. Although the model assumes a smooth latent process behind the observed trajectories, smoothness of the observed data is not required; the method can be applied to irregular time grids, which are common in longitudinal studies.

  3. The ACS Virgo Cluster Survey IV: Data Reduction Procedures for Surface Brightness Fluctuation Measurements with the Advanced Camera for Surveys

    CERN Document Server

    Mei, Simona; Blakeslee, John P.; Tonry, John L.; Jordan, Andres; Peng, Eric W.; Cote, Patrick; Ferrarese, Laura; Merritt, David; Milosavljevic, Milos; West, Michael J.

    2005-01-01

    The Advanced Camera for Surveys (ACS) Virgo Cluster Survey is a large program to image 100 early-type Virgo galaxies using the F475W and F850LP bandpasses of the Wide Field Channel of the ACS instrument on the Hubble Space Telescope (HST). The scientific goals of this survey include an exploration of the three-dimensional structure of the Virgo Cluster and a critical examination of the usefulness of the globular cluster luminosity function as a distance indicator. Both of these issues require accurate distances for the full sample of 100 program galaxies. In this paper, we describe our data reduction procedures and examine the feasibility of accurate distance measurements using the method of surface brightness fluctuations (SBF) applied to the ACS Virgo Cluster Survey F850LP imaging. The ACS exhibits significant geometrical distortions due to its off-axis location in the HST focal plane; correcting for these distortions by resampling the pixel values onto an undistorted frame results in pixel correlations tha...

  4. Variance optimal stopping for geometric Levy processes

    DEFF Research Database (Denmark)

    Gad, Kamille Sofie Tågholt; Pedersen, Jesper Lund

    2015-01-01

    The main result of this paper is the solution to the optimal stopping problem of maximizing the variance of a geometric Lévy process. We call this problem the variance problem. We show that, for some geometric Lévy processes, we achieve higher variances by allowing randomized stopping. Furthermore...

  5. Speed Variance and Its Influence on Accidents.

    Science.gov (United States)

    Garber, Nicholas J.; Gadirau, Ravi

    A study was conducted to investigate the traffic engineering factors that influence speed variance and to determine to what extent speed variance affects accident rates. Detailed analyses were carried out to relate speed variance with posted speed limit, design speeds, and other traffic variables. The major factor identified was the difference…

  6. ADVANTG An Automated Variance Reduction Parameter Generator, Rev. 1

    Energy Technology Data Exchange (ETDEWEB)

    Mosher, Scott W. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Johnson, Seth R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Bevill, Aaron M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Ibrahim, Ahmad M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Daily, Charles R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Evans, Thomas M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Wagner, John C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Johnson, Jeffrey O. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Grove, Robert E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-08-01

    The primary objective of ADVANTG is to reduce both the user effort and the computational time required to obtain accurate and precise tally estimates across a broad range of challenging transport applications. ADVANTG has been applied to simulations of real-world radiation shielding, detection, and neutron activation problems. Examples of shielding applications include material damage and dose rate analyses of the Oak Ridge National Laboratory (ORNL) Spallation Neutron Source and High Flux Isotope Reactor (Risner and Blakeman 2013) and the ITER Tokamak (Ibrahim et al. 2011). ADVANTG has been applied to a suite of radiation detection, safeguards, and special nuclear material movement detection test problems (Shaver et al. 2011). ADVANTG has also been used in the prediction of activation rates within light water reactor facilities (Pantelias and Mosher 2013). In these projects, ADVANTG was demonstrated to significantly increase the tally figure of merit (FOM) relative to an analog MCNP simulation. The ADVANTG-generated parameters were also shown to be more effective than manually generated geometry splitting parameters.

  7. Linear Minimum variance estimation fusion

    Institute of Scientific and Technical Information of China (English)

    ZHU Yunmin; LI Xianrong; ZHAO Juan

    2004-01-01

    This paper shows that general multisensor unbiased linearly weighted estimation fusion is essentially the linear minimum variance (LMV) estimation with a linear equality constraint, and the general estimation fusion formula is developed by extending the Gauss-Markov estimation to the random-parameter case of distributed estimation fusion in the LMV setting. In this setting, the fused estimator is a weighted sum of local estimates, obtained from a matrix quadratic optimization problem subject to a convex linear equality constraint. Second, we present a unique solution to the above optimization problem, which depends only on the covariance matrix CK. Third, if a priori information, the expectation and covariance, of the estimated quantity is unknown, a necessary and sufficient condition is presented for the above LMV fusion to become the best unbiased LMV estimation with known prior information as above. We also discuss the generality and usefulness of the LMV fusion formulas developed. Finally, we provide an off-line recursion of CK for a class of multisensor linear systems with coupled measurement noises.
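
    For intuition, the two-sensor case with uncorrelated local errors reduces to the familiar inverse-covariance weighting; the sketch below covers only that special case (the paper's treatment of coupled measurement noises is more general).

        import numpy as np

        def lmv_fuse(x1, P1, x2, P2):
            """Fuse two unbiased estimates with uncorrelated error
            covariances; weights are proportional to inverse covariances
            and sum to the identity."""
            P1i, P2i = np.linalg.inv(P1), np.linalg.inv(P2)
            P = np.linalg.inv(P1i + P2i)     # fused covariance
            return P @ (P1i @ x1 + P2i @ x2), P

        x1 = np.array([1.0, 2.0]); P1 = np.diag([0.5, 2.0])
        x2 = np.array([1.2, 1.8]); P2 = np.diag([2.0, 0.5])
        x, P = lmv_fuse(x1, P1, x2, P2)
        print("fused estimate:", x, "fused variances:", np.diag(P))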

  8. Analysis of Variance Components for Genetic Markers with Unphased Genotypes.

    Science.gov (United States)

    Wang, Tao

    2016-01-01

    An ANOVA type general multi-allele (GMA) model was proposed in Wang (2014) on analysis of variance components for quantitative trait loci or genetic markers with phased or unphased genotypes. In this study, by applying the GMA model, we further examine estimation of the genetic variance components for genetic markers with unphased genotypes based on a random sample from a study population. In one locus and two loci cases, we first derive the least square estimates (LSE) of model parameters in fitting the GMA model. Then we construct estimators of the genetic variance components for one marker locus in a Hardy-Weinberg disequilibrium population and two marker loci in an equilibrium population. Meanwhile, we explore the difference between the classical general linear model (GLM) and GMA based approaches in association analysis of genetic markers with quantitative traits. We show that the GMA model can retain the same partition on the genetic variance components as the traditional Fisher's ANOVA model, while the GLM cannot. We clarify that the standard F-statistics based on the partial reductions in sums of squares from GLM for testing the fixed allelic effects could be inadequate for testing the existence of the variance component when allelic interactions are present. We point out that the GMA model can reduce the confounding between the allelic effects and allelic interactions at least for independent alleles. As a result, the GMA model could be more beneficial than GLM for detecting allelic interactions.

  9. Reduction of advanced-glycation end products levels and inhibition of RAGE signaling decreases rat vascular calcification induced by diabetes.

    Directory of Open Access Journals (Sweden)

    Mathieu R Brodeur

    Full Text Available Advanced-glycation end products (AGEs) were recently implicated in vascular calcification, through a process mediated by RAGE (receptor for AGEs). Although a correlation between AGEs levels and vascular calcification was established, there is no evidence that reducing in vivo AGEs deposition or inhibiting AGEs-RAGE signaling pathways can decrease medial calcification. We evaluated the impact of inhibiting AGEs formation by pyridoxamine or eliminating AGEs by alagebrium on diabetic medial calcification. We also evaluated whether the inhibition of AGEs-RAGE signaling pathways can prevent calcification. Rats were fed a high-fat diet for 2 months before receiving a low dose of streptozotocin. Then, calcification was induced with warfarin. Pyridoxamine was administered at the beginning of warfarin treatment, while alagebrium was administered 3 weeks after the beginning of warfarin treatment. Results demonstrate that AGEs inhibitors prevent the time-dependent accumulation of AGEs in femoral arteries of diabetic rats. This effect was accompanied by a reduction in diabetes-accelerated calcification. Ex vivo experiments showed that N-methylpyridinium, an agonist of RAGE, induced calcification of diabetic femoral arteries, a process inhibited by antioxidants and different inhibitors of signaling pathways associated with RAGE activation. The physiological importance of oxidative stress was demonstrated by the reduction of femoral artery calcification in diabetic rats treated with apocynin, an inhibitor of reactive oxygen species production. We demonstrated that AGE inhibitors prevent or limit medial calcification. We also showed that diabetes-accelerated calcification is prevented by antioxidants. Thus, inhibiting the AGE-RAGE association or its downstream signaling reduces medial calcification in diabetes.

  10. Generalized analysis of molecular variance.

    Directory of Open Access Journals (Sweden)

    Caroline M Nievergelt

    2007-04-01

    Full Text Available Many studies in the fields of genetic epidemiology and applied population genetics are predicated on, or require, an assessment of the genetic background diversity of the individuals chosen for study. A number of strategies have been developed for assessing genetic background diversity. These strategies typically focus on genotype data collected on the individuals in the study, based on a panel of DNA markers. However, many of these strategies are either rooted in cluster analysis techniques, and hence suffer from problems inherent to the assignment of biological and statistical meaning to resulting clusters, or have formulations that do not permit easy and intuitive extensions. We describe a very general approach to the problem of assessing genetic background diversity that extends the analysis of molecular variance (AMOVA) strategy introduced by Excoffier and colleagues some time ago. As in the original AMOVA strategy, the proposed approach, termed generalized AMOVA (GAMOVA), requires a genetic similarity matrix constructed from the allelic profiles of individuals under study and/or allele frequency summaries of the populations from which the individuals have been sampled. The proposed strategy can be used either to estimate the fraction of genetic variation explained by grouping factors such as country of origin, race, or ethnicity, or to quantify the strength of the relationship of the observed genetic background variation to quantitative measures collected on the subjects, such as blood pressure levels or anthropometric measures. Since the formulation of our test statistic is rooted in multivariate linear models, sets of variables can be related to genetic background in multiple regression-like contexts. GAMOVA can also be used to complement graphical representations of genetic diversity such as tree diagrams (dendrograms or heatmaps. We examine features, advantages, and power of the proposed procedure and showcase its flexibility by

  11. Seasonal variance in P system models for metapopulations

    Institute of Scientific and Technical Information of China (English)

    Daniela Besozzi; Paolo Cazzaniga; Dario Pescini; Giancarlo Mauri

    2007-01-01

    Metapopulations are ecological models describing the interactions and the behavior of populations living in fragmented habitats. In this paper, metapopulations are modelled by means of dynamical probabilistic P systems, where additional structural features have been defined (e.g., a weighted graph associated with the membrane structure and the reduction of maximal parallelism). In particular, we investigate the influence of stochastic and periodic resource feeding processes, owing to seasonal variance, on emergent metapopulation dynamics.

  12. Expected Stock Returns and Variance Risk Premia

    DEFF Research Database (Denmark)

    Bollerslev, Tim; Zhou, Hao

    predicting high (low) future returns. The magnitude of the return predictability of the variance risk premium easily dominates that afforded by standard predictor variables like the P/E ratio, the dividend yield, the default spread, and the consumption-wealth ratio (CAY). Moreover, combining the variance...... risk premium with the P/E ratio results in an R2 for the quarterly returns of more than twenty-five percent. The results depend crucially on the use of "model-free", as opposed to standard Black-Scholes, implied variances, and realized variances constructed from high-frequency intraday, as opposed...

  13. Comparison of desired radiographic advancement distance and true advancement distance required for patellar tendon-tibial plateau angle reduction to the ideal 90° in dogs by use of the modified Maquet technique.

    Science.gov (United States)

    Pillard, Paul; Livet, Veronique; Cabon, Quentin; Bismuth, Camille; Sonet, Juliette; Remy, Denise; Fau, Didier; Carozzo, Claude; Viguier, Eric; Cachon, Thibaut

    2016-12-01

    OBJECTIVE To evaluate the validity of 2 radiographic methods for measurement of the tibial tuberosity advancement distance required to achieve a reduction in patellar tendon-tibial plateau angle (PTA) to the ideal 90° in dogs by use of the modified Maquet technique (MMT). SAMPLE 24 stifle joints harvested from 12 canine cadavers. PROCEDURES Radiographs of stifle joints placed at 135° in the true lateral position were used to measure the required tibial tuberosity advancement distance with the conventional (A(M)) and correction (A(E)) methods. The MMT was used to successively advance the tibial crest to A(M) and A(E). Postoperative PTA was measured on a mediolateral radiograph for each advancement measurement method. If none of the measurements were close to 90°, the advancement distance was modified until the PTA was equal to 90° within 0.1°, and the true advancement distance (TA) was measured. Results were used to determine the optimal commercially available size of cage implant that would be used in a clinical situation. RESULTS Median A(M) and A(E) were 10.6 mm and 11.5 mm, respectively. Mean PTAs for the conventional and correction methods were 93.4° and 92.3°, respectively, and differed significantly from 90°. Median TA was 13.5 mm. The A(M) and A(E) led to the same cage size recommendations as for TA for only 1 and 4 stifle joints, respectively. CONCLUSIONS AND CLINICAL RELEVANCE Both radiographic methods of measuring the distance required to advance the tibial tuberosity in dogs led to an under-reduction in postoperative PTA when the MMT was used. A new, more accurate radiographic method needs to be developed.

  14. Analysis of variance for model output

    NARCIS (Netherlands)

    Jansen, M.J.W.

    1999-01-01

    A scalar model output Y is assumed to depend deterministically on a set of stochastically independent input vectors of different dimensions. The composition of the variance of Y is considered; variance components of particular relevance for uncertainty analysis are identified. Several analysis of va

  15. The Correct Kriging Variance Estimated by Bootstrapping

    NARCIS (Netherlands)

    den Hertog, D.; Kleijnen, J.P.C.; Siem, A.Y.D.

    2004-01-01

    The classic Kriging variance formula is widely used in geostatistics and in the design and analysis of computer experiments. This paper proves that this formula is wrong. Furthermore, it shows that the formula underestimates the Kriging variance in expectation. The paper develops parametric bootstrappi

  16. Influence of Family Structure on Variance Decomposition

    DEFF Research Database (Denmark)

    Edwards, Stefan McKinnon; Sarup, Pernille Merete; Sørensen, Peter

    Partitioning genetic variance by sets of randomly sampled genes for complex traits in D. melanogaster and B. taurus, has revealed that population structure can affect variance decomposition. In fruit flies, we found that a high likelihood ratio is correlated with a high proportion of explained ge...

  17. Nonlinear Epigenetic Variance: Review and Simulations

    Science.gov (United States)

    Kan, Kees-Jan; Ploeger, Annemie; Raijmakers, Maartje E. J.; Dolan, Conor V.; van Der Maas, Han L. J.

    2010-01-01

    We present a review of empirical evidence that suggests that a substantial portion of phenotypic variance is due to nonlinear (epigenetic) processes during ontogenesis. The role of such processes as a source of phenotypic variance in human behaviour genetic studies is not fully appreciated. In addition to our review, we present simulation studies…

  18. 21 CFR 1010.4 - Variances.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Variances. 1010.4 Section 1010.4 Food and Drugs... PERFORMANCE STANDARDS FOR ELECTRONIC PRODUCTS: GENERAL General Provisions § 1010.4 Variances. (a) Criteria for... shall modify the tag, label, or other certification required by § 1010.2 to state: (1) That the...

  19. Pipeline to assess the greatest source of technical variance in quantitative proteomics using metabolic labelling.

    Science.gov (United States)

    Russell, Matthew R; Lilley, Kathryn S

    2012-12-21

    The biological variance in protein expression of interest to biologists can only be accessed if the technical variance of the protein quantification method is low compared with the biological variance. Technical variance is dependent on the protocol employed within a quantitative proteomics experiment and accumulated with every additional step. The magnitude of additional variance incurred by each step of a protocol should be determined to enable design of experiments maximally sensitive to differential protein expression. Metabolic labelling techniques for MS based quantitative proteomics enable labelled and unlabelled samples to be combined at the tissue level. It has been widely assumed, although not yet empirically verified, that early combination of samples minimises technical variance in relative quantification. This study presents a pipeline to determine the variance incurred at each stage of a common quantitative proteomics protocol involving metabolic labelling. We apply this pipeline to determine whether early combination of samples in a protocol leads to significant reduction in experimental variance. We also identify which stage within the protocol is associated with maximum variance. This provides a blueprint by which the variance associated with each stage of any protocol can be dissected and utilised to influence optimal experimental design.

  20. Portfolio optimization with mean-variance model

    Science.gov (United States)

    Hoe, Lam Weng; Siew, Lam Weng

    2016-06-01

    Investors wish to achieve the target rate of return at the minimum level of risk in their investment. Portfolio optimization is an investment strategy that can be used to minimize portfolio risk while achieving the target rate of return. The mean-variance model has been proposed for portfolio optimization; it is an optimization model that aims to minimize the portfolio risk, measured by the portfolio variance. The objective of this study is to construct the optimal portfolio using the mean-variance model. The data of this study consist of weekly returns of 20 component stocks of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI). The results of this study show that the portfolio composition of the stocks differs. Moreover, investors can obtain the return at a minimum level of risk with the constructed optimal mean-variance portfolio.
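
    A minimal sketch of the variance-minimizing step, using the closed-form global minimum-variance weights w = S^{-1}1 / (1'S^{-1}1) under a full-investment constraint and no short-sale restriction; the toy return matrix stands in for the study's weekly FBMKLCI returns.

        import numpy as np

        rng = np.random.default_rng(5)
        R = rng.normal(size=(52, 4))         # a year of weekly returns, 4 assets
        S = np.cov(R, rowvar=False)          # sample covariance matrix

        ones = np.ones(S.shape[0])
        w = np.linalg.solve(S, ones)
        w /= ones @ w                        # normalize so weights sum to 1
        print("weights:", np.round(w, 3), "portfolio variance:", w @ S @ w)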

  1. Variance components for body weight in Japanese quails (Coturnix japonica)

    Directory of Open Access Journals (Sweden)

    RO Resende

    2005-03-01

    Full Text Available The objective of this study was to estimate the variance components for body weight in Japanese quails by Bayesian procedures. The body weight at hatch (BWH and at 7 (BW07, 14 (BW14, 21 (BW21 and 28 days of age (BW28 of 3,520 quails was recorded from August 2001 to June 2002. A multiple-trait animal model with additive genetic, maternal environment and residual effects was implemented by Gibbs sampling methodology. A single Gibbs sampling with 80,000 rounds was generated by the program MTGSAM (Multiple Trait Gibbs Sampling in Animal Model). Normal and inverted Wishart distributions were used as prior distributions for the random effects and the variance components, respectively. Variance components were estimated based on the 500 samples that were left after elimination of 30,000 rounds in the burn-in period and 100 rounds of each thinning interval. The posterior means of additive genetic variance components were 0.15; 4.18; 14.62; 27.18 and 32.68; the posterior means of maternal environment variance components were 0.23; 1.29; 2.76; 4.12 and 5.16; and the posterior means of residual variance components were 0.084; 6.43; 22.66; 31.21 and 30.85, at hatch, 7, 14, 21 and 28 days old, respectively. The posterior means of heritability were 0.33; 0.35; 0.36; 0.43 and 0.47 at hatch, 7, 14, 21 and 28 days old, respectively. These results indicate that heritability increased with age. On the other hand, after hatch there was a marked reduction in the maternal environment variance proportion of the phenotypic variance, whose estimates were 0.50; 0.11; 0.07; 0.07 and 0.08 for BWH, BW07, BW14, BW21 and BW28, respectively. The genetic correlation between weights at different ages was high, except for those estimates between BWH and weight at other ages. Changes in body weight of quails can be efficiently achieved by selection.
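
    As a consistency check on the figures above, each reported heritability is the additive genetic fraction of the total variance. At 28 days, for instance,

        h^2 = \frac{\sigma_a^2}{\sigma_a^2 + \sigma_m^2 + \sigma_e^2}
            = \frac{32.68}{32.68 + 5.16 + 30.85} \approx 0.48,

    close to the reported 0.47 (the posterior mean of h^2 need not equal this ratio of posterior means).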

  2. Functional analysis of variance for association studies.

    Directory of Open Access Journals (Sweden)

    Olga A Vsevolozhskaya

    Full Text Available While progress has been made in identifying common genetic variants associated with human diseases, for most common complex diseases the identified genetic variants account for only a small proportion of heritability. Challenges remain in finding additional unknown genetic variants predisposing to complex diseases. With the advance of next-generation sequencing technologies, sequencing studies have become commonplace in genetic research. The ongoing exome-sequencing and whole-genome-sequencing studies generate a massive amount of sequencing variants and allow researchers to comprehensively investigate their role in human diseases. The discovery of new disease-associated variants can be enhanced by utilizing powerful and computationally efficient statistical methods. In this paper, we propose a functional analysis of variance (FANOVA) method for testing the association of sequence variants in a genomic region with a qualitative trait. FANOVA has a number of advantages: (1) it tests for a joint effect of gene variants, including both common and rare; (2) it fully utilizes linkage disequilibrium and genetic position information; and (3) it allows for either protective or risk-increasing causal variants. Through simulations, we show that FANOVA outperforms two popularly used methods - SKAT and a previously proposed method based on functional linear models (FLM) - especially if the sample size of a study is small and/or the sequence variants have low to moderate effects. We conduct an empirical study by applying the three methods (FANOVA, SKAT and FLM) to sequencing data from the Dallas Heart Study. While SKAT and FLM detected ANGPTL4 and ANGPTL3, respectively, as associated with obesity, FANOVA was able to identify both genes as associated with obesity.

  3. Portfolio optimization using median-variance approach

    Science.gov (United States)

    Wan Mohd, Wan Rosanisah; Mohamad, Daud; Mohamed, Zulkifli

    2013-04-01

    Optimization models have been applied in many decision-making problems, particularly in portfolio selection. Since the introduction of Markowitz's theory of portfolio selection, various approaches based on mathematical programming have been introduced, such as mean-variance, mean-absolute deviation, mean-variance-skewness and conditional value-at-risk (CVaR), mainly to maximize return and minimize risk. However, most of the approaches assume that the distribution of data is normal, which is not generally true. As an alternative, in this paper we employ the median-variance approach to improve portfolio optimization. This approach successfully caters for both normal and non-normal distributions of data. With this representation, we analyze and compare the rate of return and risk between the mean-variance and the median-variance based portfolios, which consist of 30 stocks from Bursa Malaysia. The results of this study show that the median-variance approach is capable of producing a lower risk for each return earned, compared to the mean-variance approach.

  4. Simulation study on heterogeneous variance adjustment for observations with different measurement error variance

    DEFF Research Database (Denmark)

    Pitkänen, Timo; Mäntysaari, Esa A; Nielsen, Ulrik Sander

    2013-01-01

    of variance correction is developed for the same observations. As automated milking systems are becoming more popular the current evaluation model needs to be enhanced to account for the different measurement error variances of observations from automated milking systems. In this simulation study different...... models and different approaches to account for heterogeneous variance when observations have different measurement error variances were investigated. Based on the results we propose to upgrade the currently applied models and to calibrate the heterogeneous variance adjustment method to yield same genetic...

  5. A Variance Based Active Learning Approach for Named Entity Recognition

    Science.gov (United States)

    Hassanzadeh, Hamed; Keyvanpour, Mohammadreza

    The cost of manually annotating corpora is one of the significant issues in many text-based tasks such as text mining, semantic annotation and, generally, information extraction. Active learning is an approach that deals with the reduction of labeling costs. In this paper we propose an effective active learning approach based on minimal variance that reduces manual annotation cost by using a small number of manually labeled examples. In our approach we use a confidence measure based on the model's variance that reaches a considerable accuracy for annotating entities. A Conditional Random Field (CRF) is chosen as the underlying learning model due to its promising performance in many sequence labeling tasks. The experiments show that the proposed method needs considerably fewer manually labeled samples to produce a desirable result.
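
    As a generic illustration of variance-based selection (simplified from the paper's CRF setting to independent examples), the sketch below scores unlabeled samples by the variance of a committee's predicted probabilities and queries the most uncertain ones; the data and model choice are illustrative only.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(6)
        X_lab = rng.normal(size=(40, 5))
        y_lab = (X_lab[:, 0] > 0).astype(int)    # toy labels
        X_pool = rng.normal(size=(500, 5))       # unlabeled pool

        forest = RandomForestClassifier(n_estimators=50, random_state=0)
        forest.fit(X_lab, y_lab)
        # Per-tree probability of class 1, then variance across the committee:
        probs = np.stack([t.predict_proba(X_pool)[:, 1] for t in forest.estimators_])
        scores = probs.var(axis=0)

        query = np.argsort(scores)[-10:]         # ten most uncertain examples
        print("indices to annotate next:", query)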

  6. Reduction of organic trace compounds and fresh water consumption by recovery of advanced oxidation processes treated industrial wastewater.

    Science.gov (United States)

    Bierbaum, S; Öller, H-J; Kersten, A; Klemenčič, A Krivograd

    2014-01-01

    Ozone (O3) has been used successfully in advanced wastewater treatment in paper mills, other sectors and municipalities. To address the water problems of regions lacking fresh water, wastewater treated by advanced oxidation processes (AOPs) can substitute for fresh water in highly water-consuming industries. Results of this study have shown that paper strength properties are not impaired, and whiteness is only slightly impaired, when reusing paper mill wastewater. Furthermore, organic trace compounds are becoming an issue in the German paper industry. The results of this study have shown that AOPs are capable of improving wastewater quality by reducing organic load, colour and organic trace compounds.

  7. 13 CFR 307.22 - Variances.

    Science.gov (United States)

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Variances. 307.22 Section 307.22 Business Credit and Assistance ECONOMIC DEVELOPMENT ADMINISTRATION, DEPARTMENT OF COMMERCE ECONOMIC... Federal, State and local law....

  8. Reducing variance in batch partitioning measurements

    Energy Technology Data Exchange (ETDEWEB)

    Mariner, Paul E.

    2010-08-11

    The partitioning experiment is commonly performed with little or no attention to reducing measurement variance. Batch test procedures such as those used to measure Kd values (e.g., ASTM D 4646 and EPA 402-R-99-004A) neither explain how to evaluate measurement uncertainty nor how to minimize measurement variance. In fact, ASTM D 4646 prescribes a sorbent:water ratio that prevents variance minimization. Consequently, the variance of a set of partitioning measurements can be extreme and even absurd. Such data sets, which are commonplace, hamper probabilistic modeling efforts. An error-savvy design requires adjustment of the solution:sorbent ratio so that approximately half of the sorbate partitions to the sorbent. Results of Monte Carlo simulations indicate that this simple step can markedly improve the precision and statistical characterization of partitioning uncertainty.
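
    A sketch of the underlying error propagation, assuming a fixed absolute analytical error on both measured concentrations; the relative scatter in Kd is worst when very little or nearly all of the sorbate is sorbed, and smallest near one-half. All numbers are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(7)
        n, sigma = 50_000, 0.008     # absolute concentration error (arb. units)
        for f in (0.05, 0.25, 0.50, 0.75, 0.95):         # fraction sorbed
            c0 = 1.0 + sigma * rng.normal(size=n)        # initial concentration
            cw = (1.0 - f) + sigma * rng.normal(size=n)  # final aqueous conc.
            kd = (c0 - cw) / cw      # Kd up to the known factor V/m
            print(f"f = {f:.2f}: relative std of Kd ~ {kd.std() / kd.mean():.1%}")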

  9. Grammatical and lexical variance in English

    CERN Document Server

    Quirk, Randolph

    2014-01-01

    Written by one of Britain's most distinguished linguists, this book is concerned with the phenomenon of variance in English grammar and vocabulary across regional, social, stylistic and temporal space.

  10. Discrimination of frequency variance for tonal sequences

    OpenAIRE

    Byrne, Andrew J.; Viemeister, Neal F.; Stellmack, Mark A.

    2014-01-01

    Real-world auditory stimuli are highly variable across occurrences and sources. The present study examined the sensitivity of human listeners to differences in global stimulus variability. In a two-interval, forced-choice task, variance discrimination was measured using sequences of five 100-ms tone pulses. The frequency of each pulse was sampled randomly from a distribution that was Gaussian in logarithmic frequency. In the non-signal interval, the sampled distribution had a variance of σSTA...

  11. Aeroelastic Modelling and Comparison of Advanced Active Flap Control Concepts for Load Reduction on the Upwind 5MW Wind Turbine

    NARCIS (Netherlands)

    Barlas, A.; Van Kuik, G.A.M.

    2009-01-01

    A newly developed comprehensive aeroelastic model is used to investigate active flap concepts on the Upwind 5MW reference wind turbine. The model is specially designed to facilitate distributed control concepts and advanced controller design. Different concepts of centralized and distributed control

  12. Aeroelastic modelling and comparison of advanced active flap control concepts for load reduction on the Upwind 5MW wind turbine

    NARCIS (Netherlands)

    Barlas, A.; van Kuik, G.A.M.

    2009-01-01

    A newly developed comprehensive aeroelastic model is used to investigate active flap concepts on the Upwind 5MW reference wind turbine. The model is specially designed to facilitate distributed control concepts and advanced controller design. Different concepts of centralized and distributed control

  13. Variational bayesian method of estimating variance components.

    Science.gov (United States)

    Arakawa, Aisaku; Taniguchi, Masaaki; Hayashi, Takeshi; Mikawa, Satoshi

    2016-07-01

    We developed a Bayesian analysis approach using a variational inference method, a so-called variational Bayesian method, to determine the posterior distributions of variance components. This variational Bayesian method and an alternative Bayesian method using Gibbs sampling were compared in estimating genetic and residual variance components from both simulated data and publicly available real pig data. In the simulated data set, we observed strong bias toward overestimation of genetic variance for the variational Bayesian method in the case of low heritability and low population size, and less bias was detected with larger population sizes in both methods examined. Differences in the estimates of variance components between the variational Bayesian and the Gibbs sampling methods were not found in the real pig data. However, the posterior distributions of the variance components obtained with the variational Bayesian method had shorter tails than those obtained with Gibbs sampling. Consequently, the posterior standard deviations of the genetic and residual variances of the variational Bayesian method were lower than those of the method using Gibbs sampling. The computing time required was much shorter with the variational Bayesian method than with the method using Gibbs sampling.

  14. Ion-exchanged route synthesis of Fe2N-N-doped graphitic nanocarbons composite as advanced oxygen reduction electrocatalyst.

    Science.gov (United States)

    Wang, Lei; Yin, Jie; Zhao, Lu; Tian, Chungui; Yu, Peng; Wang, Jianqiang; Fu, Honggang

    2013-04-14

    Composites of Fe2N nanoparticles and nitrogen-doped graphitic nanosheets (Fe2N-NGC) have been synthesized by an ion-exchange route; they can serve as an efficient non-precious-metal electrocatalyst with a 4e- reaction pathway for oxygen reduction reactions (ORR).

  15. Estimating Predictive Variance for Statistical Gas Distribution Modelling

    Science.gov (United States)

    Lilienthal, Achim J.; Asadi, Sahar; Reggente, Matteo

    2009-05-01

    Recent publications in statistical gas distribution modelling have proposed algorithms that model the mean and variance of a distribution. This paper argues that estimating the predictive concentration variance is not merely a gradual improvement but a significant step in advancing the field. This is, first, because such models much better fit the particular structure of gas distributions, which exhibit strong fluctuations with considerable spatial variations as a result of the intermittent character of gas dispersal. Second, because estimating the predictive variance allows the model quality to be evaluated in terms of the data likelihood. This offers a solution to the problem of ground-truth evaluation, which has always been a critical issue for gas distribution modelling. It also enables solid comparisons of different modelling approaches, and provides the means to learn meta-parameters of the model, to determine when the model should be updated or re-initialised, or to suggest new measurement locations based on the current model. We also point out directions of related ongoing or potential future research work.
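
    A small sketch of the likelihood-based evaluation advocated here: given predictive means and variances at held-out points, the average Gaussian negative log-likelihood rewards a model whose variance tracks the true fluctuations. The intermittent toy data below are illustrative, not the paper's.

        import numpy as np

        def avg_nll(y, mu, s2):
            # Average Gaussian negative log-likelihood (lower is better).
            return np.mean(0.5 * np.log(2 * np.pi * s2) + (y - mu) ** 2 / (2 * s2))

        rng = np.random.default_rng(10)
        true_s2 = np.where(rng.uniform(size=1000) < 0.2, 4.0, 0.1)  # bursts
        y = rng.normal(0.0, np.sqrt(true_s2))

        print("per-point variance:", avg_nll(y, 0.0, true_s2))
        print("constant variance :", avg_nll(y, 0.0, np.full_like(y, true_s2.mean())))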

  16. Modality-Driven Classification and Visualization of Ensemble Variance

    Energy Technology Data Exchange (ETDEWEB)

    Bensema, Kevin; Gosink, Luke; Obermaier, Harald; Joy, Kenneth I.

    2016-10-01

    Advances in computational power now enable domain scientists to address conceptual and parametric uncertainty by running simulations multiple times in order to sufficiently sample the uncertain input space. While this approach helps address conceptual and parametric uncertainties, the ensemble datasets produced by this technique present a special challenge to visualization researchers as the ensemble dataset records a distribution of possible values for each location in the domain. Contemporary visualization approaches that rely solely on summary statistics (e.g., mean and variance) cannot convey the detailed information encoded in ensemble distributions that are paramount to ensemble analysis; summary statistics provide no information about modality classification and modality persistence. To address this problem, we propose a novel technique that classifies high-variance locations based on the modality of the distribution of ensemble predictions. Additionally, we develop a set of confidence metrics to inform the end-user of the quality of fit between the distribution at a given location and its assigned class. We apply a similar method to time-varying ensembles to illustrate the relationship between peak variance and bimodal or multimodal behavior. These classification schemes enable a deeper understanding of the behavior of the ensemble members by distinguishing between distributions that can be described by a single tendency and distributions which reflect divergent trends in the ensemble.

  17. Discrimination of frequency variance for tonal sequences.

    Science.gov (United States)

    Byrne, Andrew J; Viemeister, Neal F; Stellmack, Mark A

    2014-12-01

    Real-world auditory stimuli are highly variable across occurrences and sources. The present study examined the sensitivity of human listeners to differences in global stimulus variability. In a two-interval, forced-choice task, variance discrimination was measured using sequences of five 100-ms tone pulses. The frequency of each pulse was sampled randomly from a distribution that was Gaussian in logarithmic frequency. In the non-signal interval, the sampled distribution had a variance of σ²STAN, while in the signal interval, the variance of the sequence was σ²SIG (with σ²SIG > σ²STAN). The listener's task was to choose the interval with the larger variance. To constrain possible decision strategies, the mean frequency of the sampling distribution of each interval was randomly chosen for each presentation. Psychometric functions were measured for various values of σ²STAN. Although the performance was remarkably similar across listeners, overall performance was poorer than that of an ideal observer (IO) which perfectly compares interval variances. However, like the IO, Weber's Law behavior was observed, with a constant ratio of (σ²SIG - σ²STAN) to σ²STAN yielding similar performance. A model which degraded the IO with a frequency-resolution noise and a computational noise provided a reasonable fit to the real data.

  18. Kalman filtering techniques for reducing variance of digital speckle displacement measurement noise

    Institute of Scientific and Technical Information of China (English)

    Donghui Li; Li Guo

    2006-01-01

    Target dynamics are assumed to be known when measuring digital speckle displacement. A simple measurement equation is used, in which measurement noise represents the effect of disturbances introduced during the measurement process. Under these assumptions, a Kalman filter can be designed to reduce the variance of the measurement noise. An optical measurement and analysis system was set up, with which object motion with constant displacement and constant velocity was measured to verify the validity of Kalman filtering techniques for reducing the variance of measurement noise.
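
    A minimal sketch of the idea, assuming a constant-velocity target observed through noisy scalar displacement measurements, is shown below; the noise levels and step count are illustrative, not taken from the paper.

    ```python
    import numpy as np

    dt, q, r = 1.0, 1e-5, 0.04            # step, process noise, meas. variance
    F = np.array([[1.0, dt], [0.0, 1.0]])  # state transition (pos, vel)
    H = np.array([[1.0, 0.0]])             # we observe position only
    Q = q * np.eye(2)

    x = np.zeros(2)   # state estimate
    P = np.eye(2)     # estimate covariance

    rng = np.random.default_rng(3)
    true_pos = 0.1 * np.arange(50)                 # constant velocity target
    z = true_pos + rng.normal(0, np.sqrt(r), 50)   # noisy displacement data

    for zk in z:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update
        K = P @ H.T / (H @ P @ H.T + r)
        x = x + (K * (zk - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P

    print("filtered position:", x[0], "vs last raw measurement:", z[-1])
    ```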

  19. Estimating quadratic variation using realized variance

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Shephard, N.

    2002-01-01

    This paper looks at some recent work on estimating quadratic variation using realized variance (RV) - that is, sums of M squared returns. This econometrics has been motivated by the advent of the common availability of high-frequency financial return data. When the underlying process is a semimartingale, we have to impose some weak regularity assumptions. We illustrate the use of the limit theory on some exchange rate data and some stock data. We show that even with large values of M the RV is sometimes a quite noisy estimator of integrated variance. Copyright © 2002 John Wiley & Sons, Ltd.
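
    As a concrete illustration of the estimator under discussion, the snippet below computes RV as the sum of squared intraday log returns on a simulated constant-volatility price path, for which the integrated variance is known exactly; the simulation setup is an assumption made for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def realized_variance(prices):
        # RV = sum of M squared log returns over the sample.
        r = np.diff(np.log(prices))
        return np.sum(r ** 2)

    # One 'day' of a price with constant volatility sigma, so the
    # integrated variance equals sigma**2; M is the sampling frequency.
    sigma = 0.2
    for M in (10, 100, 10000):
        returns = rng.normal(0, sigma * np.sqrt(1.0 / M), M)
        prices = 100 * np.exp(np.cumsum(np.insert(returns, 0, 0.0)))
        print(M, realized_variance(prices))  # should hover around 0.04
    ```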

  20. Integrating Variances into an Analytical Database

    Science.gov (United States)

    Sanchez, Carlos

    2010-01-01

    For this project, I enrolled in numerous SATERN courses that taught the basics of database programming. These include: Basic Access 2007 Forms, Introduction to Database Systems, Overview of Database Design, and others. My main job was to create an analytical database that can handle many stored forms and make it easy to interpret and organize. Additionally, I helped improve an existing database and populate it with information. These databases were designed to be used with data from Safety Variances and DCR forms. The research consisted of analyzing the database and comparing the data to find out which entries were repeated the most. If an entry happened to be repeated several times in the database, that would mean that the rule or requirement targeted by that variance has been bypassed many times already and so the requirement may not really be needed, but rather should be changed to allow the variance's conditions permanently. This project did not only restrict itself to the design and development of the database system, but also worked on exporting the data from the database to a different format (e.g. Excel or Word) so it could be analyzed in a simpler fashion. Thanks to the change in format, the data was organized in a spreadsheet that made it possible to sort the data by categories or types and helped speed up searches. Once my work with the database was done, the records of variances could be arranged so that they were displayed in numerical order, or one could search for a specific document targeted by the variances and restrict the search to only include variances that modified a specific requirement. A great part that contributed to my learning was SATERN, NASA's resource for education. Thanks to the SATERN online courses I took over the summer, I was able to learn many new things about computers and databases and also go more in depth into topics I already knew about.

  1. Giardia duodenalis: Number and Fluorescence Reduction Caused by the Advanced Oxidation Process (H2O2/UV)

    Science.gov (United States)

    Guimarães, José Roberto; Franco, Regina Maura Bueno; Guadagnini, Regiane Aparecida; dos Santos, Luciana Urbano

    2014-01-01

    This study evaluated the effect of peroxidation assisted by ultraviolet radiation (H2O2/UV), which is an advanced oxidation process (AOP), on Giardia duodenalis cysts. The cysts were inoculated in synthetic and surface water using a concentration of 12 g H2O2 L−1 and a UV dose (λ = 254 nm) of 5,480 mJcm−2. The aqueous solutions were concentrated using membrane filtration, and the organisms were observed using a direct immunofluorescence assay (IFA). The AOP was effective in reducing the number of G. duodenalis cysts in synthetic and surface water and was most effective in reducing the fluorescence of the cyst walls that were present in the surface water. The AOP showed a higher deleterious potential for G. duodenalis cysts than either peroxidation (H2O2) or photolysis (UV) processes alone. PMID:27379301

  2. Simulated flight acoustic investigation of treated ejector effectiveness on advanced mechanical suppressors for high velocity jet noise reduction

    Science.gov (United States)

    Brausch, J. F.; Motsinger, R. E.; Hoerst, D. J.

    1986-01-01

    Ten scale-model nozzles were tested in an anechoic free-jet facility to evaluate the acoustic characteristics of a mechanically suppressed inverted-velocity-profile coannular nozzle with an acoustically treated ejector system. The nozzle system used was developed from aerodynamic flow lines evolved in a previous contract, defined to incorporate the restraints imposed by the aerodynamic performance requirements of an Advanced Supersonic Technology/Variable Cycle Engine system through all its mission phases. Acoustic data for 188 test points were obtained, 87 under static and 101 under simulated flight conditions. The tests investigated variables of hardwall ejector application to a coannular nozzle with a 20-chute outer annular suppressor, ejector axial positioning, treatment application to ejector and plug surfaces, and treatment design. Laser velocimeter, shadowgraph photograph, aerodynamic static pressure, and temperature measurements were acquired on select models to yield diagnostic information regarding the flow field and aerodynamic performance characteristics of the nozzles.

  3. Summary Report of Advanced Hydropower Innovations and Cost Reduction Workshop at Arlington, VA, November 5 & 6, 2015

    Energy Technology Data Exchange (ETDEWEB)

    O' Connor, Patrick [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Rugani, Kelsey [Kearns & West, Inc., San Francisco, CA (United States); West, Anna [Kearns & West, Inc., San Francisco, CA (United States)

    2016-03-01

    On behalf of the U.S. Department of Energy (DOE) Wind and Water Power Technology Office (WWPTO), Oak Ridge National Laboratory (ORNL) hosted a day-and-a-half-long workshop on November 5 and 6, 2015 in the Washington, D.C. metro area to discuss cost reduction opportunities in the development of hydropower projects. The workshop had a further targeted focus on the costs of small, low-head facilities at both non-powered dams (NPDs) and along undeveloped stream reaches (also known as New Stream-Reach Development or "NSD"). Workshop participants included a cross-section of seasoned experts, including project owners and developers, engineering and construction experts, conventional and next-generation equipment manufacturers, and others, to identify the most promising ways to reduce costs and achieve improvements for hydropower projects.

  4. Managing product inherent variance during treatment

    NARCIS (Netherlands)

    Verdenius, F.

    1996-01-01

    The natural variance of agricultural product parameters complicates recipe planning for product treatment, i.e. the process of transforming a product batch from its initial state to a prespecified final state. For a specific product P, recipes are currently composed by human experts on the basis of

  5. The Variance of Language in Different Contexts

    Institute of Scientific and Technical Information of China (English)

    申一宁

    2012-01-01

    Language can be quite different (here referring to meaning) in different contexts. There are three categories of context: the culture, the situation and the co-text. In this article, we analyze the variance of language in each of these three aspects. This article is written for the purpose of helping people better understand the meaning of a language in specific contexts.

  6. Formative Use of Intuitive Analysis of Variance

    Science.gov (United States)

    Trumpower, David L.

    2013-01-01

    Students' informal inferential reasoning (IIR) is often inconsistent with the normative logic underlying formal statistical methods such as Analysis of Variance (ANOVA), even after instruction. In two experiments reported here, students' IIR was assessed using an intuitive ANOVA task at the beginning and end of a statistics course. In…

  7. 40 CFR 142.43 - Disposition of a variance request.

    Science.gov (United States)

    2010-07-01

    ... during the period of variance shall specify interim treatment techniques, methods and equipment, and... the specified treatment technique for which the variance was granted is necessary to protect...

  8. Advanced-warning system risk-reduction experiments: the Multispectral Measurements Program (MSMP) and the Balloon Altitude Mosaic Measurements (BAMM)

    Science.gov (United States)

    Hasegawa, Ken R.

    2000-12-01

    MSMP and BAMM were commissioned by the Air Force Space Division (AFSD) in the late seventies to generate data in support of the Advanced Warning System (AWS), a development activity to replace the space-based surveillance satellites of the Defense Support Program (DSP). These programs were carried out by the Air Force Geophysics Laboratory with planning and mentoring by Irving Spiro of The Aerospace Corporation, acting on behalf of the program managers, 1st Lt. Todd Frantz, 1st Lt. Gordon Frantom, and 1st Lt. Ken Hasegawa of the technology program office at AFSD. The motivation of MSMP was the need for characterizing the exhaust plumes of the thrusters aboard post-boost vehicles, a primary target for the infrared sensors of the proposed AWS system. To that end, the experiments consisted of a series of Aries rocket launches from White Sands Missile Range in which dual payloads were carried aloft and separately deployed at altitudes above 100 km. One module contained an ensemble of sensors spanning the spectrum from the vacuum ultraviolet to the long wave infrared, all slaved to an rf tracker locked onto a beacon on the target module. The target was a small pressure-fed liquid-propellant rocket engine, a modified Atlas vernier, programmed for a series of maneuvers in the vicinity of the instrument module. As part of this program, diagnostic measurements of the target engine exhaust were made at Rocketdyne, and shock tube experiments on excitation processes were carried out by staff members of Calspan.

  9. Advanced metal artifact reduction MRI of metal-on-metal hip resurfacing arthroplasty implants: compressed sensing acceleration enables the time-neutral use of SEMAC

    Energy Technology Data Exchange (ETDEWEB)

    Fritz, Jan; Thawait, Gaurav K. [Johns Hopkins University School of Medicine, Russell H. Morgan Department of Radiology and Radiological Science, Section of Musculoskeletal Radiology, Baltimore, MD (United States); Fritz, Benjamin [University of Freiburg, Department of Radiology, Freiburg im Breisgau (Germany); Raithel, Esther; Nittka, Mathias [Siemens Healthcare GmbH, Erlangen (Germany); Gilson, Wesley D. [Siemens Healthcare USA, Inc., Baltimore, MD (United States); Mont, Michael A. [Cleveland Clinic Foundation, Department of Orthopedic Surgery, Cleveland, OH (United States)

    2016-10-15

    Compressed sensing (CS) acceleration has been theorized for slice encoding for metal artifact correction (SEMAC), but has not been shown to be feasible. Therefore, we tested the hypothesis that CS-SEMAC is feasible for MRI of metal-on-metal hip resurfacing implants. Following prospective institutional review board approval, 22 subjects with metal-on-metal hip resurfacing implants underwent 1.5 T MRI. We compared CS-SEMAC prototype, high-bandwidth TSE, and SEMAC sequences with acquisition times of 4-5, 4-5 and 10-12 min, respectively. Outcome measures included bone-implant interfaces, image quality, periprosthetic structures, artifact size, and signal- and contrast-to-noise ratios (SNR and CNR). Using Friedman, repeated-measures analysis of variance, and Cohen's weighted kappa tests, Bonferroni-corrected p-values of 0.005 and less were considered statistically significant. There was no statistical difference in outcome measures between SEMAC and CS-SEMAC images. Visibility of implant-bone interfaces and pseudocapsule as well as fat suppression and metal reduction were "adequate" to "good" on CS-SEMAC and "non-diagnostic" to "adequate" on high-BW TSE (p < 0.001, respectively). SEMAC and CS-SEMAC showed mild blur and ripple artifacts. The metal artifact size was 63 % larger for high-BW TSE as compared to SEMAC and CS-SEMAC (p < 0.0001, respectively). CNRs were sufficiently high and statistically similar, with the exception of CNR of fluid and muscle and CNR of fluid and tendon, which were higher on intermediate-weighted high-BW TSE (p < 0.005, respectively). Compressed sensing acceleration enables the time-neutral use of SEMAC for MRI of metal-on-metal hip resurfacing implants when compared to high-BW TSE, with image quality similar to conventional SEMAC.

  10. NiCo2O4/N-doped graphene as an advanced electrocatalyst for oxygen reduction reaction

    Science.gov (United States)

    Zhang, Hui; Li, Huiyong; Wang, Haiyan; He, Kejian; Wang, Shuangyin; Tang, Yougen; Chen, Jiajie

    2015-04-01

    Developing low-cost catalysts for the high-performance oxygen reduction reaction (ORR) is highly desirable. Herein, a NiCo2O4/N-doped reduced graphene oxide (NiCo2O4/N-rGO) hybrid is proposed as a high-performance catalyst for ORR for the first time. The well-formed NiCo2O4/N-rGO hybrid is studied by cyclic voltammetry (CV) curves and linear-sweep voltammetry (LSV) performed on the rotating-ring-disk-electrode (RDE), in comparison with N-rGO-free NiCo2O4 and the bare N-rGO. Due to a synergistic effect, the NiCo2O4/N-rGO hybrid exhibits significantly improved catalytic performance with an onset potential of -0.12 V, and mainly favors a direct four-electron pathway in the ORR process, close to the behavior of commercial carbon-supported Pt. The benefits of N-incorporation are also investigated by comparing NiCo2O4/N-rGO with NiCo2O4/rGO: higher cathodic currents, a much more positive half-wave potential and a larger electron transfer number are observed for the N-doped hybrid, which should be ascribed to the new highly efficient active sites created by N incorporation into graphene. The NiCo2O4/N-rGO hybrid could be used as a promising catalyst for high-power metal/air batteries.

  11. Realized Variance and Market Microstructure Noise

    DEFF Research Database (Denmark)

    Hansen, Peter R.; Lunde, Asger

    2006-01-01

    We study market microstructure noise in high-frequency data and analyze its implications for the realized variance (RV) under a general specification for the noise. We show that kernel-based estimators can unearth important characteristics of market microstructure noise and that a simple kernel-based estimator dominates the RV for the estimation of integrated variance (IV). An empirical analysis of the Dow Jones Industrial Average stocks reveals that market microstructure noise is time-dependent and correlated with increments in the efficient price. This has important implications for volatility estimation based on high-frequency data. Finally, we apply cointegration techniques to decompose transaction prices and bid-ask quotes into an estimate of the efficient price and noise. This framework enables us to study the dynamic effects on transaction prices and quotes caused by changes in the efficient price.

  12. High-dimensional regression with unknown variance

    CERN Document Server

    Giraud, Christophe; Verzelen, Nicolas

    2011-01-01

    We review recent results for high-dimensional sparse linear regression in the practical case of unknown variance. Different sparsity settings are covered, including coordinate-sparsity, group-sparsity and variation-sparsity. The emphasis is put on non-asymptotic analyses and feasible procedures. In addition, a small numerical study compares the practical performance of three schemes for tuning the Lasso estimator, and some references are collected for more general models, including multivariate regression and nonparametric regression.

  13. The Theory of Variances in Equilibrium Reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Zakharov, Leonid E.; Lewandowski, Jerome; Foley, Elizabeth L.; Levinton, Fred M.; Yuh, Howard Y.; Drozdov, Vladimir; McDonald, Darren

    2008-01-14

    The theory of variances of equilibrium reconstruction is presented. It complements existing practices with information regarding what kinds of plasma profiles can be reconstructed, how accurately, and what remains beyond the abilities of diagnostic systems. The σ-curves introduced by the present theory give a quantitative assessment of the effectiveness of diagnostic systems in constraining equilibrium reconstructions. The theory also suggests a method for aligning the accuracy of measurements of different physical nature.

  14. Fundamentals of exploratory analysis of variance

    CERN Document Server

    Hoaglin, David C; Tukey, John W

    2009-01-01

    The analysis of variance is presented as an exploratory component of data analysis, while retaining the customary least squares fitting methods. Balanced data layouts are used to reveal key ideas and techniques for exploration. The approach emphasizes both the individual observations and the separate parts that the analysis produces. Most chapters include exercises, and the appendices give selected percentage points of the Gaussian, t, F, chi-squared and studentized range distributions.

  15. Fractional constant elasticity of variance model

    OpenAIRE

    Ngai Hang Chan; Chi Tim Ng

    2007-01-01

    This paper develops a European option pricing formula for fractional market models. Although there exist option pricing results for a fractional Black-Scholes model, they are established without accounting for stochastic volatility. In this paper, a fractional version of the Constant Elasticity of Variance (CEV) model is developed. A European option pricing formula similar to that of the classical CEV model is obtained, and a volatility skew pattern is revealed.

  16. Applications of non-parametric statistics and analysis of variance on sample variances

    Science.gov (United States)

    Myers, R. H.

    1981-01-01

    Nonparametric methods that are available for NASA-type applications are discussed. An attempt is made here to survey what can be used, to offer recommendations as to when each would be applicable, and to compare the methods, when possible, with the usual normal-theory procedures that are available for the Gaussian analog. It is important here to point out the hypotheses that are being tested, the assumptions that are being made, and the limitations of the nonparametric procedures. The appropriateness of performing analysis of variance on sample variances is also discussed and studied. This procedure is followed in several NASA simulation projects. On the surface this would appear to be a reasonably sound procedure. However, the difficulties involved center around the normality problem and the basic homogeneous-variance assumption that is made in usual analysis of variance problems. These difficulties are discussed, and guidelines are given for using the methods.

  17. The Parabolic variance (PVAR), a wavelet variance based on least-square fit

    CERN Document Server

    Vernotte, F; Bourgeois, P -Y; Rubiola, E

    2015-01-01

    The Allan variance (AVAR) is one option among the wavelet variances. Although a milestone in the analysis of frequency fluctuations and in the long-term stability of clocks, and certainly the most widely used option, AVAR is not suitable when fast noise processes show up, chiefly because of its poor rejection of white phase noise. The modified Allan variance (MVAR) features high resolution in the presence of white PM noise, but it is poorer for slow phenomena because the wavelet spans over a 50% longer time. This article introduces the Parabolic Variance (PVAR), a wavelet variance similar to the Allan variance, based on the Linear Regression (LR) of phase data. The PVAR relates to the Omega frequency counter, which is the topic of a companion article [the reference to the article, or to the ArXiv manuscript, will be provided later]. The PVAR wavelet spans over 2 tau, the same as the AVAR wavelet. After setting the theoretical framework, we analyze the degrees of freedom and the detection of weak noise processes in...

  18. Visual SLAM Using Variance Grid Maps

    Science.gov (United States)

    Howard, Andrew B.; Marks, Tim K.

    2011-01-01

    An algorithm denoted Gamma-SLAM performs further processing, in real time, of preprocessed digitized images acquired by a stereoscopic pair of electronic cameras aboard an off-road robotic ground vehicle to build accurate maps of the terrain and determine the location of the vehicle with respect to the maps. Part of the name of the algorithm reflects the fact that the process of building the maps and determining the location with respect to them is denoted simultaneous localization and mapping (SLAM). Most prior real-time SLAM algorithms have been limited in applicability to (1) systems equipped with scanning laser range finders as the primary sensors in (2) indoor environments (or relatively simply structured outdoor environments). The few prior vision-based SLAM algorithms have been feature-based and not suitable for real-time applications and, hence, not suitable for autonomous navigation on irregularly structured terrain. The Gamma-SLAM algorithm incorporates two key innovations: Visual odometry (in contradistinction to wheel odometry) is used to estimate the motion of the vehicle. An elevation variance map (in contradistinction to an occupancy or an elevation map) is used to represent the terrain. The Gamma-SLAM algorithm makes use of a Rao-Blackwellized particle filter (RBPF) from Bayesian estimation theory for maintaining a distribution over poses and maps. The core idea of the RBPF approach is that the SLAM problem can be factored into two parts: (1) finding the distribution over robot trajectories, and (2) finding the map conditioned on any given trajectory. The factorization involves the use of a particle filter in which each particle encodes both a possible trajectory and a map conditioned on that trajectory. The base estimate of the trajectory is derived from visual odometry, and the map conditioned on that trajectory is a Cartesian grid of elevation variances. In comparison with traditional occupancy or elevation grid maps, the grid elevation variance

  19. The value of travel time variance

    OpenAIRE

    Fosgerau, Mogens; Engelson, Leonid

    2010-01-01

    This paper considers the value of travel time variability under scheduling preferences that are defined in terms of linearly time-varying utility rates associated with being at the origin and at the destination. The main result is a simple expression for the value of travel time variability that does not depend on the shape of the travel time distribution. The related measure of travel time variability is the variance of travel time. These conclusions apply equally to travellers who can freely choose departure time and to travellers who use a scheduled service with fixed headway.

  20. A relation between information entropy and variance

    CERN Document Server

    Pandey, Biswajit

    2016-01-01

    We obtain an analytic relation between the information entropy and the variance of a distribution in the regime of small fluctuations. We use a set of Monte Carlo simulations of different homogeneous and inhomogeneous distributions to verify the relation and also test it in a set of cosmological N-body simulations. We find that the relation is in excellent agreement with the simulations and is independent of number density and the nature of the distributions. The relation would help us to relate entropy to other conventional measures and widen its scope.

  1. Effectiveness of Losartan-Loaded Hyaluronic Acid (HA) Micelles for the Reduction of Advanced Hepatic Fibrosis in C3H/HeN Mice Model.

    Science.gov (United States)

    Thomas, Reju George; Moon, Myeong Ju; Kim, Jo Heon; Lee, Jae Hyuk; Jeong, Yong Yeon

    2015-01-01

    Advanced hepatic fibrosis therapy using drug-delivering nanoparticles is a relatively unexplored area. Angiotensin type 1 (AT1) receptor blockers such as losartan can be delivered to hepatic stellate cells (HSC), blocking their activation and thereby reducing fibrosis progression in the liver. In our study, we analyzed the possibility of utilizing drug-loaded vehicles such as hyaluronic acid (HA) micelles carrying losartan to attenuate HSC activation. Losartan, which exhibits inherent lipophilicity, was loaded into the hydrophobic core of HA micelles with a 19.5% drug loading efficiency. An advanced liver fibrosis model was developed using C3H/HeN mice subjected to 20 weeks of prolonged TAA/ethanol weight-adapted treatment. The cytocompatibility and cell uptake profile of losartan-HA micelles were studied in murine fibroblast cells (NIH3T3), human hepatic stellate cells (hHSC) and FL83B cells (a hepatocyte cell line). The ability of these nanoparticles to attenuate HSC activation was studied in activated HSC based on alpha smooth muscle actin (α-sma) expression. Mice treated with oral losartan or losartan-HA micelles were analyzed for serum enzyme levels (ALT/AST, CK and LDH) and collagen deposition (hydroxyproline levels) in the liver. The accumulation of HA micelles was observed in fibrotic livers, which suggests increased delivery of losartan compared to normal livers and specific uptake by HSC. Active reduction of α-sma was observed in hHSC and in the liver sections of losartan-HA micelle-treated mice. The serum enzyme levels and collagen deposition of losartan-HA micelle-treated mice were reduced significantly compared to the oral losartan group. Losartan-HA micelles demonstrated significant attenuation of hepatic fibrosis via an HSC-targeting mechanism in our in vitro and in vivo studies. These nanoparticles can be considered as an alternative therapy for liver fibrosis.

  2. A Mean-variance Problem in the Constant Elasticity of Variance (CEV) Model

    Institute of Scientific and Technical Information of China (English)

    Hou Ying-li; Liu Guo-xin; Jiang Chun-lan

    2015-01-01

    In this paper, we focus on a constant elasticity of variance (CEV) model and want to find its optimal strategies for a mean-variance problem under two constrained controls: reinsurance/new business and investment (no-shorting). First, a Lagrange multiplier is introduced to simplify the mean-variance problem and the corresponding Hamilton-Jacobi-Bellman (HJB) equation is established. Via a power transformation technique and variable change method, the optimal strategies with the Lagrange multiplier are obtained. Finally, based on the Lagrange duality theorem, the optimal strategies and optimal value for the original problem (i.e., the efficient strategies and efficient frontier) are derived explicitly.

  3. Mindfulness-Based Stress Reduction in Advanced Nursing Practice: A Nonpharmacologic Approach to Health Promotion, Chronic Disease Management, and Symptom Control.

    Science.gov (United States)

    Williams, Hants; Simmons, Leigh Ann; Tanabe, Paula

    2015-09-01

    The aim of this article is to discuss how advanced practice nurses (APNs) can incorporate mindfulness-based stress reduction (MBSR) as a nonpharmacologic clinical tool in their practice. Over the last 30 years, patients and providers have increasingly used complementary and holistic therapies for the nonpharmacologic management of acute and chronic diseases. Mindfulness-based interventions, specifically MBSR, have been tested and applied within a variety of patient populations. There is strong evidence to support that the use of MBSR can improve a range of biological and psychological outcomes in a variety of medical illnesses, including acute and chronic pain, hypertension, and disease prevention. This article will review the many ways APNs can incorporate MBSR approaches for health promotion and disease/symptom management into their practice. We conclude with a discussion of how nurses can obtain training and certification in MBSR. Given the significant and growing literature supporting the use of MBSR in the prevention and treatment of chronic disease, increased attention on how APNs can incorporate MBSR into clinical practice is necessary.

  4. The value of travel time variance

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; Engelson, Leonid

    2011-01-01

    This paper considers the value of travel time variability under scheduling preferences that are defined in terms of linearly time-varying utility rates associated with being at the origin and at the destination. The main result is a simple expression for the value of travel time variability that does not depend on the shape of the travel time distribution. The related measure of travel time variability is the variance of travel time. These conclusions apply equally to travellers who can freely choose departure time and to travellers who use a scheduled service with fixed headway. Depending on parameters, travellers may be risk averse or risk seeking and the value of travel time may increase or decrease in the mean travel time.

  5. Power Estimation in Multivariate Analysis of Variance

    Directory of Open Access Journals (Sweden)

    Jean François Allaire

    2007-09-01

    Power is often overlooked in designing multivariate studies for the simple reason that it is believed to be too complicated. In this paper, it is shown that power estimation in multivariate analysis of variance (MANOVA) can be approximated using an F distribution for the three popular statistics (Hotelling-Lawley trace, Pillai-Bartlett trace, Wilks's likelihood ratio). Consequently, the same procedure as in any statistical test can be used: computation of the critical F value, computation of the noncentrality parameter (as a function of the effect size), and finally estimation of power using a noncentral F distribution. Various numerical examples are provided which help to understand and to apply the method. Problems related to post hoc power estimation are discussed.
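
    The procedure described above can be sketched in a few lines. The noncentrality convention used below (lambda proportional to the effect size) is one common choice and an assumption on our part, not necessarily the paper's exact formula.

    ```python
    from scipy.stats import f, ncf

    def manova_power(f2, df1, df2, alpha=0.05):
        # Critical value under the null F approximation, then power
        # from the noncentral F. The convention lambda =
        # f2 * (df1 + df2 + 1) is an illustrative assumption.
        f_crit = f.ppf(1 - alpha, df1, df2)      # critical F value
        nc = f2 * (df1 + df2 + 1)                # noncentrality parameter
        return 1 - ncf.cdf(f_crit, df1, df2, nc)  # P(reject | H1)

    print(manova_power(0.15, df1=6, df2=40))
    ```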

  6. The Parabolic Variance (PVAR): A Wavelet Variance Based on the Least-Square Fit.

    Science.gov (United States)

    Vernotte, Francois; Lenczner, Michel; Bourgeois, Pierre-Yves; Rubiola, Enrico

    2016-04-01

    This paper introduces the parabolic variance (PVAR), a wavelet variance similar to the Allan variance (AVAR), based on the linear regression (LR) of phase data. The companion article arXiv:1506.05009 [physics.ins-det] details the Ω frequency counter, which implements the LR estimate. The PVAR combines the advantages of AVAR and modified AVAR (MVAR). PVAR is good for long-term analysis because the wavelet spans over 2τ, the same as the AVAR wavelet, and good for short-term analysis because the response to white and flicker PM is 1/τ³ and 1/τ², the same as the MVAR. After setting the theoretical framework, we study the degrees of freedom and the confidence interval for the most common noise types. Then, we focus on the detection of a weak noise process at the transition, or corner, where a faster process rolls off. This new perspective raises the question of which variance detects the weak process with the shortest data record. Our simulations show that PVAR is a fortunate tradeoff. PVAR is superior to MVAR in all cases, exhibits the best ability to divide between fast noise phenomena (up to flicker FM), and is almost as good as AVAR for the detection of random walk and drift.

  7. Characterizing nonconstant instrumental variance in emerging miniaturized analytical techniques.

    Science.gov (United States)

    Noblitt, Scott D; Berg, Kathleen E; Cate, David M; Henry, Charles S

    2016-04-01

    Measurement variance is a crucial aspect of quantitative chemical analysis. Variance directly affects important analytical figures of merit, including detection limit, quantitation limit, and confidence intervals. Most reported analyses for emerging analytical techniques implicitly assume constant variance (homoskedasticity) by using unweighted regression calibrations. Despite the assumption of constant variance, it is known that most instruments exhibit heteroskedasticity, where variance changes with signal intensity. Ignoring nonconstant variance results in suboptimal calibrations, invalid uncertainty estimates, and incorrect detection limits. Three techniques where homoskedasticity is often assumed were covered in this work to evaluate whether heteroskedasticity had a significant quantitative impact: naked-eye, distance-based detection using paper-based analytical devices (PADs); cathodic stripping voltammetry (CSV) with disposable carbon-ink electrode devices; and microchip electrophoresis (MCE) with conductivity detection. Despite these techniques representing a wide range of chemistries and precision, heteroskedastic behavior was confirmed for each. The general variance forms were analyzed, and recommendations for accounting for nonconstant variance are discussed. Monte Carlo simulations of instrument responses were performed to quantify the benefits of weighted regression, and the sensitivity to uncertainty in the variance function was tested. Results show that heteroskedasticity should be considered during development of new techniques; even moderate uncertainty (30%) in the variance function still results in weighted regression outperforming unweighted regressions. We recommend utilizing the power model of variance because it is easy to apply, requires little additional experimentation, and produces higher-precision results and more reliable uncertainty estimates than assuming homoskedasticity.
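
    A minimal sketch of weighted calibration under the power model of variance recommended above follows; the variance-function parameters and data are illustrative, and in practice the variance function would be estimated from replicate measurements.

    ```python
    import numpy as np

    # Power model of variance: var(y) = a * signal**b (a, b assumed).
    rng = np.random.default_rng(5)
    x = np.linspace(1, 10, 30)                    # analyte concentration
    a, b = 0.01, 1.5
    y = 2.0 * x + rng.normal(0, np.sqrt(a * (2.0 * x) ** b))

    w = 1.0 / (a * (2.0 * x) ** b)                # weights = 1 / variance
    X = np.column_stack([np.ones_like(x), x])
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)  # weighted least squares
    print("intercept, slope:", beta)
    ```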

  8. Measuring past changes in ENSO variance using Mg/Ca measurements on individual planktic foraminifera

    Science.gov (United States)

    Marchitto, T. M.; Grist, H. R.; van Geen, A.

    2013-12-01

    Previous work in Soledad Basin, located off Baja California Sur in the eastern subtropical Pacific, supports a La Niña-like mean-state response to enhanced radiative forcing at both orbital and millennial (solar) timescales during the Holocene. Mg/Ca measurements on the planktic foraminifer Globigerina bulloides indicate cooling when insolation is higher, consistent with an 'ocean dynamical thermostat' response that shoals the thermocline and cools the surface in the eastern tropical Pacific. Some, but not all, numerical models simulate reduced ENSO variance (less frequent and/or less intense events) when the Pacific is driven into a La Niña-like mean state by radiative forcing. Hypothetically, the question of ENSO variance can be examined by measuring individual planktic foraminiferal tests from within a sample interval. Koutavas et al. (2006) used δ18O on single specimens of Globigerinoides ruber from the eastern equatorial Pacific to demonstrate a 50% reduction in variance at ~6 ka compared to ~2 ka, consistent with the sense of the model predictions at the orbital scale. Here we adapt this approach to Mg/Ca and apply it to the millennial-scale question. We present Mg/Ca measured on single specimens of G. bulloides (cold season) and G. ruber (warm season) from three time slices in Soledad Basin: the 20th century, the warm interval (and solar low) at 9.3 ka, and the cold interval (and solar high) at 9.8 ka. Each interval is uniformly sampled over a ~100-yr (~10-cm or more) window to ensure that our variance estimate is not biased by decadal-scale stochastic variability. Theoretically we can distinguish between changing ENSO variability and changing seasonality: a reduction in ENSO variance would result in narrowing of both the G. bulloides and G. ruber temperature distributions without necessarily changing the distance between their two medians, while a reduction in seasonality would cause the two species' distributions to move closer together.

  9. Variance Estimation Using Refitted Cross-validation in Ultrahigh Dimensional Regression

    CERN Document Server

    Fan, Jianqing; Hao, Ning

    2010-01-01

    Variance estimation is a fundamental problem in statistical modeling. In ultrahigh dimensional linear regressions where the dimensionality is much larger than the sample size, traditional variance estimation techniques are not applicable. Recent advances in variable selection for ultrahigh dimensional linear regressions make this problem accessible. One of the major problems in ultrahigh dimensional regression is the high spurious correlation between the unobserved realized noise and some of the predictors. As a result, the realized noises are actually predicted when extra irrelevant variables are selected, leading to a serious underestimate of the noise level. In this paper, we propose a two-stage refitted procedure via a data splitting technique, called refitted cross-validation (RCV), to attenuate the influence of irrelevant variables with high spurious correlations. Our asymptotic results show that the resulting procedure performs as well as the oracle estimator, which knows in advance the mean regression function...
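
    The two-stage RCV idea can be sketched as follows, using the Lasso as the (assumed) selection step; the penalty level and simulated data are illustrative choices, not the paper's.

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    def rcv_variance(X, y, alpha=0.1, seed=0):
        # Select variables on one half (Lasso here, an illustrative
        # choice), refit OLS on the other half, estimate sigma^2 from
        # the refitted residuals, and average over the swapped halves.
        idx = np.random.default_rng(seed).permutation(len(y))
        a, b = idx[: len(y) // 2], idx[len(y) // 2:]
        est = []
        for sel, fit in ((a, b), (b, a)):
            support = np.flatnonzero(
                Lasso(alpha=alpha).fit(X[sel], y[sel]).coef_)
            Xf = X[fit][:, support]
            beta, *_ = np.linalg.lstsq(Xf, y[fit], rcond=None)
            resid = y[fit] - Xf @ beta
            est.append(resid @ resid / (len(fit) - len(support)))
        return float(np.mean(est))

    rng = np.random.default_rng(7)
    X = rng.normal(size=(200, 500))
    y = X[:, :3] @ np.full(3, 2.0) + rng.normal(0, 1.0, 200)
    print(rcv_variance(X, y))   # should be near the true value 1.0
    ```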

  10. RR-Interval variance of electrocardiogram for atrial fibrillation detection

    Science.gov (United States)

    Nuryani, N.; Solikhah, M.; Nugoho, A. S.; Afdala, A.; Anzihory, E.

    2016-11-01

    Atrial fibrillation is a serious heart problem originating in the upper chambers of the heart. A common indication of atrial fibrillation is irregularity of the R-peak-to-R-peak time interval, called the RR interval for short. The irregularity can be represented using the variance, or spread, of the RR intervals. This article presents a system to detect atrial fibrillation using variances. Using clinical data from patients with atrial fibrillation attacks, it is shown that the variance of electrocardiographic RR intervals is higher during atrial fibrillation than during normal rhythm. Utilizing a simple detection technique and the variances of RR intervals, we find good atrial fibrillation detection performance.
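
    As a toy illustration of the detection idea above, the sketch below flags segments where the rolling variance of RR intervals exceeds a threshold; the window length, threshold, and simulated rhythms are all hypothetical.

    ```python
    import numpy as np

    def af_flags(rr_intervals, window=20, threshold=0.01):
        # Flag samples wherever the rolling variance of RR intervals
        # (in seconds) exceeds a threshold; a real detector would be
        # tuned on annotated ECG data.
        rr = np.asarray(rr_intervals, dtype=float)
        flags = np.zeros(len(rr), dtype=bool)
        for i in range(window, len(rr) + 1):
            if np.var(rr[i - window:i], ddof=1) > threshold:
                flags[i - window:i] = True
        return flags

    rng = np.random.default_rng(8)
    normal = rng.normal(0.80, 0.02, 100)  # regular sinus rhythm, ~75 bpm
    af = rng.normal(0.70, 0.15, 100)      # irregular RR intervals
    print(af_flags(np.concatenate([normal, af])).mean())
    ```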

  11. Multiperiod Mean-Variance Portfolio Optimization via Market Cloning

    Energy Technology Data Exchange (ETDEWEB)

    Ankirchner, Stefan, E-mail: ankirchner@hcm.uni-bonn.de [Rheinische Friedrich-Wilhelms-Universitaet Bonn, Institut fuer Angewandte Mathematik, Hausdorff Center for Mathematics (Germany); Dermoune, Azzouz, E-mail: Azzouz.Dermoune@math.univ-lille1.fr [Universite des Sciences et Technologies de Lille, Laboratoire Paul Painleve UMR CNRS 8524 (France)

    2011-08-15

    The problem of finding the mean-variance optimal portfolio in a multiperiod model cannot be solved directly by means of dynamic programming. In order to find a solution, we therefore first introduce independent market clones having the same distributional properties as the original market, and we replace the portfolio mean and variance by their empirical counterparts. We then use dynamic programming to derive portfolios maximizing a weighted sum of the empirical mean and variance. By letting the number of market clones converge to infinity, we are able to solve the original mean-variance problem.

  12. 40 CFR 190.11 - Variances for unusual operations.

    Science.gov (United States)

    2010-07-01

    ... PROTECTION PROGRAMS ENVIRONMENTAL RADIATION PROTECTION STANDARDS FOR NUCLEAR POWER OPERATIONS Environmental Standards for the Uranium Fuel Cycle § 190.11 Variances for unusual operations. The standards specified...

  13. Simulations of the Hadamard Variance: Probability Distributions and Confidence Intervals.

    Science.gov (United States)

    Ashby, Neil; Patla, Bijunath

    2016-04-01

    Power-law noise in clocks and oscillators can be simulated by Fourier transforming a modified spectrum of white phase noise. This approach has been applied successfully to simulation of the Allan variance and the modified Allan variance in both overlapping and nonoverlapping forms. When significant frequency drift is present in an oscillator, at large sampling times the Allan variance overestimates the intrinsic noise, while the Hadamard variance is insensitive to frequency drift. The simulation method is extended in this paper to predict the Hadamard variance for the common types of power-law noise. Symmetric real matrices are introduced whose traces (the sums of their eigenvalues) are equal to the Hadamard variances, in overlapping or nonoverlapping forms, as well as for the corresponding forms of the modified Hadamard variance. We show that the standard relations between spectral densities and Hadamard variance are obtained with this method. The matrix eigenvalues determine probability distributions for observing a variance at an arbitrary value of the sampling interval τ, and hence for estimating confidence in the measurements.
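
    For reference, a simple estimate of the Hadamard variance from fractional-frequency data is the mean squared second difference of block-averaged samples divided by six; the sketch below uses an illustrative test signal and shows the insensitivity to linear frequency drift noted above.

    ```python
    import numpy as np

    def hadamard_variance(y, m=1):
        # Average y in blocks of m samples, take second differences of
        # the averaged series, and return the mean square divided by 6.
        # The second difference removes linear frequency drift.
        n = len(y) // m
        ym = np.asarray(y[: n * m]).reshape(n, m).mean(axis=1)
        d2 = ym[2:] - 2.0 * ym[1:-1] + ym[:-2]
        return np.mean(d2 ** 2) / 6.0

    rng = np.random.default_rng(9)
    y = rng.normal(0, 1e-12, 100_000) + 1e-16 * np.arange(100_000)  # WFM + drift
    for m in (1, 10, 100):
        print(m, hadamard_variance(y, m))
    ```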

  14. Network Structure and Biased Variance Estimation in Respondent Driven Sampling.

    Directory of Open Access Journals (Sweden)

    Ashton M Verdery

    This paper explores bias in the estimation of sampling variance in Respondent Driven Sampling (RDS). Prior methodological work on RDS has focused on its problematic assumptions and the biases and inefficiencies of its estimators of the population mean. Nonetheless, researchers have given only slight attention to the topic of estimating sampling variance in RDS, despite the importance of variance estimation for the construction of confidence intervals and hypothesis tests. In this paper, we show that the estimators of RDS sampling variance rely on a critical assumption that the network is First Order Markov (FOM) with respect to the dependent variable of interest. We demonstrate, through intuitive examples, mathematical generalizations, and computational experiments, that current RDS variance estimators will always underestimate the population sampling variance of RDS in empirical networks that do not conform to the FOM assumption. Analysis of 215 observed university and school networks from Facebook and Add Health indicates that the FOM assumption is violated in every empirical network we analyze, and that these violations lead to substantially biased RDS estimators of sampling variance. We propose and test two alternative variance estimators that show some promise for reducing biases, but which also illustrate the limits of estimating sampling variance with only partial information on the underlying population social network.

  15. Innovative Clean Coal Technology (ICCT): 500 MW demonstration of advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NO{sub x}) emissions from coal-fired boilers. Technical progress report, First quarter 1992

    Energy Technology Data Exchange (ETDEWEB)

    1992-12-31

    This quarterly report discusses the technical progress of an Innovative Clean Coal Technology (ICCT) demonstration of advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NO{sub x}) emissions from coal-fired boilers. The project is being conducted at Georgia Power Company's Plant Hammond Unit 4 located near Rome, Georgia. The primary goal of this project is the characterization of the low NO{sub x} combustion equipment through the collection and analysis of long-term emissions data. A target of achieving fifty percent NO{sub x} reduction using combustion modifications has been established for the project. The project provides a stepwise retrofit of an advanced overfire air (AOFA) system followed by low NO{sub x} burners (LNB). During each test phase of the project, diagnostic, performance, long-term, and verification testing will be performed. These tests are used to quantify the NO{sub x} reductions of each technology and evaluate the effects of those reductions on other combustion parameters such as particulate characteristics and boiler efficiency.

  16. Accounting for Variance in Hyperspectral Data Coming from Limitations of the Imaging System

    Science.gov (United States)

    Shurygin, B.; Shestakova, M.; Nikolenko, A.; Badasen, E.; Strakhov, P.

    2016-06-01

    Over the course of the past few years, a number of methods were developed to incorporate hyperspectral imaging specifics into generic data mining techniques traditionally used for hyperspectral data processing. Projection pursuit methods embody the largest class of methods employed for hyperspectral image data reduction; however, they all have certain drawbacks making them either hard to use or inefficient. It has been shown that hyperspectral image (HSI) statistics tend to display "heavy tails" (Manolakis 2003; Theiler 2005), rendering most of the projection pursuit methods hard to use. Taking into consideration the magnitude of the described deviations of observed data PDFs from the normal distribution, it is apparent that a priori knowledge of variance in the data caused by the imaging system must be employed in order to efficiently classify objects on HSIs (Kerr 2015), especially in cases of wildly varying SNR. A number of attempts to describe this variance and develop compensating techniques have been made (Aiazzi 2006); however, new data quality standards are not yet set, and accounting for the detector response is made under a large set of assumptions. The current paper addresses the issue of hyperspectral image classification in the context of different variance sources based on the knowledge of calibration curves (both spectral and radiometric) obtained for each pixel of the imaging camera. A camera produced by ZAO NPO Lepton (Russia) was calibrated and used to obtain a test image. A priori known values of SNR and spectral channel cross-correlation were incorporated into calculating the test statistics used in dimensionality reduction and feature extraction. A modification of the Expectation-Maximization classification algorithm for a non-Gaussian model, as described by Veracini (2010), was further employed. The impact of calibration data coarsening by ignoring non-uniformities on the false alarm rate was studied. The case study shows both regions of scene-dominated variance and sensor-dominated variance, leading

  17. Analysis of variance of designed chromatographic data sets: The analysis of variance-target projection approach.

    Science.gov (United States)

    Marini, Federico; de Beer, Dalene; Joubert, Elizabeth; Walczak, Beata

    2015-07-31

    Direct application of popular approaches, e.g., Principal Component Analysis (PCA) or Partial Least Squares (PLS) to chromatographic data originating from a well-designed experimental study including more than one factor is not recommended. In the case of a well-designed experiment involving two or more factors (crossed or nested), data are usually decomposed into the contributions associated with the studied factors (and with their interactions), and the individual effect matrices are then analyzed using, e.g., PCA, as in the case of ASCA (analysis of variance combined with simultaneous component analysis). As an alternative to the ASCA method, we propose the application of PLS followed by target projection (TP), which allows a one-factor representation of the model for each column in the design dummy matrix. PLS application follows after proper deflation of the experimental matrix, i.e., to what are called the residuals under the reduced ANOVA model. The proposed approach (ANOVA-TP) is well suited for the study of designed chromatographic data of complex samples. It allows testing of statistical significance of the studied effects, 'biomarker' identification, and enables straightforward visualization and accurate estimation of between- and within-class variance. The proposed approach has been successfully applied to a case study aimed at evaluating the effect of pasteurization on the concentrations of various phenolic constituents of rooibos tea of different quality grades and its outcomes have been compared to those of ASCA.

  18. An Analysis of Variance Framework for Matrix Sampling.

    Science.gov (United States)

    Sirotnik, Kenneth

    Significant cost savings can be achieved with the use of matrix sampling in estimating population parameters from psychometric data. The statistical design is intuitively simple, using the framework of the two-way classification analysis of variance technique. For example, the mean and variance are derived from the performance of a certain grade…

  19. Gender Variance and Educational Psychology: Implications for Practice

    Science.gov (United States)

    Yavuz, Carrie

    2016-01-01

    The area of gender variance appears to be more visible in both the media and everyday life. Within educational psychology literature gender variance remains underrepresented. The positioning of educational psychologists working across the three levels of child and family, school or establishment and education authority/council, means that they are…

  20. Productive Failure in Learning the Concept of Variance

    Science.gov (United States)

    Kapur, Manu

    2012-01-01

    In a study with ninth-grade mathematics students on learning the concept of variance, students experienced either direct instruction (DI) or productive failure (PF), wherein they were first asked to generate a quantitative index for variance without any guidance before receiving DI on the concept. Whereas DI students relied only on the canonical…

  1. Time variance effects and measurement error indications for MLS measurements

    DEFF Research Database (Denmark)

    Liu, Jiyuan

    1999-01-01

    Mathematical characteristics of Maximum-Length-Sequences are discussed, and the effects of measuring on slightly time-varying systems with the MLS method are examined with computer simulations in MATLAB. A new coherence measure is suggested for the indication of time-variance effects. The results of the simulations show that the proposed MLS coherence can give an indication of time-variance effects.

  2. Research on variance of subnets in network sampling

    Institute of Scientific and Technical Information of China (English)

    Qi Gao; Xiaoting Li; Feng Pan

    2014-01-01

    In recent research on network sampling, some sampling concepts are misunderstood, and the variance of subnets is not taken into account. We propose correct definitions of the sample and the sampling rate in network sampling, as well as a formula for calculating the variance of subnets. Then, three commonly used sampling strategies are applied to databases of the connecting nearest-neighbor (CNN) model, a random network and a small-world network to explore the variance in network sampling. As proved by the results, snowball sampling obtains the largest variance of subnets, but does well in capturing the network structure. The variances of networks sampled by the hub and random strategies are much smaller. The hub strategy performs well in reflecting the properties of the whole network, while random sampling obtains more accurate results in evaluating the clustering coefficient.

  3. Minimum Variance Portfolios in the Brazilian Equity Market

    Directory of Open Access Journals (Sweden)

    Alexandre Rubesam

    2013-03-01

    We investigate minimum variance portfolios in the Brazilian equity market using different methods to estimate the covariance matrix, from the simple model of using the sample covariance to multivariate GARCH models. We compare the performance of the minimum variance portfolios to those of the following benchmarks: (i) the IBOVESPA equity index, (ii) an equally-weighted portfolio, (iii) the maximum Sharpe ratio portfolio and (iv) the maximum growth portfolio. Our results show that the minimum variance portfolio has higher returns with lower risk compared to the benchmarks. We also consider long-short 130/30 minimum variance portfolios and obtain similar results. The minimum variance portfolio invests in relatively few stocks with low βs measured with respect to the IBOVESPA index, being easily replicable by individual and institutional investors alike.
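
    For the unconstrained case, the minimum variance portfolio has a closed form, sketched below; the covariance matrix is illustrative rather than estimated from Brazilian equity data, and the long-short 130/30 variants discussed above would require a constrained optimizer instead.

    ```python
    import numpy as np

    def min_variance_weights(cov):
        # Fully-invested minimum variance portfolio:
        # w = inv(S) 1 / (1' inv(S) 1). Short positions are allowed
        # here; long-only constraints would need a QP solver.
        ones = np.ones(cov.shape[0])
        w = np.linalg.solve(cov, ones)
        return w / w.sum()

    # Illustrative 3-asset covariance matrix (not real market data).
    S = np.array([[0.040, 0.006, 0.004],
                  [0.006, 0.090, 0.010],
                  [0.004, 0.010, 0.160]])
    w = min_variance_weights(S)
    print(w, "portfolio variance:", w @ S @ w)
    ```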

  4. Confidence Intervals of Variance Functions in Generalized Linear Model

    Institute of Scientific and Technical Information of China (English)

    Yong Zhou; Dao-ji Li

    2006-01-01

    In this paper we introduce an appealing nonparametric method for estimating the variance and conditional variance functions in generalized linear models (GLMs) when the designs are fixed points and random variables, respectively. Bias-corrected confidence bands are proposed for the (conditional) variance based on local linear smoothers. Nonparametric techniques are developed for deriving the bias-corrected confidence intervals of the (conditional) variance. The asymptotic distribution of the proposed estimator is established, and it is shown that the bias-corrected confidence bands asymptotically have the correct coverage properties. A small simulation is performed when the unknown regression parameter is estimated by nonparametric quasi-likelihood. The results are also applicable to nonparametric autoregressive time series models with heteroscedastic conditional variance.

  5. Utility functions predict variance and skewness risk preferences in monkeys.

    Science.gov (United States)

    Genest, Wilfried; Stauffer, William R; Schultz, Wolfram

    2016-07-26

    Utility is the fundamental variable thought to underlie economic choices. In particular, utility functions are believed to reflect preferences toward risk, a key decision variable in many real-life situations. To assess the validity of utility representations, it is therefore important to examine risk preferences. In turn, this approach requires formal definitions of risk. A standard approach is to focus on the variance of reward distributions (variance-risk). In this study, we also examined a form of risk related to the skewness of reward distributions (skewness-risk). Thus, we tested the extent to which empirically derived utility functions predicted preferences for variance-risk and skewness-risk in macaques. The expected utilities calculated for various symmetrical and skewed gambles served to define formally the direction of stochastic dominance between gambles. In direct choices, the animals' preferences followed both second-order (variance) and third-order (skewness) stochastic dominance. Specifically, for gambles with different variance but identical expected values (EVs), the monkeys preferred high-variance gambles at low EVs and low-variance gambles at high EVs; in gambles with different skewness but identical EVs and variances, the animals preferred positively over symmetrical and negatively skewed gambles in a strongly transitive fashion. Thus, the utility functions predicted the animals' preferences for variance-risk and skewness-risk. Using these well-defined forms of risk, this study shows that monkeys' choices conform to the internal reward valuations suggested by their utility functions. This result implies a representation of utility in monkeys that accounts for both variance-risk and skewness-risk preferences.

  6. Estimation of prediction error variances via Monte Carlo sampling methods using different formulations of the prediction error variance

    NARCIS (Netherlands)

    Hickey, J.M.; Veerkamp, R.F.; Calus, M.P.L.; Mulder, H.A.; Thompson, R.

    2009-01-01

    Calculation of the exact prediction error variance covariance matrix is often computationally too demanding, which limits its application in REML algorithms, the calculation of accuracies of estimated breeding values and the control of variance of response to selection. Alternatively Monte Carlo sam

  7. High-fidelity Simulation of Jet Noise from Rectangular Nozzles . [Large Eddy Simulation (LES) Model for Noise Reduction in Advanced Jet Engines and Automobiles

    Science.gov (United States)

    Sinha, Neeraj

    2014-01-01

    This Phase II project validated a state-of-the-art LES model, coupled with a Ffowcs Williams-Hawkings (FW-H) far-field acoustic solver, to support the development of advanced engine concepts. These concepts include innovative flow control strategies to attenuate jet noise emissions. The end-to-end LES/ FW-H noise prediction model was demonstrated and validated by applying it to rectangular nozzle designs with a high aspect ratio. The model also was validated against acoustic and flow-field data from a realistic jet-pylon experiment, thereby significantly advancing the state of the art for LES.

  8. Capturing Option Anomalies with a Variance-Dependent Pricing Kernel

    DEFF Research Database (Denmark)

    Christoffersen, Peter; Heston, Steven; Jacobs, Kris

    2013-01-01

    We develop a GARCH option model with a new pricing kernel allowing for a variance premium. While the pricing kernel is monotonic in the stock return and in variance, its projection onto the stock return is nonmonotonic. A negative variance premium makes it U shaped. We present new semiparametric evidence to confirm this U-shaped relationship between the risk-neutral and physical probability densities. The new pricing kernel substantially improves our ability to reconcile the time-series properties of stock returns with the cross-section of option prices. It provides a unified explanation for the implied volatility puzzle, the overreaction of long-term options to changes in short-term variance, and the fat tails of the risk-neutral return distribution relative to the physical distribution.

  9. Filtered kriging for spatial data with heterogeneous measurement error variances.

    Science.gov (United States)

    Christensen, William F

    2011-09-01

    When predicting values for the measurement-error-free component of an observed spatial process, it is generally assumed that the process has a common measurement error variance. However, it is often the case that each measurement in a spatial data set has a known, site-specific measurement error variance, rendering the observed process nonstationary. We present a simple approach for estimating the semivariogram of the unobservable measurement-error-free process using a bias adjustment of the classical semivariogram formula. We then develop a new kriging predictor that filters the measurement errors. For scenarios where each site's measurement error variance is a function of the process of interest, we recommend an approach that also uses a variance-stabilizing transformation. The properties of the heterogeneous variance measurement-error-filtered kriging (HFK) predictor and variance-stabilized HFK predictor, and the improvement of these approaches over standard measurement-error-filtered kriging are demonstrated using simulation. The approach is illustrated with climate model output from the Hudson Strait area in northern Canada. In the illustration, locations with high or low measurement error variances are appropriately down- or upweighted in the prediction of the underlying process, yielding a realistically smooth picture of the phenomenon of interest.
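
    A minimal sketch of the kind of bias adjustment described is given below, under the standard assumption that independent, site-specific measurement errors add their variances to the expected squared difference between two observations; the function names, the simple distance binning, and the simulated data are illustrative, not the paper's exact estimator.

```python
import numpy as np

def adjusted_semivariogram(coords, z, err_var, bin_edges):
    """Classical semivariogram with a bias adjustment for known,
    site-specific measurement error variances: under independent errors,
    E[(z_i - z_j)^2] = 2*gamma(h) + s_i^2 + s_j^2, hence the subtraction."""
    d, g = [], []
    for i in range(len(z)):
        for j in range(i + 1, len(z)):
            d.append(np.linalg.norm(coords[i] - coords[j]))
            g.append(0.5 * ((z[i] - z[j]) ** 2 - err_var[i] - err_var[j]))
    d, g = np.array(d), np.array(g)
    idx = np.digitize(d, bin_edges)
    return np.array([g[idx == k].mean() for k in range(1, len(bin_edges))])

rng = np.random.default_rng(0)
coords = rng.uniform(0, 10, (200, 2))
err_var = rng.uniform(0.1, 0.5, 200)                  # known per-site variances
z = np.sin(coords[:, 0]) + rng.normal(0, np.sqrt(err_var))
print(adjusted_semivariogram(coords, z, err_var, np.linspace(0, 5, 11)).round(3))
```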

  10. Meta-analysis of ratios of sample variances.

    Science.gov (United States)

    Prendergast, Luke A; Staudte, Robert G

    2016-05-20

    When conducting a meta-analysis of standardized mean differences (SMDs), it is common to use Cohen's d, or its variants, that require equal variances in the two arms of each study. While interpretation of these SMDs is simple, this alone should not be used as a justification for assuming equal variances. Until now, researchers have either used an F-test for each individual study or perhaps even conveniently ignored such tools altogether. In this paper, we propose a meta-analysis of ratios of sample variances to assess whether the equality of variances assumptions is justified prior to a meta-analysis of SMDs. Quantile-quantile plots, an omnibus test for equal variances or an overall meta-estimate of the ratio of variances can all be used to formally justify the use of less common methods when evidence of unequal variances is found. The methods in this paper are simple to implement and the validity of the approaches are reinforced by simulation studies and an application to a real data set.
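
    For intuition, a fixed-effect version of such a meta-estimate can be built from the large-sample result that, under normality, var(log s^2) is approximately 2/(n - 1), so a study-level log variance ratio has approximate variance 2/(n1 - 1) + 2/(n2 - 1). The sketch below (illustrative data, not necessarily the paper's exact estimator) pools log variance ratios by inverse-variance weighting.

```python
import numpy as np

def pooled_log_variance_ratio(s1_sq, n1, s2_sq, n2):
    """Fixed-effect inverse-variance pooling of log variance ratios,
    using var(log s^2) ~ 2/(n - 1) under approximate normality."""
    y = np.log(np.asarray(s1_sq, float) / np.asarray(s2_sq, float))
    v = 2.0 / (np.asarray(n1) - 1.0) + 2.0 / (np.asarray(n2) - 1.0)
    w = 1.0 / v
    est = np.sum(w * y) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return est, (est - 1.96 * se, est + 1.96 * se)

est, ci = pooled_log_variance_ratio([1.2, 0.9, 1.5], [30, 50, 40],
                                    [1.0, 1.0, 1.1], [28, 45, 35])
print(np.exp(est), np.exp(ci[0]), np.exp(ci[1]))   # pooled ratio and 95% CI
```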

  11. Pricing Volatility Derivatives Under the Modified Constant Elasticity of Variance Model

    OpenAIRE

    Leunglung Chan; Eckhard Platen

    2015-01-01

    This paper studies volatility derivatives such as variance swaps, volatility swaps and options on variance in the modified constant elasticity of variance model using the benchmark approach. The analytical expressions of pricing formulas for variance swaps are presented. In addition, the numerical solutions for variance swaps, volatility swaps and options on variance are demonstrated.

  12. Global Gravity Wave Variances from Aura MLS: Characteristics and Interpretation

    Science.gov (United States)

    Wu, Dong L.; Eckermann, Stephen D.

    2008-01-01

    The gravity wave (GW)-resolving capabilities of 118-GHz saturated thermal radiances acquired throughout the stratosphere by the Microwave Limb Sounder (MLS) on the Aura satellite are investigated and initial results presented. Because the saturated (optically thick) radiances resolve GW perturbations from a given altitude at different horizontal locations, variances are evaluated at 12 pressure altitudes between 21 and 51 km using the 40 saturated radiances found at the bottom of each limb scan. Forward modeling simulations show that these variances are controlled mostly by GWs with vertical wavelengths λz ≳ 5 km and horizontal along-track wavelengths λy ≈ 100-200 km. The tilted cigar-shaped three-dimensional weighting functions yield highly selective responses to GWs of high intrinsic frequency that propagate toward the instrument. The latter property is used to infer the net meridional component of GW propagation by differencing the variances acquired from ascending (A) and descending (D) orbits. Because of improved vertical resolution and sensitivity, Aura MLS GW variances are 5-8 times larger than those from the Upper Atmosphere Research Satellite (UARS) MLS. Like UARS MLS variances, monthly-mean Aura MLS variances in January and July 2005 are enhanced when local background wind speeds are large, due largely to GW visibility effects. Zonal asymmetries in variance maps reveal enhanced GW activity at high latitudes due to forcing by flow over major mountain ranges and at tropical and subtropical latitudes due to enhanced deep convective generation as inferred from contemporaneous MLS cloud-ice data. At 21-28-km altitude (heights not measured by the UARS MLS), GW variance in the tropics is systematically enhanced and shows clear variations with the phase of the quasi-biennial oscillation, in general agreement with GW temperature variances derived from radiosonde, rocketsonde, and limb-scan vertical profiles.

  13. Comparison of multiplicative heterogeneous variance adjustment models for genetic evaluations.

    Science.gov (United States)

    Márkus, Sz; Mäntysaari, E A; Strandén, I; Eriksson, J-Å; Lidauer, M H

    2014-06-01

    Two heterogeneous variance adjustment methods and two variance models were compared in a simulation study. The method used for heterogeneous variance adjustment in the Nordic test-day model, which is a multiplicative method based on Meuwissen (J. Dairy Sci., 79, 1996, 310), was compared with a restricted multiplicative method where the fixed effects were not scaled. Both methods were tested with two different variance models, one with a herd-year and the other with a herd-year-month random effect. The simulation study was built on two field data sets from Swedish Red dairy cattle herds. For both data sets, 200 herds with test-day observations over a 12-year period were sampled. For one data set, herds were sampled randomly, while for the other, each herd was required to have at least 10 first-calving cows per year. The simulations supported the applicability of both methods and models, but the multiplicative mixed model was more sensitive in the case of small strata sizes. Estimation of variance components for the variance models resulted in different parameter estimates, depending on the applied heterogeneous variance adjustment method and variance model combination. Our analyses showed that the assumption of a first-order autoregressive correlation structure between random-effect levels is reasonable when within-herd heterogeneity is modelled by year classes, but less appropriate for within-herd heterogeneity by month classes. Of the studied alternatives, the multiplicative method and a variance model with a random herd-year effect were found most suitable for the Nordic test-day model for dairy cattle evaluation.

  14. Variance decomposition of apolipoproteins and lipids in Danish twins

    DEFF Research Database (Denmark)

    Fenger, Mogens; Schousboe, Karoline; Sørensen, Thorkild I A

    2007-01-01

    OBJECTIVE: Twin studies are used extensively to decompose the variance of a trait, mainly to estimate the heritability of the trait. A second purpose of such studies is to estimate to what extent the non-genetic variance is shared or specific to individuals. To a lesser extent, twin studies have been used in bivariate or multivariate analysis to elucidate common genetic factors for two or more traits. METHODS AND RESULTS: In the present study the variances of traits related to lipid metabolism are decomposed in a relatively large Danish twin population, including bivariate analysis to detect ...

  15. Variance computations for functional of absolute risk estimates.

    Science.gov (United States)

    Pfeiffer, R M; Petracci, E

    2011-07-01

    We present a simple influence function based approach to compute the variances of estimates of absolute risk and functions of absolute risk. We apply this approach to criteria that assess the impact of changes in the risk factor distribution on absolute risk for an individual and at the population level. As an illustration we use an absolute risk prediction model for breast cancer that includes modifiable risk factors in addition to standard breast cancer risk factors. Influence function based variance estimates for absolute risk and the criteria are compared to bootstrap variance estimates.

  16. Estimation of prediction error variances via Monte Carlo sampling methods using different formulations of the prediction error variance.

    Science.gov (United States)

    Hickey, John M; Veerkamp, Roel F; Calus, Mario P L; Mulder, Han A; Thompson, Robin

    2009-02-09

    Calculation of the exact prediction error variance covariance matrix is often computationally too demanding, which limits its application in REML algorithms, the calculation of accuracies of estimated breeding values and the control of variance of response to selection. Alternatively Monte Carlo sampling can be used to calculate approximations of the prediction error variance, which converge to the true values if enough samples are used. However, in practical situations the number of samples, which are computationally feasible, is limited. The objective of this study was to compare the convergence rate of different formulations of the prediction error variance calculated using Monte Carlo sampling. Four of these formulations were published, four were corresponding alternative versions, and two were derived as part of this study. The different formulations had different convergence rates and these were shown to depend on the number of samples and on the level of prediction error variance. Four formulations were competitive and these made use of information on either the variance of the estimated breeding value and on the variance of the true breeding value minus the estimated breeding value or on the covariance between the true and estimated breeding values.

  17. Detecting Pulsars with Interstellar Scintillation in Variance Images

    CERN Document Server

    Dai, S; Bell, M E; Coles, W A; Hobbs, G; Ekers, R D; Lenc, E

    2016-01-01

    Pulsars are the only cosmic radio sources known to be sufficiently compact to show diffractive interstellar scintillations. Images of the variance of radio signals in both time and frequency can be used to detect pulsars in large-scale continuum surveys using the next generation of synthesis radio telescopes. This technique allows a search over the full field of view while avoiding the need for expensive pixel-by-pixel high time resolution searches. We investigate the sensitivity of detecting pulsars in variance images. We show that variance images are most sensitive to pulsars whose scintillation time-scales and bandwidths are close to the subintegration time and channel bandwidth. Therefore, in order to maximise the detection of pulsars for a given radio continuum survey, it is essential to retain a high time and frequency resolution, allowing us to make variance images sensitive to pulsars with different scintillation properties. We demonstrate the technique with Murchison Widefield Array data and show th...
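
    The basic operation is simple: for each sky pixel, compute the variance of the image intensity across the time and frequency subintegrations, and look for pixels with excess variance. A minimal numpy sketch on synthetic data (hypothetical array shapes, not the actual MWA data handling) is:

```python
import numpy as np

def variance_image(cube):
    """Variance over time and frequency subintegrations for each pixel.
    cube: array of shape (n_time, n_freq, ny, nx)."""
    return cube.var(axis=(0, 1))

rng = np.random.default_rng(1)
cube = rng.normal(0.0, 1.0, (16, 32, 8, 8))                # noise-only field
cube[:, :, 4, 4] += 3.0 * rng.standard_normal((16, 32))    # scintillating source
vimg = variance_image(cube)
print(vimg[4, 4] / np.median(vimg))   # the source pixel stands out (~10x)
```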

  18. Variance estimation in neutron coincidence counting using the bootstrap method

    Energy Technology Data Exchange (ETDEWEB)

    Dubi, C., E-mail: chendb331@gmail.com [Physics Department, Nuclear Research Center of the Negev, P.O.B. 9001 Beer Sheva (Israel); Ocherashvilli, A.; Ettegui, H. [Physics Department, Nuclear Research Center of the Negev, P.O.B. 9001 Beer Sheva (Israel); Pedersen, B. [Nuclear Security Unit, Institute for Transuranium Elements, Via E. Fermi, 2749 JRC, Ispra (Italy)

    2015-09-11

    In the study, we demonstrate the implementation of the “bootstrap” method for a reliable estimation of the statistical error in Neutron Multiplicity Counting (NMC) on plutonium samples. The “bootstrap” method estimates the variance of a measurement through a re-sampling process, in which a large number of pseudo-samples are generated, from which the so-called bootstrap distribution is generated. The aim of the present study is to give a full description of the bootstrapping procedure, and to validate, through experimental results, the reliability of the estimated variance. Results indicate both a very good agreement between the measured variance and the variance obtained through the bootstrap method, and a robustness of the method with respect to the duration of the measurement and the bootstrap parameters.
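
    The resampling step itself is generic and easy to sketch. The toy example below bootstraps the variance of a variance-to-mean style statistic from synthetic Poisson counts; it illustrates the procedure only and is not the NMC analysis chain.

```python
import numpy as np

def bootstrap_variance(samples, statistic, n_boot=2000, seed=0):
    """Nonparametric bootstrap: resample with replacement, recompute the
    statistic, and return the variance of the bootstrap distribution."""
    rng = np.random.default_rng(seed)
    samples = np.asarray(samples)
    stats = np.array([statistic(rng.choice(samples, size=len(samples)))
                      for _ in range(n_boot)])
    return stats.var(ddof=1)

# Toy stand-in for a multiplicity-type statistic: the variance-to-mean
# ratio of counts per gate (synthetic data, not an NMC measurement).
counts = np.random.default_rng(2).poisson(4.0, size=500)
print(bootstrap_variance(counts, lambda c: c.var(ddof=1) / c.mean()))
```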

  19. Wavelet Variance Analysis of EEG Based on Window Function

    Institute of Scientific and Technical Information of China (English)

    ZHENG Yuan-zhuang; YOU Rong-yi

    2014-01-01

    A new wavelet variance analysis method based on a window function is proposed to investigate the dynamical features of the electroencephalogram (EEG). The experimental results show that the wavelet energy of epileptic EEGs is more discrete than that of normal EEGs, and that the variation of wavelet variance differs between epileptic and normal EEGs as the time-window width increases. Furthermore, it is found that the wavelet subband entropy (WSE) of epileptic EEGs is lower than that of normal EEGs.
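
    A rough sketch of the subband quantities involved, using the PyWavelets package and one common definition of relative wavelet energy and subband entropy (the paper's exact windowing scheme is not reproduced here):

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_subband_entropy(signal, wavelet="db4", level=5):
    """Relative wavelet energy per subband and the Shannon entropy of
    that energy distribution (one common definition of the WSE)."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    p = energies / energies.sum()
    return p, float(-np.sum(p * np.log(p + 1e-12)))

t = np.linspace(0, 4, 1024)
p, wse = wavelet_subband_entropy(np.sin(2 * np.pi * 10 * t))
print(np.round(p, 3), round(wse, 3))   # energy concentrates in few subbands
```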

  20. Multiperiod mean-variance efficient portfolios with endogenous liabilities

    OpenAIRE

    Markus LEIPPOLD; Trojani, Fabio; Vanini, Paolo

    2011-01-01

    We study the optimal policies and mean-variance frontiers (MVF) of a multiperiod mean-variance optimization of assets and liabilities (AL). This makes the analysis more challenging than for a setting based on purely exogenous liabilities, in which the optimization is only performed on the assets while keeping liabilities fixed. We show that, under general conditions for the joint AL dynamics, the optimal policies and the MVF can be decomposed into an orthogonal set of basis returns using exte...

  1. Testing for Causality in Variance Using Multivariate GARCH Models

    OpenAIRE

    Christian M. Hafner; Herwartz, Helmut

    2008-01-01

    Tests of causality in variance in multiple time series have been proposed recently, based on residuals of estimated univariate models. Although such tests are applied frequently, little is known about their power properties. In this paper we show that a convenient alternative to residual based testing is to specify a multivariate volatility model, such as multivariate GARCH (or BEKK), and construct a Wald test on noncausality in variance. We compare both approaches to testing causality in var...

  2. Testing for causality in variance using multivariate GARCH models

    OpenAIRE

    Hafner, Christian; Herwartz, H.

    2004-01-01

    Tests of causality in variance in multiple time series have been proposed recently, based on residuals of estimated univariate models. Although such tests are applied frequently, little is known about their power properties. In this paper we show that a convenient alternative to residual based testing is to specify a multivariate volatility model, such as multivariate GARCH (or BEKK), and construct a Wald test on noncausality in variance. We compare both approaches to testing causa...

  3. Dimension free and infinite variance tail estimates on Poisson space

    OpenAIRE

    Breton, J. C.; Houdré, C.; Privault, N.

    2004-01-01

    Concentration inequalities are obtained on Poisson space, for random functionals with finite or infinite variance. In particular, dimension free tail estimates and exponential integrability results are given for the Euclidean norm of vectors of independent functionals. In the finite variance case these results are applied to infinitely divisible random variables such as quadratic Wiener functionals, including Lévy's stochastic area and the square norm of Brownian paths. In the infinite vari...

  4. Global Variance Risk Premium and Forex Return Predictability

    OpenAIRE

    Aloosh, Arash

    2014-01-01

    In a long-run risk model with stochastic volatility and frictionless markets, I express expected forex returns as a function of consumption growth variances and stock variance risk premiums (VRPs)—the difference between the risk-neutral and statistical expectations of market return variation. This provides a motivation for using the forward-looking information available in stock market volatility indices to predict forex returns. Empirically, I find that stock VRPs predict forex returns at a ...

  5. Variance estimation in the analysis of microarray data

    KAUST Repository

    Wang, Yuedong

    2009-04-01

    Microarrays are one of the most widely used high throughput technologies. One of the main problems in the area is that conventional estimates of the variances that are required in the t-statistic and other statistics are unreliable owing to the small number of replications. Various methods have been proposed in the literature to overcome this lack of degrees of freedom problem. In this context, it is commonly observed that the variance increases proportionally with the intensity level, which has led many researchers to assume that the variance is a function of the mean. Here we concentrate on estimation of the variance as a function of an unknown mean in two models: the constant coefficient of variation model and the quadratic variance-mean model. Because the means are unknown and estimated with few degrees of freedom, naive methods that use the sample mean in place of the true mean are generally biased because of the errors-in-variables phenomenon. We propose three methods for overcoming this bias. The first two are variations on the theme of the so-called heteroscedastic simulation-extrapolation estimator, modified to estimate the variance function consistently. The third class of estimators is entirely different, being based on semiparametric information calculations. Simulations show the power of our methods and their lack of bias compared with the naive method that ignores the measurement error. The methodology is illustrated by using microarray data from leukaemia patients.
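
    The errors-in-variables problem the authors address is easy to reproduce. The sketch below simulates a constant-coefficient-of-variation model with three replicates and fits a quadratic variance-mean relation naively, plugging sample means in for true means; the attenuation of the fitted slope illustrates the bias the paper's estimators are designed to remove (the model form and simulation parameters are illustrative).

```python
import numpy as np

rng = np.random.default_rng(3)
n_genes, n_rep = 2000, 3
mu = rng.uniform(1, 10, n_genes)          # true expression means
cv = 0.2                                  # constant coefficient of variation
data = rng.normal(mu[:, None], cv * mu[:, None], (n_genes, n_rep))

xbar = data.mean(axis=1)                  # noisy estimates of mu
s2 = data.var(axis=1, ddof=1)

# Naive fit of a quadratic variance-mean model, s2 ~ t0 + t1 * mean^2,
# plugging the sample mean in for the true mean. With only 3 replicates
# the regressor is noisy, so t1 is attenuated below cv^2: this is the
# errors-in-variables bias the paper's estimators correct.
A = np.column_stack([np.ones(n_genes), xbar ** 2])
t0, t1 = np.linalg.lstsq(A, s2, rcond=None)[0]
print(t0, t1, "true:", 0.0, cv ** 2)
```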

  6. Effect of advanced aircraft noise reduction technology on the 1990 projected noise environment around Patrick Henry Airport. [development of noise exposure forecast contours for projected traffic volume and aircraft types

    Science.gov (United States)

    Cawthorn, J. M.; Brown, C. G.

    1974-01-01

    A study has been conducted of the future noise environment of Patrick Henry Airport and its neighboring communities projected for the year 1990. An assessment was made of the impact of advanced noise reduction technologies which are currently being considered. These advanced technologies include a two-segment landing approach procedure and aircraft hardware modifications or retrofits which would add sound absorbent material in the nacelles of the engines or which would replace the present two- and three-stage fans with a single-stage fan of larger diameter. Noise Exposure Forecast (NEF) contours were computed for the baseline (nonretrofitted) aircraft for the projected traffic volume and fleet mix for the year 1990. These NEF contours are presented along with contours for a variety of retrofit options. Comparisons of the baseline with the noise reduction options are given in terms of total land area exposed to 30 and 40 NEF levels. Results are also presented of the effects on noise exposure area of the total number of daily operations.

  7. Minimum variance system identification with application to digital adaptive flight control

    Science.gov (United States)

    Kotob, S.; Kaufman, H.

    1975-01-01

    A new on-line minimum variance filter for the identification of systems with additive and multiplicative noise is described which embodies both accuracy and computational efficiency. The resulting filter is shown to use both the covariance of the parameter vector itself and the covariance of the error in identification. A bias reduction scheme can be used to yield asymptotically unbiased estimates. Experimental results for simulated linearized lateral aircraft motion in a digital closed loop mode are presented, showing the utility of the identification schemes.
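
    As a minimal illustration of on-line identification with a propagated parameter-error covariance, here is a standard recursive least squares estimator; it is a generic sketch, not the paper's minimum variance filter with multiplicative noise or its bias reduction scheme.

```python
import numpy as np

def rls_identify(phi, y, lam=1.0, delta=1e3):
    """Standard recursive least squares with a propagated parameter-error
    covariance P (generic sketch, not the paper's filter)."""
    n = phi.shape[1]
    theta = np.zeros(n)
    P = delta * np.eye(n)
    for k in range(len(y)):
        x = phi[k]
        K = P @ x / (lam + x @ P @ x)           # gain from the covariance
        theta = theta + K * (y[k] - x @ theta)  # innovation update
        P = (P - np.outer(K, x @ P)) / lam
    return theta

rng = np.random.default_rng(4)
phi = rng.normal(size=(500, 2))
y = phi @ np.array([1.5, -0.7]) + 0.05 * rng.standard_normal(500)
print(rls_identify(phi, y))   # close to [1.5, -0.7]
```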

  8. Innovative clean coal technology: 500 MW demonstration of advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NOx) emissions from coal-fired boilers. Final report, Phases 1 - 3B

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-01-01

    This report presents the results of a U.S. Department of Energy (DOE) Innovative Clean Coal Technology (ICCT) project demonstrating advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NOx) emissions from coal-fired boilers. The project was conducted at Georgia Power Company's Plant Hammond Unit 4 located near Rome, Georgia. The technologies demonstrated at this site include Foster Wheeler Energy Corporation's advanced overfire air system and Controlled Flow/Split Flame low NOx burner. The primary objective of the demonstration at Hammond Unit 4 was to determine the long-term effects of commercially available wall-fired low NOx combustion technologies on NOx emissions and boiler performance. Short-term tests of each technology were also performed to provide engineering information about emissions and performance trends. A target of achieving fifty percent NOx reduction using combustion modifications was established for the project. Short-term and long-term baseline testing was conducted in an "as-found" condition from November 1989 through March 1990. Following retrofit of the AOFA system during a four-week outage in spring 1990, the AOFA configuration was tested from August 1990 through March 1991. The FWEC CF/SF low NOx burners were then installed during a seven-week outage starting on March 8, 1991 and continuing to May 5, 1991. Following optimization of the LNBs and ancillary combustion equipment by FWEC personnel, LNB testing commenced during July 1991 and continued until January 1992. Testing in the LNB+AOFA configuration was completed during August 1993. This report provides documentation on the design criteria used in the performance of this project as it pertains to the scope involved with the low NOx burners and advanced overfire systems.

  9. Variance-Constrained Multiobjective Control and Filtering for Nonlinear Stochastic Systems: A Survey

    Directory of Open Access Journals (Sweden)

    Lifeng Ma

    2013-01-01

    The multiobjective control and filtering problems for nonlinear stochastic systems with variance constraints are surveyed. First, the concepts of nonlinear stochastic systems are recalled along with the introduction of some recent advances. Then, the covariance control theory, which serves as a practical method for multi-objective control design as well as a foundation for linear system theory, is reviewed comprehensively. The multiple design requirements frequently applied in engineering practice for the use of evaluating system performances are introduced, including robustness, reliability, and dissipativity. Several design techniques suitable for the multi-objective variance-constrained control and filtering problems for nonlinear stochastic systems are discussed. In particular, as a special case for the multi-objective design problems, the mixed H2/H∞ control and filtering problems are reviewed in great detail. Subsequently, some latest results on the variance-constrained multi-objective control and filtering problems for the nonlinear stochastic systems are summarized. Finally, conclusions are drawn, and several possible future research directions are pointed out.

  10. Innovative Clean Coal Technology (ICCT): 500 MW demonstration of advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NOx) emissions from coal-fired boilers. Third quarterly technical progress report

    Energy Technology Data Exchange (ETDEWEB)

    1993-12-31

    This quarterly report discusses the technical progress of an Innovative Clean Coal Technology (ICCT) demonstration of advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NOx) emissions from coal-fired boilers. The project provides a stepwise retrofit of an advanced overfire air (AOFA) system followed by low NOx burners (LNB). During each test phase of the project, diagnostic, performance, long-term, and verification testing will be performed. These tests are used to quantify the NOx reductions of each technology and evaluate the effects of those reductions on other combustion parameters such as particulate characteristics and boiler efficiency. Baseline, AOFA, LNB, and LNB plus AOFA test segments have been completed. Analysis of the 94 days of LNB long-term data collected shows the full-load NOx emission levels to be approximately 0.65 lb/MBtu with fly ash LOI values of approximately 8 percent. Corresponding values for the AOFA configuration are 0.94 lb/MBtu and approximately 10 percent. For comparison, the long-term full-load, baseline NOx emission level was approximately 1.24 lb/MBtu at 5.2 percent LOI. Comprehensive testing in the LNB+AOFA configuration indicates that at full-load, NOx emissions and fly ash LOI are near 0.40 lb/MBtu and 8 percent, respectively. However, it is believed that a substantial portion of the incremental change in NOx emissions between the LNB and LNB+AOFA configurations is the result of additional burner tuning and other operational adjustments and is not the result of the AOFA system. During this quarter, LNB+AOFA testing was concluded. Testing performed during this quarter included long-term and verification testing in the LNB+AOFA configuration.

  11. Contrast agent and radiation dose reduction in abdominal CT by a combination of low tube voltage and advanced image reconstruction algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Buls, Nico; Gompel, Gert van; Nieboer, Koenraad; Willekens, Inneke; Mey, Johan de [Universitair Ziekenhuis Brussel (UZ Brussel), Department of Radiology, Brussels (Belgium); Vrije Universiteit Brussel (VUB), Research group LABO, Brussel (Belgium); Cauteren, Toon van [Vrije Universiteit Brussel (VUB), Research group LABO, Brussel (Belgium); Verfaillie, Guy [Universitair Ziekenhuis Brussel (UZ Brussel), Department of Radiology, Brussels (Belgium); Evans, Paul; Macholl, Sven; Newton, Ben [GE Healthcare, Department of Medical Diagnostics, Amersham, Buckinghamshire (United Kingdom)

    2015-04-01

    To assess image quality in abdominal CT at low tube voltage combined with two types of iterative reconstruction (IR) at four reduced contrast agent dose levels. Minipigs were scanned with standard 320 mg I/mL contrast concentration at 120 kVp, and with reduced formulations of 120, 170, 220 and 270 mg I/mL at 80 kVp with IR. Image quality was assessed by CT value, dose normalized contrast and signal to noise ratio (CNRD and SNRD) in the arterial and venous phases. Qualitative analysis was included by expert reading. Protocols with 170 mg I/mL or higher showed equal or superior CT values: aorta (278-468 HU versus 314 HU); portal vein (205-273 HU versus 208 HU); liver parenchyma (122-146 HU versus 115 HU). In the aorta, all 170 mg I/mL protocols or higher yielded equal or superior CNRD (15.0-28.0 versus 13.7). In liver parenchyma, all study protocols resulted in higher SNRDs. Radiation dose could be reduced from standard CTDIvol = 7.8 mGy (6.2 mSv) to 7.6 mGy (5.2 mSv) with 170 mg I/mL. Combining 80 kVp with IR allows at least a 47 % contrast agent dose reduction and 16 % radiation dose reduction for images of comparable quality. (orig.)

  12. Variance-based fingerprint distance adjustment algorithm for indoor localization

    Institute of Scientific and Technical Information of China (English)

    Xiaolong Xu; Yu Tang; Xinheng Wang; Yun Zhang

    2015-01-01

    The multipath effect and movements of people in indoor environments lead to inaccurate localization. Through the test, calculation and analysis on the received signal strength indication (RSSI) and the variance of RSSI, we propose a novel variance-based fingerprint distance adjustment algorithm (VFDA). Based on the rule that variance decreases with the increase of RSSI mean, VFDA calculates RSSI variance with the mean value of received RSSIs. Then, we can get the correction weight. VFDA adjusts the fingerprint distances with the correction weight based on the variance of RSSI, which is used to correct the fingerprint distance. Besides, a threshold value is applied to VFDA to improve its performance further. VFDA and VFDA with the threshold value are applied in two kinds of real typical indoor environments deployed with several Wi-Fi access points. One is a quadrate lab room, and the other is a long and narrow corridor of a building. Experimental results and performance analysis show that in indoor environments, both VFDA and VFDA with the threshold have better positioning accuracy and environmental adaptability than the current typical positioning methods based on the k-nearest neighbor algorithm and the weighted k-nearest neighbor algorithm with similar computational costs.
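
    The abstract fixes the ingredients (a calibrated mean-to-variance rule and a variance-derived correction weight) but not their functional forms, so the sketch below is a loose interpretation: both the linear mean-to-variance map and the 1/(1 + variance) weight are placeholder assumptions, not the published VFDA formulas.

```python
import numpy as np

def vfda_distance(query_rssi, fingerprint_rssi, var_of_mean):
    """Variance-weighted fingerprint distance in the spirit of VFDA.
    var_of_mean: calibrated, decreasing map from mean RSSI to variance
    (placeholder form); the 1/(1 + var) weight is likewise assumed."""
    q = np.asarray(query_rssi, float)
    f = np.asarray(fingerprint_rssi, float)
    var = var_of_mean(q.mean())
    weight = 1.0 / (1.0 + var)          # assumed correction weight
    return weight * np.linalg.norm(q - f)

# Placeholder calibration: stronger mean signal -> smaller RSSI variance.
var_map = lambda m: max(0.5, -0.2 * (m + 40.0))
print(vfda_distance([-50, -60], [-52, -63], var_map))
```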

  13. Estimating Variances of Horizontal Wind Fluctuations in Stable Conditions

    Science.gov (United States)

    Luhar, Ashok K.

    2010-05-01

    Information concerning the average wind speed and the variances of lateral and longitudinal wind velocity fluctuations is required by dispersion models to characterise turbulence in the atmospheric boundary layer. When the winds are weak, the scalar average wind speed and the vector average wind speed need to be clearly distinguished and both lateral and longitudinal wind velocity fluctuations assume equal importance in dispersion calculations. We examine commonly-used methods of estimating these variances from wind-speed and wind-direction statistics measured separately, for example, by a cup anemometer and a wind vane, and evaluate the implied relationship between the scalar and vector wind speeds, using measurements taken under low-wind stable conditions. We highlight several inconsistencies inherent in the existing formulations and show that the widely-used assumption that the lateral velocity variance is equal to the longitudinal velocity variance is not necessarily true. We derive improved relations for the two variances, and although data under stable stratification are considered for comparison, our analysis is applicable more generally.

  14. Variance inflation in high dimensional Support Vector Machines

    DEFF Research Database (Denmark)

    Abrahamsen, Trine Julie; Hansen, Lars Kai

    2013-01-01

    Many important machine learning models, supervised and unsupervised, are based on simple Euclidean distance or orthogonal projection in a high dimensional feature space. When estimating such models from small training sets we face the problem that the span of the training data set input vectors is not the full input space. Hence, when applying the model to future data the model is effectively blind to the missed orthogonal subspace. This can lead to an inflated variance of hidden variables estimated in the training set, and when the model is applied to test data we may find that the hidden variables follow a different probability law with less variance. While the problem and basic means to reconstruct and deflate are well understood in unsupervised learning, the case of supervised learning is less well understood. We here investigate the effect of variance inflation in supervised learning, including...

  15. CMB-S4 and the Hemispherical Variance Anomaly

    CERN Document Server

    O'Dwyer, Marcio; Knox, Lloyd; Starkman, Glenn D

    2016-01-01

    Cosmic Microwave Background (CMB) full-sky temperature data show a hemispherical asymmetry in power nearly aligned with the Ecliptic. In real space, this anomaly can be quantified by the temperature variance in the northern and southern Ecliptic hemispheres. In this context, the northern hemisphere displays an anomalously low variance while the southern hemisphere appears unremarkable (consistent with expectations from the best-fitting theory, ΛCDM). While this is a well established result in temperature, the low signal-to-noise ratio in current polarization data prevents a similar comparison. This will change with a proposed ground-based CMB experiment, CMB-S4. With that in mind, we generate realizations of polarization maps constrained by the temperature data and predict the distribution of the hemispherical variance in polarization considering two different sky coverage scenarios possible in CMB-S4: full Ecliptic north coverage and just the portion of the North that can be observed from a ground ba...

  16. Saturation of number variance in embedded random-matrix ensembles

    Science.gov (United States)

    Prakash, Ravi; Pandey, Akhilesh

    2016-05-01

    We study fluctuation properties of embedded random matrix ensembles of noninteracting particles. For ensemble of two noninteracting particle systems, we find that unlike the spectra of classical random matrices, correlation functions are nonstationary. In the locally stationary region of spectra, we study the number variance and the spacing distributions. The spacing distributions follow the Poisson statistics, which is a key behavior of uncorrelated spectra. The number variance varies linearly as in the Poisson case for short correlation lengths but a kind of regularization occurs for large correlation lengths, and the number variance approaches saturation values. These results are known in the study of integrable systems but are being demonstrated for the first time in random matrix theory. We conjecture that the interacting particle cases, which exhibit the characteristics of classical random matrices for short correlation lengths, will also show saturation effects for large correlation lengths.
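
    The number variance itself is straightforward to estimate from an unfolded spectrum: count the levels in randomly placed windows of length L and take the variance of the counts. A sketch on a Poisson (uncorrelated) spectrum, for which Sigma^2(L) = L, is:

```python
import numpy as np

def number_variance(levels, L, n_windows=4000, seed=0):
    """Sigma^2(L): variance of the level count in randomly placed windows
    of length L on an unfolded (unit mean spacing) spectrum."""
    rng = np.random.default_rng(seed)
    levels = np.sort(levels)
    starts = rng.uniform(levels[0], levels[-1] - L, n_windows)
    counts = (np.searchsorted(levels, starts + L)
              - np.searchsorted(levels, starts))
    return counts.var(ddof=1)

# Unfolded Poisson spectrum (unit-mean exponential spacings): Sigma^2(L) = L.
levels = np.cumsum(np.random.default_rng(1).exponential(1.0, 100000))
print(number_variance(levels, L=5.0))   # close to 5
```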

  17. Sensitivity to Estimation Errors in Mean-variance Models

    Institute of Scientific and Technical Information of China (English)

    Zhi-ping Chen; Cai-e Zhao

    2003-01-01

    In order to give a complete and accurate description of the sensitivity of efficient portfolios to changes in assets' expected returns, variances and covariances, the joint effect of estimation errors in means, variances and covariances on the efficient portfolio's weights is investigated in this paper. It is proved that the efficient portfolio's composition is a Lipschitz continuous, differentiable mapping of these parameters under suitable conditions. The change rate of the efficient portfolio's weights with respect to variations in risk-return estimates is derived by estimating the Lipschitz constant. Our general quantitative results show that the efficient portfolio's weights are normally not so sensitive to estimation errors in means and variances. Moreover, we point out those extreme cases which might cause stability problems and how to avoid them in practice. Preliminary numerical results are also provided as an illustration of our theoretical results.

  18. The positioning algorithm based on feature variance of billet character

    Science.gov (United States)

    Yi, Jiansong; Hong, Hanyu; Shi, Yu; Chen, Hongyang

    2015-12-01

    In the process of steel billet recognition on the production line, the key problem is how to determine the position of the billet in complex scenes. To solve this problem, this paper presents a positioning algorithm based on the feature variance of the billet characters. Using the largest intra-cluster variance recursive method based on multilevel filtering, the billet characters are segmented completely from the complex scenes. There are three rows of characters on each steel billet, so we are able to determine whether the connected regions, which satisfy the condition of the feature variance, are on a straight line. Then we can accurately locate the steel billet. The experimental results demonstrate that the proposed method is competitive with other methods in positioning the characters and also reduces the running time. The algorithm can provide a better basis for character recognition.

  20. Expectation Values and Variance Based on Lp-Norms

    Directory of Open Access Journals (Sweden)

    George Livadiotis

    2012-11-01

    This analysis introduces a generalization of the basic statistical concepts of expectation values and variance for non-Euclidean metrics induced by Lp-norms. The non-Euclidean Lp means are defined by exploiting the fundamental property of minimizing the Lp deviations that compose the Lp variance. These Lp expectation values embody a generic formal scheme of means characterization. Having the p-norm as a free parameter, both the Lp-normed expectation values and their variance are flexible to analyze new phenomena that cannot be described under the notions of classical statistics based on Euclidean norms. The new statistical approach provides insights into regression theory and Statistical Physics. Several illuminating examples are examined.
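
    The defining property, that the Lp mean minimizes the sum of p-th power deviations, translates directly into code. A small numerical sketch (using scipy's bounded scalar minimizer; p = 2 recovers the arithmetic mean and p = 1 moves toward the median):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def lp_mean(x, p):
    """Lp-normed expectation value: the location that minimizes the sum
    of p-th power deviations (p=2 gives the arithmetic mean)."""
    x = np.asarray(x, float)
    res = minimize_scalar(lambda m: np.sum(np.abs(x - m) ** p),
                          bounds=(x.min(), x.max()), method="bounded")
    return res.x

def lp_variance(x, p):
    """The minimized mean p-th power deviation around the Lp mean."""
    x = np.asarray(x, float)
    return float(np.mean(np.abs(x - lp_mean(x, p)) ** p))

x = [1.0, 2.0, 3.0, 10.0]
for p in (1, 2, 4):
    print(p, round(lp_mean(x, p), 3), round(lp_variance(x, p), 3))
# p=2 recovers the mean (4.0); p=1 moves toward the median, damping the outlier.
```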

  1. Models of Postural Control: Shared Variance in Joint and COM Motions.

    Science.gov (United States)

    Kilby, Melissa C; Molenaar, Peter C M; Newell, Karl M

    2015-01-01

    This paper investigated the organization of the postural control system in human upright stance. To this aim, the shared variance between joint and 3D total body center of mass (COM) motions was analyzed using multivariate canonical correlation analysis (CCA). The CCA was performed as a function of established models of postural control that varied in their joint degrees of freedom (DOF), namely, an inverted pendulum ankle model (2DOF), ankle-hip model (4DOF), ankle-knee-hip model (5DOF), and ankle-knee-hip-neck model (7DOF). Healthy young adults performed various postural tasks (two-leg and one-leg quiet stances, voluntary AP and ML sway) on a foam and rigid surface of support. Based on the CCA model selection procedures, the amount of shared variance between joint and 3D COM motions, and the cross-loading patterns, we provide direct evidence of the contribution of multi-DOF postural control mechanisms to human balance. The direct model fitting of CCA showed that incrementing the DOFs in the model through to 7DOF was associated with progressively enhanced shared variance with COM motion. In the 7DOF model, the first canonical function revealed more active involvement of all joints during the more challenging one-leg stances and dynamic posture tasks. Furthermore, the shared variance was enhanced during the dynamic posture conditions, consistent with a reduction of dimension. This set of outcomes shows directly the degeneracy of multivariate joint regulation in postural control that is influenced by stance and surface of support conditions.

  2. Asymptotic variance of grey-scale surface area estimators

    DEFF Research Database (Denmark)

    Svane, Anne Marie

    Grey-scale local algorithms have been suggested as a fast way of estimating surface area from grey-scale digital images. Their asymptotic mean has already been described. In this paper, the asymptotic behaviour of the variance is studied in isotropic and sufficiently smooth settings, resulting in a general asymptotic bound. For compact convex sets with nowhere vanishing Gaussian curvature, the asymptotics can be described more explicitly. As in the case of volume estimators, the variance is decomposed into a lattice sum and an oscillating term of at most the same magnitude.

  3. Recursive identification for multidimensional ARMA processes with increasing variances

    Institute of Scientific and Technical Information of China (English)

    CHEN Hanfu

    2005-01-01

    In time series analysis, almost all existing results are derived for the case where the driving noise {w_n} in the MA part has bounded variance (or conditional variance). In contrast to this, the paper discusses how to identify coefficients in a multidimensional ARMA process with fixed orders, but in whose MA part the conditional moment E(‖w_n‖^β | F_{n-1}), β > 2, may grow at a rate of a power of log n. The well-known stochastic gradient (SG) algorithm is applied to estimating the matrix coefficients of the ARMA process, and reasonable conditions are given to guarantee that the estimate is strongly consistent.

  4. Precise Asymptotics of Error Variance Estimator in Partially Linear Models

    Institute of Scientific and Technical Information of China (English)

    Shao-jun Guo; Min Chen; Feng Liu

    2008-01-01

    In this paper, we focus our attention on the precise asymptotics of the error variance estimator in partially linear regression models, y_i = x_i^T β + g(t_i) + ε_i, 1 ≤ i ≤ n, where {ε_i, i = 1, ..., n} are i.i.d. random errors with mean 0 and positive finite variance σ². Following the ideas of Allan Gut and Aurel Spataru [7,8] and Zhang [21] on precise asymptotics in the Baum-Katz and Davis laws of large numbers and on precise rates in laws of the iterated logarithm, respectively, and subject to some regularity conditions, we obtain the corresponding results in partially linear regression models.

  5. Variance squeezing and entanglement of the XX central spin model

    Energy Technology Data Exchange (ETDEWEB)

    El-Orany, Faisal A A [Department of Mathematics and Computer Science, Faculty of Science, Suez Canal University, Ismailia (Egypt); Abdalla, M Sebawe, E-mail: m.sebaweh@physics.org [Mathematics Department, College of Science, King Saud University PO Box 2455, Riyadh 11451 (Saudi Arabia)

    2011-01-21

    In this paper, we study the quantum properties of a system that consists of a central atom interacting with surrounding spins through Heisenberg XX couplings of equal strength. Employing the Heisenberg equations of motion, we manage to derive an exact solution for the dynamical operators. We consider that the central atom and its surroundings are initially prepared in the excited state and in the coherent spin state, respectively. For this system, we investigate the evolution of variance squeezing and entanglement. The nonclassical effects are noted in the behavior of all components of the system. The atomic variance can exhibit the revival-collapse phenomenon depending on the value of the detuning parameter.

  6. On Variance and Covariance for Bounded Linear Operators

    Institute of Scientific and Technical Information of China (English)

    Chia Shiang LIN

    2001-01-01

    In this paper we initiate a study of covariance and variance for two operators on a Hilbert space, proving that the c-v (covariance-variance) inequality holds, which is equivalent to the Cauchy-Schwarz inequality. As applications of the c-v inequality we prove uniformly the Bernstein-type inequalities and equalities, and show the generalized Heinz-Kato-Furuta-type inequalities and equalities, from which a generalization and sharpening of Reid's inequality is obtained. We show that every operator can be expressed as a p-hyponormal-type and a hyponormal-type operator. Finally, some new characterizations of the Furuta inequality are given.

  7. The dynamic Allan Variance IV: characterization of atomic clock anomalies.

    Science.gov (United States)

    Galleani, Lorenzo; Tavella, Patrizia

    2015-05-01

    The number of applications where precise clocks play a key role is steadily increasing, satellite navigation being the main example. Precise clock anomalies are hence critical events, and their characterization is a fundamental problem. When an anomaly occurs, the clock stability changes with time, and this variation can be characterized with the dynamic Allan variance (DAVAR). We obtain the DAVAR for a series of common clock anomalies, namely, a sinusoidal term, a phase jump, a frequency jump, and a sudden change in the clock noise variance. These anomalies are particularly common in space clocks. Our analytic results clarify how the clock stability changes during these anomalies.
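
    A bare-bones version of the idea can be sketched as an Allan variance recomputed over a sliding window; the simulated white-frequency noise with a sudden variance jump below is illustrative, and the windowing is far cruder than the DAVAR formalism developed in the paper.

```python
import numpy as np

def allan_variance(y, m):
    """Non-overlapped Allan variance of fractional-frequency data y at
    averaging factor m (tau = m * tau0)."""
    n = len(y) // m
    yb = y[: n * m].reshape(n, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(yb) ** 2)

def dynamic_allan_variance(y, m, window, step):
    """Crude DAVAR: the Allan variance recomputed over a sliding window."""
    return np.array([allan_variance(y[s : s + window], m)
                     for s in range(0, len(y) - window + 1, step)])

rng = np.random.default_rng(5)
y = rng.normal(0.0, 1e-11, 20000)
y[12000:] += rng.normal(0.0, 5e-11, 8000)       # sudden noise-variance jump
davar = dynamic_allan_variance(y, m=10, window=4000, step=1000)
print(np.round(davar / davar[0], 1))   # rises sharply once the window hits the anomaly
```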

  9. Innovative Clean Coal Technology (ICCT): 180 MW demonstration of advanced tangentially-fired combustion techniques for the reduction of nitrogen oxide (NOx) emissions from coal-fired boilers. Technical progress report, second quarter 1992

    Energy Technology Data Exchange (ETDEWEB)

    1992-11-25

    This quarterly report discusses the technical progress of a US Department of Energy (DOE) Innovative Clean Coal Technology (ICCT) Project demonstrating advanced tangentially-fired combustion techniques for the reduction of nitrogen oxide (NOx) emissions from a coal-fired boiler. The project is being conducted at Gulf Power Company's Plant Lansing Smith Unit 2 located near Panama City, Florida. The primary objective of this demonstration is to determine the long-term effects of commercially available tangentially-fired low NOx combustion technologies on NOx emissions and boiler performance. A target of achieving fifty percent NOx reduction using combustion modifications has been established for the project. The stepwise approach that is being used to evaluate the NOx control technologies requires three plant outages to successively install the test instrumentation and the different levels of the low NOx concentric firing system (LNCFS). Following each outage, a series of four groups of tests are performed. These are (1) diagnostic, (2) performance, (3) long-term, and (4) verification. These tests are used to quantify the NOx reductions of each technology and evaluate the effects of those reductions on other combustion parameters such as particulate characteristics and boiler efficiency. This technical progress report presents the LNCFS Level I short-term data collected during this quarter. In addition, a comparison of all the long-term emissions data that have been collected to date is included.

  10. Rich Reduction

    DEFF Research Database (Denmark)

    Niebuhr, Oliver

    2016-01-01

    Managing and, ideally, explaining phonetic variation has long been a key issue in the speech sciences. In this context, the major contribution of Lindblom's H&H theory was to replace the futile search for invariance with an explainable variance, based on the tug-of-war metaphor. Recent empirical...

  11. Nonsymbolic number and cumulative area representations contribute shared and unique variance to symbolic math competence.

    Science.gov (United States)

    Lourenco, Stella F; Bonny, Justin W; Fernandez, Edmund P; Rao, Sonia

    2012-11-13

    Humans and nonhuman animals share the capacity to estimate, without counting, the number of objects in a set by relying on an approximate number system (ANS). Only humans, however, learn the concepts and operations of symbolic mathematics. Despite vast differences between these two systems of quantification, neural and behavioral findings suggest functional connections. Another line of research suggests that the ANS is part of a larger, more general system of magnitude representation. Reports of cognitive interactions and common neural coding for number and other magnitudes such as spatial extent led us to ask whether, and how, nonnumerical magnitude interfaces with mathematical competence. On two magnitude comparison tasks, college students estimated (without counting or explicit calculation) which of two arrays was greater in number or cumulative area. They also completed a battery of standardized math tests. Individual differences in both number and cumulative area precision (measured by accuracy on the magnitude comparison tasks) correlated with interindividual variability in math competence, particularly advanced arithmetic and geometry, even after accounting for general aspects of intelligence. Moreover, analyses revealed that whereas number precision contributed unique variance to advanced arithmetic, cumulative area precision contributed unique variance to geometry. Taken together, these results provide evidence for shared and unique contributions of nonsymbolic number and cumulative area representations to formally taught mathematics. More broadly, they suggest that uniquely human branches of mathematics interface with an evolutionarily primitive general magnitude system, which includes partially overlapping representations of numerical and nonnumerical magnitude.

  12. VOCs elimination and health risk reduction in e-waste dismantling workshop using integrated techniques of electrostatic precipitation with advanced oxidation technologies.

    Science.gov (United States)

    Chen, Jiangyao; Huang, Yong; Li, Guiying; An, Taicheng; Hu, Yunkun; Li, Yunlu

    2016-01-25

    Volatile organic compounds (VOCs) emitted during the electronic waste dismantling process (EWDP) were treated at a pilot scale, using integrated electrostatic precipitation (EP)-advanced oxidation technologies (AOTs, subsequent photocatalysis (PC) and ozonation). Although no obvious alteration was seen in VOC concentration and composition, EP technology removed 47.2% of total suspended particles, greatly reducing the negative effect of particles on the subsequent AOTs. After the AOT treatment, average removal efficiencies of 95.7%, 95.4%, 87.4%, and 97.5% were achieved for aromatic hydrocarbons, aliphatic hydrocarbons, halogenated hydrocarbons, and nitrogen- and oxygen-containing compounds, respectively, over a 60-day treatment period. Furthermore, high elimination capacities were also seen with the hybrid technique of PC with ozonation; this was due to the PC unit's high loading rates and excellent pre-treatment abilities, and the ozonation unit's high elimination capacity. In addition, the non-cancer and cancer risks, as well as the occupational exposure cancer risk, for workers exposed to emitted VOCs in the workshop were reduced dramatically after the integrated technique treatment. Results demonstrated that the integrated technique led to highly efficient and stable VOC removal from EWDP emissions at a pilot scale. This study points to an efficient approach for atmospheric purification and improving human health in e-waste recycling regions.

  13. Variance-reduced simulation of lattice discrete-time Markov chains with applications in reaction networks

    Science.gov (United States)

    Maginnis, P. A.; West, M.; Dullerud, G. E.

    2016-10-01

    We propose an algorithm to accelerate Monte Carlo simulation for a broad class of stochastic processes. Specifically, the class of countable-state, discrete-time Markov chains driven by additive Poisson noise, or lattice discrete-time Markov chains. In particular, this class includes simulation of reaction networks via the tau-leaping algorithm. To produce the speedup, we simulate pairs of fair-draw trajectories that are negatively correlated. Thus, when averaged, these paths produce an unbiased Monte Carlo estimator that has reduced variance and, therefore, reduced error. Numerical results for three example systems included in this work demonstrate two to four orders of magnitude reduction of mean-square error. The numerical examples were chosen to illustrate different application areas and levels of system complexity. The areas are: gene expression (affine state-dependent rates), aerosol particle coagulation with emission and human immunodeficiency virus infection (both with nonlinear state-dependent rates). Our algorithm views the system dynamics as a "black-box", i.e., we only require control of pseudorandom number generator inputs. As a result, typical codes can be retrofitted with our algorithm using only minor changes. We prove several analytical results. Among these, we characterize the relationship of covariances between paths in the general nonlinear state-dependent intensity rates case, and we prove variance reduction of mean estimators in the special case of affine intensity rates.
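
    One simple way to realize such negatively correlated pairs is to drive each Poisson increment by inverse-CDF sampling with common random numbers u and 1 - u. The sketch below applies this coupling to a pure-death process with an affine rate; it is a simplified variant for illustration, not the paper's exact construction or its analysis.

```python
import numpy as np
from scipy.stats import poisson

def tau_leap_pair(x0, rate_fn, stoich, dt, n_steps, rng):
    """Antithetic pair of tau-leaping paths: each Poisson increment is
    drawn by inverse CDF from u on one path and 1 - u on the other, so
    the two trajectories are negatively correlated."""
    xa, xb = float(x0), float(x0)
    for _ in range(n_steps):
        for s in stoich:
            u = rng.uniform()
            xa += s * poisson.ppf(u, max(rate_fn(xa) * dt, 0.0))
            xb += s * poisson.ppf(1.0 - u, max(rate_fn(xb) * dt, 0.0))
    return 0.5 * (xa + xb)

# Pure-death process X -> X - 1 at rate 0.5 * X (affine state-dependent rate).
rng = np.random.default_rng(6)
est = [tau_leap_pair(100, lambda x: 0.5 * x, [-1], 0.05, 40, rng)
       for _ in range(2000)]
print(np.mean(est), np.var(est, ddof=1))   # mean near 100*exp(-1) ~ 36.8
```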

  14. Simultaneous optimal estimates of fixed effects and variance components in the mixed model

    Institute of Scientific and Technical Information of China (English)

    WU Mixia; WANG Songgui

    2004-01-01

    For a general linear mixed model with two variance components, a set of simple conditions is obtained, under which, (i) the least squares estimate of the fixed effects and the analysis of variance (ANOVA) estimates of variance components are proved to be uniformly minimum variance unbiased estimates simultaneously; (ii) the exact confidence intervals of the fixed effects and uniformly optimal unbiased tests on variance components are given; (iii) the exact probability expression of ANOVA estimates of variance components taking negative value is obtained.
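
    For reference, a generic form of the two-variance-component model in question (notation mine, not necessarily the authors') is

    $$ y = X\beta + Zu + e, \qquad \operatorname{Cov}(u) = \sigma_1^2 I_q, \quad \operatorname{Cov}(e) = \sigma^2 I_n, $$

    so that $\operatorname{Cov}(y) = \sigma_1^2 ZZ' + \sigma^2 I_n$, with the fixed effects $\beta$ estimated by least squares and $(\sigma_1^2, \sigma^2)$ by ANOVA-type moment estimators.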

  15. CAIXA. II. AGNs from excess variance analysis (Ponti+, 2012) [Dataset]

    NARCIS (Netherlands)

    Ponti, G.; Papadakis, I.E.; Bianchi, S.; Guainazzi, M.; Matt, G.; Uttley, P.; Bonilla, N.F.

    2012-01-01

    We report on the results of the first XMM-Newton systematic "excess variance" study of all the radio quiet, X-ray unobscured AGN. The entire sample consists of 161 sources observed by XMM-Newton for more than 10 ks in pointed observations, which is the largest sample used so far to study AGN X-ray variability.
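
    For context, the "excess variance" referred to here is conventionally the normalized excess variance of a light curve (standard definition in the X-ray variability literature; the authors' exact estimator may differ):

    $$ \sigma_{\mathrm{NXS}}^2 = \frac{1}{N\bar{x}^2}\sum_{i=1}^{N}\left[(x_i-\bar{x})^2-\sigma_{\mathrm{err},i}^2\right], $$

    where the $x_i$ are the $N$ flux measurements, $\bar{x}$ their mean, and $\sigma_{\mathrm{err},i}$ the individual measurement errors.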

  16. Hedging with stock index futures: downside risk versus the variance

    NARCIS (Netherlands)

    Brouwer, F.; Nat, van der M.

    1995-01-01

    In this paper we investigate hedging a stock portfolio with stock index futures. Instead of defining the hedge ratio as the minimum variance hedge ratio, we consider several measures of downside risk: the semivariance according to Markowitz [1959] and the various lower partial moments according to Fis
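
    For orientation, the two hedge-ratio notions contrasted in such studies can be written as follows (standard definitions, notation mine): the classical minimum-variance hedge ratio and a downside-risk analogue based on a lower partial moment of order $n$ with target $\tau$,

    $$ h_{\mathrm{MV}}=\frac{\operatorname{Cov}(r_S,r_F)}{\operatorname{Var}(r_F)}, \qquad \mathrm{LPM}_n(\tau;h)=\mathbb{E}\big[\max\!\big(\tau-(r_S-h\,r_F),\,0\big)^{n}\big], $$

    where $r_S$ and $r_F$ are the spot and futures returns; the downside-risk hedge ratio minimizes $\mathrm{LPM}_n$ over $h$ instead of the variance.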

  17. 20 CFR 901.40 - Proof; variance; amendment of pleadings.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Proof; variance; amendment of pleadings. 901.40 Section 901.40 Employees' Benefits JOINT BOARD FOR THE ENROLLMENT OF ACTUARIES REGULATIONS GOVERNING THE PERFORMANCE OF ACTUARIAL SERVICES UNDER THE EMPLOYEE RETIREMENT INCOME SECURITY ACT OF...

  18. Multivariate Variance Targeting in the BEKK-GARCH Model

    DEFF Research Database (Denmark)

    Pedersen, Rasmus Søndergaard; Rahbek, Anders

    This paper considers asymptotic inference in the multivariate BEKK model based on (co-)variance targeting (VT). By definition the VT estimator is a two-step estimator and the theory presented is based on expansions of the modified likelihood function, or estimating function, corresponding...
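
    For orientation, in one common BEKK(1,1) parameterization variance targeting replaces the intercept matrix by a function of the unconditional covariance $\Sigma$ (standard formulation, notation mine):

    $$ H_t=\Omega+A\,\varepsilon_{t-1}\varepsilon_{t-1}'A'+BH_{t-1}B', \qquad \Omega=\Sigma-A\Sigma A'-B\Sigma B', $$

    where $\Sigma$ is estimated in a first step by the sample covariance of $\varepsilon_t$, which is what makes the VT estimator a two-step estimator.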

  19. Infinite variance in fermion quantum Monte Carlo calculations

    Science.gov (United States)

    Shi, Hao; Zhang, Shiwei

    2016-03-01

    For important classes of many-fermion problems, quantum Monte Carlo (QMC) methods allow exact calculations of ground-state and finite-temperature properties without the sign problem. The list spans condensed matter, nuclear physics, and high-energy physics, including the half-filled repulsive Hubbard model, the spin-balanced atomic Fermi gas, and lattice quantum chromodynamics calculations at zero density with Wilson Fermions, and is growing rapidly as a number of problems have been discovered recently to be free of the sign problem. In these situations, QMC calculations are relied on to provide definitive answers. Their results are instrumental to our ability to understand and compute properties in fundamental models important to multiple subareas in quantum physics. It is shown, however, that the most commonly employed algorithms in such situations have an infinite variance problem. A diverging variance causes the estimated Monte Carlo statistical error bar to be incorrect, which can render the results of the calculation unreliable or meaningless. We discuss how to identify the infinite variance problem. An approach is then proposed to solve the problem. The solution does not require major modifications to standard algorithms, adding a "bridge link" to the imaginary-time path integral. The general idea is applicable to a variety of situations where the infinite variance problem may be present. Illustrative results are presented for the ground state of the Hubbard model at half-filling.

  20. Testing for causality in variance using multivariate GARCH models

    NARCIS (Netherlands)

    C.M. Hafner (Christian); H. Herwartz

    2004-01-01

    Tests of causality in variance in multiple time series have been proposed recently, based on residuals of estimated univariate models. Although such tests are applied frequently, little is known about their power properties. In this paper we show that a convenient alternative to residual

  1. A note on minimum-variance theory and beyond

    Energy Technology Data Exchange (ETDEWEB)

    Feng Jianfeng [Department of Informatics, Sussex University, Brighton, BN1 9QH (United Kingdom); Tartaglia, Giangaetano [Physics Department, Rome University ' La Sapienza' , Rome 00185 (Italy); Tirozzi, Brunello [Physics Department, Rome University ' La Sapienza' , Rome 00185 (Italy)

    2004-04-30

    We revisit the minimum-variance theory proposed by Harris and Wolpert (1998 Nature 394 780-4), discuss the implications of the theory for modelling the firing patterns of single neurons, and analytically find the optimal control signals, trajectories and velocities. Under the rate coding assumption, input control signals employed in the minimum-variance theory should be Fitts processes rather than Poisson processes. Only if information is coded by interspike intervals are Poisson processes in agreement with the inputs employed in the minimum-variance theory. For the integrate-and-fire model with Fitts process inputs, interspike intervals of efferent spike trains are very irregular. We introduce diffusion approximations to approximate neural models with renewal process inputs and present theoretical results on calculating moments of interspike intervals of the integrate-and-fire model. Results in Feng et al (2002 J. Phys. A: Math. Gen. 35 7287-304) are generalized. In conclusion, we present a complete picture of the minimum-variance theory, ranging from input control signals, to model outputs, to its implications for modelling the firing patterns of single neurons.

  2. Variance Ranklets : Orientation-selective rank features for contrast modulations

    NARCIS (Netherlands)

    Azzopardi, George; Smeraldi, Fabrizio

    2009-01-01

    We introduce a novel type of orientation–selective rank features that are sensitive to contrast modulations (second–order stimuli). Variance Ranklets are designed in close analogy with the standard Ranklets, but use the Siegel–Tukey statistics for dispersion instead of the Wilcoxon statistics. Their

  3. Estimating High-Frequency Based (Co-) Variances: A Unified Approach

    DEFF Research Database (Denmark)

    Voev, Valeri; Nolte, Ingmar

    We propose a unified framework for estimating integrated variances and covariances based on simple OLS regressions, allowing for a general market microstructure noise specification. We show that our estimators can outperform, in terms of the root mean squared error criterion, the most recent...

  4. Properties of realized variance under alternative sampling schemes

    NARCIS (Netherlands)

    Oomen, R.C.A.

    2006-01-01

    This paper investigates the statistical properties of the realized variance estimator in the presence of market microstructure noise. Different from the existing literature, the analysis relies on a pure jump process for high frequency security prices and explicitly distinguishes among alternative sampling schemes.
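
    As background, the realized variance estimator analyzed in this literature is the sum of squared intraday returns (standard definition):

    $$ \mathrm{RV}_t=\sum_{i=1}^{n} r_{t,i}^2, \qquad r_{t,i}=p_{t,i}-p_{t,i-1}, $$

    where the $p_{t,i}$ are $n+1$ log prices sampled during day $t$; market microstructure noise biases $\mathrm{RV}_t$ increasingly as the sampling frequency grows, which is why the sampling scheme matters.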

  5. Average local values and local variances in quantum mechanics

    CERN Document Server

    Muga, J G; Sala, P R

    1998-01-01

    Several definitions for the average local value and local variance of a quantum observable are examined and compared with their classical counterparts. An explicit way to construct an infinite number of these quantities is provided. It is found that different classical conditions may be satisfied by different definitions, but none of the quantum definitions examined is entirely consistent with all classical requirements.

  6. Common Persistence and Error-Correction Model in Conditional Variance

    Institute of Scientific and Technical Information of China (English)

    LI Han-dong; ZHANG Shi-ying

    2001-01-01

    We first define the persistence and common persistence of a vector GARCH process from the point of view of integration, and then discuss the sufficient and necessary condition for co-persistence in variance. At the end of this paper, we give the properties and the error-correction model of the vector GARCH process under the condition of co-persistence.

  7. An entropy approach to size and variance heterogeneity

    NARCIS (Netherlands)

    Balasubramanyan, L.; Stefanou, S.E.; Stokes, J.R.

    2012-01-01

    In this paper, we investigate the effect of bank size differences on cost efficiency heterogeneity using a heteroskedastic stochastic frontier model. This model is implemented by using an information theoretic maximum entropy approach. We explicitly model both bank size and variance heterogeneity simultaneously.

  8. Infinite variance in fermion quantum Monte Carlo calculations.

    Science.gov (United States)

    Shi, Hao; Zhang, Shiwei

    2016-03-01

    For important classes of many-fermion problems, quantum Monte Carlo (QMC) methods allow exact calculations of ground-state and finite-temperature properties without the sign problem. The list spans condensed matter, nuclear physics, and high-energy physics, including the half-filled repulsive Hubbard model, the spin-balanced atomic Fermi gas, and lattice quantum chromodynamics calculations at zero density with Wilson Fermions, and is growing rapidly as a number of problems have been discovered recently to be free of the sign problem. In these situations, QMC calculations are relied on to provide definitive answers. Their results are instrumental to our ability to understand and compute properties in fundamental models important to multiple subareas in quantum physics. It is shown, however, that the most commonly employed algorithms in such situations have an infinite variance problem. A diverging variance causes the estimated Monte Carlo statistical error bar to be incorrect, which can render the results of the calculation unreliable or meaningless. We discuss how to identify the infinite variance problem. An approach is then proposed to solve the problem. The solution does not require major modifications to standard algorithms, adding a "bridge link" to the imaginary-time path integral. The general idea is applicable to a variety of situations where the infinite variance problem may be present. Illustrative results are presented for the ground state of the Hubbard model at half-filling.

  9. A Hold-out method to correct PCA variance inflation

    DEFF Research Database (Denmark)

    Garcia-Moreno, Pablo; Artes-Rodriguez, Antonio; Hansen, Lars Kai

    2012-01-01

    In this paper we analyze the problem of variance inflation experienced by the PCA algorithm when working in an ill-posed scenario where the dimensionality of the training set is larger than its sample size. In an earlier article a correction method based on a Leave-One-Out (LOO) procedure was introduced.

  10. 75 FR 22424 - Avalotis Corp.; Grant of a Permanent Variance

    Science.gov (United States)

    2010-04-28

    ... the drum. This variance adopts the definition of, and specifications for, fleet angle from... definition of "static drop test" specified by section 3 ("Definitions") and the static drop test... FURTHER INFORMATION CONTACT: General information and press inquiries. For general information and...

  11. Analysis of Variance: What Is Your Statistical Software Actually Doing?

    Science.gov (United States)

    Li, Jian; Lomax, Richard G.

    2011-01-01

    Users assume statistical software packages produce accurate results. In this article, the authors systematically examined Statistical Package for the Social Sciences (SPSS) and Statistical Analysis System (SAS) for 3 analysis of variance (ANOVA) designs: mixed-effects ANOVA, fixed-effects analysis of covariance (ANCOVA), and nested ANOVA. For each…

  12. Perspective projection for variance pose face recognition from camera calibration

    Science.gov (United States)

    Fakhir, M. M.; Woo, W. L.; Chambers, J. A.; Dlay, S. S.

    2016-04-01

    Variance pose is an important research topic in face recognition. The alteration of distance parameters across variance-pose face features is challenging. We provide a solution to this problem using perspective projection for variance-pose face recognition. Our method infers the intrinsic camera parameters of the image, which enable the projection of the image plane into 3D. After this, face-box tracking and detection of the centre of the eyes can be carried out using our novel technique to verify the virtual face feature measurements. The coordinate system of the perspective projection for face tracking allows the holistic dimensions of the face to be fixed in different orientations. Training on frontal images and the remaining poses of the FERET database determines the distance from the centre of the eyes to the corner of the face box. The recognition system compares the gallery of images against different poses. The system initially utilises information on the position of both eyes, then focuses principally on the closest eye in order to gather data with greater reliability. Differentiation between the distances and positions of the right and left eyes is a unique feature of our work, with our algorithm outperforming other state-of-the-art algorithms and thus enabling stable measurement under variance pose for each individual.

  13. Gender variance in Asia: discursive contestations and legal implications

    NARCIS (Netherlands)

    S.E. Wieringa

    2010-01-01

    A recent court case in Indonesia in which a person diagnosed with an intersex condition was classified as a transsexual gives rise to a reflection on three discourses in which gender variance is discussed: the biomedical, the cultural, and the human rights discourse. This article discusses the impli

  14. Heterogeneity of variances for carcass traits by percentage Brahman inheritance.

    Science.gov (United States)

    Crews, D H; Franke, D E

    1998-07-01

    Heterogeneity of carcass trait variances due to level of Brahman inheritance was investigated using records from straightbred and crossbred steers produced from 1970 to 1988 (n = 1,530). Angus, Brahman, Charolais, and Hereford sires were mated to straightbred and crossbred cows to produce straightbred, F1, backcross, three-breed cross, and two-, three-, and four-breed rotational crossbred steers in four non-overlapping generations. At weaning (mean age = 220 d), steers were randomly assigned within breed group directly to the feedlot for 200 d, or to a backgrounding and stocker phase before feeding. Stocker steers were fed from 70 to 100 d in generations 1 and 2 and from 60 to 120 d in generations 3 and 4. Carcass traits included hot carcass weight, subcutaneous fat thickness and longissimus muscle area at the 12-13th rib interface, carcass weight-adjusted longissimus muscle area, USDA yield grade, estimated total lean yield, marbling score, and Warner-Bratzler shear force. Steers were classified as either high Brahman (50 to 100% Brahman), moderate Brahman (25 to 49% Brahman), or low Brahman (0 to 24% Brahman) inheritance. Two types of animal models were fit with regard to level of Brahman inheritance. One model assumed similar variances between pairs of Brahman inheritance groups, and the second model assumed different variances between pairs of Brahman inheritance groups. Fixed sources of variation in both models included direct and maternal additive and nonadditive breed effects, year of birth, and slaughter age. Variances were estimated using derivative-free REML procedures. Likelihood ratio tests were used to compare models. The model accounting for heterogeneous variances had a greater likelihood for hot carcass weight, longissimus muscle area, weight-adjusted longissimus muscle area, total lean yield, and Warner-Bratzler shear force, indicating improved fit with percentage Brahman inheritance considered as a source of heterogeneity of variance. Genetic

  15. Third-generation dual-source 70-kVp chest CT angiography with advanced iterative reconstruction in young children: image quality and radiation dose reduction

    Energy Technology Data Exchange (ETDEWEB)

    Rompel, Oliver; Janka, Rolf; Lell, Michael M.; Uder, Michael; Hammon, Matthias [University Hospital Erlangen, Department of Radiology, Erlangen (Germany); Gloeckler, Martin; Dittrich, Sven [University Hospital Erlangen, Department of Pediatric Cardiology, Erlangen (Germany); Cesnjevar, Robert [University Hospital Erlangen, Department of Pediatric Cardiac Surgery, Erlangen (Germany)

    2016-04-15

    Many technical updates have been made in multi-detector CT. Our aim was to evaluate the image quality and radiation dose of high-pitch second- and third-generation dual-source chest CT angiography and to assess the effects of different levels of advanced modeled iterative reconstruction (ADMIRE) in newborns and children. Chest CT angiography (70 kVp) was performed in 42 children (age 158 ± 267 days, range 1-1,194 days). We evaluated subjective and objective image quality, and radiation dose, with filtered back projection (FBP) and different strength levels of ADMIRE. For comparison, 42 matched controls were examined with a second-generation 128-slice dual-source CT scanner (80 kVp). ADMIRE demonstrated improved objective and subjective image quality (P < .01). Mean signal/noise, contrast/noise and subjective image quality were 11.9, 10.0 and 1.9, respectively, for the 80-kVp mode and 11.2, 10.0 and 1.9 for the 70-kVp mode. With ADMIRE, the corresponding values for the 70-kVp mode were 13.7, 12.1 and 1.4 at strength level 2 and 17.6, 15.6 and 1.2 at strength level 4. Mean CTDIvol, DLP and effective dose were significantly lower with the 70-kVp mode (0.31 mGy, 5.33 mGy*cm, 0.36 mSv) compared to the 80-kVp mode (0.46 mGy, 9.17 mGy*cm, 0.62 mSv; P < .01). Third-generation dual-source CT at 70 kVp provided good objective and subjective image quality at lower radiation exposure, and ADMIRE further improved objective and subjective image quality. (orig.)

  16. Effect of Variances and Manufacturing Tolerances on the Design Strength and Life of Mechanically Fastened Composite Joints

    Science.gov (United States)

    1978-12-01

    AD-A041/70. Property of US Air Force, AFFDL Library, Wright-Patterson AFB. AFFDL-TR-78-179: EFFECT OF VARIANCES AND MANUFACTURING TOLERANCES ON... "Degradation For Advanced Composites", Lockheed-California, F33615-77-C-3084, quarterlies 1977 to present. Phillips, D. C. and Scott, J. M., "The Shear

  17. Facile synthesis of N-rich carbon quantum dots by spontaneous polymerization and incision of solvents as efficient bioimaging probes and advanced electrocatalysts for oxygen reduction reaction

    Science.gov (United States)

    Lei, Zhouyue; Xu, Shengjie; Wan, Jiaxun; Wu, Peiyi

    2016-01-01

    In this study, uniform nitrogen-doped carbon quantum dots (N-CDs) were synthesized through a one-step solvothermal process of cyclic and nitrogen-rich solvents, such as N-methyl-2-pyrrolidone (NMP) and dimethyl-imidazolidinone (DMEU), under mild conditions. The products exhibited strong light blue fluorescence, good cell permeability and low cytotoxicity. Moreover, after a facile post-thermal treatment, the material developed a lotus seedpod surface-like structure of seed-like N-CDs decorating the surface of carbon layers, with a high proportion of quaternary nitrogen moieties, which exhibited excellent electrocatalytic activity and long-term durability towards the oxygen reduction reaction (ORR). The peak potential was -160 mV, comparable to or even lower than that of commercial Pt/C catalysts. Therefore, this study provides an alternative facile approach to the synthesis of versatile carbon quantum dots (CDs) with widespread commercial application prospects, not only as bioimaging probes but also as promising electrocatalysts for the metal-free ORR.

  18. 肿瘤中Axin表达减少的机制及其研究进展%Advances in research on mechanisms of Axin reduction in tumor

    Institute of Scientific and Technical Information of China (English)

    周明祎

    2011-01-01

    As a tumor inhibitor, Axin shows decreased protein expression in many malignant carcinomas. The mechanism of Axin reduction is still unclear; it may be associated with gene mutation, promoter methylation, protein degradation, and various small molecules. This review mainly summarizes the latest progress in research on the mechanism of Axin reduction.

  19. Advances in air pollution abatement. Production-integrated emission reduction and waste gas cleaning. Proceedings; Fortschritte in der Luftreinhaltetechnik. Produktionsintegrierte Emissionsminderung und Abgasreinigung. Tagungsband

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2002-07-01

    Conferences on emission reduction have been held by the Kommission Reinhaltung der Luft im VDI and the DIN-Normenausschuss KRdL since 1960. Recent European regulations as well as the TA Luft 2002 set the current boundary conditions. They necessitate that primary measures, i.e. measures integrated in the production process, must be fully utilized. Examples are presented and discussed. As a rule, the exhaust purification systems must be improved as well. This colloquium presented improvements and new developments that were successfully tested in practice and have become an established state-of-the-art technology. This proceedings volume contains the papers read at the colloquium of 19/20 November 2002, in Fulda, as well as the long versions of the posters presented there. [Translated from German:] Since 1960, the Kommission Reinhaltung der Luft im VDI and the DIN-Normenausschuss KRdL have regularly held events to report on advances in air pollution abatement, in particular on processes for reducing pollutant emissions from industrial and commercial processes. Alongside the European framework conditions for environmental protection, the TA Luft 2002 places new demands on emission reduction measures. To achieve the required lower emission values, the options of production-integrated environmental protection ('primary measures') must first be exhausted. Examples of production-integrated solutions for emission reduction are presented and discussed. As a rule, process-engineering improvements of the waste gas cleaning plants are additionally indispensable. The central aim of the colloquium is to present the improvements and new developments across the whole spectrum of emission reduction that have been successfully introduced in practice and have established themselves as the state of the art. This proceedings volume contains the papers presented at the colloquium

  20. Logistics Reduction and Repurposing Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The Advanced Exploration Systems (AES) Logistics Reduction and Repurposing (LRR) project will enable a mission-independent cradle-to-grave-to-cradle...

  1. Efeitos da Correção de Dados na Redução da Heterogeneidade das Variâncias Genética, Ambiental e Fenotípica em Testes de Progênies de Eucalyptus grandis W. Hill ex Maiden Effects of Different Data Transformation Methods on the Reduction of the Genetic, Environmental and Phenotypic Variance in the Progeny Trial of Eucalyptus grandis W. Hill ex Maiden

    Directory of Open Access Journals (Sweden)

    José Elidney Pinto Júnior

    2011-03-01

    [Translated from Portuguese:] ... resources. The genetic variability present was expressed in the moderate narrow-sense individual heritability estimates obtained for growth in diameter at breast height (DBH) at the three sites studied. Adoption of the proposed selection strategies and criteria will allow the composition of a Selected Population of two hundred individuals with the highest genetic values and an adequate effective number of progenies, providing gains in DBH of 12.89% to 24.33% over the experimental mean for the establishment of a Seedling Seed Orchard. Selection of the twenty individuals with the highest additive genetic values for the establishment of a Clonal Seed Orchard could provide gains in DBH of 17.18% to 50.95% over the experimental mean. In turn, selection of the twenty best individuals with the highest genotypic values for the establishment of a Clonal Garden could provide gains in DBH of 22.40% to 82.16% over the experimental mean for the clonal plantations derived from the selected material.

This research work was developed in order to evaluate progeny trials of Eucalyptus grandis W. Hill ex Maiden using the software SELEGEN-REML/BLUP. The best trees were identified in order to be used in seedling and clonal orchards. Fifty-three half-sib progenies of three Australian provenances were tested in the municipalities of Mogi Guaçu, Boa Esperança do Sul and Caçapava, all located in the State of São Paulo. A compacted families block experimental design was used with a variable number of replicates, linear plots of six trees each, and a 3.00 x 2.00 m spacing. Two methods of data standardization or transformation were used in order to evaluate their efficiency in the reduction of the genetic, environmental and phenotypic variances. The transformation or correction of the data, performed with the ratio (hi/him between the square root of

  • Poverty Reduction

    OpenAIRE

    Ortiz, Isabel

    2007-01-01

    The paper reviews poverty trends and measurements, poverty reduction in historical perspective, the poverty-inequality-growth debate, national poverty reduction strategies, criticisms of the agenda and the need for redistribution, international policies for poverty reduction, and ultimately understanding poverty at a global scale. It belongs to a series of backgrounders developed at Joseph Stiglitz's Initiative for Policy Dialogue.

  • Research advance in non-thermal plasma induced selective catalytic reduction NOx with low hydrocarbon compounds%低温等离子体诱导低碳烃选择性催化还原NOx研究进展

    Institute of Scientific and Technical Information of China (English)

    苏清发; 刘亚敏; 陈杰; 潘华; 施耀

    2009-01-01

    The emission of nitrogen oxides (NOx) from stationary sources, primarily from power stations, industrial heaters and cogeneration plants, represents a major environmental problem. This paper gives a general review of the advances in non-thermal plasma assisted selective catalytic reduction (SCR) of NOx with lower hydrocarbon compounds. In the last decade, the non-thermal plasma induced SCR of nitrogen oxide with low hydrocarbon compounds has received much attention. The different hydrocarbons (≤C3) used in the research are discussed. Methane is more difficult to activate than non-methane hydrocarbons such as ethylene and propylene. The reduction mechanism is also discussed. In addition, in view of the remaining difficulties, directions for future research are outlined. [Translated from Chinese:] This paper reviews recent advances in non-thermal plasma induced selective catalytic reduction of NOx by low-carbon hydrocarbons, details the research status of methane, which is difficult to activate, and of the more easily activated non-methane hydrocarbons such as ethylene, propylene and propane, discusses the reaction mechanism, and outlines directions for future research.

  • A reduction in growth rate of Pseudomonas putida KT2442 counteracts productivity advances in medium-chain-length polyhydroxyalkanoate production from gluconate

    Directory of Open Access Journals (Sweden)

    Zinn Manfred

    2011-04-01

    Background: The substitution of plastics based on fossil raw material by biodegradable plastics produced from renewable resources is of crucial importance in a context of oil scarcity and overflowing plastic landfills. One of the most promising organisms for the manufacturing of medium-chain-length polyhydroxyalkanoates (mcl-PHA) is Pseudomonas putida KT2440, which can accumulate large amounts of polymer from cheap substrates such as glucose. Current research focuses on enhancing the strain production capacity and synthesizing polymers with novel material properties. Many of the corresponding protocols for strain engineering rely on the rifampicin-resistant variant, P. putida KT2442. However, it remains unclear whether these two strains can be treated as equivalent in terms of mcl-PHA production, as the underlying antibiotic resistance mechanism involves a modification in the RNA polymerase and thus has ample potential for interfering with global transcription. Results: To assess PHA production in P. putida KT2440 and KT2442, we characterized the growth and PHA accumulation on three categories of substrate: PHA-related (octanoate), PHA-unrelated (gluconate) and a poor PHA substrate (citrate). The strains showed clear differences of growth rate on gluconate and citrate (reduction for KT2442 > 3-fold and > 1.5-fold, respectively) but not on octanoate. In addition, P. putida KT2442 PHA-free biomass significantly decreased after nitrogen depletion on gluconate. In an attempt to narrow down the range of possible reasons for this different behavior, the uptake of gluconate and extracellular release of the oxidized product 2-ketogluconate were measured. The results suggested that the reason has to be an inefficient transport or metabolization of 2-ketogluconate, while an alteration of gluconate uptake and conversion to 2-ketogluconate could be excluded. Conclusions: The study illustrates that the recruitment of a pleiotropic mutation, whose effects might

  • The Evaluation Method of Advanced and Available Technologies for Energy Conservation and Emissions Reduction of Deinking Process%废纸脱墨工艺节能减排先进适用技术评估方法研究

    Institute of Scientific and Technical Information of China (English)

    赵吝加; 曾维华; 许乃中; 温宗国

    2012-01-01

    Currently, domestic environmental technology evaluation methods are mainly based on experts' qualitative judgment and lack comprehensive evaluation methods. By setting an indicator system for energy conservation and emissions reduction, determining the indicator weights, and constructing the evaluation factor set and its membership functions, an evaluation method of advanced and available technologies for energy conservation and emissions reduction in the deinking process was established, based on AHP and fuzzy comprehensive evaluation. Using this evaluation method, the flotation method can be picked out as an advanced and appropriate technology for energy conservation and emissions reduction in the deinking process from three technologies: washing, flotation-washing, and flotation. This evaluation method provides a basic method for decision making on deinking technology evaluation and selection. [Translated from Chinese:] Given that domestic environmental technology assessment is mostly qualitative and lacks comprehensive evaluation methods, a combined qualitative and quantitative evaluation method for advanced and applicable energy-saving and emission-reduction technologies for the waste paper deinking process was established, based on the analytic hierarchy process and fuzzy comprehensive evaluation, by setting up an indicator system, determining indicator weights, and constructing the evaluation factor set and its membership functions. Applying the method to the three candidate technologies (washing, flotation, and flotation-washing deinking), flotation deinking was selected as the advanced and applicable energy-saving and emission-reduction technology to be promoted; of the remaining two, washing deinking outperformed flotation-washing deinking.

  • The return of the variance: intraspecific variability in community ecology.

    Science.gov (United States)

    Violle, Cyrille; Enquist, Brian J; McGill, Brian J; Jiang, Lin; Albert, Cécile H; Hulshof, Catherine; Jung, Vincent; Messier, Julie

    2012-04-01

    Despite being recognized decades ago as a promoter of diversity and a condition for local coexistence, intraspecific variance has been neglected over time in community ecology. Recently, there has been a new emphasis on intraspecific variability. Indeed, recent developments in trait-based community ecology have underlined the need to integrate variation at both the intraspecific as well as the interspecific level. We introduce new T-statistics ('T' for trait), based on the comparison of intraspecific and interspecific variances of functional traits across organizational levels, to operationally incorporate intraspecific variability into community ecology theory. We show that a focus on the distribution of traits at local and regional scales combined with original analytical tools can provide unique insights into the primary forces structuring communities.

  • PORTFOLIO COMPOSITION WITH MINIMUM VARIANCE: COMPARISON WITH MARKET BENCHMARKS

    Directory of Open Access Journals (Sweden)

    Daniel Menezes Cavalcante

    2016-07-01

    Portfolio optimization strategies are advocated as being able to allow the composition of stock portfolios that provide returns above market benchmarks. This study aims to determine whether, in fact, portfolios based on the minimum variance strategy, optimized by Modern Portfolio Theory, are able to achieve earnings above market benchmarks in Brazil. Time series of 36 securities traded on the BM&FBOVESPA were analyzed over a long period (1999-2012), with sample windows of 12, 36, 60 and 120 monthly observations. The results indicated that the minimum variance portfolio's performance is superior to the market benchmarks (CDI and IBOVESPA) in terms of return and risk-adjusted return, especially over medium- and long-term investment horizons.
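
    A minimal sketch of the global minimum-variance weights such studies optimize (the unconstrained closed form; the four assets and their returns below are synthetic stand-ins, not the BM&FBOVESPA series):

    ```python
    import numpy as np

    # Synthetic daily returns for four hypothetical assets (rows = days).
    rng = np.random.default_rng(42)
    returns = rng.normal(0.0005, 0.02, size=(250, 4))

    sigma = np.cov(returns, rowvar=False)   # sample covariance matrix
    ones = np.ones(sigma.shape[0])

    # Closed-form minimum-variance weights: w = Sigma^-1 1 / (1' Sigma^-1 1)
    w = np.linalg.solve(sigma, ones)
    w /= w.sum()

    print("weights:", np.round(w, 3))
    print("portfolio variance:", w @ sigma @ w)
    ```

    In practice, short-sale constraints or rolling estimation windows (such as the 12- to 120-month windows used in the paper) change the weights, but the unconstrained form above is the usual starting point.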

  • Optimization of radio astronomical observations using Allan variance measurements

    CERN Document Server

    Schieder, R

    2001-01-01

    Stability tests based on the Allan variance method have become a standard procedure for the evaluation of the quality of radio-astronomical instrumentation. They are very simple and simulate the situation when detecting weak signals buried in large noise fluctuations. For the special conditions during observations, an outline of the basic properties of the Allan variance is given, and some guidelines on how to interpret the results of the measurements are presented. Based on a rather simple mathematical treatment, clear rules for observations in "Position-Switch", "Beam-" or "Frequency-Switch", "On-The-Fly-" and "Raster-Mapping" mode are derived. Also, a simple "rule of thumb" for an estimate of the optimum timing for the observations is found. The analysis leads to a conclusive strategy for planning radio-astronomical observations. Particularly for air- and space-borne observatories it is very important to determine how the extremely precious observing time can be used with maximum efficiency. The...
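
    A minimal sketch of the underlying computation (non-overlapping Allan variance of an evenly sampled signal; function and parameter names are my own, not from the paper):

    ```python
    import numpy as np

    def allan_variance(x, m):
        """Non-overlapping Allan variance at averaging factor m.

        Averages the series in consecutive blocks of m samples and returns
        half the mean squared difference of adjacent block averages.
        """
        n_blocks = len(x) // m
        blocks = np.asarray(x[: n_blocks * m]).reshape(n_blocks, m).mean(axis=1)
        return 0.5 * np.mean(np.diff(blocks) ** 2)

    # For white noise the Allan variance should fall off roughly as 1/m;
    # drifts and 1/f noise make it flatten or rise at large m.
    rng = np.random.default_rng(1)
    signal = rng.normal(size=100_000)
    for m in (1, 10, 100, 1000):
        print(m, allan_variance(signal, m))
    ```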

  • Validation technique using mean and variance of kriging model

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ho Sung; Jung, Jae Jun; Lee, Tae Hee [Hanyang Univ., Seoul (Korea, Republic of)

    2007-07-01

    Rigorously validating the accuracy of a metamodel is an important research area in metamodel techniques. A leave-k-out cross-validation technique not only requires considerable computational cost but also cannot measure quantitatively the fidelity of the metamodel. Recently, the average validation technique has been proposed. However, the average validation criterion may stop a sampling process prematurely even if the kriging model is still inaccurate. In this research, we propose a new validation technique using the average and the variance of the response during a sequential sampling method, such as maximum entropy sampling. The proposed validation technique is more efficient and accurate than the cross-validation technique, because it explicitly integrates the kriging model to achieve an accurate average and variance, rather than relying on numerical integration. The proposed validation technique shows a trend similar to the root mean squared error, such that it can be used as a stop criterion for sequential sampling.

  • Explaining the Prevalence, Scaling and Variance of Urban Phenomena

    CERN Document Server

    Gomez-Lievano, Andres; Hausmann, Ricardo

    2016-01-01

    The prevalence of many urban phenomena changes systematically with population size. We propose a theory that unifies models of economic complexity and cultural evolution to derive urban scaling. The theory accounts for the difference in scaling exponents and average prevalence across phenomena, as well as the difference in the variance within phenomena across cities of similar size. The central ideas are that a number of necessary complementary factors must be simultaneously present for a phenomenon to occur, and that the diversity of factors is logarithmically related to population size. The model reveals that phenomena that require more factors will be less prevalent, scale more superlinearly and show larger variance across cities of similar size. The theory applies to data on education, employment, innovation, disease and crime, and it entails the ability to predict the prevalence of a phenomenon across cities, given information about the prevalence in a single city.

  • Convergence of Recursive Identification for ARMAX Process with Increasing Variances

    Institute of Scientific and Technical Information of China (English)

    JIN Ya; LUO Guiming

    2007-01-01

    The autoregressive moving average exogenous (ARMAX) model is commonly adopted for describing linear stochastic systems driven by colored noise. The model is a finite mixture with an ARMA component and external inputs. In this paper we focus on parameter estimation of the ARMAX model. Classical modeling methods are usually based on the assumption that the driving noise in the moving average (MA) part has bounded variance, while in the model considered here the variance of the noise may increase by a power of log n. The plant parameters are identified by the recursive stochastic gradient algorithm. The diminishing excitation technique and some results of martingale difference theory are adopted in order to prove the convergence of the identification. Finally, some simulations are given to show the theoretical results.
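
    For reference, the ARMAX structure in question has the standard form (operator notation mine):

    $$ A(z)\,y_t = B(z)\,u_t + C(z)\,w_t, $$

    where $y_t$ is the output, $u_t$ the exogenous input, $w_t$ the MA-driving noise (here allowed to have variance growing like a power of $\log n$), and $A$, $B$, $C$ are polynomials in the backward-shift operator $z$.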

  • Sample variance and Lyman-alpha forest transmission statistics

    CERN Document Server

    Rollinde, Emmanuel; Schaye, Joop; Pâris, Isabelle; Petitjean, Patrick

    2012-01-01

    We compare the observed probability distribution function of the transmission in the H I Lyman-alpha forest, measured from the UVES 'Large Programme' sample at redshifts z=[2,2.5,3], to results from the GIMIC cosmological simulations. Our measured values for the mean transmission and its PDF are in good agreement with published results. Errors on statistics measured from high-resolution data are typically estimated using bootstrap or jack-knife resampling techniques after splitting the spectra into chunks. We demonstrate that these methods tend to underestimate the sample variance unless the chunk size is much larger than is commonly the case. We therefore estimate the sample variance from the simulations. We conclude that observed and simulated transmission statistics are in good agreement; in particular, we do not require the temperature-density relation to be 'inverted'.

  • Response variance in functional maps: neural darwinism revisited.

    Science.gov (United States)

    Takahashi, Hirokazu; Yokota, Ryo; Kanzaki, Ryohei

    2013-01-01

    The mechanisms by which functional maps and map plasticity contribute to cortical computation remain controversial. Recent studies have revisited the theory of neural Darwinism to interpret the learning-induced map plasticity and neuronal heterogeneity observed in the cortex. Here, we hypothesize that the Darwinian principle provides a substrate to explain the relationship between neuron heterogeneity and cortical functional maps. We demonstrate in the rat auditory cortex that the degree of response variance is closely correlated with the size of its representational area. Further, we show that the response variance within a given population is altered through training. These results suggest that larger representational areas may help to accommodate heterogeneous populations of neurons. Thus, functional maps and map plasticity are likely to play essential roles in Darwinian computation, serving as effective, but not absolutely necessary, structures to generate diverse response properties within a neural population.

  • Multivariate Variance Targeting in the BEKK-GARCH Model

    DEFF Research Database (Denmark)

    Pedersen, Rasmus Søndergaard; Rahbek, Anders

    This paper considers asymptotic inference in the multivariate BEKK model based on (co-)variance targeting (VT). By definition the VT estimator is a two-step estimator and the theory presented is based on expansions of the modified likelihood function, or estimating function, corresponding to these two steps. Strong consistency is established under weak moment conditions, while sixth order moment restrictions are imposed to establish asymptotic normality. Included simulations indicate that the multivariately induced higher-order moment constraints are indeed necessary.

  • Fidelity between Gaussian mixed states with quantum state quadrature variances

    Science.gov (United States)

    Hai-Long, Zhang; Chun, Zhou; Jian-Hong, Shi; Wan-Su, Bao

    2016-04-01

    In this paper, starting from the original definition of fidelity for a pure state, we first give a well-defined expansion of fidelity between two Gaussian mixed states. It is related to the variances of the output and input states in quantum information processing, and is convenient for quantifying quantum teleportation (quantum cloning) experiments since the variances of the input (output) state are measurable. Furthermore, we conclude that the fidelity of a pure input state is smaller than the fidelity of a mixed input state in the same quantum information processing. Project supported by the National Basic Research Program of China (Grant No. 2013CB338002) and the Foundation of Science and Technology on Information Assurance Laboratory (Grant No. KJ-14-001).

  • Response variance in functional maps: neural darwinism revisited.

    Directory of Open Access Journals (Sweden)

    Hirokazu Takahashi

    The mechanisms by which functional maps and map plasticity contribute to cortical computation remain controversial. Recent studies have revisited the theory of neural Darwinism to interpret the learning-induced map plasticity and neuronal heterogeneity observed in the cortex. Here, we hypothesize that the Darwinian principle provides a substrate to explain the relationship between neuron heterogeneity and cortical functional maps. We demonstrate in the rat auditory cortex that the degree of response variance is closely correlated with the size of its representational area. Further, we show that the response variance within a given population is altered through training. These results suggest that larger representational areas may help to accommodate heterogeneous populations of neurons. Thus, functional maps and map plasticity are likely to play essential roles in Darwinian computation, serving as effective, but not absolutely necessary, structures to generate diverse response properties within a neural population.

  • Climate variance influence on the non-stationary plankton dynamics.

    Science.gov (United States)

    Molinero, Juan Carlos; Reygondeau, Gabriel; Bonnet, Delphine

    2013-08-01

    We examined plankton responses to climate variance by using high temporal resolution data from 1988 to 2007 in the Western English Channel. Climate variability modified both the magnitude and length of the seasonal signal of sea surface temperature, as well as the timing and depth of the thermocline. These changes permeated the pelagic system yielding conspicuous modifications in the phenology of autotroph communities and zooplankton. The climate variance envelope, thus far little considered in climate-plankton studies, is closely coupled with the non-stationary dynamics of plankton, and sheds light on impending ecological shifts and plankton structural changes. Our study calls for the integration of the non-stationary relationship between climate and plankton in prognostic models on the productivity of marine ecosystems.

  • Analysis of variance in spectroscopic imaging data from human tissues.

    Science.gov (United States)

    Kwak, Jin Tae; Reddy, Rohith; Sinha, Saurabh; Bhargava, Rohit

    2012-01-17

    The analysis of cell types and disease using Fourier transform infrared (FT-IR) spectroscopic imaging is promising. The approach lacks an appreciation of the limits of performance for the technology, however, which limits both researcher efforts in improving the approach and acceptance by practitioners. One factor limiting performance is the variance in data arising from biological diversity, measurement noise or other sources. Here we identify the sources of variation by first employing a high-throughput sampling platform of tissue microarrays (TMAs) to record a sufficiently large and diverse set of data. Next, a comprehensive set of analysis of variance (ANOVA) models is employed to analyze the data. Estimating the portions of explained variation, we quantify the primary sources of variation, find the most discriminating spectral metrics, and recognize the aspects of the technology to improve. The study provides a framework for the development of protocols for clinical translation and provides guidelines to design statistically valid studies in the spectroscopic analysis of tissue.

  • Variable variance Preisach model for multilayers with perpendicular magnetic anisotropy

    Science.gov (United States)

    Franco, A. F.; Gonzalez-Fuentes, C.; Morales, R.; Ross, C. A.; Dumas, R.; Åkerman, J.; Garcia, C.

    2016-08-01

    We present a variable variance Preisach model that fully accounts for the different magnetization processes of a multilayer structure with perpendicular magnetic anisotropy by adjusting the evolution of the interaction variance as the magnetization changes. We successfully compare, in a quantitative manner, the results obtained with this model to experimental hysteresis loops of several [CoFeB/Pd]n multilayers. The effect of the number of repetitions and the thicknesses of the CoFeB and Pd layers on the magnetization reversal of the multilayer structure is studied, and it is found that many of the observed phenomena can be attributed to an increase of the magnetostatic interactions and a subsequent decrease of the size of the magnetic domains. Increasing the CoFeB thickness leads to the disappearance of the perpendicular anisotropy, and a minimum thickness of the Pd layer is necessary to achieve an out-of-plane magnetization.

  • Automated Extraction of Archaeological Traces by a Modified Variance Analysis

    Directory of Open Access Journals (Sweden)

    Tiziana D'Orazio

    2015-03-01

    This paper considers the problem of detecting archaeological traces in digital aerial images by analyzing the pixel variance over regions around selected points. In order to decide if a point belongs to an archaeological trace or not, its surrounding regions are considered. The one-way ANalysis Of VAriance (ANOVA) is applied several times to detect the differences among these regions; in particular, the expected shape of the mark to be detected is used in each region. Furthermore, an effect size parameter is defined by comparing the statistics of these regions with the statistics of the entire population in order to measure how strongly the trace is appreciable. Experiments on synthetic and real images demonstrate the effectiveness of the proposed approach with respect to some state-of-the-art methodologies.

    1. Analysis of Variance in the Modern Design of Experiments

      Science.gov (United States)

      Deloach, Richard

      2010-01-01

      This paper is a tutorial introduction to the analysis of variance (ANOVA), intended as a reference for aerospace researchers who are being introduced to the analytical methods of the Modern Design of Experiments (MDOE), or who may have other opportunities to apply this method. One-way and two-way fixed-effects ANOVA, as well as random effects ANOVA, are illustrated in practical terms that will be familiar to most practicing aerospace researchers.
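
      As a concrete companion to the topic (synthetic data and the generic formulas, not the paper's own examples), the one-way fixed-effects F statistic decomposes the total variation into between-group and within-group sums of squares:

      ```python
      import numpy as np
      from scipy import stats

      # Three treatment groups with synthetic responses.
      rng = np.random.default_rng(7)
      groups = [rng.normal(mu, 1.0, size=20) for mu in (5.0, 5.5, 6.0)]

      grand = np.concatenate(groups).mean()
      ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
      ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
      df_b = len(groups) - 1                              # between-group df
      df_w = sum(len(g) for g in groups) - len(groups)    # within-group df

      F = (ss_between / df_b) / (ss_within / df_w)
      print("F =", F)
      print("scipy:", stats.f_oneway(*groups).statistic)  # cross-check, should match
      ```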

    2. VARIANCE OF NONLINEAR PHASE NOISE IN FIBER-OPTIC SYSTEM

      OpenAIRE

      RANJU KANWAR; SAMEKSHA BHASKAR

      2013-01-01

      In a communication system, the noise process must be known in order to compute the system performance. Nonlinear effects act as a strong perturbation in long-haul systems. This perturbation affects the signal when it interacts with amplitude noise, and results in random motion of the phase of the signal. Based on perturbation theory, the variance of nonlinear phase noise contaminated by both self- and cross-phase modulation is derived analytically for a phase-shift-keying system. Through th...

    3. Recombining binomial tree for constant elasticity of variance process

      OpenAIRE

      Hi Jun Choe; Jeong Ho Chu; So Jeong Shin

      2014-01-01

      The theme of this paper is a recombining binomial tree to price American put options when the underlying stock follows a constant elasticity of variance (CEV) process. Recombining nodes of the binomial tree are decided from a finite difference scheme to emulate the CEV process, and the tree has linear complexity. The asymptotic envelope of the boundary of the tree is also derived from the differential equation. Conducting numerical experiments, we confirm the convergence and accuracy of the pricing by ou...
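
      For orientation, the CEV dynamics the tree must emulate are usually written as (one standard form; conventions for the exponent vary):

      $$ dS_t=\mu S_t\,dt+\sigma S_t^{\beta}\,dW_t, $$

      where $\beta=1$ recovers geometric Brownian motion and $\beta<1$ makes local volatility rise as the price falls.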

    4. Variance optimal sampling based estimation of subset sums

      CERN Document Server

      Cohen, Edith; Kaplan, Haim; Lund, Carsten; Thorup, Mikkel

      2008-01-01

      From a high volume stream of weighted items, we want to maintain a generic sample of a certain limited size $k$ that we can later use to estimate the total weight of arbitrary subsets. This is the classic context of on-line reservoir sampling, thinking of the generic sample as a reservoir. We present a reservoir sampling scheme providing variance optimal estimation of subset sums. More precisely, if we have seen $n$ items of the stream, then for any subset size $m$, our scheme based on $k$ samples minimizes the average variance over all subsets of size $m$. In fact, the optimality is against any off-line sampling scheme tailored for the concrete set of items seen: no off-line scheme based on $k$ samples can perform better than our on-line scheme when it comes to average variance over any subset size. Our scheme has no positive covariances between any pair of item estimates. Also, our scheme can handle each new item of the stream in $O(\\log k)$ time, which is optimal even on the word RAM.

    5. Dynamic Programming Using Polar Variance for Image Segmentation.

      Science.gov (United States)

      Rosado-Toro, Jose A; Altbach, Maria I; Rodriguez, Jeffrey J

      2016-10-06

      When using polar dynamic programming (PDP) for image segmentation, the object size is one of the main features used. This is because, if size is left unconstrained, the final segmentation may include high-gradient regions that are not associated with the object. In this paper, we propose a new feature, polar variance, which allows the algorithm to segment objects of different sizes without the need for training data. The polar variance is the variance in a polar region between a user-selected origin and a pixel we want to analyze. We also incorporate a new technique that allows PDP to segment complex shapes by finding low-gradient regions and growing them. The experimental analysis consisted of comparing our technique with different active-contour segmentation techniques on a series of tests: robustness to additive Gaussian noise, segmentation accuracy with different grayscale images and, finally, robustness to algorithm-specific parameters. Experimental results show that our technique performs favorably when compared to other segmentation techniques.

    6. Relationship between Allan variances and Kalman Filter parameters

      Science.gov (United States)

      Vandierendonck, A. J.; Mcgraw, J. B.; Brown, R. G.

      1984-01-01

      A relationship was constructed between the Allan variance parameters (h_2, h_1, h_0, h_-1 and h_-2) and a Kalman filter model that can be used to estimate and predict clock phase, frequency and frequency drift. To begin, the meaning of those Allan variance parameters and how they are arrived at for a given frequency source is reviewed. Although a subset of these parameters is arrived at by measuring phase as a function of time rather than as a spectral density, they all represent phase noise spectral density coefficients, though not necessarily those of a rational spectral density. The phase noise spectral density is then transformed into a time-domain covariance model which can be used to derive the Kalman filter model parameters. Simulation results of that covariance model are presented and compared to clock uncertainties predicted by the Allan variance parameters. A two-state Kalman filter model is then derived and the significance of each state is explained.
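
      A common two-state clock model of the kind described (phase and frequency states; notation mine, not necessarily the authors') is

      $$ \begin{pmatrix}\phi_{k+1}\\ f_{k+1}\end{pmatrix} = \begin{pmatrix}1 & \tau\\ 0 & 1\end{pmatrix} \begin{pmatrix}\phi_{k}\\ f_{k}\end{pmatrix} + w_k, \qquad w_k\sim(0,\,Q(\tau)), $$

      where $\tau$ is the update interval and the process-noise covariance $Q(\tau)$ is assembled from the Allan variance coefficients, which is the mapping such work constructs.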

    7. Measuring primordial non-gaussianity without cosmic variance

      CERN Document Server

      Seljak, Uros

      2008-01-01

      Non-gaussianity in the initial conditions of the universe is one of the most powerful mechanisms to discriminate among the competing theories of the early universe. Measurements using bispectrum of cosmic microwave background anisotropies are limited by the cosmic variance, i.e. available number of modes. Recent work has emphasized the possibility to probe non-gaussianity of local type using the scale dependence of large scale bias from highly biased tracers of large scale structure. However, this power spectrum method is also limited by cosmic variance, finite number of structures on the largest scales, and by the partial degeneracy with other cosmological parameters that can mimic the same effect. Here we propose an alternative method that solves both of these problems. It is based on the idea that on large scales halos are biased, but not stochastic, tracers of dark matter: by correlating a highly biased tracer of large scale structure against an unbiased tracer one eliminates the cosmic variance error, wh...

    8. Genetic variance of tolerance and the toxicant threshold model.

      Science.gov (United States)

      Tanaka, Yoshinari; Mano, Hiroyuki; Tatsuta, Haruki

      2012-04-01

      A statistical genetics method is presented for estimating the genetic variance (heritability) of tolerance to pollutants on the basis of a standard acute toxicity test conducted on several isofemale lines of cladoceran species. To analyze the genetic variance of tolerance in the case when the response is measured as a few discrete states (quantal endpoints), the authors attempted to apply the threshold character model in quantitative genetics to the threshold model separately developed in ecotoxicology. The integrated threshold model (toxicant threshold model) assumes that the response of a particular individual occurs at a threshold toxicant concentration and that the individual tolerance characterized by the individual's threshold value is determined by genetic and environmental factors. As a case study, the heritability of tolerance to p-nonylphenol in the cladoceran species Daphnia galeata was estimated by using the maximum likelihood method and nested analysis of variance (ANOVA). Broad-sense heritability was estimated to be 0.199 ± 0.112 by the maximum likelihood method and 0.184 ± 0.089 by ANOVA; both results implied that the species examined had the potential to acquire tolerance to this substance by evolutionary change.
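
      For reference, the broad-sense heritability being estimated is the standard variance ratio

      $$ H^2=\frac{\sigma_G^2}{\sigma_G^2+\sigma_E^2}, $$

      where $\sigma_G^2$ and $\sigma_E^2$ are the genetic (between-isofemale-line) and environmental (within-line) variance components of the threshold tolerance.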

    9. Variance Analysis and Adaptive Sampling for Indirect Light Path Reuse

      Institute of Scientific and Technical Information of China (English)

      Hao Qin; Xin Sun; Jun Yan; Qi-Ming Hou; Zhong Ren; Kun Zhou

      2016-01-01

      In this paper, we study the estimation variance of a set of global illumination algorithms based on indirect light path reuse. These algorithms usually contain two passes — in the first pass, a small number of indirect light samples are generated and evaluated, and they are then reused by a large number of reconstruction samples in the second pass. Our analysis shows that the covariance of the reconstruction samples dominates the estimation variance under high reconstruction rates and increasing the reconstruction rate cannot effectively reduce the covariance. We also find that the covariance represents to what degree the indirect light samples are reused during reconstruction. This analysis motivates us to design a heuristic approximating the covariance as well as an adaptive sampling scheme based on this heuristic to reduce the rendering variance. We validate our analysis and adaptive sampling scheme in the indirect light field reconstruction algorithm and the axis-aligned filtering algorithm for indirect lighting. Experiments are in accordance with our analysis and show that rendering artifacts can be greatly reduced at a similar computational cost.

    10. Variance in brain volume with advancing age: implications for defining the limits of normality.

      Directory of Open Access Journals (Sweden)

      David Alexander Dickie

      Statistical models of normal ageing brain tissue volumes may support earlier diagnosis of increasingly common, yet still fatal, neurodegenerative diseases. For example, the statistically defined distribution of normal ageing brain tissue volumes may be used as a reference to assess patient volumes. To date, such models were often derived from mean values which were assumed to represent the distributions and boundaries, i.e. percentile ranks, of brain tissue volume. Since it was previously unknown, the objective of the present study was to determine if this assumption was robust, i.e. whether regression models derived from mean values accurately represented the distributions and boundaries of brain tissue volume at older ages. We acquired T1-weighted magnetic resonance (MR) brain images of 227 normal and 219 Alzheimer's disease (AD) subjects (aged 55-89 years) from publicly available databanks. Using nonlinear regression within both samples, we compared mean and percentile rank estimates of whole brain tissue volume by age. In both the normal and AD samples, mean regression estimates of brain tissue volume often did not accurately represent percentile rank estimates (errors = -74% to 75%). In the normal sample, mean estimates generally underestimated differences in brain volume at percentile ranks below the mean. Conversely, in the AD sample, mean estimates generally underestimated differences in brain volume at percentile ranks above the mean. Differences between ages at the 5th percentile rank of normal subjects were ~39% greater than mean differences in the AD subjects. While more data are required to make true population inferences, our results indicate that mean regression estimates may not accurately represent the distributions of ageing brain tissue volumes. This suggests that percentile rank estimates will be required to robustly define the limits of brain tissue volume in normal ageing and neurodegenerative disease.
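
      A minimal Python sketch of the mean-versus-percentile-rank contrast, using synthetic data in which the spread of volumes widens with age (an assumption for illustration only; the study's data and nonlinear models are not reproduced):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Synthetic illustration: volume declines with age and the spread widens at
# older ages, so mean and low-percentile trends diverge.
age = rng.uniform(55, 89, 400)
vol = 1200 - 4.0 * (age - 55) + rng.normal(0, 20 + 1.5 * (age - 55))
df = pd.DataFrame({"age": age, "vol": vol})

mean_fit = smf.ols("vol ~ age", df).fit()
q05_fit = smf.quantreg("vol ~ age", df).fit(q=0.05)   # 5th percentile rank
print("mean slope:          ", mean_fit.params["age"])
print("5th-percentile slope:", q05_fit.params["age"])  # steeper than mean
```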

    11. A proxy for variance in dense matching over homogeneous terrain

      Science.gov (United States)

      Altena, Bas; Cockx, Liesbet; Goedemé, Toon

      2014-05-01

      Automation in photogrammetry and avionics has brought highly autonomous UAV mapping solutions to the market. These systems have great potential for geophysical research, due to their mobility and ease of operation. Flight planning can be done on site and orientation parameters are estimated automatically. However, one major drawback is still present: if contrast is lacking, stereoscopy fails. Consequently, topographic information cannot be obtained precisely through photogrammetry for areas with low contrast. Even though more robustness is added to the estimation through multi-view geometry, a precise product is still lacking. For the greater part, interpolation is applied over these regions, where the estimation is constrained by uniqueness, its epipolar line and smoothness. Consequently, digital surface models are generated with an estimate of the topography, without holes but also without an indication of its variance. Every dense matching algorithm is based on a similarity measure. Our methodology uses this property to support the idea that if only noise is present, no correspondence can be detected. Therefore, the noise level is estimated with respect to the intensity signal of the topography (SNR), and this ratio serves as a quality indicator for the automatically generated product. To demonstrate this variance indicator, two different case studies were elaborated. The first study is situated at an open sand mine near the village of Kiezegem, Belgium. Two different UAV systems flew over the site. One system had automatic intensity regulation, which resulted in low contrast over the sandy interior of the mine. That dataset was used to identify the weak estimations of the topography and was compared with the data from the other UAV flight. In the second study a flight campaign with the X100 system was conducted along the coast near Wenduine, Belgium. The obtained images were processed through structure-from-motion software. Although the beach had a very low

    12. Genetically controlled environmental variance for sternopleural bristles in Drosophila melanogaster - an experimental test of a heterogeneous variance model

      DEFF Research Database (Denmark)

      Sørensen, Anders Christian; Kristensen, Torsten Nygård; Loeschcke, Volker

      2007-01-01

      quantitative genetics model based on the infinitesimal model, and an extension of this model. In the extended model it is assumed that each individual has its own environmental variance and that this heterogeneity of variance has a genetic component. The heterogeneous variance model was favoured by the data, indicating that the environmental variance is partly under genetic control. If this heterogeneous variance model also applies to livestock, it would be possible to select for animals with a higher uniformity of products across environmental regimes. Also for evolutionary biology the results are of interest...

    13. Female Scarcity Reduces Women's Marital Ages and Increases Variance in Men's Marital Ages

      Directory of Open Access Journals (Sweden)

      Daniel J. Kruger

      2010-07-01

      When women are scarce in a population relative to men, they have greater bargaining power in romantic relationships and thus may be able to secure male commitment at earlier ages. Male motivation for long-term relationship commitment may also be higher, in conjunction with the motivation to secure a prospective partner before another male retains her. However, men may also need to acquire greater social status and resources to be considered marriageable. This could increase the variance in male marital age, as well as the average male marital age. We calculated the Operational Sex Ratio and the means, medians, and standard deviations in marital ages for women and men for the 50 largest Metropolitan Statistical Areas in the United States with 2000 U.S. Census data. As predicted, where women are scarce they marry earlier on average. However, there was no significant relationship with mean male marital ages. The variance in male marital age increased with higher female scarcity, contrasting with a non-significant inverse trend for female marital age variation. These findings advance the understanding of the relationship between the OSR and marital patterns. We believe that these results are best accounted for by sex-specific attributes of reproductive value and associated mate selection criteria, demonstrating the power of an evolutionary framework for understanding human relationships and demographic patterns.

    14. Variance as a Leading Indicator of Regime Shift in Ecosystem Services

      Directory of Open Access Journals (Sweden)

      William A. Brock

      2006-12-01

      Many environmental conflicts involve pollutants such as greenhouse gas emissions that are dispersed through space and cause losses of ecosystem services. As pollutant emissions rise in one place, a spatial cascade of declining ecosystem services can spread across a larger landscape because of the dispersion of the pollutant. This paper considers the problem of anticipating such spatial regime shifts by monitoring time series of the pollutant or associated ecosystem services. Using such data, it is possible to construct indicators that rise sharply in advance of regime shifts. Specifically, the maximum eigenvalue of the variance-covariance matrix of the multivariate time series of pollutants and ecosystem services rises prior to the regime shift. No specific knowledge of the mechanisms underlying the regime shift is needed to construct the indicator. Such leading indicators of regime shifts could provide useful signals to management agencies or to investors in ecosystem service markets.
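
      The indicator itself is simple to compute; a hedged Python sketch on synthetic two-variable data (invented dynamics, not the paper's model) follows.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative two-variable system (pollutant, ecosystem service) whose
# fluctuations grow as a hypothetical regime shift is approached.
T = 1000
noise_scale = np.linspace(0.5, 3.0, T)[:, None]        # rising variability
x = rng.normal(0, 1, (T, 2)) * noise_scale
x[:, 1] += 0.6 * x[:, 0]                               # correlated series

def max_eig_indicator(series, window=100):
    """Leading indicator: top eigenvalue of the rolling covariance matrix."""
    out = np.full(len(series), np.nan)
    for t in range(window, len(series)):
        cov = np.cov(series[t - window:t].T)
        out[t] = np.linalg.eigvalsh(cov)[-1]           # largest eigenvalue
    return out

ind = max_eig_indicator(x)
print("indicator early vs late:", ind[150], ind[950])  # rises over time
```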

    15. Regression between earthquake magnitudes having errors with known variances

      Science.gov (United States)

      Pujol, Jose

      2016-07-01

      Recent publications on the regression between earthquake magnitudes assume that both magnitudes are affected by error and that only the ratio of error variances is known. If X and Y represent observed magnitudes, and x and y represent the corresponding theoretical values, the problem is to find the a and b of the best-fit line y = ax + b. This problem has a closed solution only for homoscedastic errors (their variances are all equal for each of the two variables). The published solution was derived using a method that cannot provide a sum of squares of residuals. Therefore, it is not possible to compare the goodness of fit for different pairs of magnitudes. Furthermore, the method does not provide expressions for the estimated x and y. The least-squares method introduced here does not have these drawbacks. The two methods of solution result in the same equations for a and b. General properties of a discussed in the literature but not proved, or proved only for particular cases, are derived here. A comparison of different expressions for the variances of a and b is provided. The paper also considers the statistical aspects of the ongoing debate regarding the prediction of y given X. Analysis of actual data from the literature shows that the new approach produces an average improvement of less than 0.1 magnitude units over the standard approach when applied to Mw vs. mb and Mw vs. MS regressions. This improvement is minor, within the typical error of Mw. Moreover, a test subset of 100 predicted magnitudes shows that the new approach results in magnitudes closer to the theoretically true magnitudes for only 65% of them. For the remaining 35%, the standard approach produces closer values. Therefore, the new approach does not always give the most accurate magnitude estimates.
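
      One standard way to fit a line when both coordinates carry known error variances is York's (1966) iterative solution; the sketch below uses it as a stand-in, since the record does not give the paper's exact estimator, with made-up magnitudes and error levels.

```python
import numpy as np

def york_fit(X, Y, sx, sy, b0=1.0, tol=1e-12, max_iter=100):
    """Straight-line fit y = a + b*x with known error variances on both
    axes (York 1966, zero error correlation). Returns (a, b)."""
    wx, wy = 1.0 / sx**2, 1.0 / sy**2      # weights = inverse variances
    b = b0
    for _ in range(max_iter):
        W = wx * wy / (wx + b**2 * wy)
        Xb, Yb = np.average(X, weights=W), np.average(Y, weights=W)
        U, V = X - Xb, Y - Yb
        beta = W * (U / wy + b * V / wx)
        b_new = np.sum(W * beta * V) / np.sum(W * beta * U)
        if abs(b_new - b) < tol:
            b = b_new
            break
        b = b_new
    return Yb - b * Xb, b

rng = np.random.default_rng(3)
x_true = rng.uniform(4, 7, 200)                 # "true" magnitudes
y_true = 0.8 * x_true + 1.1
sx, sy = 0.10, 0.15                             # known error std devs
X = x_true + rng.normal(0, sx, 200)
Y = y_true + rng.normal(0, sy, 200)
a, b = york_fit(X, Y, np.full(200, sx), np.full(200, sy))
print(f"recovered line: y = {a:.3f} + {b:.3f} x")
```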

    16. The Reduction of Advanced Military Aircraft Noise

      Science.gov (United States)

      2011-12-01

      ...centerline singularity. A non-matching block interface condition is developed to allow the grids to be greatly refined around the chevrons... This is needed to trigger the unsteadiness of the jet flow. The grids are refined significantly around the jet potential core. The ...advanced CFD technologies and the acoustic analogy. The immersed boundary method with local grid refinement is used to avoid the difficulty in

    17. Two-dimensional finite-element temperature variance analysis

      Science.gov (United States)

      Heuser, J. S.

      1972-01-01

      The finite element method is extended to thermal analysis by forming a variance analysis of temperature results so that the sensitivity of predicted temperatures to uncertainties in input variables is determined. The temperature fields within a finite number of elements are described in terms of the temperatures of vertices, and the variational principle is used to minimize the integral equation describing thermal potential energy. A computer calculation yields the desired solution matrix of predicted temperatures and provides information about initial thermal parameters and their associated errors. Sample calculations show that all predicted temperatures are most affected by temperature values along fixed boundaries; more accurate specification of these temperatures reduces errors in thermal calculations.
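
      The underlying calculation is first-order variance propagation through sensitivities; a hedged Python sketch with an invented 2-temperature, 3-parameter sensitivity matrix (boundary temperatures given the largest sensitivities, echoing the finding above):

```python
import numpy as np

# First-order variance propagation: Var(T) ~ J @ Sigma_p @ J.T, where J
# holds sensitivities dT/dp of predicted temperatures to uncertain inputs p.
# All numbers below are illustrative, not from the paper.
J = np.array([[0.9, 0.2, 0.05],        # dT1/d(boundary T, conductivity, flux)
              [0.7, 0.4, 0.10]])       # dT2/d(...)
Sigma_p = np.diag([4.0, 0.25, 0.01])   # assumed input variances
Sigma_T = J @ Sigma_p @ J.T
print("predicted temperature std devs:", np.sqrt(np.diag(Sigma_T)))
```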

    18. Risk Management - Variance Minimization or Lower Tail Outcome Elimination

      DEFF Research Database (Denmark)

      Aabo, Tom

      2002-01-01

      This paper illustrates the profound difference between a risk management strategy of variance minimization and a risk management strategy of lower tail outcome elimination. Risk managers concerned about the variability of cash flows will tend to center their hedge decisions on their best guess on future cash flows (the budget), while risk managers concerned about costly lower tail outcomes will hedge (considerably) less depending on the level of uncertainty. A risk management strategy of lower tail outcome elimination is in line with theoretical recommendations in a corporate value...

    19. Variance-optimal hedging for processes with stationary independent increments

      DEFF Research Database (Denmark)

      Hubalek, Friedrich; Kallsen, J.; Krawczyk, L.

      We determine the variance-optimal hedge when the logarithm of the underlying price follows a process with stationary independent increments in discrete or continuous time. Although the general solution to this problem is known as backward recursion or backward stochastic differential equation, we show that for this class of processes the optimal endowment and strategy can be expressed more explicitly. The corresponding formulas involve the moment resp. cumulant generating function of the underlying process and a Laplace- or Fourier-type representation of the contingent claim. An example...

    20. Minimum Variance Beamforming for High Frame-Rate Ultrasound Imaging

      DEFF Research Database (Denmark)

      Holfort, Iben Kraglund; Gran, Fredrik; Jensen, Jørgen Arendt

      2007-01-01

      This paper investigates the application of adaptive beamforming in medical ultrasound imaging. A minimum variance (MV) approach for near-field beamforming of broadband data is proposed. The approach is implemented in the frequency domain, and it provides a set of adapted, complex apodization weights for each frequency sub-band. As opposed to the conventional Delay and Sum (DS) beamformer, this approach is dependent on the specific data. The performance of the proposed MV beamformer is tested on simulated synthetic aperture (SA) ultrasound data, obtained using Field II. For the simulations...
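
      The core of a minimum variance (Capon/MVDR) beamformer is the weight formula w = R^-1 a / (a^H R^-1 a), applied per frequency sub-band in the approach above; a narrowband Python sketch on a simulated uniform linear array (the array geometry and scenario are assumptions, not Field II data) follows.

```python
import numpy as np

rng = np.random.default_rng(4)

# Narrowband minimum-variance (Capon/MVDR) weights for a uniform linear array.
M, d_lam = 16, 0.5                     # elements, spacing in wavelengths
theta_sig, theta_intf = 0.0, 25.0      # signal and interferer angles (deg)

def steer(theta_deg):
    phase = 2j * np.pi * d_lam * np.arange(M) * np.sin(np.radians(theta_deg))
    return np.exp(phase)

# Simulated snapshots: desired signal + strong interferer + noise
N = 2000
s = steer(theta_sig)[:, None] * rng.normal(0, 1, (1, N))
i = 5 * steer(theta_intf)[:, None] * rng.normal(0, 1, (1, N))
x = s + i + 0.1 * (rng.normal(0, 1, (M, N)) + 1j * rng.normal(0, 1, (M, N)))

R = x @ x.conj().T / N                 # sample covariance matrix
a = steer(theta_sig)
w = np.linalg.solve(R, a)
w /= a.conj() @ w                      # MVDR: w = R^-1 a / (a^H R^-1 a)
print("gain toward signal:    ", abs(w.conj() @ steer(theta_sig)))   # ~1
print("gain toward interferer:", abs(w.conj() @ steer(theta_intf)))  # ~0
```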

    1. Local orbitals by minimizing powers of the orbital variance

      DEFF Research Database (Denmark)

      Jansik, Branislav; Høst, Stinne; Kristensen, Kasper;

      2011-01-01

      It is demonstrated that a set of local orthonormal Hartree–Fock (HF) molecular orbitals can be obtained for both the occupied and virtual orbital spaces by minimizing powers of the orbital variance using the trust-region algorithm. For a power exponent equal to one, the Boys localization function is obtained. For increasing power exponents, the penalty for delocalized orbitals is increased and smaller maximum orbital spreads are encountered. Calculations on superbenzene, C60, and a fragment of the titin protein show that for a power exponent equal to one, delocalized outlier orbitals may...

    2. A guide to SPSS for analysis of variance

      CERN Document Server

      Levine, Gustav

      2013-01-01

      This book offers examples of programs designed for analysis of variance and related statistical tests of significance that can be run with SPSS. The reader may copy these programs directly, changing only the names or numbers of levels of factors according to individual needs. Ways of altering command specifications to fit situations with larger numbers of factors are discussed and illustrated, as are ways of combining program statements to request a variety of analyses in the same program. The first two chapters provide an introduction to the use of SPSS, Versions 3 and 4. General rules conce

    3. Computing the Expected Value and Variance of Geometric Measures

      DEFF Research Database (Denmark)

      Staals, Frank; Tsirogiannis, Constantinos

      2017-01-01

      points in P. This problem is a crucial part of modern ecological analyses; each point in P represents a species in d-dimensional trait space, and the goal is to compute the statistics of a geometric measure on this trait space, when subsets of species are selected under random processes. We present efficient exact algorithms for computing the mean and variance of several geometric measures when point sets are selected under one of the described random distributions. More specifically, we provide algorithms for the following measures: the bounding box volume, the convex hull volume, the mean pairwise...
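
      A Monte Carlo baseline for the same statistics is easy to sketch and is useful for checking exact algorithms; the point set, subset size and trial count below are arbitrary illustrations.

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(5)

# Sample random k-subsets of the point set and accumulate the measure's
# mean and variance (here: convex hull volume, i.e. area in 2-D).
P = rng.normal(0, 1, (50, 2))          # 50 "species" in a 2-D trait space
k, trials = 10, 2000
vals = np.empty(trials)
for t in range(trials):
    idx = rng.choice(len(P), size=k, replace=False)
    vals[t] = ConvexHull(P[idx]).volume   # .volume is the area in 2-D
print("MC mean:", vals.mean(), " MC variance:", vals.var(ddof=1))
```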

    4. Critical points of multidimensional random Fourier series: Variance estimates

      Science.gov (United States)

      Nicolaescu, Liviu I.

      2016-08-01

      We investigate the number of critical points of a Gaussian random smooth function u_ε on the m-torus T^m := ℝ^m/ℤ^m approximating the Gaussian white noise as ε → 0. Let N(u_ε) denote the number of critical points of u_ε. We prove the existence of constants C, C' such that as ε goes to zero, the expectation of the random variable ε^m N(u_ε) converges to C, while its variance is extremely small and behaves like C' ε^m.

    5. Multivariate variance targeting in the BEKK-GARCH model

      DEFF Research Database (Denmark)

      Pedersen, Rasmus S.; Rahbæk, Anders

      2014-01-01

      This paper considers asymptotic inference in the multivariate BEKK model based on (co-)variance targeting (VT). By definition the VT estimator is a two-step estimator and the theory presented is based on expansions of the modified likelihood function, or estimating function, corresponding to these two steps. Strong consistency is established under weak moment conditions, while sixth-order moment restrictions are imposed to establish asymptotic normality. Included simulations indicate that the multivariately induced higher-order moment constraints are necessary...

    6. Stable limits for sums of dependent infinite variance random variables

      DEFF Research Database (Denmark)

      Bartkiewicz, Katarzyna; Jakubowski, Adam; Mikosch, Thomas;

      2011-01-01

      The aim of this paper is to provide conditions which ensure that the affinely transformed partial sums of a strictly stationary process converge in distribution to an infinite variance stable distribution. Conditions for this convergence to hold are known in the literature. However, most of these results are qualitative in the sense that the parameters of the limit distribution are expressed in terms of some limiting point process. In this paper we will be able to determine the parameters of the limiting stable distribution in terms of some tail characteristics of the underlying stationary...

    7. Generalized Minimum Variance Control for MDOF Structures under Earthquake Excitation

      Directory of Open Access Journals (Sweden)

      Lakhdar Guenfaf

      2016-01-01

      Control of a multi-degree-of-freedom structural system under earthquake excitation is investigated in this paper. The control approach based on the Generalized Minimum Variance (GMV) algorithm is developed and presented. Our approach is a generalization to multivariable systems of the GMV strategy designed initially for single-input-single-output (SISO) systems. Kanai-Tajimi and Clough-Penzien models are used to generate the seismic excitations. These models are computed using the site-specific soil parameters. Simulation tests using a 3DOF structure are performed and show the effectiveness of the control method.

    8. A Mean-Variance Portfolio Optimal Under Utility Pricing

      Directory of Open Access Journals (Sweden)

      Hürlimann Werner

      2006-01-01

      An expected utility model of asset choice, which takes into account asset pricing, is considered. The obtained portfolio selection problem under utility pricing is solved under several assumptions including quadratic utility, exponential utility and multivariate symmetric elliptical returns. The obtained unique solution, called the optimal utility portfolio, is shown to be mean-variance efficient in the classical sense. Various questions, including conditions for complete diversification and the behavior of the optimal portfolio under univariate and multivariate ordering of risks as well as risk-adjusted performance measurement, are discussed.
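
      For concreteness, the classical closed-form special case: the global minimum-variance portfolio w = Σ^-1 1 / (1^T Σ^-1 1), sketched in Python with illustrative inputs (the paper's utility-pricing setting is more general than this).

```python
import numpy as np

# Minimal mean-variance sketch: minimize w' Σ w subject to w' 1 = 1.
# The return vector and covariance matrix below are illustrative.
mu = np.array([0.05, 0.08, 0.12])                 # expected returns
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])            # covariance matrix
ones = np.ones_like(mu)

# Global minimum-variance portfolio: w = Sigma^-1 1 / (1' Sigma^-1 1)
w_gmv = np.linalg.solve(Sigma, ones)
w_gmv /= ones @ w_gmv
print("GMV weights: ", w_gmv.round(3))
print("GMV variance:", w_gmv @ Sigma @ w_gmv)
```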

    9. Infinite Variance in Fermion Quantum Monte Carlo Calculations

      CERN Document Server

      Shi, Hao

      2015-01-01

      For important classes of many-fermion problems, quantum Monte Carlo (QMC) methods allow exact calculations of ground-state and finite-temperature properties, without the sign problem. The list spans condensed matter, nuclear physics, and high-energy physics, including the half-filled repulsive Hubbard model, the spin-balanced atomic Fermi gas, lattice QCD calculations at zero density with Wilson Fermions, and is growing rapidly as a number of problems have been discovered recently to be free of the sign problem. In these situations, QMC calculations are relied upon to provide definitive answers. Their results are instrumental to our ability to understand and compute properties in fundamental models important to multiple sub-areas in quantum physics. It is shown, however, that the most commonly employed algorithms in such situations turn out to have an infinite variance problem. A diverging variance causes the estimated Monte Carlo statistical error bar to be incorrect, which can render the results of the calc...

    10. Deterministic mean-variance-optimal consumption and investment

      DEFF Research Database (Denmark)

      Christiansen, Marcus; Steffensen, Mogens

      2013-01-01

      In dynamic optimal consumption–investment problems one typically aims to find an optimal control from the set of adapted processes. This is also the natural starting point in case of a mean-variance objective. In contrast, we solve the optimization problem with the special feature that the consumption rate and the investment proportion are constrained to be deterministic processes. As a result we get rid of a series of unwanted features of the stochastic solution including diffusive consumption, satisfaction points and consistency problems. Deterministic strategies typically appear in unit-linked life insurance contracts, where the life-cycle investment strategy is age dependent but wealth independent. We explain how optimal deterministic strategies can be found numerically and present an example from life insurance where we compare the optimal solution with suboptimal deterministic strategies...

    11. Replica approach to mean-variance portfolio optimization

      Science.gov (United States)

      Varga-Haszonits, Istvan; Caccioli, Fabio; Kondor, Imre

      2016-12-01

      We consider the problem of mean-variance portfolio optimization for a generic covariance matrix subject to the budget constraint and the constraint for the expected return, with the application of the replica method borrowed from the statistical physics of disordered systems. We find that the replica symmetry of the solution does not need to be assumed, but emerges as the unique solution of the optimization problem. We also check the stability of this solution and find that the eigenvalues of the Hessian are positive for r = N/T < 1, where N is the dimension of the portfolio and T the length of the time series used to estimate the covariance matrix. At the critical point r = 1 a phase transition takes place. The out-of-sample estimation error blows up at this point as 1/(1 - r), independently of the covariance matrix or the expected return, displaying the universality not only of the critical exponent, but also of the critical point. As a conspicuous illustration of the dangers of in-sample estimates, the optimal in-sample variance is found to vanish at the critical point, inversely proportional to the divergent estimation error.
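
      The r = N/T effect is easy to reproduce numerically; the sketch below (i.i.d. unit-variance assets, so the true covariance is the identity) shows the in-sample variance of the estimated minimum-variance portfolio shrinking while its true variance grows as r approaches 1.

```python
import numpy as np

rng = np.random.default_rng(6)

# In-sample vs true variance of the estimated minimum-variance portfolio.
# With a true identity covariance, the true portfolio variance is w @ w.
N, T_values = 50, [500, 200, 100, 75, 60]
for T in T_values:
    X = rng.normal(0, 1, (T, N))            # T observations of N assets
    S = X.T @ X / T                         # sample covariance
    w = np.linalg.solve(S, np.ones(N))
    w /= w.sum()                            # budget constraint w' 1 = 1
    r = N / T
    print(f"r={r:.2f}  in-sample var={w @ S @ w:.4f}"
          f"  true var={w @ w:.4f}  (1-r)={1 - r:.2f}")
```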

    12. Mean-Variance-Validation Technique for Sequential Kriging Metamodels

      Energy Technology Data Exchange (ETDEWEB)

      Lee, Tae Hee; Kim, Ho Sung [Hanyang University, Seoul (Korea, Republic of)

      2010-05-15

      The rigorous validation of the accuracy of metamodels is an important topic in research on metamodel techniques. A leave-k-out cross-validation technique involves a considerably high computational cost, and it cannot be used to measure the fidelity of metamodels. Recently, the mean_0 validation technique has been proposed to quantitatively determine the accuracy of metamodels. However, the use of the mean_0 validation criterion may lead to premature termination of a sampling process even if the kriging model is inaccurate. In this study, we propose a new validation technique based on the mean and variance of the response, evaluated when a sequential sampling method, such as maximum entropy sampling, is used. The proposed validation technique is more efficient and accurate than the leave-k-out cross-validation technique because, instead of performing numerical integration, the kriging model is explicitly integrated to accurately evaluate the mean and variance of the response. The error in the proposed validation technique resembles a root mean squared error, thus it can be used to determine a stop criterion for sequential sampling of metamodels.

    13. Cosmic variance in the nanohertz gravitational wave background

      CERN Document Server

      Roebber, Elinore; Holz, Daniel; Warren, Michael

      2015-01-01

      We use large N-body simulations and empirical scaling relations between dark matter halos, galaxies, and supermassive black holes to estimate the formation rates of supermassive black hole binaries and the resulting low-frequency stochastic gravitational wave background (GWB). We find this GWB to be relatively insensitive (≲ 10%) to cosmological parameters, with only slight variation between WMAP5 and Planck cosmologies. We find that uncertainty in the astrophysical scaling relations changes the amplitude of the GWB by a factor of ~2. Current observational limits are already constraining this predicted range of models. We investigate the Poisson variance in the amplitude of the GWB for randomly-generated populations of supermassive black holes, finding a scatter of order unity per frequency bin below 10 nHz, and increasing to a factor of ~10 near 100 nHz. This variance is a result of the rarity of the most massive binaries, which dominate the signal, and acts as a fundamental uncertainty ...

    14. Worldwide variance in the potential utilization of Gamma Knife radiosurgery.

      Science.gov (United States)

      Hamilton, Travis; Dade Lunsford, L

      2016-12-01

      OBJECTIVE The role of Gamma Knife radiosurgery (GKRS) has expanded worldwide during the past 3 decades. The authors sought to evaluate whether experienced users vary in their estimate of its potential use. METHODS Sixty-six current Gamma Knife users from 24 countries responded to an electronic survey. They estimated the potential role of GKRS for benign and malignant tumors, vascular malformations, and functional disorders. These estimates were compared with published disease epidemiological statistics and the 2014 use reports provided by the Leksell Gamma Knife Society (16,750 cases). RESULTS Respondents reported no significant variation in the estimated use in many conditions for which GKRS is performed: meningiomas, vestibular schwannomas, and arteriovenous malformations. Significant variance in the estimated use of GKRS was noted for pituitary tumors, craniopharyngiomas, and cavernous malformations. For many current indications, the authors found significant variance in GKRS users based in the Americas, Europe, and Asia. Experts estimated that GKRS was used in only 8.5% of the 196,000 eligible cases in 2014. CONCLUSIONS Although there was a general worldwide consensus regarding many major indications for GKRS, significant variability was noted for several more controversial roles. This expert opinion survey also suggested that GKRS is significantly underutilized for many current diagnoses, especially in the Americas. Future studies should be conducted to investigate health care barriers to GKRS for many patients.

    15. Cosmic variance of the galaxy cluster weak lensing signal

      CERN Document Server

      Gruen, D; Becker, M R; Friedrich, O; Mana, A

      2015-01-01

      Intrinsic variations of the projected density profiles of clusters of galaxies at fixed mass are a source of uncertainty for cluster weak lensing. We present a semi-analytical model to account for this effect, based on a combination of variations in halo concentration, ellipticity and orientation, and the presence of correlated haloes. We calibrate the parameters of our model at the 10 per cent level to match the empirical cosmic variance of cluster profiles at M_200m=10^14...10^15 h^-1 M_sol, z=0.25...0.5 in a cosmological simulation. We show that weak lensing measurements of clusters significantly underestimate mass uncertainties if intrinsic profile variations are ignored, and that our model can be used to provide correct mass likelihoods. Effects on the achievable accuracy of weak lensing cluster mass measurements are particularly strong for the most massive clusters and deep observations (with ~20 per cent uncertainty from cosmic variance alone at M_200m=10^15 h^-1 M_sol and z=0.25), but significant also...

    16. Facial Feature Extraction Method Based on Coefficients of Variances

      Institute of Scientific and Technical Information of China (English)

      Feng-Xi Song; David Zhang; Cai-Kou Chen; Jing-Yu Yang

      2007-01-01

      Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) are two popular feature extraction techniques in the statistical pattern recognition field. Due to the small sample size problem, LDA cannot be directly applied to appearance-based face recognition tasks. As a consequence, many LDA-based facial feature extraction techniques have been proposed to deal with this problem. The Nullspace Method is one of the most effective among them. The Nullspace Method tries to find a set of discriminant vectors which maximize the between-class scatter in the null space of the within-class scatter matrix. The calculation of its discriminant vectors involves performing singular value decomposition on a high-dimensional matrix, which is generally memory- and time-consuming. Borrowing the key idea of the Nullspace Method and the concept of the coefficient of variance in statistical analysis, we present a novel facial feature extraction method, Discriminant based on Coefficient of Variance (DCV), in this paper. Experimental results on the FERET and AR face image databases demonstrate that DCV is a promising technique in comparison with Eigenfaces, the Nullspace Method, and other state-of-the-art facial feature extraction methods.

    17. VARIANCE OF NONLINEAR PHASE NOISE IN FIBER-OPTIC SYSTEM

      Directory of Open Access Journals (Sweden)

      RANJU KANWAR

      2013-04-01

      In a communication system, the noise process must be known in order to compute the system performance. Nonlinear effects act as a strong perturbation in long-haul systems. This perturbation affects the signal when it interacts with amplitude noise, and results in random motion of the phase of the signal. Based on perturbation theory, the variance of nonlinear phase noise contaminated by both self- and cross-phase modulation is derived analytically for a phase-shift-keying system. Through this work, it is found that for longer transmission distances, 40-Gb/s systems are more sensitive to nonlinear phase noise than 50-Gb/s systems. Also, when transmitting data through the fiber-optic link, bit errors are produced due to various effects such as noise from optical amplifiers and nonlinearity occurring in the fiber. On the basis of the simulation results, we have compared the bit error rate based on 8-PSK with theoretical results, and the results show that in a real-time approach, the bit error rate is high for the same signal-to-noise ratio. MATLAB software is used to validate the analytical expressions for the variance of nonlinear phase noise.

    18. Hidden temporal order unveiled in stock market volatility variance

      Directory of Open Access Journals (Sweden)

      Y. Shapira

      2011-06-01

      When analyzed by standard statistical methods, the time series of the daily returns of financial indices appear to behave as Markov random series with no apparent temporal order or memory. This empirical result seems counterintuitive, since investors are influenced by both short- and long-term past market behaviors. Consequently much effort has been devoted to unveiling hidden temporal order in market dynamics. Here we show that temporal order is hidden in the series of the variance of the stocks' volatility. First we show that the correlation between the variances of the daily returns and the means of segments of these time series is very large, and thus cannot be the output of a random series unless it has some temporal order in it. Next we show that the temporal order does not show up in the series of daily returns itself, but rather in the variation of the corresponding volatility series. More specifically, we found that the behavior of the shuffled time series is equivalent to that of a random time series, while the original time series has large deviations from the expected random behavior, which is the result of temporal structure. We found the same generic behavior in 10 different stock markets from 7 different countries. We also present an analysis of specially constructed sequences in order to better understand the origin of the observed temporal order in the market sequences. Each sequence was constructed from segments with an equal number of elements taken from algebraic distributions of three different slopes.

    19. Sensitivity analysis of simulated SOA loadings using a variance-based statistical approach: SENSITIVITY ANALYSIS OF SOA

      Energy Technology Data Exchange (ETDEWEB)

      Shrivastava, Manish [Pacific Northwest National Laboratory, Richland Washington USA; Zhao, Chun [Pacific Northwest National Laboratory, Richland Washington USA; Easter, Richard C. [Pacific Northwest National Laboratory, Richland Washington USA; Qian, Yun [Pacific Northwest National Laboratory, Richland Washington USA; Zelenyuk, Alla [Pacific Northwest National Laboratory, Richland Washington USA; Fast, Jerome D. [Pacific Northwest National Laboratory, Richland Washington USA; Liu, Ying [Pacific Northwest National Laboratory, Richland Washington USA; Zhang, Qi [Department of Environmental Toxicology, University of California Davis, California USA; Guenther, Alex [Department of Earth System Science, University of California, Irvine California USA

      2016-04-08

      We investigate the sensitivity of secondary organic aerosol (SOA) loadings simulated by a regional chemical transport model to 7 selected tunable model parameters: 4 involving emissions of anthropogenic and biogenic volatile organic compounds, anthropogenic semi-volatile and intermediate volatility organics (SIVOCs), and NOx, 2 involving dry deposition of SOA precursor gases, and one involving particle-phase transformation of SOA to low volatility. We adopt a quasi-Monte Carlo sampling approach to effectively sample the high-dimensional parameter space, and perform a 250 member ensemble of simulations using a regional model, accounting for some of the latest advances in SOA treatments based on our recent work. We then conduct a variance-based sensitivity analysis using the generalized linear model method to study the responses of simulated SOA loadings to the tunable parameters. Analysis of SOA variance from all 250 simulations shows that the volatility transformation parameter, which controls whether particle-phase transformation of SOA from semi-volatile SOA to non-volatile is on or off, is the dominant contributor to variance of simulated surface-level daytime SOA (65% domain average contribution). We also split the simulations into 2 subsets of 125 each, depending on whether the volatility transformation is turned on/off. For each subset, the SOA variances are dominated by the parameters involving biogenic VOC and anthropogenic SIVOC emissions. Furthermore, biogenic VOC emissions have a larger contribution to SOA variance when the SOA transformation to non-volatile is on, while anthropogenic SIVOC emissions have a larger contribution when the transformation is off. NOx contributes less than 4.3% to SOA variance, and this low contribution is mainly attributed to dominance of intermediate to high NOx conditions throughout the simulated domain. The two parameters related to dry deposition of SOA precursor gases also have very low contributions to SOA variance

    20. Kriging with Unknown Variance Components for Regional Ionospheric Reconstruction

      Directory of Open Access Journals (Sweden)

      Ling Huang

      2017-02-01

      Ionospheric delay effect is a critical issue that limits the accuracy of precise Global Navigation Satellite System (GNSS) positioning and navigation for single-frequency users, especially in mid- and low-latitude regions where variations in the ionosphere are larger. Kriging spatial interpolation techniques have been recently introduced to model the spatial correlation and variability of the ionosphere; they intrinsically assume that the ionosphere field is stochastically stationary but do not take the random observational errors into account. In this paper, by treating the spatial statistical information on the ionosphere as prior knowledge and based on Total Electron Content (TEC) semivariogram analysis, we use Kriging techniques to spatially interpolate TEC values. By assuming that the stochastic models of both the ionospheric signals and measurement errors are only known up to some unknown factors, we propose a new Kriging spatial interpolation method with unknown variance components for both the signals of the ionosphere and the TEC measurements. Variance component estimation has been integrated with Kriging to reconstruct regional ionospheric delays. The method has been applied to data from the Crustal Movement Observation Network of China (CMONOC) and compared with the ordinary Kriging and polynomial interpolations with spherical cap harmonic functions, polynomial functions and low-degree spherical harmonic functions. The statistics of the results indicate that the daily ionospheric variations during the experimental period characterized by the proposed approach are in good agreement with the other methods, ranging from 10 to 80 TEC Units (TECU, 1 TECU = 1 × 10^16 electrons/m²) with an overall mean of 28.2 TECU. The proposed method can produce more appropriate estimations whose general TEC level is as smooth as the ordinary Kriging but with a smaller standard deviation, around 3 TECU, than others. The residual results show that the interpolation precision of the

    1. Kriging with Unknown Variance Components for Regional Ionospheric Reconstruction.

      Science.gov (United States)

      Huang, Ling; Zhang, Hongping; Xu, Peiliang; Geng, Jianghui; Wang, Cheng; Liu, Jingnan

      2017-02-27

      Ionospheric delay effect is a critical issue that limits the accuracy of precise Global Navigation Satellite System (GNSS) positioning and navigation for single-frequency users, especially in mid- and low-latitude regions where variations in the ionosphere are larger. Kriging spatial interpolation techniques have been recently introduced to model the spatial correlation and variability of the ionosphere; they intrinsically assume that the ionosphere field is stochastically stationary but do not take the random observational errors into account. In this paper, by treating the spatial statistical information on the ionosphere as prior knowledge and based on Total Electron Content (TEC) semivariogram analysis, we use Kriging techniques to spatially interpolate TEC values. By assuming that the stochastic models of both the ionospheric signals and measurement errors are only known up to some unknown factors, we propose a new Kriging spatial interpolation method with unknown variance components for both the signals of the ionosphere and the TEC measurements. Variance component estimation has been integrated with Kriging to reconstruct regional ionospheric delays. The method has been applied to data from the Crustal Movement Observation Network of China (CMONOC) and compared with the ordinary Kriging and polynomial interpolations with spherical cap harmonic functions, polynomial functions and low-degree spherical harmonic functions. The statistics of the results indicate that the daily ionospheric variations during the experimental period characterized by the proposed approach are in good agreement with the other methods, ranging from 10 to 80 TEC Units (TECU, 1 TECU = 1 × 10^16 electrons/m²) with an overall mean of 28.2 TECU. The proposed method can produce more appropriate estimations whose general TEC level is as smooth as the ordinary Kriging but with a smaller standard deviation, around 3 TECU, than others. The residual results show that the interpolation precision of the new proposed

    2. Estimation of measurement variance in the context of environment statistics

      Science.gov (United States)

      Maiti, Pulakesh

      2015-02-01

      The object of environment statistics is to provide information on the environment, on its most important changes over time and across locations, and to identify the main factors that influence them. Ultimately environment statistics would be required to produce higher quality statistical information. For this, timely, reliable and comparable data are needed. The lack of proper and uniform definitions and unambiguous classifications poses serious problems for procuring high-quality data. These cause measurement errors. We consider the problem of estimating measurement variance so that some measures may be adopted to improve upon the quality of data on environmental goods and services and on value statements in economic terms. The measurement technique considered here employs personal interviewers, and the sampling design is two-stage sampling.

    3. Objective Bayesian Comparison of Constrained Analysis of Variance Models.

      Science.gov (United States)

      Consonni, Guido; Paroli, Roberta

      2016-10-04

      In the social sciences we are often interested in comparing models specified by parametric equality or inequality constraints. For instance, when examining three group means μ1, μ2, μ3 through an analysis of variance (ANOVA), a model may specify that the means are all equal, while another one may impose an order constraint (e.g., μ1 ≤ μ2 ≤ μ3), and finally a third model may instead suggest that all means are unrestricted. This is a challenging problem, because it involves a combination of nonnested models, as well as nested models having the same dimension. We adopt an objective Bayesian approach, requiring no prior specification from the user, and derive the posterior probability of each model under consideration. Our method is based on the intrinsic prior methodology, suitably modified to accommodate equality and inequality constraints. Focussing on normal ANOVA models, a comparative assessment is carried out through simulation studies. We also present an application to real data collected in a psychological experiment.

    4. INTERPRETING MAGNETIC VARIANCE ANISOTROPY MEASUREMENTS IN THE SOLAR WIND

      Energy Technology Data Exchange (ETDEWEB)

      TenBarge, J. M.; Klein, K. G.; Howes, G. G. [Department of Physics and Astronomy, University of Iowa, Iowa City, IA (United States); Podesta, J. J., E-mail: jason-tenbarge@uiowa.edu [Space Science Institute, Boulder, CO (United States)

      2012-07-10

      The magnetic variance anisotropy (A_m) of the solar wind has been used widely as a method to identify the nature of solar wind turbulent fluctuations; however, a thorough discussion of the meaning and interpretation of the A_m has not appeared in the literature. This paper explores the implications and limitations of using the A_m as a method for constraining the solar wind fluctuation mode composition and presents a more informative method for interpreting spacecraft data. The paper also compares predictions of the A_m from linear theory to nonlinear turbulence simulations and solar wind measurements. In both cases, linear theory compares well and suggests that the solar wind for the interval studied is dominantly Alfvenic in the inertial and dissipation ranges to scales of kρ_i ≈ 5.
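
      A minimal sketch of how such a variance anisotropy can be measured from a field time series (synthetic anisotropic fluctuations; the perpendicular-to-parallel variance ratio used below is one common convention and may differ in detail from the paper's A_m):

```python
import numpy as np

rng = np.random.default_rng(7)

# Project field fluctuations onto directions parallel and perpendicular to
# the mean field and form the ratio of perpendicular to parallel variance.
T = 10000
B0 = np.array([5.0, 0.0, 0.0])                             # mean field (nT)
dB = rng.normal(0, 1, (T, 3)) * np.array([0.3, 1.0, 1.0])  # anisotropic noise
B = B0 + dB

b_hat = B.mean(axis=0)
b_hat /= np.linalg.norm(b_hat)                   # unit mean-field direction
dB = B - B.mean(axis=0)
par = dB @ b_hat                                 # parallel component
perp2 = (dB**2).sum(axis=1) - par**2             # |perpendicular|^2
A_m = perp2.mean() / (par**2).mean()             # variance anisotropy
print("variance anisotropy ~", A_m)              # ~ (1+1)/0.09 here
```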

    5. Estimating discharge measurement uncertainty using the interpolated variance estimator

      Science.gov (United States)

      Cohn, T.; Kiang, J.; Mason, R.

      2012-01-01

      Methods for quantifying the uncertainty in discharge measurements typically identify various sources of uncertainty and then estimate the uncertainty from each of these sources by applying the results of empirical or laboratory studies. If actual measurement conditions are not consistent with those encountered in the empirical or laboratory studies, these methods may give poor estimates of discharge uncertainty. This paper presents an alternative method for estimating discharge measurement uncertainty that uses statistical techniques and at-site observations. This Interpolated Variance Estimator (IVE) estimates uncertainty based on the data collected during the streamflow measurement and therefore reflects the conditions encountered at the site. The IVE has the additional advantage of capturing all sources of random uncertainty in the velocity and depth measurements. It can be applied to velocity-area discharge measurements that use a velocity meter to measure point velocities at multiple vertical sections in a channel cross section.

    6. Interdependence of NAFTA capital markets: A minimum variance portfolio approach

      Directory of Open Access Journals (Sweden)

      López-Herrera Francisco

      2014-01-01

      We estimate the long-run relationships among NAFTA capital market returns and then calculate the weights of a “time-varying minimum variance portfolio” that includes the Canadian, Mexican, and USA capital markets between March 2007 and March 2009, a period of intense turbulence in international markets. Our results suggest that the behavior of NAFTA market investors is not consistent with that of a theoretical “risk-averse” agent during periods of high uncertainty and may be either considered as irrational or attributed to a possible “home country bias”. This finding represents valuable information for portfolio managers and contributes to a better understanding of the nature of the markets in which they invest. It also has practical implications in the design of international portfolio investment policies.

    7. Mean and variance of coincidence counting with deadtime

      CERN Document Server

      Yu, D F

      2002-01-01

      We analyze the first and second moments of the coincidence-counting process for a system affected by paralyzable (extendable) deadtime with (possibly unequal) deadtimes in each singles channel. We consider both 'accidental' and 'genuine' coincidences, and derive exact analytical expressions for the first and second moments of the number of recorded coincidence events under various scenarios. The results include an exact form for the coincidence rate under the combined effects of decay, background, and deadtime. The analysis confirms that coincidence counts are not exactly Poisson, but suggests that the Poisson statistical model that is used for positron emission tomography image reconstruction is a reasonable approximation since the mean and variance are nearly equal.
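
      A quick simulation of the paralyzable model makes the near-Poisson conclusion concrete; the rate, deadtime and counting interval below are arbitrary, and a single channel with one deadtime is assumed for brevity.

```python
import numpy as np

rng = np.random.default_rng(8)

def paralyzable_counts(rate, tau, T, trials):
    """Simulate recorded counts for a paralyzable detector: an event is
    recorded only if it arrives more than tau after the previous arrival."""
    out = np.empty(trials, dtype=int)
    for k in range(trials):
        n = rng.poisson(rate * T)
        arrivals = np.sort(rng.uniform(0, T, n))
        gaps = np.diff(arrivals, prepend=-np.inf)   # first event always kept
        out[k] = np.count_nonzero(gaps > tau)
    return out

counts = paralyzable_counts(rate=1e4, tau=1e-6, T=0.1, trials=2000)
m, v = counts.mean(), counts.var(ddof=1)
print(f"mean={m:.1f}  variance={v:.1f}  variance/mean={v / m:.3f}")
# Deadtime makes the recorded process slightly sub-Poisson (ratio < 1),
# but mean and variance stay close, in line with the conclusion above.
```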

    8. From Means and Variances to Persons and Patterns

      Directory of Open Access Journals (Sweden)

      James W Grice

      2015-07-01

      A novel approach for conceptualizing and analyzing data from psychological studies is presented and discussed. This approach is centered on model building in an effort to explicate the structures and processes believed to generate a set of observations. These models therefore go beyond the variable-based path models in use today, which are limiting with regard to the types of inferences psychologists can draw from their research. In terms of analysis, the newer approach replaces traditional aggregate statistics such as means, variances, and covariances with methods of pattern detection and analysis. While these methods are person-centered and do not require parametric assumptions, they are both demanding and rigorous. They also provide psychologists with the information needed to draw the primary inference they often wish to make from their research; namely, the inference to the best explanation.

    9. MARKOV-MODULATED MEAN-VARIANCE PROBLEM FOR AN INSURER

      Institute of Scientific and Technical Information of China (English)

      Wang Wei; Bi Junna

      2011-01-01

      In this paper, we consider an insurance company which has the option of investing in a risky asset and a risk-free asset, whose price parameters are driven by a finite state Markov chain. The risk process of the insurance company is modeled as a diffusion process whose diffusion and drift parameters switch over time according to the same Markov chain. We study the Markov-modulated mean-variance problem for the insurer and derive explicitly the closed form of the efficient strategy and efficient frontier. In the case of no regime switching, we can see that the efficient frontier in our paper coincides with that of [10] when there is no pure jump.

    10. Diffusion-Based Trajectory Observers with Variance Constraints

      DEFF Research Database (Denmark)

      Alcocer, Alex; Jouffroy, Jerome; Oliveira, Paulo

      Diffusion-based trajectory observers have been recently proposed as a simple and efficient framework to solve diverse smoothing problems in underwater navigation, for instance, to obtain estimates of the trajectories of an underwater vehicle given position fixes from an acoustic positioning system and velocity measurements from a DVL. The observers are conceptually simple and can easily deal with the problems brought about by the occurrence of asynchronous measurements and dropouts. In its original formulation, the trajectory observers depend on a user-defined constant gain that controls the level of smoothing and is determined by resorting to trial and error. This paper presents a methodology to choose the observer gain by taking into account a priori information on the variance of the position measurement errors. Experimental results with data from an acoustic positioning system are presented...

    11. Batch variation between branchial cell cultures: An analysis of variance

      DEFF Research Database (Denmark)

      Hansen, Heinz Johs. Max; Grosell, M.; Kristensen, L.

      2003-01-01

      We present in detail how a statistical analysis of variance (ANOVA) is used to sort out the effect of an unexpected batch-to-batch variation between cell cultures. Two separate cultures of rainbow trout branchial cells were grown on permeable filter supports ("inserts"). They were supposed...... and introducing the observed difference between batches as one of the factors in an expanded three-dimensional ANOVA, we were able to overcome an otherwise crucial lack of sufficiently reproducible duplicate values. We could thereby show that the effect of changing the apical medium was much more marked when the radioactive lipid precursors were added on the apical, rather than on the basolateral, side. The insert cell cultures were obviously polarized. We argue that it is not reasonable to reject troublesome experimental results, when we do not know a priori that something went wrong. The ANOVA is a very useful...

    12. Correct use of repeated measures analysis of variance.

      Science.gov (United States)

      Park, Eunsik; Cho, Meehye; Ki, Chang-Seok

      2009-02-01

      In biomedical research, researchers frequently use statistical procedures such as the t-test, standard analysis of variance (ANOVA), or the repeated measures ANOVA to compare means between the groups of interest. These procedures are often misused because the conditions of the experiments or the statistical assumptions necessary to apply them are not fully taken into consideration. In this paper, we demonstrate the correct use of repeated measures ANOVA to prevent or minimize ethical or scientific problems due to its misuse. We also describe the appropriate use of multiple comparison tests for follow-up analysis in repeated measures ANOVA. Finally, we demonstrate the use of repeated measures ANOVA by using real data and the statistical software package SPSS (SPSS Inc., USA).
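
      A minimal runnable example of a repeated measures ANOVA with statsmodels (synthetic balanced data; 12 subjects, 3 within-subject conditions, invented effect sizes):

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(9)

# 12 subjects each measured under 3 conditions; the subject factor absorbs
# between-subject variability, which is the point of the repeated design.
subjects = np.repeat(np.arange(12), 3)
condition = np.tile(["A", "B", "C"], 12)
effect = {"A": 0.0, "B": 0.5, "C": 1.0}
y = rng.normal(0, 1, 36) + np.repeat(rng.normal(0, 1, 12), 3) \
    + np.array([effect[c] for c in condition])
df = pd.DataFrame({"subject": subjects, "cond": condition, "y": y})

print(AnovaRM(df, depvar="y", subject="subject", within=["cond"]).fit())
```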

    13. Analysis of variance of an underdetermined geodetic displacement problem

      Energy Technology Data Exchange (ETDEWEB)

      Darby, D.

      1982-06-01

      It has been suggested recently that point displacements in a free geodetic network traversing a strike-slip fault may be estimated from repeated surveys by minimizing only those displacement components normal to the strike. It is desirable to justify this procedure. We construct, from estimable quantities, a deformation parameter which is an F-statistic of the type occurring in the analysis of variance of linear models not of full rank. A test of its significance provides the criterion to justify the displacement solution. It is also interesting to study its behaviour as one varies the supposed strike of the fault. Justification of a displacement solution using data from a strike-slip fault is found, but not for data from a rift valley. The technique can be generalized to more complex patterns of deformation such as those expected near the end-zone of a fault in a dislocation model.

    14. Variance of indoor radon concentration: Major influencing factors.

      Science.gov (United States)

      Yarmoshenko, I; Vasilyev, A; Malinovsky, G; Bossew, P; Žunić, Z S; Onischenko, A; Zhukovsky, M

      2016-01-15

      Variance of radon concentration in the dwelling atmosphere is analysed with regard to geogenic and anthropogenic influencing factors. The analysis includes a review of 81 national and regional indoor radon surveys with varying sampling pattern, sample size and duration of measurements, and a detailed consideration of two regional surveys (Sverdlovsk oblast, Russia and Niška Banja, Serbia). The analysis of the geometric standard deviation revealed that the main factors influencing the dispersion of indoor radon concentration over a territory are as follows: area of the territory, sample size, characteristics of the measurement technique, the radon geogenic potential, building construction characteristics and living habits. As shown for Sverdlovsk oblast and the town of Niška Banja, the dispersion as quantified by the GSD is reduced by restricting to certain levels of control factors. Application of the developed approach to characterization of the world population's radon exposure is discussed.
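
      The dispersion measure used here, the geometric standard deviation (GSD), is computed by exponentiating the standard deviation of the log-transformed concentrations; a tiny Python sketch with made-up values:

```python
import numpy as np

# Geometric mean and geometric standard deviation of a small, invented
# sample of indoor radon concentrations (Bq/m^3).
conc = np.array([20., 35., 50., 80., 120., 300., 45., 60.])
gm = np.exp(np.log(conc).mean())
gsd = np.exp(np.log(conc).std(ddof=1))
print(f"geometric mean = {gm:.1f} Bq/m^3, GSD = {gsd:.2f}")
```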

    15. Hodological resonance, hodological variance, psychosis and schizophrenia: A hypothetical model

      Directory of Open Access Journals (Sweden)

      Paul Brian eLawrie Birkett

      2011-07-01

Schizophrenia is a disorder with a large number of clinical, neurobiological, and cognitive manifestations, none of which is invariably present. However, it appears to be a single nosological entity. This article considers the likely characteristics of a pathology capable of such diverse consequences. It is argued that both deficit and psychotic symptoms can be manifestations of a single pathology. A general model of psychosis is proposed in which the informational sensitivity or responsivity of a network ("hodological resonance") becomes so high that it activates spontaneously, to produce a hallucination if it is in sensory cortex, or another psychotic symptom if it is elsewhere. It is argued that this can come about because of high levels of modulation, such as those assumed present in affective psychosis, or because of high levels of baseline resonance, such as those expected in deafferentation syndromes associated with hallucinations, for example Charles Bonnet syndrome. It is further proposed that schizophrenia results from a process (probably neurodevelopmental) causing widespread increases of variance in baseline resonance; consequently, some networks possess high baseline resonance and become susceptible to spontaneous activation. Deficit symptoms might result from the presence of networks with increased activation thresholds. This hodological variance model is explored in terms of schizo-affective disorder, transient psychotic symptoms, diathesis-stress models, mechanisms of antipsychotic pharmacotherapy, and persistence of genes predisposing to schizophrenia. Predictions and implications of the model are discussed. In particular, it suggests a need for more research into psychotic states and for more single case-based studies in schizophrenia.

    16. Understanding the influence of watershed storage caused by human interferences on ET variance

      Science.gov (United States)

      Zeng, R.; Cai, X.

      2014-12-01

Understanding the temporal variance of evapotranspiration (ET) at the watershed scale remains a challenging task, because it is affected by complex climate conditions, soil properties, vegetation, groundwater and human activities. In a changing environment with extensive and intensive human interferences, understanding ET variance and its controlling factors is important for sustainable water resources management. This study presents an analysis of the effect of storage change caused by human activities on ET variance. Irrigation usually dampens ET variance through the use of surface water and groundwater; however, excessive irrigation may deplete watershed storage, which changes the coincidence of water availability and energy supply for ET. This study develops a framework by incorporating the water balance and the Budyko hypothesis. It decomposes the ET variance into the variances of precipitation, potential ET and catchment storage change, and their covariances. The contributions to ET variance from the various components are scaled by weighting functions expressed in terms of long-term climate conditions and catchment properties. ET variance is assessed using records from 32 major river basins across the world. It is found that ET variance is dominated by precipitation variance under hot-dry conditions and by evaporative-demand variance under cool-wet conditions, while the coincidence of water and energy supply controls ET variance under moderate climate conditions. Watershed storage change plays an increasingly important role in determining ET variance at relatively shorter time scales. By incorporating storage change caused by human interferences, this framework corrects the over-estimation of ET variance in hot-dry climates and the under-estimation of ET variance in cool-wet climates. Furthermore, classification of the dominant factors of ET variance shows patterns similar to geographic zonation.
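
      As a hedged numerical illustration of the decomposition idea (not the authors' Budyko-based weighting scheme), the sketch below verifies the water-balance identity Var(ET) = Var(P) + Var(Q) + Var(ΔS) - 2Cov(P,Q) - 2Cov(P,ΔS) + 2Cov(Q,ΔS), where ET = P - Q - ΔS, on invented annual series:

```python
# Water-balance check of the ET variance decomposition on invented series:
# ET = P - Q - dS, so Var(ET) expands into variances and covariances.
import numpy as np

rng = np.random.default_rng(1)
n = 40                                      # years of record
P = rng.normal(900.0, 120.0, n)             # precipitation [mm/yr]
dS = rng.normal(0.0, 40.0, n)               # storage change, e.g. from pumping
Q = 0.3 * P + rng.normal(0.0, 30.0, n)      # runoff loosely tied to P

ET = P - Q - dS

c = np.cov(np.vstack([P, Q, dS]))           # 3x3 covariance matrix
var_et = (c[0, 0] + c[1, 1] + c[2, 2]
          - 2 * c[0, 1] - 2 * c[0, 2] + 2 * c[1, 2])

print(f"direct Var(ET)     = {np.var(ET, ddof=1):9.1f}")
print(f"decomposed Var(ET) = {var_et:9.1f}")
```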

    17. The pricing of long and short run variance and correlation risk in stock returns

      NARCIS (Netherlands)

      Cosemans, M.

      2011-01-01

This paper studies the pricing of long and short run variance and correlation risk. The predictive power of the market variance risk premium for returns is driven by the correlation risk premium and the systematic part of individual variance premia. Furthermore, I find that aggregate volatility risk is priced in the cross-section because shocks to average stock volatility and correlation are priced. Both long and short run volatility and correlation factors have explanatory power for returns.

    18. Estimation of genetic variation in residual variance in female and male broiler chickens

      NARCIS (Netherlands)

      Mulder, H.A.; Hill, W.G.; Vereijken, A.; Veerkamp, R.F.

      2009-01-01

In breeding programs, robustness of animals and uniformity of the end product can be improved by exploiting genetic variation in residual variance. Residual variance can be defined as environmental variance after accounting for all identifiable effects. The aims of this study were to estimate genetic variation in residual variance in female and male broiler chickens.

    19. Teeth size reduction in the prehistoric populations in Serbia

      Directory of Open Access Journals (Sweden)

      Pajević Tina

      2012-01-01

Introduction. Anthropological studies show craniofacial changes with a reduction in teeth size during the evolution of the human population. Objective. The objective was to measure and compare the sizes of teeth in the population of the Mesolithic-Neolithic sites in the Iron Gate Gorge and the population from the Early Bronze Age site of Mokrin. Methods. The study included teeth without advanced wear near the pulp. The material was divided according to the site of the skeletal population into two groups. Group 1 comprised 107 teeth from the Mesolithic-Neolithic sites Lepenski Vir and Vlasac. Group 2 included 158 teeth from the Mokrin graveyard dated to the Early Bronze Age. The mesio-distal diameter was measured in all teeth, while the vestibulo-oral diameter was measured in the molars only. Using two-factor analysis of variance, the influence of sex, site and their interaction on the size of the teeth was investigated. Results. The vestibulo-oral diameter of the upper third molar was significantly higher in males compared to females. The comparison between the groups showed that the vestibulo-oral diameter of the lower first molar was significantly higher in group 1. Conclusion. The observed difference in teeth size indicates the existence of a reduction during prehistoric times. However, the time period between the populations studied is probably too short for the reduction to be manifested in a large number of teeth.

    20. Application of Fast Dynamic Allan Variance for the Characterization of FOGs-Based Measurement While Drilling

      Science.gov (United States)

      Wang, Lu; Zhang, Chunxi; Gao, Shuang; Wang, Tao; Lin, Tie; Li, Xianmu

      2016-01-01

The stability of a fiber optic gyroscope (FOG) in measurement while drilling (MWD) could vary with time because of changing temperature, high vibration, and sudden power failure. The dynamic Allan variance (DAVAR) is a sliding version of the Allan variance. It is a practical tool that can represent the non-stationary behavior of the gyroscope signal. Since the normal DAVAR takes too long to deal with long time series, a fast DAVAR algorithm has been developed to accelerate the computation. However, both the normal DAVAR algorithm and the fast algorithm become invalid for discontinuous time series. What is worse, a FOG-based MWD system often keeps working underground for several days, so the gyro data collected are not only very long, but also sometimes discontinuous in the timeline. In this article, building on the fast algorithm for DAVAR, we make a further advance (the improved fast DAVAR) to extend the fast DAVAR to discontinuous time series. The improved fast DAVAR and the normal DAVAR are used to characterize two sets of simulation data, respectively. The simulation results show that when the length of the time series is short, the improved fast DAVAR saves 78.93% of the calculation time. When the length of the time series is long (6×10^5 samples), the improved fast DAVAR reduces the calculation time by 97.09%. Another set of simulation data with missing data is characterized by the improved fast DAVAR. Its simulation results prove that the improved fast DAVAR can successfully deal with discontinuous data. In the end, a vibration experiment with a FOG-based MWD system has been implemented to validate the good performance of the improved fast DAVAR. The experimental results confirm that the improved fast DAVAR not only shortens computation time, but can also analyze discontinuous time series. PMID:27941600
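
      For context, the sketch below computes the ordinary (static, non-overlapped) Allan variance of a simulated white-noise gyro signal; the article's DAVAR adds a sliding window and its fast variants add recursion, neither of which is reproduced here, and the sample rate and noise level are invented:

```python
# Static, non-overlapped Allan variance of a simulated gyro rate signal.
import numpy as np

def allan_variance(rate, fs, taus):
    """Allan variance of a rate signal sampled at fs [Hz], one value per tau."""
    avars = []
    for tau in taus:
        m = int(round(tau * fs))            # samples per averaging cluster
        n_clusters = len(rate) // m
        if n_clusters < 2:
            avars.append(np.nan)
            continue
        means = rate[: n_clusters * m].reshape(n_clusters, m).mean(axis=1)
        # AVAR(tau) = 0.5 * < (ybar_{k+1} - ybar_k)^2 >
        avars.append(0.5 * np.mean(np.diff(means) ** 2))
    return np.array(avars)

rng = np.random.default_rng(2)
fs = 100.0                                   # sample rate [Hz]
t = np.arange(int(3600 * fs)) / fs           # one hour of data
rate = 0.01 * rng.standard_normal(t.size)    # white angle-random-walk noise

taus = np.logspace(-1, 2, 20)                # 0.1 s .. 100 s
print(np.sqrt(allan_variance(rate, fs, taus)))  # Allan deviation
```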

    1. Estimation models of variance components for farrowing interval in swine

      Directory of Open Access Journals (Sweden)

      Aderbal Cavalcante Neto

      2009-02-01

The main objective of this study was to evaluate the importance of including maternal genetic, common litter environmental and permanent environmental effects in estimation models of variance components for the farrowing interval trait in swine. Data consisting of 1,013 farrowing intervals of Dalland (C-40) sows recorded in two herds were analyzed. Variance components were obtained by the derivative-free restricted maximum likelihood method. Eight models were tested, which contained the fixed effects (contemporary group and covariables) and the direct additive genetic and residual effects, and varied regarding the inclusion of the maternal genetic, common litter environmental, and/or permanent environmental random effects. The likelihood-ratio test indicated that the inclusion of these effects in the model was unnecessary, but the inclusion of the permanent environmental effect caused changes in the estimates of heritability, which varied from 0.00 to 0.03. In conclusion, the heritability values obtained indicated that this trait appears to present no genetic gain in response to selection. The common litter environmental and the maternal genetic effects did not present any influence on this trait. The permanent environmental effect, however, should be considered in genetic models for this trait in swine, because its presence caused changes in the additive genetic variance estimates.

    2. Variance Swaps in BM&F: Pricing and Viability of Hedge

      Directory of Open Access Journals (Sweden)

      Richard John Brostowicz Junior

      2010-07-01

A variance swap can theoretically be priced with an infinite set of vanilla call and put options, considering that the realized variance follows a purely diffusive process with continuous monitoring. In this article we analyze the possible differences in pricing considering discrete monitoring of realized variance. We analyze the pricing of variance swaps with payoff in dollars, since there is an OTC market that works this way and that potentially serves as a hedge for the variance swaps traded at BM&F. Additionally, we test the feasibility of hedging variance swaps when there is liquidity in only a few exercise prices, as is the case for the FX options traded at BM&F. Thus, portfolios containing variance swaps and their replicating portfolios were assembled using the available exercise prices, as proposed in Demeterfi et al. (1999). With these portfolios, the effectiveness of the hedge was not robust in most of the tests conducted in this work.
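
      A hedged sketch of the static replication underlying such pricing (in the spirit of Demeterfi et al., 1999): the fair variance strike is approximated by a strip of out-of-the-money options, each weighted by ΔK/K². Prices here come from Black-Scholes with a flat 20% volatility, so the recovered strike should land near σ² = 0.04; the strike grid and market parameters are invented:

```python
# Static replication of a variance swap strike from a strip of OTM options,
# in the spirit of Demeterfi et al. (1999). Option prices are Black-Scholes
# with a flat 20% volatility, so K_var should come out near 0.20^2 = 0.04.
import numpy as np
from scipy.stats import norm

S0, r, T, sigma = 100.0, 0.05, 1.0, 0.20

def bs_price(K, call):
    d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    if call:
        return S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)
    return K * np.exp(-r * T) * norm.cdf(-d2) - S0 * norm.cdf(-d1)

F = S0 * np.exp(r * T)              # forward: boundary between puts and calls
strikes = np.arange(40.0, 200.5, 1.0)
dK = 1.0

# OTM puts below the forward, OTM calls above it, weighted by dK / K^2.
strip = sum(dK / K**2 * bs_price(K, call=(K >= F)) for K in strikes)
K_var = 2.0 * np.exp(r * T) / T * strip
print(f"replicated variance strike: {K_var:.4f} (sigma^2 = {sigma**2:.4f})")
```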

    3. Turbulence Variance Characteristics in the Unstable Atmospheric Boundary Layer above Flat Pine Forest

      Science.gov (United States)

      Asanuma, Jun

Variances of the velocity components and scalars are important as indicators of turbulence intensity. They can also be utilized to estimate surface fluxes in several types of "variance methods", and the estimated fluxes can be regional values if the variances from which they are calculated are regionally representative measurements. With these motivations, variances measured by an aircraft in the unstable ABL over a flat pine forest during HAPEX-Mobilhy were analyzed within the context of similarity scaling arguments. The variances of temperature and vertical velocity within the atmospheric surface layer were found to follow closely the Monin-Obukhov similarity (MOS) theory, and to yield reasonable estimates of the surface sensible heat fluxes when used in variance methods. This validates the variance methods with aircraft measurements. On the other hand, the specific humidity variances were influenced by the surface heterogeneity and clearly fail to obey MOS. A simple analysis based on the similarity law for free convection produced a comprehensible and quantitative picture of the effect of surface flux heterogeneity on the statistical moments, and revealed that variances of the active and passive scalars become dissimilar because of their different roles in turbulence. The analysis also indicated that the mean quantities are affected by the heterogeneity as well, but to a lesser extent than the variances. The temperature variances in the mixed layer (ML) were examined by using a generalized top-down bottom-up diffusion model with some combinations of velocity scales and inversion flux models. The results showed that the surface shear stress exerts considerable influence on the lower ML. Also, ML variance methods using the temperature and vertical velocity variances were tested, and their feasibility was investigated. Finally, the variances in the ML were analyzed in terms of the local similarity concept; the results confirmed the original...

    4. Identifiability of Gaussian Structural Equation Models with Same Error Variances

      CERN Document Server

      Peters, Jonas

      2012-01-01

We consider structural equation models (SEMs) in which variables can be written as a function of their parents and noise terms, the latter assumed to be jointly independent. Corresponding to each SEM, there is a directed acyclic graph (DAG) G_0 describing the relationships between the variables. In Gaussian SEMs with linear functions, the graph can be identified from the joint distribution only up to Markov equivalence classes (assuming faithfulness). It has been shown, however, that this constitutes an exceptional case. In the case of linear functions and non-Gaussian noise, the DAG becomes identifiable. Apart from a few exceptions, the same is true for non-linear functions and arbitrarily distributed additive noise. In this work, we prove identifiability for a third modification: if we require all noise variables to have the same variances, again, the DAG can be recovered from the joint Gaussian distribution. Our result can be applied to the problem of causal inference. If the data follow a Gaussian SEM w...

    5. Cosmic variance and the measurement of the local Hubble parameter.

      Science.gov (United States)

      Marra, Valerio; Amendola, Luca; Sawicki, Ignacy; Valkenburg, Wessel

      2013-06-14

There is an approximately 9% discrepancy, corresponding to 2.4 σ, between two independent constraints on the expansion rate of the Universe: one indirectly arising from the cosmic microwave background and baryon acoustic oscillations, and one more directly obtained from local measurements of the relation between redshifts and distances to sources. We argue that by taking into account the local gravitational potential at the position of the observer, this tension, strengthened by the recent Planck results, is partially relieved and the concordance of the Standard Model of cosmology is increased. We estimate that measurements of the local Hubble constant are subject to a cosmic variance of about 2.4% (limiting the local sample to redshifts z > 0.010) or 1.3% (limiting it to z > 0.023), a more significant correction than has previously been taken into account. Nonetheless, we show that one would need a very rare fluctuation to fully explain the offset in the Hubble rates. If this tension is further strengthened, a cosmology beyond the Standard Model may prove necessary.

    6. Designing electricity generation portfolios using the mean-variance approach

      Directory of Open Access Journals (Sweden)

      Jorge Cunha

      2014-06-01

The use of the mean-variance approach (MVA) is well demonstrated in the financial literature for the optimal design of financial asset portfolios. Electricity sector portfolios are also guided by similar objectives, namely maximizing return and minimizing risk. As such, this paper proposes two possible MVA formulations for the design of optimal renewable electricity production portfolios. The first approach is directed to portfolio output maximization and the second one to portfolio cost optimization. The models were implemented with data obtained for each quarter of an hour over a time period of close to four years for the Portuguese electricity system. A set of renewable energy source (RES) portfolios was obtained, mixing three RES technologies, namely hydro power, wind power and photovoltaic. This made it possible to recognize the seasonality of the resources, demonstrating that hydro power output is positively correlated with wind and that photovoltaic is negatively correlated with both hydro and wind. The results showed that for both models the less risky solutions are characterized by a mix of RES technologies, taking advantage of diversification benefits. As for the highest-return solutions, as expected, these were the ones with higher risk, but the portfolio composition largely depends on the assumed costs of each technology.
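
      A minimal sketch of the mean-variance machinery with three technologies; the expected outputs and standard deviations below are invented, and only the correlation signs follow the abstract (hydro positively correlated with wind, photovoltaic negatively correlated with both). A random search over long-only mixes locates the minimum-variance portfolio:

```python
# Random-search sketch of a minimum-variance renewable portfolio; all numbers
# are invented, only the correlation signs follow the abstract.
import numpy as np

mu = np.array([0.06, 0.08, 0.05])          # hydro, wind, PV expected output
sd = np.array([0.10, 0.15, 0.08])
corr = np.array([[ 1.0,  0.4, -0.3],       # hydro-wind positive
                 [ 0.4,  1.0, -0.2],       # PV negative against both
                 [-0.3, -0.2,  1.0]])
cov = np.outer(sd, sd) * corr

rng = np.random.default_rng(3)
best = None
for _ in range(50_000):                    # random long-only mixes
    w = rng.dirichlet(np.ones(3))
    ret, var = w @ mu, w @ cov @ w
    if best is None or var < best[1]:
        best = (ret, var, w)

ret, var, w = best
print(f"minimum-variance mix hydro/wind/PV = {np.round(w, 3)}, "
      f"return = {ret:.4f}, std = {np.sqrt(var):.4f}")
```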

    7. Cosmic variance in [O/Fe] in the Galactic disk

      CERN Document Server

      de Lis, S Bertran; Majewski, S R; Schiavon, R P; Holtzman, J A; Shetrone, M; Carrera, R; Pérez, A E García; Mészáros, Sz; Frinchaboy, P M; Hearty, F R; Nidever, D L; Zasowski, G; Ge, J

      2016-01-01

We examine the distribution of the [O/Fe] abundance ratio in stars across the Galactic disk using H-band spectra from the Apache Point Galactic Evolution Experiment (APOGEE). We minimized systematic errors by considering groups of stars with similar atmospheric parameters. The APOGEE measurements in the Sloan Digital Sky Survey Data Release 12 reveal that the square root of the star-to-star cosmic variance in oxygen at a given metallicity is about 0.03-0.04 dex in both the thin and thick disk. This is about twice as high as the spread found for solar twins in the immediate solar neighborhood and is probably caused by the wider range of galactocentric distances spanned by APOGEE stars. We quantified measurement uncertainties by examining the spread among stars with the same parameters in clusters; these errors are a function of effective temperature and metallicity, ranging between 0.005 dex at 4000 K and solar metallicity, to about 0.03 dex at 4500 K and [Fe/H] = -0.6. We argue that measuring the spread in [O/Fe] and other abundance ratios provides strong constraints for models of Galactic chemical evolution.

    8. Cosmic variance in [O/Fe] in the Galactic disk

      Science.gov (United States)

      Bertran de Lis, S.; Allende Prieto, C.; Majewski, S. R.; Schiavon, R. P.; Holtzman, J. A.; Shetrone, M.; Carrera, R.; García Pérez, A. E.; Mészáros, Sz.; Frinchaboy, P. M.; Hearty, F. R.; Nidever, D. L.; Zasowski, G.; Ge, J.

      2016-05-01

We examine the distribution of the [O/Fe] abundance ratio in stars across the Galactic disk using H-band spectra from the Apache Point Galactic Evolution Experiment (APOGEE). We minimize systematic errors by considering groups of stars with similar atmospheric parameters. The APOGEE measurements in the Sloan Digital Sky Survey data release 12 reveal that the square root of the star-to-star cosmic variance in the oxygen-to-iron ratio at a given metallicity is about 0.03-0.04 dex in both the thin and thick disk. This is about twice as high as the spread found for solar twins in the immediate solar neighborhood, and the difference is probably associated with the wider range of galactocentric distances spanned by APOGEE stars. We quantify the uncertainties by examining the spread among stars with the same parameters in clusters; these errors are a function of effective temperature and metallicity, ranging between 0.005 dex at 4000 K and solar metallicity, to about 0.03 dex at 4500 K and [Fe/H] ≃ -0.6. We argue that measuring the spread in [O/Fe] and other abundance ratios provides strong constraints for models of Galactic chemical evolution.

    9. Analysis of variance (ANOVA) models in lower extremity wounds.

      Science.gov (United States)

      Reed, James F

      2003-06-01

Consider a study in which 2 new treatments are being compared with a control group. One way to compare outcomes would simply be to compare the 2 treatments with the control and the 2 treatments against each other using 3 Student t tests. If we were to compare 4 treatment groups, then we would need to use 6 t tests. The difficulty with using multiple t tests is that as the number of groups increases, so does the likelihood of finding a difference between any pair of groups simply by chance when no real difference exists: by definition, a Type I error. If we were to perform 3 separate t tests, each at alpha = .05, the experiment-wise error rate increases to .14. As the number of multiple t tests increases, the experiment-wise error rate increases rather rapidly. The solution to this problem is to use analysis of variance (ANOVA) methods. Three basic ANOVA designs are reviewed, with hypothetical examples drawn from the literature to illustrate single-factor ANOVA, repeated measures ANOVA, and randomized block ANOVA. "No frills" SPSS or SAS code for each of these designs and the examples used are available from the author on request.
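
      A short numerical companion to this argument, assuming independent tests: it reproduces the quoted .14 experiment-wise error rate for 3 tests at alpha = .05, then runs a single-factor ANOVA on invented data (in Python, rather than the SPSS/SAS code the author offers):

```python
# Experiment-wise error rate for k independent t tests, then a one-way ANOVA
# on invented data for a control group and two treatments.
import numpy as np
from scipy import stats

for k in (3, 6):
    print(f"{k} tests at alpha = .05 -> experiment-wise rate "
          f"{1 - (1 - 0.05) ** k:.2f}")
# 3 tests give 0.14, the figure quoted above.

rng = np.random.default_rng(4)
control = rng.normal(5.0, 1.0, 20)
treatment_a = rng.normal(5.5, 1.0, 20)
treatment_b = rng.normal(6.0, 1.0, 20)

f_stat, p_value = stats.f_oneway(control, treatment_a, treatment_b)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")
```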

    10. On Eliminating The Scrambling Variance In Scrambled Response Models

      Directory of Open Access Journals (Sweden)

      Zawar Hussain

      2012-06-01

To circumvent the response bias in sensitive surveys, randomized response models are being used. Adding to this literature, we propose an improved response model utilizing both the additive and multiplicative scrambling methods. The proposed model provides greater flexibility in terms of fixing the constant K depending upon the guessed distribution of the sensitive variable and the nature of the population. The proposed model yields an unbiased estimator and is anticipated to be more protective of respondents' privacy. The relative efficiency of the proposed estimator is compared with that of the Hussain and Shabbir (2007) RRM. Furthermore, the proposed model itself is improved by taking two responses from each respondent and suggesting a weighted estimator that is unbiased and has the minimum possible sampling variance. The suggested weighted estimator is unconditionally more efficient than all estimators suggested to date. Future research may focus on the privacy protection provided by scrambling models. More scrambling models may be identified and improved by taking two responses from each respondent in such a way that the scrambling effect is balanced out.

    11. Cosmological N-body simulations with suppressed variance

      Science.gov (United States)

      Angulo, Raul E.; Pontzen, Andrew

      2016-10-01

We present and test a method that dramatically reduces variance arising from the sparse sampling of wavemodes in cosmological simulations. The method uses two simulations which are fixed (the initial Fourier mode amplitudes are fixed to the ensemble average power spectrum) and paired (with initial modes exactly out of phase). We measure the power spectrum, monopole and quadrupole redshift-space correlation functions, halo mass function and reduced bispectrum at z = 1. By these measures, predictions from a fixed pair can be as precise on non-linear scales as an average over 50 traditional simulations. The fixing procedure introduces a non-Gaussian correction to the initial conditions; we give an analytic argument showing why the simulations are still able to predict the mean properties of the Gaussian ensemble. We anticipate that the method will drive down the computational time requirements for accurate large-scale explorations of galaxy bias and clustering statistics, and facilitate the use of numerical simulations in cosmological data interpretation.
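
      A toy one-dimensional illustration of the fixing and pairing steps, not the authors' cosmological initial-conditions code: Fourier amplitudes are set exactly to the square root of an assumed power spectrum (fixing), and the paired realization shifts every phase by π, which makes it the sign-flipped copy of the first field:

```python
# One-dimensional "fixed and paired" Gaussian field sketch.
import numpy as np

rng = np.random.default_rng(5)
n = 256
k = np.fft.rfftfreq(n) * n                 # integer wavenumbers 0..n/2

power = np.zeros_like(k)
power[1:] = k[1:] ** -1.5                  # illustrative target spectrum P(k)

phases = rng.uniform(0.0, 2.0 * np.pi, k.size)
amplitude = np.sqrt(power)                 # fixed: no Rayleigh scatter in |delta_k|

delta_a = np.fft.irfft(amplitude * np.exp(1j * phases), n)
delta_b = np.fft.irfft(amplitude * np.exp(1j * (phases + np.pi)), n)  # paired

# Shifting every phase by pi multiplies each mode by -1, so the paired
# realization is exactly the sign-flipped copy of the first.
print(np.allclose(delta_a, -delta_b))      # True
```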

    12. Analysis of variance in neuroreceptor ligand imaging studies.

      Directory of Open Access Journals (Sweden)

      Ji Hyun Ko

Radioligand positron emission tomography (PET) with dual scan paradigms can provide valuable insight into changes in synaptic neurotransmitter concentration due to experimental manipulation. The residual t-test has been utilized to improve the sensitivity of the t-test in PET studies. However, no further development of statistical tests using residuals has been proposed so far for cases in which there are more than two conditions. Here, we propose the residual f-test, a one-way analysis of variance (ANOVA), and examine its feasibility using simulated [(11)C]raclopride PET data. We also revisit data from our previously published [(11)C]raclopride PET study, in which 10 individuals underwent three PET scans under different conditions. We found that the residual f-test is superior in sensitivity to the conventional f-test while still controlling for type I error. The test will therefore allow us to reliably test hypotheses in the smaller sample sizes often used in exploratory PET studies.

    13. Chromatic visualization of reflectivity variance within hybridized directional OCT images

      Science.gov (United States)

      Makhijani, Vikram S.; Roorda, Austin; Bayabo, Jan Kristine; Tong, Kevin K.; Rivera-Carpio, Carlos A.; Lujan, Brandon J.

      2013-03-01

This study presents a new method of visualizing hybridized images of retinal spectral domain optical coherence tomography (SDOCT) data comprised of varied directional reflectivity. Due to the varying reflectivity of certain retinal structures relative to the angle of incident light, SDOCT images obtained with differing entry positions result in nonequivalent images of corresponding cellular and extracellular structures, especially within layers containing photoreceptor components. Harnessing this property, cross-sectional pathologic and non-pathologic macular images were obtained from multiple pupil entry positions using commercially available OCT systems, and custom segmentation, alignment, and hybridization algorithms were developed to chromatically visualize the composite variance of reflectivity effects. In these images, strong relative reflectivity from any given direction visualizes as relative intensity of its corresponding color channel. Evident in non-pathologic images was marked enhancement of Henle's fiber layer (HFL) visualization and varying reflectivity patterns of the inner limiting membrane (ILM) and photoreceptor inner/outer segment junctions (IS/OS). Pathologic images displayed similar and additional patterns. Such visualization may allow a more intuitive understanding of structural and physiologic processes in retinal pathologies.

    14. Sparse recovery with unknown variance: a LASSO-type approach

      CERN Document Server

      Chretien, Stephane

      2011-01-01

We address the issue of estimating the regression vector $\beta$ and the variance $\sigma^{2}$ in the generic s-sparse linear model $y = X\beta + z$, with $\beta \in \mathbb{R}^{p}$, $y \in \mathbb{R}^{n}$, $z \sim \mathcal{N}(0, \sigma^2 I)$ and $p > n$. We propose a new LASSO-type method that jointly estimates $\beta$, $\sigma^{2}$ and the relaxation parameter $\lambda$ by imposing an explicit trade-off constraint between the log-likelihood and $\ell_1$-penalization terms. We prove that exact recovery of the support and sign pattern of $\beta$ holds with probability at least $1 - O(p^{-\alpha})$. Our assumptions, parametrized by $\alpha$, are similar to the ones proposed in [Candès and Plan, Ann. Stat. 2009] for $\sigma^{2}$ known. The proof relies on a tail decoupling argument with explicit constants and a recent version of the non-commutative Bernstein inequality [Tropp, arXiv 2010]. Our result is then derived from the optimality conditions for the estimators of $\beta$ and $\lambda$. Finally, a thorough analysis of the standard LASSO estimator as a functi...
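
      As a loose illustration of joint $(\beta, \sigma^2)$ estimation, and not the paper's exact estimator, the scaled-lasso-style iteration below alternates a lasso fit with a residual-variance update, rescaling the penalty by the current $\sigma$ estimate; the data, penalty rule, and iteration count are all assumptions:

```python
# Alternating lasso / variance-update iteration (scaled-lasso flavour); the
# design, sparsity level and penalty constant are invented for illustration.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(9)
n, p, s = 100, 300, 5
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:s] = 2.0
y = X @ beta + rng.normal(0.0, 1.0, n)           # true sigma = 1

sigma = np.std(y)                                 # crude initial guess
for _ in range(10):
    alpha = sigma * np.sqrt(2.0 * np.log(p) / n)  # noise-scaled penalty
    fit = Lasso(alpha=alpha).fit(X, y)
    resid = y - fit.predict(X)
    dof = max(n - np.count_nonzero(fit.coef_), 1)
    sigma = np.sqrt(resid @ resid / dof)          # shrinkage biases this upward

print(f"estimated sigma = {sigma:.2f}, support = {np.flatnonzero(fit.coef_)}")
```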

    15. Analysis of variance in neuroreceptor ligand imaging studies.

      Science.gov (United States)

      Ko, Ji Hyun; Reilhac, Anthonin; Ray, Nicola; Rusjan, Pablo; Bloomfield, Peter; Pellecchia, Giovanna; Houle, Sylvain; Strafella, Antonio P

      2011-01-01

Radioligand positron emission tomography (PET) with dual scan paradigms can provide valuable insight into changes in synaptic neurotransmitter concentration due to experimental manipulation. The residual t-test has been utilized to improve the sensitivity of the t-test in PET studies. However, no further development of statistical tests using residuals has been proposed so far for cases in which there are more than two conditions. Here, we propose the residual f-test, a one-way analysis of variance (ANOVA), and examine its feasibility using simulated [(11)C]raclopride PET data. We also revisit data from our previously published [(11)C]raclopride PET study, in which 10 individuals underwent three PET scans under different conditions. We found that the residual f-test is superior in sensitivity to the conventional f-test while still controlling for type I error. The test will therefore allow us to reliably test hypotheses in the smaller sample sizes often used in exploratory PET studies.

    16. A Mean-Variance Diagnosis of the Financial Crisis: International Diversification and Safe Havens

      Directory of Open Access Journals (Sweden)

      Alexander Eptas

      2010-12-01

We use mean-variance analysis with short-selling constraints to diagnose the effects of the recent global financial crisis by evaluating the potential benefits of international diversification in the search for 'safe havens'. We use stock index data for a sample of developed, advanced-emerging and emerging countries. 'Text-book' results are obtained for the pre-crisis analysis, with the optimal portfolio for any risk-averse investor being the tangency portfolio of the All-Country portfolio frontier. During the crisis there is a disjunction between bank lending and stock markets, revealed by negative average returns and the absence of any empirical Capital Market Line. Israel and Colombia emerge as the safest havens for any investor during the crisis. For Israel this may reflect the protection afforded by special trade links and diaspora support, while for Colombia we speculate that this reveals the impact on world financial markets of the demand for cocaine.

    17. On the stability and spatiotemporal variance distribution of salinity in the upper ocean

      Science.gov (United States)

      O'Kane, Terence J.; Monselesan, Didier P.; Maes, Christophe

      2016-06-01

Despite recent advances in ocean observing arrays and satellite sensors, there remains great uncertainty in the large-scale spatial variations of upper ocean salinity on interannual to decadal timescales. Consonant with both broad-scale surface warming and the amplification of the global hydrological cycle, studies of observed global multidecadal salinity changes have typically focussed on the linear response to anthropogenic forcing, but not on salinity variations due to changes in static stability or variability due to intrinsic ocean or internal climate processes. Here, we examine the static stability and spatiotemporal variability of upper ocean salinity across a hierarchy of models and reanalyses. In particular, we partition the variance into time bands via application of singular spectral analysis, considering sea surface salinity (SSS), the Brunt-Väisälä frequency (N²), and the ocean salinity stratification in terms of the stabilizing effect due to the haline part of N² over the upper 500 m. We identify regions of significant coherent SSS variability, either intrinsic to the ocean or in response to the interannually varying atmosphere. Based on consistency across models (CMIP5 and forced experiments) and reanalyses, we identify the stabilizing role of salinity in the tropics, typically associated with heavy precipitation and barrier layer formation, and the role of salinity in destabilizing upper ocean stratification in the subtropical regions where large-scale density compensation typically occurs.

    18. The benefit of regional diversification of cogeneration investments in Europe. A mean-variance portfolio analysis

      Energy Technology Data Exchange (ETDEWEB)

      Westner, Guenther; Madlener, Reinhard [E.ON Energy Projects GmbH, Arnulfstrasse 56, 80335 Munich (Germany)

      2010-12-15

      The EU Directive 2004/8/EC, concerning the promotion of cogeneration, established principles on how EU member states can support combined heat and power generation (CHP). Up to now, the implementation of these principles into national law has not been uniform, and has led to the adoption of different promotion schemes for CHP across the EU member states. In this paper, we first give an overview of the promotion schemes for CHP in various European countries. In a next step, we take two standard CHP technologies, combined-cycle gas turbines (CCGT-CHP) and engine-CHP, and apply exemplarily four selected support mechanisms used in the four largest European energy markets: feed-in tariffs in Germany; energy efficiency certificates in Italy; benefits through tax reduction in the UK; and purchase obligations for power from CHP generation in France. For contracting companies, it could be of interest to diversify their investment in new CHP facilities regionally over several countries in order to reduce country and regulatory risk. By applying the Mean-Variance Portfolio (MVP) theory, we derive characteristic return-risk profiles of the selected CHP technologies in different countries. The results show that the returns on CHP investments differ significantly depending on the country, the support scheme, and the selected technology studied. While a regional diversification of investments in CCGT-CHP does not contribute to reducing portfolio risks, a diversification of investments in engine-CHP can decrease the risk exposure. (author)

    19. Variance Analysis of Wind and Natural Gas Generation under Different Market Structures: Some Observations

      Energy Technology Data Exchange (ETDEWEB)

      Bush, B.; Jenkin, T.; Lipowicz, D.; Arent, D. J.; Cooke, R.

      2012-01-01

      Does large scale penetration of renewable generation such as wind and solar power pose economic and operational burdens on the electricity system? A number of studies have pointed to the potential benefits of renewable generation as a hedge against the volatility and potential escalation of fossil fuel prices. Research also suggests that the lack of correlation of renewable energy costs with fossil fuel prices means that adding large amounts of wind or solar generation may also reduce the volatility of system-wide electricity costs. Such variance reduction of system costs may be of significant value to consumers due to risk aversion. The analysis in this report recognizes that the potential value of risk mitigation associated with wind generation and natural gas generation may depend on whether one considers the consumer's perspective or the investor's perspective and whether the market is regulated or deregulated. We analyze the risk and return trade-offs for wind and natural gas generation for deregulated markets based on hourly prices and load over a 10-year period using historical data in the PJM Interconnection (PJM) from 1999 to 2008. Similar analysis is then simulated and evaluated for regulated markets under certain assumptions.

    20. The benefit of regional diversification of cogeneration investments in Europe: A mean-variance portfolio analysis

      Energy Technology Data Exchange (ETDEWEB)

      Westner, Guenther, E-mail: guenther.westner@eon-energie.co [E.ON Energy Projects GmbH, Arnulfstrasse 56, 80335 Munich (Germany); Madlener, Reinhard, E-mail: rmadlener@eonerc.rwth-aachen.d [Institute for Future Energy Consumer Needs and Behavior (FCN), Faculty of Business and Economics/E.ON Energy Research Center, RWTH Aachen University, Mathieustrasse 6, 52074 Aachen (Germany)

      2010-12-15

The EU Directive 2004/8/EC, concerning the promotion of cogeneration, established principles on how EU member states can support combined heat and power generation (CHP). Up to now, the implementation of these principles into national law has not been uniform, and has led to the adoption of different promotion schemes for CHP across the EU member states. In this paper, we first give an overview of the promotion schemes for CHP in various European countries. In a next step, we take two standard CHP technologies, combined-cycle gas turbines (CCGT-CHP) and engine-CHP, and apply exemplarily four selected support mechanisms used in the four largest European energy markets: feed-in tariffs in Germany; energy efficiency certificates in Italy; benefits through tax reduction in the UK; and purchase obligations for power from CHP generation in France. For contracting companies, it could be of interest to diversify their investment in new CHP facilities regionally over several countries in order to reduce country and regulatory risk. By applying the Mean-Variance Portfolio (MVP) theory, we derive characteristic return-risk profiles of the selected CHP technologies in different countries. The results show that the returns on CHP investments differ significantly depending on the country, the support scheme, and the selected technology studied. While a regional diversification of investments in CCGT-CHP does not contribute to reducing portfolio risks, a diversification of investments in engine-CHP can decrease the risk exposure. - Research highlights: → Preconditions for CHP investments differ significantly between the EU member states. → Regional diversification of CHP investments can reduce the total portfolio risk. → Risk reduction depends on the chosen CHP technology.

    1. The pricing of long and short run variance and correlation risk in stock returns

      OpenAIRE

      Cosemans, M.

      2011-01-01

      This paper studies the pricing of long and short run variance and correlation risk. The predictive power of the market variance risk premium for returns is driven by the correlation risk premium and the systematic part of individual variance premia. Furthermore, I find that aggregate volatility risk is priced in the cross-section because shocks to average stock volatility and correlation are priced. Both long and short run volatility and correlation factors have explanatory power for returns....

    2. AN ADAPTIVE OPTIMAL KALMAN FILTER FOR STOCHASTIC VIBRATION CONTROL SYSTEM WITH UNKNOWN NOISE VARIANCES

      Institute of Scientific and Technical Information of China (English)

      Li Shu; Zhuo Jiashou; Ren Qingwen

      2000-01-01

In this paper, an optimal criterion is presented for an adaptive Kalman filter in a control system with unknown variances of stochastic vibration, obtained by constructing a function of the noise variances and minimizing that function. We solve for the model and measurement noise variances by using the DFP optimization method, to guarantee that the results of the Kalman filter are optimal. Finally, the control of vibration can be implemented by the LQG method.
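
      A minimal sketch of the adaptive idea on a scalar random-walk state, substituting a simple innovation-based update for the paper's DFP optimization: since E[ν²] = P_pred + R for the innovation ν, the measurement-noise variance R can be re-estimated from a moving window of innovations; all model numbers are invented:

```python
# Scalar Kalman filter with innovation-based adaptation of the measurement
# noise variance r; a stand-in for the paper's DFP-optimized criterion.
import numpy as np

rng = np.random.default_rng(6)
n, q_true, r_true = 2000, 0.01, 0.25
x_true = np.cumsum(rng.normal(0.0, np.sqrt(q_true), n))   # random-walk state
z = x_true + rng.normal(0.0, np.sqrt(r_true), n)          # noisy measurements

x, p = 0.0, 1.0        # state estimate and its error variance
q, r = 0.01, 1.0       # process noise (assumed known), r deliberately wrong
innovations = []

for zk in z:
    p_pred = p + q                    # predict step (F = H = 1)
    nu = zk - x                       # innovation
    k_gain = p_pred / (p_pred + r)
    x += k_gain * nu                  # update step
    p = (1.0 - k_gain) * p_pred

    innovations.append(nu)
    if len(innovations) >= 100:
        # E[nu^2] = p_pred + r, so r can be recovered from recent innovations.
        r = max(np.mean(np.square(innovations[-100:])) - p_pred, 1e-6)

print(f"adapted r = {r:.3f} (true value {r_true})")
```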

    3. How the Weak Variance of Momentum Can Turn Out to be Negative

      OpenAIRE

      2015-01-01

Weak values are average quantities; therefore, investigating their associated variance is crucial in understanding their place in quantum mechanics. We develop the concept of a position-postselected weak variance of momentum as cohesively as possible, building primarily on material from Moyal (Mathematical Proceedings of the Cambridge Philosophical Society, Cambridge University Press, Cambridge, 1949) and Sonego (Found Phys 21(10):1135, 1991). The weak variance is defined in terms of the Wign...

    4. Inferring changes in ENSO amplitude from the variance of proxy records

      OpenAIRE

      Russon, Tom; Tudhope, Alexander; Collins, Mat; Hegerl, Gabi

      2015-01-01

      One common approach to investigating past changes in ENSO amplitude is through quantifying the variance of ENSO-influenced proxy records. However, a component of the variance of all such proxies will reflect influences that are unrelated to the instrumental climatic indices from which modern ENSO amplitudes are defined. The unrelated component of proxy variance introduces a fundamental source of uncertainty to all such constraints on past ENSO amplitudes. Based on a simple parametric approach...

    5. The Impact of Jump Distributions on the Implied Volatility of Variance

      DEFF Research Database (Denmark)

      Nicolato, Elisa; Pedersen, David Sloth; Pisani, Camilla

      2016-01-01

      of jumps on the associated implied volatility smile. We provide sufficient conditions for the asymptotic behavior of the implied volatility of variance for small and large strikes. In particular, by selecting alternative jump distributions, we show that one can obtain fundamentally different shapes...... of the implied volatility of variance smile -- some clearly at odds with the upward-sloping volatility skew observed in variance markets....

    6. The Impact of Jump Distributions on the Implied Volatility of Variance

      DEFF Research Database (Denmark)

      Nicolato, Elisa; Pisani, Camilla; Pedersen, David Sloth

      2017-01-01

      of jumps on the associated implied volatility smile. We provide sufficient conditions for the asymptotic behavior of the implied volatility of variance for small and large strikes. In particular, by selecting alternative jump distributions, we show that one can obtain fundamentally different shapes...... of the implied volatility of variance smile -- some clearly at odds with the upward-sloping volatility skew observed in variance markets....

    7. Second order pseudo-maximum likelihood estimation and conditional variance misspecification

      OpenAIRE

      Lejeune, Bernard

      1997-01-01

In this paper, we study the behavior of second order pseudo-maximum likelihood estimators under conditional variance misspecification. We determine sufficient and essentially necessary conditions for such an estimator to be, regardless of the conditional variance (mis)specification, consistent for the mean parameters when the conditional mean is correctly specified. These conditions imply that, even if mean and variance parameters vary independently, standard PML2 estimators are generally not...

    8. Measurement of Allan variance and phase noise at fractions of a millihertz

      Science.gov (United States)

      Conroy, Bruce L.; Le, Duc

      1990-01-01

      Although the measurement of Allan variance of oscillators is well documented, there is a need for a simplified system for finding the degradation of phase noise and Allan variance step-by-step through a system. This article describes an instrumentation system for simultaneous measurement of additive phase noise and degradation in Allan variance through a transmitter system. Also included are measurements of a 20-kW X-band transmitter showing the effect of adding a pass tube regulator.

    9. Effects of noise variance model on optimal feedback design and actuator placement

      Science.gov (United States)

      Ruan, Mifang; Choudhury, Ajit K.

      1994-01-01

      In optimal placement of actuators for stochastic systems, it is commonly assumed that the actuator noise variances are not related to the feedback matrix and the actuator locations. In this paper, we will discuss the limitation of that assumption and develop a more practical noise variance model. Various properties associated with optimal actuator placement under the assumption of this noise variance model are discovered through the analytical study of a second order system.

    10. The Multi-allelic Genetic Architecture of a Variance-Heterogeneity Locus for Molybdenum Concentration in Leaves Acts as a Source of Unexplained Additive Genetic Variance.

      Directory of Open Access Journals (Sweden)

      Simon K G Forsberg

      2015-11-01

Genome-wide association (GWA) analyses have generally been used to detect individual loci contributing to the phenotypic diversity in a population by the effects of these loci on the trait mean. More rarely, loci have also been detected based on variance differences between genotypes. Several hypotheses have been proposed to explain the possible genetic mechanisms leading to such variance signals. However, little is known about what causes these signals, or whether this genetic variance-heterogeneity reflects mechanisms of importance in natural populations. Previously, we identified a variance-heterogeneity GWA (vGWA) signal for leaf molybdenum concentrations in Arabidopsis thaliana. Here, fine-mapping of this association reveals that the vGWA emerges from the effects of three independent genetic polymorphisms that are all in strong LD with the markers displaying the genetic variance-heterogeneity. By revealing the genetic architecture underlying this vGWA signal, we uncovered the molecular source of a significant amount of hidden additive genetic variation or "missing heritability". Two of the three polymorphisms underlying the genetic variance-heterogeneity are promoter variants for Molybdate transporter 1 (MOT1), and the third is a variant located ~25 kb downstream of this gene. A fourth independent association was also detected ~600 kb upstream of MOT1. Use of a T-DNA knockout allele highlights Copper Transporter 6 (COPT6; AT2G26975) as a strong candidate gene for this association. Our results show that extended LD across a complex locus including multiple functional alleles can lead to a variance-heterogeneity between genotypes in natural populations. Further, they provide novel insights into the genetic regulation of ion homeostasis in A. thaliana, and empirically confirm that variance-heterogeneity based GWA methods are a valuable tool to detect novel associations of biological importance in natural populations.
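
      For intuition about the statistics behind a vGWA scan, the sketch below applies a Brown-Forsythe (median-centered Levene) test of equal phenotype variance across three genotype classes at one simulated marker; this illustrates the general test family, not the authors' pipeline:

```python
# Brown-Forsythe test for variance heterogeneity across genotype classes at
# a single simulated marker; means are equal, variances differ by design.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 300
genotype = rng.integers(0, 3, n)               # 0/1/2 minor-allele copies

sd_by_class = np.array([1.0, 1.3, 1.8])        # variance grows with the allele
phenotype = rng.normal(0.0, sd_by_class[genotype])

groups = [phenotype[genotype == g] for g in range(3)]
stat, p = stats.levene(*groups, center="median")   # Brown-Forsythe variant
print(f"Brown-Forsythe W = {stat:.2f}, p = {p:.2e}")
```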

    11. Cortical surface-based analysis reduces bias and variance in kinetic modeling of brain PET data

      DEFF Research Database (Denmark)

      Greve, Douglas N; Svarer, Claus; Fisher, Patrick M;

      2014-01-01

      -based smoothing resulted in dramatically less bias and the least variance of the methods tested for smoothing levels 5mm and higher. When used in combination with PVC, surface-based smoothing minimized the bias without significantly increasing the variance. Surface-based smoothing resulted in 2-4 times less...... intersubject variance than when volume smoothing was used. This translates into more than 4 times fewer subjects needed in a group analysis to achieve similarly powered statistical tests. Surface-based smoothing has less bias and variance because it respects cortical geometry by smoothing the PET data only...

    12. Variance-in-Mean Effects of the Long Forward-Rate Slope

      DEFF Research Database (Denmark)

      Christiansen, Charlotte

      2005-01-01

      This paper contains an empirical analysis of the dependence of the long forward-rate slope on the long-rate variance. The long forward-rate slope and the long rate are described by a bivariate GARCH-in-mean model. In accordance with theory, a negative long-rate variance-in-mean effect for the long...... forward-rate slope is documented. Thus, the greater the long-rate variance, the steeper the long forward-rate curve slopes downward (the long forward-rate slope is negative). The variance-in-mean effect is both statistically and economically significant....

    13. An evaluation of how downscaled climate data represents historical precipitation characteristics beyond the means and variances

      Science.gov (United States)

      Kusangaya, Samuel; Toucher, Michele L. Warburton; van Garderen, Emma Archer; Jewitt, Graham P. W.

      2016-09-01

Precipitation is the main driver of the hydrological cycle. For climate change impact analysis, the use of downscaled precipitation, amongst other factors, determines the accuracy of modelled runoff. Precipitation is, however, considerably more difficult to model than temperature, largely due to its high spatial and temporal variability and its nonlinear nature. Because of these qualities of precipitation, a key challenge for water resources management is how to incorporate potentially significant but highly uncertain precipitation characteristics when modelling potential changes in climate in order to support local management decisions. The research undertaken here was aimed at evaluating how well downscaled climate data represent the underlying historical precipitation characteristics beyond the means and variances. Using the uMngeni Catchment in KwaZulu-Natal, South Africa as a case study, the occurrence of rainfall, rainfall threshold events and wet-dry sequences was analysed for the current climate (1961-1999). The number of rain days with daily rainfall > 1 mm, > 5 mm, > 10 mm, > 20 mm and > 40 mm for each of the 10 selected climate models was compared to the number of rain days at 15 rain stations. Results from graphical and statistical analysis indicated that on a monthly basis rain days are overestimated for all climate models. Seasonally, the number of rain days was overestimated in autumn and winter and underestimated in summer and spring. The overall conclusion was that despite the advancement in downscaling and the improved spatial scale for a better representation of climate variables such as rainfall in hydrological impact studies, downscaled rainfall data still do not simulate well some important rainfall characteristics, such as the number of rain days and wet-dry sequences. This is particularly critical since, whilst for climatologists means and variances might be simulated well in downscaled GCMs, for hydrologists...

    14. Exceptional Reductions

      CERN Document Server

      Marrani, Alessio; Riccioni, Fabio

      2011-01-01

      Starting from basic identities of the group E8, we perform progressive reductions, namely decompositions with respect to the maximal and symmetric embeddings of E7xSU(2) and then of E6xU(1). This procedure provides a systematic approach to the basic identities involving invariant primitive tensor structures of various irreprs. of finite-dimensional exceptional Lie groups. We derive novel identities for E7 and E6, highlighting the E8 origin of some well known ones. In order to elucidate the connections of this formalism to four-dimensional Maxwell-Einstein supergravity theories based on symmetric scalar manifolds (and related to irreducible Euclidean Jordan algebras, the unique exception being the triality-symmetric N = 2 stu model), we then derive a fundamental identity involving the unique rank-4 symmetric invariant tensor of the 0-brane charge symplectic irrepr. of U-duality groups, with potential applications in the quantization of the charge orbits of supergravity theories, as well as in the study of mult...

    15. Inferred changes in El Niño–Southern Oscillation variance over the past six centuries

      Directory of Open Access Journals (Sweden)

      S. McGregor

      2013-10-01

It is vital to understand how the El Niño–Southern Oscillation (ENSO) has responded to past changes in natural and anthropogenic forcings, in order to better understand and predict its response to future greenhouse warming. To date, however, the instrumental record is too brief to fully characterize natural ENSO variability, while large discrepancies exist amongst paleo-proxy reconstructions of ENSO. These paleo-proxy reconstructions have typically attempted to reconstruct ENSO's temporal evolution, rather than the variance of these temporal changes. Here a new approach is developed that synthesizes the variance changes from various proxy data sets to provide a unified and updated estimate of past ENSO variance. The method is tested using surrogate data from two coupled general circulation model (CGCM) simulations. It is shown that in the presence of dating uncertainties, synthesizing variance information provides a more robust estimate of ENSO variance than synthesizing the raw data and then identifying its running variance. We also examine whether good temporal correspondence between proxy data and instrumental ENSO records implies a good representation of ENSO variance. In the climate modeling framework we show that a significant improvement in reconstructing ENSO variance changes is found when combining information from diverse ENSO-teleconnected source regions, rather than by relying on a single well-correlated location. This suggests that ENSO variance estimates derived from a single site should be viewed with caution. Finally, synthesizing existing ENSO reconstructions to arrive at a better estimate of past ENSO variance changes, we find robust evidence that the ENSO variance for any 30 yr period during the interval 1590–1880 was considerably lower than that observed during 1979–2009.

    16. Inferred changes in El Niño-Southern Oscillation variance over the past six centuries

      Directory of Open Access Journals (Sweden)

      S. McGregor

      2013-05-01

It is vital to understand how the El Niño–Southern Oscillation (ENSO) has responded to past changes in natural and anthropogenic forcings, in order to better understand and predict its response to future greenhouse warming. To date, however, the instrumental record is too brief to fully characterize natural ENSO variability, while large discrepancies exist amongst paleo-proxy reconstructions of ENSO. These paleo-proxy reconstructions have typically attempted to reconstruct the full temporal variability of ENSO, rather than focusing simply on its variance. Here a new approach is developed that synthesizes the information on common low frequency variance changes from various proxy datasets to obtain estimates of ENSO variance. The method is tested using surrogate data from two coupled general circulation model (CGCM) simulations. It is shown that in the presence of dating uncertainties, synthesizing variance information provides a more robust estimate of ENSO variance than synthesizing the raw data and then identifying its running variance. We also examine whether good temporal correspondence between proxy data and instrumental ENSO records implies a good representation of ENSO variance. A significant improvement in reconstructing ENSO variance changes is found when combining several proxies from diverse ENSO-teleconnected source regions, rather than by relying on a single well-correlated location, suggesting that ENSO variance estimates derived from a single site should be viewed with caution. Finally, identifying the common variance signal in a series of existing proxy-based reconstructions of ENSO variability over the last 600 yr, we find that the common ENSO variance over the period 1600–1900 was considerably lower than during 1979–2009.

    17. Dimension reduction in heterogeneous neural networks: Generalized Polynomial Chaos (gPC) and ANalysis-Of-VAriance (ANOVA)

      Science.gov (United States)

      Choi, M.; Bertalan, T.; Laing, C. R.; Kevrekidis, I. G.

      2016-09-01

      We propose, and illustrate via a neural network example, two different approaches to coarse-graining large heterogeneous networks. Both approaches are inspired from, and use tools developed in, methods for uncertainty quantification (UQ) in systems with multiple uncertain parameters - in our case, the parameters are heterogeneously distributed on the network nodes. The approach shows promise in accelerating large scale network simulations as well as coarse-grained fixed point, periodic solution computation and stability analysis. We also demonstrate that the approach can successfully deal with structural as well as intrinsic heterogeneities.

    18. Heritable Environmental Variance Causes Nonlinear Relationships Between Traits: Application to Birth Weight and Stillbirth of Pigs

      NARCIS (Netherlands)

      Mulder, H.A.; Hill, W.G.; Knol, E.F.

      2015-01-01

      There is recent evidence from laboratory experiments and analysis of livestock populations that not only the phenotype itself, but also its environmental variance, is under genetic control. Little is known about the relationships between the environmental variance of one trait and mean levels of other traits ...

    19. Robustness of Kriging when interpolating in random simulation with heterogeneous variances: some experiments

      NARCIS (Netherlands)

      Kleijnen, J.P.C.; Beers, van W.C.M.

      2005-01-01

      This paper investigates the use of Kriging in random simulation when the simulation output variances are not constant. Kriging gives a response surface or metamodel that can be used for interpolation. Because Ordinary Kriging assumes constant variances, this paper also applies Detrended Kriging to e...

    20. A mean–variance objective for robust production optimization in uncertain geological scenarios

      DEFF Research Database (Denmark)

      Capolei, Andrea; Suwartadi, Eka; Foss, Bjarne;

      2014-01-01

      ... optimization problem is the original and simplest example of a mean–variance criterion for mitigating risk. Risk is mitigated in oil production by including both the expected NPV (mean of NPV) and the risk (variance of NPV) for the ensemble of possible reservoir models. With the inclusion of the risk...

    1. A Mean-Variance Criterion for Economic Model Predictive Control of Stochastic Linear Systems

      DEFF Research Database (Denmark)

      Sokoler, Leo Emil; Dammann, Bernd; Madsen, Henrik;

      2014-01-01

      Stochastic linear systems arise in a large number of control applications. This paper presents a mean-variance criterion for economic model predictive control (EMPC) of such systems. The system operating cost and its variance are approximated based on a Monte Carlo approach. Using convex relaxation...

    2. Impact of time-inhomogeneous jumps and leverage type effects on returns and realised variances

      DEFF Research Database (Denmark)

      Veraart, Almut

      This paper studies the effect of time-inhomogeneous jumps and leverage type effects on realised variance calculations when the logarithmic asset price is given by a Lévy-driven stochastic volatility model. In such a model, the realised variance is an inconsistent estimator of the integrated...

    3. Prediction of breeding values and selection responses with genetic heterogeneity of environmental variance

      NARCIS (Netherlands)

      Mulder, H.A.; Bijma, P.; Hill, W.G.

      2007-01-01

      There is empirical evidence that genotypes differ not only in mean, but also in environmental variance of the traits they affect. Genetic heterogeneity of environmental variance may indicate genetic differences in environmental sensitivity. The aim of this study was to develop a general framework for ...

    4. Comment on a Wilcox Test Statistic for Comparing Means When Variances Are Unequal.

      Science.gov (United States)

      Hsiung, Tung-Hsing; And Others

      1994-01-01

      The alternative proposed by Wilcox (1989) to the James second-order statistic for comparing population means when variances are heterogeneous can sometimes be invalid. The degree to which the procedure is invalid depends on differences in sample size, the expected values of the observations, and population variances. (SLD)

    5. A FORTRAN program for computing the exact variance of weighted kappa.

      Science.gov (United States)

      Mielke, Paul W; Berry, Kenneth J; Johnston, Janis E

      2005-10-01

      An algorithm and associated FORTRAN program are provided for the exact variance of weighted kappa. Program VARKAP provides the weighted kappa test statistic, the exact variance of weighted kappa, a Z score, one-sided lower- and upper-tail N(0,1) probability values, and the two-tail N(0,1) probability value.
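
      The weighted kappa statistic itself is straightforward to compute; a hedged Python sketch follows (the exact variance derived by the authors is not reproduced here; a permutation approximation to the variance under independence is used as an illustrative stand-in):

        import numpy as np

        def weighted_kappa(a, b, k, w):
            """Cohen's weighted kappa for ratings a, b on categories 0..k-1,
            with agreement weights w (w[i, i] = 1)."""
            n = len(a)
            obs = np.zeros((k, k))
            for i, j in zip(a, b):
                obs[i, j] += 1.0 / n
            exp = np.outer(obs.sum(1), obs.sum(0))
            po, pe = (w * obs).sum(), (w * exp).sum()
            return (po - pe) / (1.0 - pe)

        def kappa_z(a, b, k, w, n_perm=2000, seed=0):
            """Z score based on a permutation approximation to the variance
            of weighted kappa under the null hypothesis of independence."""
            rng = np.random.default_rng(seed)
            kw = weighted_kappa(a, b, k, w)
            perm = np.array([weighted_kappa(a, rng.permutation(b), k, w)
                             for _ in range(n_perm)])
            return kw, (kw - perm.mean()) / perm.std(ddof=1)

        # Example with linear weights w[i, j] = 1 - |i - j| / (k - 1):
        k = 4
        w = 1 - np.abs(np.subtract.outer(np.arange(k), np.arange(k))) / (k - 1)
        rng = np.random.default_rng(1)
        a = rng.integers(0, k, 100)
        b = np.clip(a + rng.integers(-1, 2, 100), 0, k - 1)
        kw, z = kappa_z(a, b, k, w)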

    6. On the multiplicity of option prices under CEV with positive elasticity of variance

      NARCIS (Netherlands)

      Veestraeten, D.

      2017-01-01

      The discounted stock price under the Constant Elasticity of Variance model is not a martingale when the elasticity of variance is positive. Two expressions for the European call price then arise, namely the price for which put-call parity holds and the price that represents the lowest cost of replic...

    7. On the multiplicity of option prices under CEV with positive elasticity of variance

      NARCIS (Netherlands)

      Veestraeten, D.

      2014-01-01

      The discounted stock price under the Constant Elasticity of Variance (CEV) model is a strict local martingale when the elasticity of variance is positive. Two expressions for the European call price then arise, namely the risk-neutral call price and an alternative price that is linked to the unique ...

    8. Eigenanatomy: sparse dimensionality reduction for multi-modal medical image analysis.

      Science.gov (United States)

      Kandel, Benjamin M; Wang, Danny J J; Gee, James C; Avants, Brian B

      2015-02-01

      Rigorous statistical analysis of multimodal imaging datasets is challenging. Mass-univariate methods for extracting correlations between image voxels and outcome measurements are not ideal for multimodal datasets, as they do not account for interactions between the different modalities. The extremely high dimensionality of medical images necessitates dimensionality reduction, such as principal component analysis (PCA) or independent component analysis (ICA). These dimensionality reduction techniques, however, consist of contributions from every region in the brain and are therefore difficult to interpret. Recent advances in sparse dimensionality reduction have enabled construction of a set of image regions that explain the variance of the images while still maintaining anatomical interpretability. The projections of the original data on the sparse eigenvectors, however, are highly collinear and therefore difficult to incorporate into multi-modal image analysis pipelines. We propose here a method for clustering sparse eigenvectors and selecting a subset of the eigenvectors to make interpretable predictions from a multi-modal dataset. Evaluation on a publicly available dataset shows that the proposed method outperforms PCA and ICA-based regressions while still maintaining anatomical meaning. To facilitate reproducibility, the complete dataset used and all source code is publicly available.

    9. How the Weak Variance of Momentum Can Turn Out to be Negative

      CERN Document Server

      Feyereisen, M R

      2015-01-01

      Weak values are average quantities, therefore investigating their associated variance is crucial in understanding their place in quantum mechanics. We develop the concept of a position-postselected weak variance of momentum as cohesively as possible, building primarily on material from Moyal (Mathematical Proceedings of the Cambridge Philosophical Society, Cambridge University Press, Cambridge, 1949) and Sonego (Found Phys 21(10):1135, 1991). The weak variance is defined in terms of the Wigner function, using a standard construction from probability theory. We show this corresponds to a measurable quantity, which is not itself a weak value. It also leads naturally to a connection between the imaginary part of the weak value of momentum and the quantum potential. We study how the negativity of the Wigner function causes negative weak variances, and the implications this has on a class of `subquantum' theories. We also discuss the role of weak variances in studying determinism, deriving the classical limit from...

    10. Cost-effective Sulphur Emission Reduction under Uncertainty

      OpenAIRE

      Altman, A; Ruszczynski, A.

      1993-01-01

      The problem of reducing SO2 emissions in Europe is considered. The costs of reduction are assumed to be uncertain and are modeled by a set of possible scenarios. A mean-variance model of the problem is formulated and a specialized computational procedure developed. The approach is applied to the transboundary air pollution model with real-world data.

    11. An introduction to analysis of variance (ANOVA) with special reference to data from clinical experiments in optometry.

      Science.gov (United States)

      Armstrong, R A; Slade, S V; Eperjesi, F

      2000-05-01

      This article is aimed primarily at eye care practitioners who are undertaking advanced clinical research, and who wish to apply analysis of variance (ANOVA) to their data. ANOVA is a data analysis method of great utility and flexibility. This article describes why and how ANOVA was developed, the basic logic which underlies the method and the assumptions that the method makes for it to be validly applied to data from clinical experiments in optometry. The application of the method to the analysis of a simple data set is then described. In addition, the methods available for making planned comparisons between treatment means and for making post hoc tests are evaluated. The problem of determining the number of replicates or patients required in a given experimental situation is also discussed.
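
      For readers who want to try the method outside a clinical statistics package, a minimal one-way ANOVA with a post hoc test can be run in Python with SciPy (hypothetical data; scipy.stats.f_oneway and scipy.stats.tukey_hsd are assumed available in the installed SciPy version):

        from scipy import stats

        # Three hypothetical treatment groups (e.g., clinical scores).
        g1 = [1.2, 1.4, 1.1, 1.3, 1.5]
        g2 = [1.6, 1.8, 1.7, 1.9, 1.6]
        g3 = [1.3, 1.2, 1.4, 1.3, 1.1]

        # Between-group vs. within-group variance ratio and its p value.
        f, p = stats.f_oneway(g1, g2, g3)
        print(f"F = {f:.2f}, p = {p:.4f}")

        # Tukey HSD as a post hoc test on all pairwise mean differences.
        print(stats.tukey_hsd(g1, g2, g3))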

    12. Estimates of (co)variance components and genetic parameters for growth traits of Avikalin sheep.

      Science.gov (United States)

      Prince, Leslie Leo L; Gowane, Gopal R; Chopra, Ashish; Arora, Amrit L

      2010-08-01

      (Co)variance components and genetic parameters for various growth traits of Avikalin sheep maintained at the Central Sheep and Wool Research Institute, Avikanagar, Rajasthan, India, were estimated by Restricted Maximum Likelihood, fitting six animal models with various combinations of direct and maternal effects. Records of 3,840 animals descended from 257 sires and 1,194 dams were taken for this study over a period of 32 years (1977-2008). Direct heritability estimates (from the best model as per the likelihood ratio test) for weight at birth, weaning, 6 and 12 months of age, and average daily gain from birth to weaning, weaning to 6 months, and 6 to 12 months were 0.28 +/- 0.03, 0.20 +/- 0.03, 0.28 +/- 0.07, 0.15 +/- 0.04, 0.21 +/- 0.03, 0.16 and 0.03 +/- 0.03, respectively. Maternal heritability for the traits declined as the animal grew older, and it was not evident at adult age or for post-weaning daily gain. The maternal permanent environmental effect (c(2)) declined significantly with advancing age of the animal. A small effect of c(2) on post-weaning weights was probably a carryover effect of pre-weaning maternal influence. A significant large negative genetic correlation was observed between direct and maternal genetic effects for all the traits, indicating antagonistic pleiotropy, which needs special care while formulating breeding plans. A fair rate of genetic progress seems possible in the flock by selection for all traits, but the direct-maternal genetic correlation needs to be taken into consideration.

    13. Meta-analysis with missing study-level sample variance data.

      Science.gov (United States)

      Chowdhry, Amit K; Dworkin, Robert H; McDermott, Michael P

      2016-07-30

      We consider a study-level meta-analysis with a normally distributed outcome variable and possibly unequal study-level variances, where the object of inference is the difference in means between a treatment and control group. A common complication in such an analysis is missing sample variances for some studies. A frequently used approach is to impute the weighted (by sample size) mean of the observed variances (mean imputation). Another approach is to include only those studies with variances reported (complete case analysis). Both mean imputation and complete case analysis are only valid under the missing-completely-at-random assumption, and even then the inverse variance weights produced are not necessarily optimal. We propose a multiple imputation method employing gamma meta-regression to impute the missing sample variances. Our method takes advantage of study-level covariates that may be used to provide information about the missing data. Through simulation studies, we show that multiple imputation, when the imputation model is correctly specified, is superior to competing methods in terms of confidence interval coverage probability and type I error probability when testing a specified group difference. Finally, we describe a similar approach to handling missing variances in cross-over studies. Copyright © 2016 John Wiley & Sons, Ltd.
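
      A minimal Python sketch of the mean-imputation baseline the authors compare against (assumptions: one summary mean, variance, and sample size per study, with missing variances given as NaN; the weight n/v is a simplified inverse-variance weight):

        import numpy as np

        def pooled_estimate(means, variances, sizes):
            """Inverse-variance pooled estimate after mean imputation of
            missing study variances (sample-size-weighted mean of the
            observed variances)."""
            m = np.asarray(means, float)
            v = np.asarray(variances, float).copy()
            n = np.asarray(sizes, float)
            obs = ~np.isnan(v)
            v[~obs] = np.average(v[obs], weights=n[obs])  # mean imputation
            w = n / v                                     # simplified weights
            return np.sum(w * m) / np.sum(w)

        # Example: pooled_estimate([1.2, 0.8, 1.0], [0.5, np.nan, 0.7], [40, 25, 60])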

    14. Numerical errors in the computation of subfilter scalar variance in large eddy simulations

      Science.gov (United States)

      Kaul, C. M.; Raman, V.; Balarac, G.; Pitsch, H.

      2009-05-01

      Subfilter scalar variance is a key quantity for scalar mixing at the small scales of a turbulent flow and thus plays a crucial role in large eddy simulation of combustion. While prior studies have mainly focused on the physical aspects of modeling subfilter variance, the current work discusses variance models in conjunction with the numerical errors due to their implementation using finite-difference methods. A priori tests on data from direct numerical simulation of homogeneous turbulence are performed to evaluate the numerical implications of specific model forms. Like other subfilter quantities, such as kinetic energy, subfilter variance can be modeled according to one of two general methodologies. In the first of these, an algebraic equation relating the variance to gradients of the filtered scalar field is coupled with a dynamic procedure for coefficient estimation. Although finite-difference methods substantially underpredict the gradient of the filtered scalar field, the dynamic method is shown to mitigate this error through overestimation of the model coefficient. The second group of models utilizes a transport equation for the subfilter variance itself or for the second moment of the scalar. Here, it is shown that the model formulation based on the variance transport equation is consistently biased toward underprediction of the subfilter variance. The numerical issues in the variance transport equation stem from discrete approximations to chain-rule manipulations used to derive convection, diffusion, and production terms associated with the square of the filtered scalar. These approximations can be avoided by solving the equation for the second moment of the scalar, suggesting the numerical superiority of that model formulation.
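
      As a concrete illustration of the first (algebraic) methodology, a hedged one-dimensional sketch with a fixed model coefficient (the dynamic procedure described above would estimate C from the resolved field; the function name and the value of C are illustrative):

        import numpy as np

        def algebraic_subfilter_variance(z_filt, dx, delta, c=0.1):
            """Gradient-based algebraic closure: var ~ C * Delta^2 * |grad z|^2.
            z_filt: filtered scalar field on a uniform 1-D grid of spacing dx;
            delta: filter width; c: model coefficient (fixed here, dynamic in
            the paper)."""
            grad = np.gradient(z_filt, dx)   # finite-difference gradient
            return c * delta**2 * grad**2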

    15. Variance approach for multi-objective linear programming with fuzzy random of objective function coefficients

      Science.gov (United States)

      Indarsih, Indrati, Ch. Rini

      2016-02-01

      In this paper, we define the variance of fuzzy random variables through alpha levels. We present a theorem establishing that the variance of a fuzzy random variable is a fuzzy number. We consider a multi-objective linear programming (MOLP) problem with fuzzy random objective function coefficients and solve it by a variance approach. The approach transforms the MOLP with fuzzy random objective function coefficients into an MOLP with fuzzy objective function coefficients. By the weighting method, we obtain a linear program with fuzzy coefficients, which we solve by the simplex method for fuzzy linear programming.

    16. Unknown parameter's variance-covariance propagation and calculation in generalized nonlinear least squares problem

      Institute of Scientific and Technical Information of China (English)

      TAO Hua-xue; GUO Jin-yun

      2005-01-01

      The propagation and calculation of the variance-covariance of unknown parameters in generalized nonlinear least squares has remained an open problem, not previously addressed in the domestic or international literature. A variance-covariance propagation formula for the unknown parameters, retaining the second-power terms, was derived and used to evaluate the accuracy of the parameter estimators in the generalized nonlinear least squares problem. It is a new variance-covariance formula and opens up a new way to evaluate accuracy when processing data that are multi-source, multi-dimensional, multi-type, and multi-time-state, of differing accuracy, and nonlinear.
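
      For orientation, the familiar first-order propagation formula that such second-order ("two-power") derivations extend, for observations l = f(x) + e with weight matrix P and design matrix A evaluated at the estimate (the paper's additional second-order terms are not reproduced here):

        D_{\hat{x}} \;\approx\; \hat{\sigma}_0^{2}\,\bigl(A^{\mathsf{T}} P A\bigr)^{-1},
        \qquad A = \left.\frac{\partial f}{\partial x^{\mathsf{T}}}\right|_{x=\hat{x}}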

    17. SUBSPACE-BASED NOISE VARIANCE AND SNR ESTIMATION FOR MIMO OFDM SYSTEMS

      Institute of Scientific and Technical Information of China (English)

      2006-01-01

      This paper proposes a subspace-based noise variance and Signal-to-Noise Ratio (SNR) estimation algorithm for Multi-Input Multi-Output (MIMO) wireless Orthogonal Frequency Division Multiplexing (OFDM) systems. Special training sequences with the properties of orthogonality and phase-shift orthogonality are used in pilot tones to obtain an estimate of the channel correlation matrix. By partitioning the observation space into a delay subspace and a noise subspace, the noise variance and SNR are then estimated. Simulation results show that the proposed estimator obtains accurate and real-time measurements of the noise variance and SNR for various multipath fading channels, demonstrating its strong robustness across different channels.
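
      A hedged sketch of the subspace split (assumptions: the correlation matrix R has already been estimated from the pilot tones, the delay-subspace dimension d is known, and the SNR normalization shown is one common illustrative choice, not necessarily the paper's):

        import numpy as np

        def subspace_noise_snr(R, d):
            """Noise variance and SNR from an estimated channel correlation
            matrix R, given the dimension d of the delay (signal) subspace."""
            eig = np.sort(np.linalg.eigvalsh(R))[::-1]    # descending eigenvalues
            noise_var = eig[d:].mean()                    # noise-subspace average
            signal_power = (eig[:d] - noise_var).sum()    # de-biased signal part
            snr = signal_power / (noise_var * R.shape[0]) # illustrative definition
            return noise_var, snr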

    18. An advance triage system.

      Science.gov (United States)

      Cheung, W W H; Heeney, L; Pound, J L

      2002-01-01

      This paper describes the redesign of the triage process in an Emergency Department with the purpose of improving the patient flow and thus increasing patient satisfaction through the reduction of the overall length of stay. The process, Advance Triage, allows the triage nurse to initiate diagnostic protocols for frequently occurring medical problems based on physician-approved algorithms. With staff and physician involvement and medical specialist approval, nine Advance Triage algorithms were developed: abdominal pain, eye trauma, chest pain, gynaecological symptoms, substance abuse, orthopaedic trauma, minor trauma, paediatric fever and paediatric emergent. A comprehensive educational program was provided to the triage nurses and Advance Triage was initiated. A process was established at one year to evaluate the effectiveness of the Advance Triage System. The average length of stay was found to be 46 min less for all patients who were advance triaged, with the greatest time-saving of 76 min for patients in the 'Urgent' category. The most significant saving was realized in the patient's length of stay (LOS) after assessment by the Emergency Physician, because diagnostic results, available during the initial patient assessment, allowed treatment decisions to be made at that time. Advance Triage utilizes patient waiting time efficiently and increases the nurses' and physicians' job satisfaction.

    19. Advanced Hydrogen Turbine Development

      Energy Technology Data Exchange (ETDEWEB)

      Marra, John [Siemens Energy, Inc., Orlando, FL (United States)

      2015-09-30

      Under the sponsorship of the U.S. Department of Energy (DOE) National Energy Technology Laboratories, Siemens has completed the Advanced Hydrogen Turbine Development Program to develop an advanced gas turbine for incorporation into future coal-based Integrated Gasification Combined Cycle (IGCC) plants. All the scheduled DOE Milestones were completed and significant technical progress was made in the development of new technologies and concepts. Advanced computer simulations and modeling, as well as subscale, full-scale laboratory, rig and engine testing, were utilized to evaluate and select concepts for further development. The program requirements were all met: a 3 to 5 percentage point improvement in overall plant combined cycle efficiency compared to the reference baseline plant; a 20 to 30 percent reduction in overall plant capital cost compared to the reference baseline plant; and NOx emissions of 2 ppm out of the stack. The program was completed on schedule and within the allotted budget.

    20. Structure analysis of interstellar clouds: II. Applying the Delta-variance method to interstellar turbulence

      CERN Document Server

      Ossenkopf, V; Stutzki, J

      2008-01-01

      The Delta-variance analysis is an efficient tool for measuring the structural scaling behaviour of interstellar turbulence in astronomical maps. In paper I we proposed essential improvements to the Delta-variance analysis. In this paper we apply the improved Delta-variance analysis to i) a hydrodynamic turbulence simulation with prominent density and velocity structures, ii) an observed intensity map of rho Oph with irregular boundaries and variable uncertainties of the different data points, and iii) a map of the turbulent velocity structure in the Polaris Flare affected by the intensity dependence on the centroid velocity determination. The tests confirm the extended capabilities of the improved Delta-variance analysis. Prominent spatial scales were accurately identified and artifacts from a variable reliability of the data were removed. The analysis of the hydrodynamic simulations showed that the injection of a turbulent velocity structure produces the most prominent density structures on a sca...

    1. Variance estimation between different body measurements at the males population from Romanian Mioritic Shepherd Dog breed

      Directory of Open Access Journals (Sweden)

      Dorel Dronca

      2015-05-01

      Full Text Available The Romanian Mioritic Shepherd Dog was selected from a natural population bred in the Carpathian Mountains. The aim of this paper was to estimate the variance of 12 body measurements using 26 males from the Romanian Mioritic Shepherd Dog breed. The animals were registered with the Romanian Mioritic Association Club from Romania. The statistical data showed that there is a large variance for body length and tail length, a middle variance for the croup width and thorax width, and a small variance for height at withers, height at middle of back, height at croup, height at the base of the tail, depth of thorax, thoracic perimeter, elbow height and height of the hock. We recommend that breeders of this breed take account of the values presented in this paper in genetic improvement programs.

    2. Medical ultrasound imaging method combining minimum variance beamforming and general coherence factor

      Institute of Scientific and Technical Information of China (English)

      WU Wentao; PU Jie; LU Yi

      2012-01-01

      In the medical ultrasound imaging field, in order to obtain high resolution and to correct the phase errors induced by the velocity inhomogeneity of tissue, a high-resolution medical ultrasound imaging method combining minimum variance beamforming and the general coherence factor is presented. First, the data from the elements are delayed for focusing; then the multi-channel data are used for minimum variance beamforming; at the same time, the data are transformed from array space to beam space to calculate the general coherence factor; finally, the general coherence factor is used to weight the results of minimum variance beamforming. Medical images are obtained with the imaging system. Experiments based on a point object and an anechoic cyst object are used to verify the proposed method. The results show that the proposed method is better than minimum variance beamforming and conventional beamforming in terms of resolution, contrast and robustness.
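
      A compact Python sketch of the pipeline for a single pixel (assumptions: X holds already-delayed element data, diagonal loading stabilizes the covariance estimate, and the standard zero-lag coherence factor stands in for the generalized coherence factor):

        import numpy as np

        def mv_cf_pixel(X, eps=1e-2):
            """Minimum-variance beamformed pixel value weighted by a
            coherence factor. X: (n_elements, n_snapshots) delayed data."""
            n = X.shape[0]
            R = X @ X.conj().T / X.shape[1]
            R += eps * np.trace(R).real / n * np.eye(n)   # diagonal loading
            a = np.ones((n, 1))                           # steering after delays
            Ri_a = np.linalg.solve(R, a)
            w = Ri_a / (a.conj().T @ Ri_a)                # Capon (MV) weights
            x = X.mean(axis=1)                            # focused snapshot
            cf = abs(x.sum())**2 / (n * (abs(x)**2).sum() + 1e-30)
            return cf * (w.conj().T @ x).item()           # CF-weighted MV output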

    3. Population Bottlenecks Increase Additive Genetic Variance But Do Not Break a Selection Limit in Rainforest Drosophila

      DEFF Research Database (Denmark)

      van Heerwaarden, Belinda; Willi, Yvonne; Kristensen, Torsten N;

      2008-01-01

      According to neutral quantitative genetic theory, population bottlenecks are expected to decrease standing levels of additive genetic variance of quantitative traits. However, some empirical and theoretical results suggest that, if nonadditive genetic effects influence the trait, bottlenecks may ...

    4. Analysis of a genetically structured variance heterogeneity model using the Box-Cox transformation

      DEFF Research Database (Denmark)

      Yang, Ye; Christensen, Ole Fredslund; Sorensen, Daniel

      2011-01-01

      ... of the marginal distribution of the data. To investigate how the scale of measurement affects inferences, the genetically structured heterogeneous variance model is extended to accommodate the family of Box–Cox transformations. Litter size data in rabbits and pigs that had previously been analysed ... in the untransformed scale were reanalysed in a scale equal to the mode of the marginal posterior distribution of the Box–Cox parameter. In the rabbit data, the statistical evidence for a genetic component at the level of the environmental variance is considerably weaker than that resulting from an analysis ... in the original metric. In the pig data, the statistical evidence is stronger, but the coefficient of correlation between additive genetic effects affecting mean and variance changes sign, compared to the results in the untransformed scale. The study confirms that inferences on variances can be strongly affected ...

    5. Comment on Hoffman and Rovine (2007): SPSS MIXED can estimate models with heterogeneous variances.

      Science.gov (United States)

      Weaver, Bruce; Black, Ryan A

      2015-06-01

      Hoffman and Rovine (Behavior Research Methods, 39:101-117, 2007) have provided a very nice overview of how multilevel models can be useful to experimental psychologists. They included two illustrative examples and provided both SAS and SPSS commands for estimating the models they reported. However, upon examining the SPSS syntax for the models reported in their Table 3, we found no syntax for models 2B and 3B, both of which have heterogeneous error variances. Instead, there is syntax that estimates similar models with homogeneous error variances and a comment stating that SPSS does not allow heterogeneous errors. But that is not correct. We provide SPSS MIXED commands to estimate models 2B and 3B with heterogeneous error variances and obtain results nearly identical to those reported by Hoffman and Rovine in their Table 3. Therefore, contrary to the comment in Hoffman and Rovine's syntax file, SPSS MIXED can estimate models with heterogeneous error variances.

    6. Advance care directives

      Science.gov (United States)

      Advance directive; do-not-resuscitate; durable power of attorney; POA; health care agent; health care proxy.

    7. Estimating Modifying Effect of Age on Genetic and Environmental Variance Components in Twin Models.

      Science.gov (United States)

      He, Liang; Sillanpää, Mikko J; Silventoinen, Karri; Kaprio, Jaakko; Pitkäniemi, Janne

      2016-04-01

      Twin studies have been adopted for decades to disentangle the relative genetic and environmental contributions for a wide range of traits. However, heritability estimation based on the classical twin models does not take into account the dynamic behavior of the variance components over age. Varying variance of the genetic component over age can imply the existence of gene-environment (G×E) interactions that general genome-wide association studies (GWAS) fail to capture, which may lead to the inconsistency of heritability estimates between twin design and GWAS. Existing parametric G×E interaction models for twin studies are limited by assuming a linear or quadratic form of the variance curves with respect to a moderator, which can be overly restrictive in reality. Here we propose spline-based approaches to explore the variance curves of the genetic and environmental components. We choose the additive genetic, common, and unique environmental variance components (ACE) model as the starting point. We treat the component variances as variance functions with respect to age, modeled by B-splines or P-splines. We develop an empirical Bayes method to estimate the variance curves together with their confidence bands and provide an R package for public use. Our simulations demonstrate that the proposed methods accurately capture the dynamic behavior of the component variances in terms of mean square errors with a data set of >10,000 twin pairs. Using the proposed methods as an alternative and major extension to the classical twin models, our analyses with a large-scale Finnish twin data set (19,510 MZ twins and 27,312 DZ same-sex twins) discover that the variances of the A, C, and E components for body mass index (BMI) change substantially across the life span in different patterns and the heritability of BMI drops to ∼50% after middle age. The results further indicate that the decline of heritability is due to increasing unique environmental variance, which provides more ...

    8. Tests and Confidence Intervals for an Extended Variance Component Using the Modified Likelihood Ratio Statistic

      DEFF Research Database (Denmark)

      Christensen, Ole Fredslund; Frydenberg, Morten; Jensen, Jens Ledet;

      2005-01-01

      The large deviation modified likelihood ratio statistic is studied for testing a variance component equal to a specified value. Formulas are presented in the general balanced case, whereas in the unbalanced case only the one-way random effects model is studied. Simulation studies are presented, showing that the normal approximation to the large deviation modified likelihood ratio statistic gives confidence intervals for variance components with coverage probabilities very close to the nominal confidence coefficient.

    9. Using the PLUM procedure of SPSS to fit unequal variance and generalized signal detection models.

      Science.gov (United States)

      DeCarlo, Lawrence T

      2003-02-01

      The recent addition of a procedure in SPSS for the analysis of ordinal regression models offers a simple means for researchers to fit the unequal variance normal signal detection model and other extended signal detection models. The present article shows how to implement the analysis and how to interpret the SPSS output. Examples of fitting the unequal variance normal model and other generalized signal detection models are given. The approach offers a convenient means for applying signal detection theory to a variety of research.

    10. Sex Modifies Genetic Effects on Residual Variance in Urinary Calcium Excretion in Rat (Rattus norvegicus)

      OpenAIRE

      Perry, Guy M. L.; Nehrke, Keith W.; Bushinsky, David A; Reid, Robert; Lewandowski, Krista L.; Hueber, Paul; Scheinman, Steven J.

      2012-01-01

      Conventional genetics assumes common variance among alleles or genetic groups. However, evidence from vertebrate and invertebrate models suggests that residual genotypic variance may itself be under partial genetic control. Such a phenomenon would have great significance: high-variability alleles might confound the detection of “classically” acting genes or scatter predicted evolutionary outcomes among unpredicted trajectories. Of the few works on this phenomenon, many implicate sex in some a...

    11. Variance components estimation for continuous and discrete data, with emphasis on cross-classified sampling designs

      Science.gov (United States)

      Gray, Brian R.; Gitzen, Robert A.; Millspaugh, Joshua J.; Cooper, Andrew B.; Licht, Daniel S.

      2012-01-01

      Variance components may play multiple roles (cf. Cox and Solomon 2003). First, magnitudes and relative magnitudes of the variances of random factors may have important scientific and management value in their own right. For example, variation in levels of invasive vegetation among and within lakes may suggest causal agents that operate at both spatial scales – a finding that may be important for scientific and management reasons. Second, variance components may also be of interest when they affect precision of means and covariate coefficients. For example, variation in the effect of water depth on the probability of aquatic plant presence in a study of multiple lakes may vary by lake. This variation will affect the precision of the average depth-presence association. Third, variance component estimates may be used when designing studies, including monitoring programs. For example, to estimate the numbers of years and of samples per year required to meet long-term monitoring goals, investigators need estimates of within and among-year variances. Other chapters in this volume (Chapters 7, 8, and 10) as well as extensive external literature outline a framework for applying estimates of variance components to the design of monitoring efforts. For example, a series of papers with an ecological monitoring theme examined the relative importance of multiple sources of variation, including variation in means among sites, years, and site-years, for the purposes of temporal trend detection and estimation (Larsen et al. 2004, and references therein).

    12. Inference of bioequivalence for log-normal distributed data with unspecified variances.

      Science.gov (United States)

      Xu, Siyan; Hua, Steven Y; Menton, Ronald; Barker, Kerry; Menon, Sandeep; D'Agostino, Ralph B

      2014-07-30

      Two drugs are bioequivalent if the ratio of a pharmacokinetic (PK) parameter of two products falls within equivalence margins. The distribution of PK parameters is often assumed to be log-normal, therefore bioequivalence (BE) is usually assessed on the difference of logarithmically transformed PK parameters (δ). In the presence of unspecified variances, test procedures such as two one-sided tests (TOST) use sample estimates for those variances; Bayesian models integrate them out in the posterior distribution. These methods limit our knowledge of the extent to which inference about BE is affected by the variability of PK parameters. In this paper, we propose a likelihood approach that retains the unspecified variances in the model and partitions the entire likelihood function into two components: an F-statistic function for the variances and a t-statistic function for δ. Demonstrated with published real-life data, the proposed method not only produces results that are the same as TOST and comparable with the Bayesian method, but also helps identify ranges of variances which could make the determination of BE more achievable. Our findings manifest the advantages of the proposed method in making inference about the extent to which BE is affected by the unspecified variances, which cannot be accomplished by either TOST or the Bayesian method.

    13. Application of an area of review variance methodology to the San Juan Basin, New Mexico

      Energy Technology Data Exchange (ETDEWEB)

      Dunn-Norman, S.; Warner, D.L.; Koederitz, L.F.; Laudon, R.C.

      1995-12-01

      When the Underground Injection Control (UIC) Regulations were promulgated in 1980, existing Class II injection wells operating at the time were excluded from Area of Review (AOR) requirements. EPA has expressed its intent to revise the regulations to include the requirement of AORs for such wells, but it is expected that oil and gas producing states will be allowed to adopt a variance strategy for these wells. An AOR variance methodology has been developed under sponsorship of the American Petroleum Institute. The general concept of the variance methodology is a systematic evaluation of basic variance criteria that were agreed to by a Federal Advisory Committee. These criteria include absence of USDWs, lack of positive flow potential from the petroleum reservoir into the overlying USDWs, mitigating geological factors, and other evidence. The AOR variance methodology has been applied to oilfields in the San Juan Basin, New Mexico. This paper details results of these analyses, particularly with respect to the opportunity for variance for injection fields in the San Juan Basin.

    14. Errors in the estimation of the variance: implications for multiple-probability fluctuation analysis.

      Science.gov (United States)

      Saviane, Chiara; Silver, R Angus

      2006-06-15

      Synapses play a crucial role in information processing in the brain. Amplitude fluctuations of synaptic responses can be used to extract information about the mechanisms underlying synaptic transmission and its modulation. In particular, multiple-probability fluctuation analysis can be used to estimate the number of functional release sites, the mean probability of release and the amplitude of the mean quantal response from fits of the relationship between the variance and mean amplitude of postsynaptic responses, recorded at different probabilities. To determine these quantal parameters, calculate their uncertainties and the goodness-of-fit of the model, it is important to weight the contribution of each data point in the fitting procedure. We therefore investigated the errors associated with measuring the variance by determining the best estimators of the variance of the variance and have used simulations of synaptic transmission to test their accuracy and reliability under different experimental conditions. For central synapses, which generally have a low number of release sites, the amplitude distribution of synaptic responses is not normal, thus the use of a theoretical variance of the variance based on the normal assumption is not a good approximation. However, appropriate estimators can be derived for the population and for limited sample sizes using a more general expression that involves higher moments and introducing unbiased estimators based on the h-statistics. Our results are likely to be relevant for various applications of fluctuation analysis when few channels or release sites are present.
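
      The moment-based estimator at issue is easy to state in code; a Python sketch follows (the unbiased h-statistic versions derived in the paper are not reproduced here; this plug-in version based on the standard formula Var(s^2) = mu4/n - sigma^4 (n-3)/(n(n-1)) is slightly biased):

        import numpy as np

        def var_of_variance(x):
            """Plug-in estimate of the variance of the unbiased sample
            variance s^2, using sample central moments (slightly biased;
            h-statistics would give the unbiased analogue)."""
            x = np.asarray(x, float)
            n = len(x)
            mu4 = ((x - x.mean())**4).mean()      # sample fourth central moment
            s2 = x.var(ddof=1)                    # unbiased sample variance
            return mu4 / n - s2**2 * (n - 3) / (n * (n - 1))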

    15. Analyzing the Effect of JPEG Compression on Local Variance of Image Intensity.

      Science.gov (United States)

      Yang, Jianquan; Zhu, Guopu; Shi, Yun-Qing

      2016-06-01

      The local variance of image intensity is a typical measure of image smoothness. It has been extensively used, for example, to measure the visual saliency or to adjust the filtering strength in image processing and analysis. However, to the best of our knowledge, no analytical work has been reported about the effect of JPEG compression on image local variance. In this paper, a theoretical analysis on the variation of local variance caused by JPEG compression is presented. First, the expectation of intensity variance of 8×8 non-overlapping blocks in a JPEG image is derived. The expectation is determined by the Laplacian parameters of the discrete cosine transform coefficient distributions of the original image and the quantization step sizes used in the JPEG compression. Second, some interesting properties that describe the behavior of the local variance under different degrees of JPEG compression are discussed. Finally, both the simulation and the experiments are performed to verify our derivation and discussion. The theoretical analysis presented in this paper provides some new insights into the behavior of local variance under JPEG compression. Moreover, it has the potential to be used in some areas of image processing and analysis, such as image enhancement, image quality assessment, and image filtering.
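
      Computing the per-block intensity variance that the analysis is concerned with takes a few lines of NumPy (assumptions: a grayscale image as a 2-D array, block size 8 as in JPEG):

        import numpy as np

        def block_variances(img, b=8):
            """Intensity variance of non-overlapping b x b blocks."""
            h, w = (s - s % b for s in img.shape)          # crop to multiples of b
            blocks = img[:h, :w].reshape(h // b, b, w // b, b).swapaxes(1, 2)
            return blocks.reshape(-1, b * b).var(axis=1)

        # The compression effect can then be examined by comparing
        # block_variances(original) against block_variances(jpeg_decoded).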

    16. Lymphedema Risk Reduction Practices

      Science.gov (United States)

      Position Paper: Lymphedema Risk Reduction Practices.

    17. Advance payments

      CERN Multimedia

      Human Resources Division

      2003-01-01

      Administrative Circular Nº 8 makes provision for the granting of advance payments, repayable in several monthly instalments, by the Organization to the members of its personnel. Members of the personnel are reminded that these advances are only authorized in exceptional circumstances and at the discretion of the Director-General. In view of the current financial situation of the Organization, and in particular the loans it will have to incur, the Directorate has decided to restrict the granting of such advances to exceptional or unforeseen circumstances entailing heavy expenditure and more specifically those pertaining to social issues. Human Resources Division Tel. 73962

    18. ADVANCE PAYMENTS

      CERN Multimedia

      Human Resources Division

      2002-01-01

      Administrative Circular Nº 8 makes provision for the granting of advance payments, repayable in several monthly instalments, by the Organization to the members of its personnel. Members of the personnel are reminded that these advances are only authorized in exceptional circumstances and at the discretion of the Director-General. In view of the current financial situation of the Organization, and in particular the loans it will have to incur, the Directorate has decided to restrict the granting of such advances to exceptional or unforeseen circumstances entailing heavy expenditure and more specifically those pertaining to social issues. Human Resources Division Tel. 73962

    19. Genetic and phenotypic variance and covariance components for methane emission and postweaning traits in Angus cattle.

      Science.gov (United States)

      Donoghue, K A; Bird-Gardiner, T; Arthur, P F; Herd, R M; Hegarty, R F

      2016-04-01

      Ruminants contribute 80% of the global livestock greenhouse gas (GHG) emissions mainly through the production of methane, a byproduct of enteric microbial fermentation primarily in the rumen. Hence, reducing enteric methane production is essential in any GHG emissions reduction strategy in livestock. Data on 1,046 young bulls and heifers from 2 performance-recording research herds of Angus cattle were analyzed to provide genetic and phenotypic variance and covariance estimates for methane emissions and production traits and to examine the interrelationships among these traits. The cattle were fed a roughage diet at 1.2 times their estimated maintenance energy requirements and measured for methane production rate (MPR) in open circuit respiration chambers for 48 h. Traits studied included DMI during the methane measurement period, MPR, and methane yield (MY; MPR/DMI), with means of 6.1 kg/d (SD 1.3), 132 g/d (SD 25), and 22.0 g/kg (SD 2.3) DMI, respectively. Four forms of residual methane production (RMP), which is a measure of actual minus predicted MPR, were evaluated. For the first 3 forms, predicted MPR was calculated using published equations. For the fourth (RMP), predicted MPR was obtained by regression of MPR on DMI. Growth and body composition traits evaluated were birth weight (BWT), weaning weight (WWT), yearling weight (YWT), final weight (FWT), and ultrasound measures of eye muscle area, rump fat depth, rib fat depth, and intramuscular fat. Heritability estimates were moderate for MPR (0.27 [SE 0.07]), MY (0.22 [SE 0.06]), and the RMP traits (0.19 [SE 0.06] for each), indicating that genetic improvement to reduce methane emissions is possible. The RMP traits and MY were strongly genetically correlated with each other (0.99 ± 0.01). The genetic correlation of MPR with MY as well as with the RMP traits was moderate (0.32 to 0.63). The genetic correlation between MPR and the growth traits (except BWT) was strong (0.79 to 0.86). These results indicate that ...

    20. An efficiency comparison of control chart for monitoring process variance: Non-normality case

      Directory of Open Access Journals (Sweden)

      Sangkawanit, R.

      2005-11-01

      Full Text Available The purposes of this research are to investigate the relation between the upper control limit and the parameters of the weighted moving variance linear weight control chart (WMVL), the weighted moving variance exponential weight control chart (WMVE), the successive difference cumulative sum control chart (Cusum-SD) and the current sample mean cumulative sum control chart (Cusum-UM), and to compare the efficiencies of these control charts for monitoring increases in process variance, with exponentially distributed data with unit variance and Student's t distributed data with variance 1.071429 (30 degrees of freedom) as the in-control process. In-control average run lengths (ARL0) of 200, 400 and 800 are considered. Out-of-control average run lengths (ARL1), obtained via simulation 10,000 times, are used as the criterion. The main results are as follows: the upper control limit of WMVL has a negative relation with the moving span, while the upper control limit of WMVE has a negative relation with the moving span and a positive relation with the exponential weight. Both the upper control limits of Cusum-SD and Cusum-UM have a negative relation with the reference value, and this relation looks like an exponential curve. The results of the efficiency comparisons in the case of exponentially distributed data for ARL0 of 200, 400 and 800 turned out to be quite similar. When the standard deviation changes by less than 50%, the Cusum-SD and Cusum-UM control charts have ARL1 less than those of the WMVL and WMVE control charts. However, when the standard deviation changes by more than 50%, the WMVL and WMVE control charts have ARL1 less than those of the Cusum-SD and Cusum-UM control charts. The results differ from the normally distributed data case studied by Sparks in 2003. In the case of Student's t distributed data for ARL0 of 200 and 400, when the process variance shifts by a small amount (less than 50%), the Cusum-UM control chart has the lowest ARL1, but when the process variance shifts by a large amount ...
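
      ARL comparisons of this kind are typically obtained by simulating run lengths; a hedged Python sketch for one chart (a one-sided CUSUM on squared deviations of Exp(1) data; the reference value k and limit h are illustrative and would be calibrated so the in-control ARL hits the target ARL0):

        import numpy as np

        def run_length(rng, scale=1.0, k=1.3, h=8.0, max_n=10**6):
            """One run of a one-sided CUSUM on squared deviations from the
            in-control mean; Exp(1) data have mean 1 and variance 1, and
            scale > 1 represents an increase in process variance."""
            s = 0.0
            for t in range(1, max_n + 1):
                x = rng.exponential(scale)
                s = max(0.0, s + (x - 1.0)**2 - k)
                if s > h:
                    return t
            return max_n

        def arl(scale, reps=2000, seed=1):
            rng = np.random.default_rng(seed)
            return np.mean([run_length(rng, scale) for _ in range(reps)])

        # Calibrate h so that arl(1.0) hits the target ARL0 (200, 400 or 800),
        # then compare arl(scale) for scale > 1 across competing chart designs.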

    1. Short-term sandbar variability based on video imagery: Comparison between Time-Average and Time-Variance techniques

      Science.gov (United States)

      Guedes, R.M.C.; Calliari, L.J.; Holland, K.T.; Plant, N.G.; Pereira, P.S.; Alves, F.N.A.

      2011-01-01

      Time-exposure intensity (averaged) images are commonly used to locate the nearshore sandbar position (xb), based on the cross-shore locations of maximum pixel intensity (xi) of the bright bands in the images. It is not known, however, how the breaking patterns seen in Variance images (i.e. those created through standard deviation of pixel intensity over time) are related to the sandbar locations. We investigated the suitability of both Time-exposure and Variance images for sandbar detection within a multiple bar system on the southern coast of Brazil, and verified the relation between wave breaking patterns, observed as bands of high intensity in these images, and cross-shore profiles of modeled wave energy dissipation (xD). Not only is Time-exposure maximum pixel intensity location (xi-Ti) well related to xb, but also to the maximum pixel intensity location of Variance images (xi-Va), although the latter was typically located 15 m offshore of the former. In addition, xi-Va was observed to be better associated with xD even though xi-Ti is commonly assumed as maximum wave energy dissipation. Significant wave height (Hs) and water level (η) were observed to affect the two types of images in a similar way, with an increase in both Hs and η resulting in xi shifting offshore. This η-induced xi variability has an opposite behavior to what is described in the literature, and is likely an indirect effect of higher waves breaking farther offshore during periods of storm surges. Multiple regression models performed on xi, Hs and η allowed the reduction of the residual errors between xb and xi, yielding accurate estimates with most residuals less than 10 m. Additionally, it was found that the sandbar position was best estimated using xi-Ti (xi-Va) when xb was located shoreward (seaward) of its mean position, for both the first and the second bar. Although it is unknown whether this is an indirect hydrodynamic effect or is indeed related to the morphology, we found that this ...
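
      The two image products compare directly in a few lines of NumPy (assumptions: a rectified grayscale video stack with the cross-shore axis along the columns; the per-row argmax is a crude stand-in for the paper's intensity-maximum detection):

        import numpy as np

        def timex_and_variance(frames):
            """frames: (n_frames, rows, cols) grayscale video stack.
            Returns the Time-exposure (mean) and Variance (std) images and
            a per-row cross-shore intensity-maximum location for each."""
            timex = frames.mean(axis=0)          # Time-exposure image
            varim = frames.std(axis=0)           # Variance image
            xi_ti = timex.argmax(axis=1)         # proxy for xi-Ti
            xi_va = varim.argmax(axis=1)         # proxy for xi-Va
            return timex, varim, xi_ti, xi_va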

    2. MicroRNA buffering and altered variance of gene expression in response to Salmonella infection.

      Science.gov (United States)

      Bao, Hua; Kommadath, Arun; Plastow, Graham S; Tuggle, Christopher K; Guan, Le Luo; Stothard, Paul

      2014-01-01

      One potential role of miRNAs is to buffer variation in gene expression, although conflicting results have been reported. To investigate the buffering role of miRNAs in response to Salmonella infection in pigs, we sequenced miRNA and mRNA in whole blood from 15 pig samples before and after Salmonella challenge. By analyzing inter-individual variation in gene expression patterns, we found that for moderately and lowly expressed genes, putative miRNA targets showed significantly lower expression variance compared with non-miRNA-targets. Expression variance between highly expressed miRNA targets and non-miRNA-targets was not significantly different. Further, miRNA targets demonstrated significantly reduced variance after challenge whereas non-miRNA-targets did not. RNA binding proteins (RBPs) are significantly enriched among the miRNA targets with dramatically reduced variance of expression after Salmonella challenge. Moreover, we found evidence that targets of young (less-conserved) miRNAs showed lower expression variance compared with targets of old (evolutionarily conserved) miRNAs. These findings point to the importance of a buffering effect of miRNAs for relatively lowly expressed genes, and suggest that the reduced expression variation of RBPs may play an important role in response to Salmonella infection.

    3. MicroRNA buffering and altered variance of gene expression in response to Salmonella infection.

      Directory of Open Access Journals (Sweden)

      Hua Bao

      Full Text Available One potential role of miRNAs is to buffer variation in gene expression, although conflicting results have been reported. To investigate the buffering role of miRNAs in response to Salmonella infection in pigs, we sequenced miRNA and mRNA in whole blood from 15 pig samples before and after Salmonella challenge. By analyzing inter-individual variation in gene expression patterns, we found that for moderately and lowly expressed genes, putative miRNA targets showed significantly lower expression variance compared with non-miRNA-targets. Expression variance between highly expressed miRNA targets and non-miRNA-targets was not significantly different. Further, miRNA targets demonstrated significantly reduced variance after challenge whereas non-miRNA-targets did not. RNA binding proteins (RBPs) are significantly enriched among the miRNA targets with dramatically reduced variance of expression after Salmonella challenge. Moreover, we found evidence that targets of young (less-conserved) miRNAs showed lower expression variance compared with targets of old (evolutionarily conserved) miRNAs. These findings point to the importance of a buffering effect of miRNAs for relatively lowly expressed genes, and suggest that the reduced expression variation of RBPs may play an important role in response to Salmonella infection.

    4. Lower within-community variance of negative density dependence increases forest diversity.

      Directory of Open Access Journals (Sweden)

      António Miranda

      Full Text Available Local abundance of adult trees impedes growth of conspecific seedlings through host-specific enemies, a mechanism first proposed by Janzen and Connell to explain plant diversity in forests. While several studies suggest the importance of this mechanism, there is still little information of how the variance of negative density dependence (NDD affects diversity of forest communities. With computer simulations, we analyzed the impact of strength and variance of NDD within tree communities on species diversity. We show that stronger NDD leads to higher species diversity. Furthermore, lower range of strengths of NDD within a community increases species richness and decreases variance of species abundances. Our results show that, beyond the average strength of NDD, the variance of NDD is also crucially important to explain species diversity. This can explain the dissimilarity of biodiversity between tropical and temperate forest: highly diverse forests could have lower NDD variance. This report suggests that natural enemies and the variety of the magnitude of their effects can contribute to the maintenance of biodiversity.

    5. Perspectives on Prediction Variance and Bias in Developing, Assessing, and Comparing Experimental Designs

      Energy Technology Data Exchange (ETDEWEB)

      Piepel, Gregory F.

      2010-12-01

      The vast majority of response surface methods used in practice to develop, assess, and compare experimental designs focus on variance properties of designs. Because response surface models only approximate the true unknown relationships, models are subject to bias errors as well as variance errors. Beginning with the seminal paper of Box and Draper (1959) and over the subsequent 50 years, methods that consider bias and mean-squared-error (variance and bias) properties of designs have been presented in the literature. However, these methods are not widely implemented in software and are not routinely used to develop, assess, and compare experimental designs in practice. Methods for developing, assessing, and comparing response surface designs that account for variance properties are reviewed. Brief synopses of publications that consider bias or mean-squared-error properties are provided. The difficulties and approaches for addressing bias properties of designs are summarized. Perspectives on experimental design methods that account for bias and/or variance properties and on future needs are presented.

    6. Regional flood frequency analysis based on a Weibull model: Part 1. Estimation and asymptotic variances

      Science.gov (United States)

      Heo, Jun-Haeng; Boes, D. C.; Salas, J. D.

      2001-02-01

      Parameter estimation in a regional flood frequency setting, based on a Weibull model, is revisited. A two parameter Weibull distribution at each site, with common shape parameter over sites that is rationalized by a flood index assumption, and with independence in space and time, is assumed. The estimation techniques of method of moments and method of probability weighted moments are studied by proposing a family of estimators for each technique and deriving the asymptotic variance of each estimator. Then a single estimator and its asymptotic variance for each technique, suggested by trying to minimize the asymptotic variance over the family of estimators, is obtained. These asymptotic variances are compared to the Cramer-Rao Lower Bound, which is known to be the asymptotic variance of the maximum likelihood estimator. A companion paper considers the application of this model and these estimation techniques to a real data set. It includes a simulation study designed to indicate the sample size required for compatibility of the asymptotic results to fixed sample sizes.
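
      A method-of-moments fit for the two-parameter Weibull illustrates the estimation setting (standard relations between mean, variance and the shape/scale parameters; the bracket used for the root search is an assumption that covers typical flood data):

        import numpy as np
        from scipy.special import gamma
        from scipy.optimize import brentq

        def weibull_mom(x):
            """Two-parameter Weibull fit by the method of moments:
            mean = scale * G(1 + 1/k), var = scale^2 * (G(1 + 2/k) - G(1 + 1/k)^2)."""
            m, v = np.mean(x), np.var(x, ddof=1)
            cv2 = v / m**2

            def f(k):  # squared CV implied by shape k, minus the sample value
                g1, g2 = gamma(1 + 1 / k), gamma(1 + 2 / k)
                return g2 / g1**2 - 1 - cv2

            k = brentq(f, 0.05, 50.0)        # shape (bracket is illustrative)
            scale = m / gamma(1 + 1 / k)     # scale
            return k, scale

        # rng = np.random.default_rng(0); x = rng.weibull(2.0, 500) * 3.0
        # weibull_mom(x)  # approximately (2.0, 3.0)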

    7. Credit risk metric under the condition of mutative variance

      Institute of Scientific and Technical Information of China (English)

      韩立媛; 古志辉; 丁小培

      2012-01-01

      Based on the option pricing model, the paper uses the smooth-pasting condition to evaluate the debt value and the probability of default under the condition of mutative variance. The conclusion is that the probability of default under invariable variance is lower than that under mutative variance. Furthermore, the probability of default decreases as the risk-free rate rises, which means that a reduction in interest rates by the government may lead to a rise in banks' non-performing asset ratios.

    8. Advanced nanoelectronics

      CERN Document Server

      Ismail, Razali

      2012-01-01

      While theories based on classical physics have been very successful in helping experimentalists design microelectronic devices, new approaches based on quantum mechanics are required to accurately model nanoscale transistors and to predict their characteristics even before they are fabricated. Advanced Nanoelectronics provides research information on advanced nanoelectronics concepts, with a focus on modeling and simulation. Featuring contributions by researchers actively engaged in nanoelectronics research, it develops and applies analytical formulations to investigate nanoscale devices. The ...

    9. AdvancED Flex 4

      CERN Document Server

      Tiwari, Shashank; Schulze, Charlie

      2010-01-01

      AdvancED Flex 4 makes advanced Flex 4 concepts and techniques easy. Ajax, RIA, Web 2.0, mashups, mobile applications, the most sophisticated web tools, and the coolest interactive web applications are all covered with practical, visually oriented recipes. * Completely updated for the new tools in Flex 4 * Demonstrates how to use Flex 4 to create robust and scalable enterprise-grade Rich Internet Applications. * Teaches you to build high-performance web applications with interactivity that really engages your users. * What you'll learn: Practiced beginners and intermediate users of Flex, especially ...

    10. Estimation variance bounds of importance sampling simulations in digital communication systems

      Science.gov (United States)

      Lu, D.; Yao, K.

      1991-01-01

      In practical applications of importance sampling (IS) simulation, two basic problems are encountered, that of determining the estimation variance and that of evaluating the proper IS parameters needed in the simulations. The authors derive new upper and lower bounds on the estimation variance which are applicable to IS techniques. The upper bound is simple to evaluate and may be minimized by the proper selection of the IS parameter. Thus, lower and upper bounds on the improvement ratio of various IS techniques relative to the direct Monte Carlo simulation are also available. These bounds are shown to be useful and computationally simple to obtain. Based on the proposed technique, one can readily find practical suboptimum IS parameters. Numerical results indicate that these bounding techniques are useful for IS simulations of linear and nonlinear communication systems with intersymbol interference in which bit error rate and IS estimation variances cannot be obtained readily using prior techniques.
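
      The variance behaviour that the bounds above address can be illustrated with a minimal Python experiment comparing direct Monte Carlo against mean-shifted importance sampling for a Gaussian tail probability; the shift theta = t is a common heuristic choice and stands in for the paper's optimized IS parameter.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(0)
        t, n = 4.0, 100_000                 # tail threshold and sample size
        p_true = norm.sf(t)

        # Direct Monte Carlo estimate of P(Z > t), Z ~ N(0, 1)
        z = rng.standard_normal(n)
        direct = (z > t).astype(float)

        # Importance sampling: draw from N(theta, 1) and reweight
        theta = t                           # heuristic mean shift
        x = rng.normal(theta, 1.0, n)
        w = norm.pdf(x) / norm.pdf(x, loc=theta)   # likelihood ratio
        is_est = (x > t) * w

        for name, est in [("direct", direct), ("importance", is_est)]:
            print(f"{name:>10}: mean={est.mean():.3e}  "
                  f"per-sample variance={est.var(ddof=1):.3e}  (true p={p_true:.3e})")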

    11. A Simple Parametrization for the Concentration Variance Dissipation in a Lagrangian Single-Particle Model

      Science.gov (United States)

      Ferrero, Enrico; Mortarini, Luca; Purghè, Federico

      2016-11-01

      A model for the evaluation of the concentration fluctuation variance is coupled with a one-particle Lagrangian stochastic model and the results are compared to a wind-tunnel simulation experiment. In this model the concentration variance evolves along the particle trajectories according to the same Langevin equation used for the simulation of the velocity field, and its dissipation is taken into account through a decay term with a finite time scale. Indeed, while the mean concentration is conserved, the concentration variance is not, and our model takes its dissipation into account. A simple parametrization for the dissipation time scale is proposed, and it is found to depend linearly on time and on the ratio between the size and the height of the source through a scaling factor of 1/3.

    12. Employing components-of-variance to evaluate forensic breath test instruments.

      Science.gov (United States)

      Gullberg, Rod G

      2008-03-01

      The evaluation of breath alcohol instruments for forensic suitability generally includes the assessment of accuracy, precision, linearity, blood/breath comparisons, etc. Although relevant and important, these methods fail to evaluate other important analytical and biological components related to measurement variability. An experimental design comparing different instruments measuring replicate breath samples from several subjects is presented here. Three volunteers provided n = 10 breath samples into each of six different instruments within an 18-minute period. Two-way analysis of variance was employed, which quantified the between-instrument effect and the subject/instrument interaction. Variance contributions were also determined for the analytical and biological components. Significant between-instrument effects and subject/instrument interactions were observed. The biological component of total variance ranged from 56% to 98% among all subject-instrument combinations. Such a design can help quantify the influence of, and optimize, breath sampling parameters that will reduce total measurement variability and enhance overall forensic confidence.
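
      A minimal Python sketch of the two-way analysis of variance behind such a design follows; the data are fabricated (3 subjects × 6 instruments × 10 replicates), and the decomposition into subject, instrument, interaction, and within-cell terms is the textbook one rather than the author's exact analysis.

        import numpy as np

        def two_way_anova_ss(y):
            """Sums of squares for y[subject, instrument, replicate]:
            subjects, instruments, their interaction, and within-cell error."""
            s, i, r = y.shape
            grand = y.mean()
            subj_m = y.mean(axis=(1, 2))
            inst_m = y.mean(axis=(0, 2))
            cell_m = y.mean(axis=2)
            ss_subj = i * r * ((subj_m - grand) ** 2).sum()
            ss_inst = s * r * ((inst_m - grand) ** 2).sum()
            ss_int = r * ((cell_m - subj_m[:, None]
                           - inst_m[None, :] + grand) ** 2).sum()
            ss_within = ((y - cell_m[:, :, None]) ** 2).sum()
            return ss_subj, ss_inst, ss_int, ss_within

        # Fabricated breath-alcohol-like data
        rng = np.random.default_rng(3)
        subj = rng.normal(0.08, 0.01, (3, 1, 1))        # biological differences
        inst = rng.normal(0.0, 0.002, (1, 6, 1))        # between-instrument offsets
        y = subj + inst + rng.normal(0.0, 0.003, (3, 6, 10))  # analytical noise
        print([round(v, 6) for v in two_way_anova_ss(y)])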

    13. In-vivo three-dimensional Doppler variance imaging for tumor angiogenesis on chorioallantoic membrane

      Science.gov (United States)

      Qi, Wenjuan; Liu, Gangjun; Chen, Zhongping

      2011-03-01

      Non-invasive tumor microvasculature visualization and characterization play significant roles in the detection of tumors and, importantly, in aiding the development of therapeutic strategies. The feasibility and effectiveness of a Doppler variance standard deviation imaging method for tumor angiogenesis on the chorioallantoic membrane were tested in vivo on a rat glioma F98 tumor spheroid. Utilizing a high resolution Doppler Variance Optical Coherence Tomography (DVOCT) system with an A-line rate of 20 kHz, three-dimensional mapping of a tumor with a total area of 3 × 2.5 mm² was completed within 15 seconds. The top-view image clearly visualized the complex vascular perfusion with the detection of capillaries as small as approximately 10 μm. The results of the current study demonstrate the capability of the Doppler variance standard deviation imaging method as a non-invasive assessment of tumor angiogenesis, with the potential for its use in clinical settings.

    14. Estimation of bias and variance of measurements made from tomography scans

      Science.gov (United States)

      Bradley, Robert S.

      2016-09-01

      Tomographic imaging modalities are being increasingly used to quantify internal characteristics of objects for a wide range of applications, from medical imaging to materials science research. However, such measurements are typically presented without an assessment being made of their associated variance or confidence interval. In particular, noise in raw scan data places a fundamental lower limit on the variance and bias of measurements made on the reconstructed 3D volumes. In this paper, the simulation-extrapolation technique, which was originally developed for statistical regression, is adapted to estimate the bias and variance for measurements made from a single scan. The application to x-ray tomography is considered in detail and it is demonstrated that the technique can also allow the robustness of automatic segmentation strategies to be compared.
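
      The simulation-extrapolation (SIMEX) idea can be sketched in a few lines of Python: noise is deliberately re-added at increasing multiples of the (assumed known) noise variance, the measurement is tracked, and a quadratic fit is extrapolated back to zero effective noise. The threshold statistic and noise level below are fabricated stand-ins for a real tomography measurement.

        import numpy as np

        rng = np.random.default_rng(42)

        # "True" voxel values and a measurement made on them (here: the
        # fraction above a threshold, a stand-in for a segmentation result)
        truth = rng.normal(100.0, 15.0, 50_000)
        sigma_noise = 10.0                      # assumed known scan-noise level
        observed = truth + rng.normal(0.0, sigma_noise, truth.size)
        stat = lambda v: np.mean(v > 110.0)

        # Simulation step: re-add noise at multiples lam of the noise variance
        lams = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
        means = [np.mean([stat(observed + rng.normal(0.0, np.sqrt(lam) * sigma_noise,
                                                     observed.size))
                          for _ in range(200)]) for lam in lams]

        # Extrapolation step: fit a quadratic in lam, evaluate at lam = -1,
        # i.e. at total noise variance (1 + lam) * sigma_noise**2 = 0
        coef = np.polyfit(lams, means, deg=2)
        print("naive estimate   :", stat(observed))
        print("SIMEX estimate   :", np.polyval(coef, -1.0))
        print("noise-free value :", stat(truth))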

    15. A Random Parameter Model for Continuous-Time Mean-Variance Asset-Liability Management

      Directory of Open Access Journals (Sweden)

      Hui-qiang Ma

      2015-01-01

      We consider a continuous-time mean-variance asset-liability management problem in a market with random market parameters; that is, interest rate, appreciation rates, and volatility rates are considered to be stochastic processes. By using the theories of stochastic linear-quadratic (LQ) optimal control and backward stochastic differential equations (BSDEs), we tackle this problem and derive optimal investment strategies as well as the mean-variance efficient frontier analytically in terms of the solution of BSDEs. We find that the efficient frontier is still a parabola in a market with random parameters. Comparing with the existing results, we also find that the liability does not affect the feasibility of the mean-variance portfolio selection problem. However, in an incomplete market with random parameters, the liability can not be fully hedged.

    16. Statistical modelling of tropical cyclone tracks: a comparison of models for the variance of trajectories

      CERN Document Server

      Hall, T; Hall, Tim; Jewson, Stephen

      2005-01-01

      We describe results from the second stage of a project to build a statistical model for hurricane tracks. In the first stage we modelled the unconditional mean track. We now attempt to model the unconditional variance of fluctuations around the mean. The variance models we describe use a semi-parametric nearest neighbours approach in which the optimal averaging length-scale is estimated using a jack-knife out-of-sample fitting procedure. We test three different models. These models consider the variance structure of the deviations from the unconditional mean track to be isotropic, anisotropic but uncorrelated, and anisotropic and correlated, respectively. The results show that, of these models, the anisotropic correlated model gives the best predictions of the distribution of future positions of hurricanes.

    17. NEW RESULTS ABOUT THE RELATIONSHIP BETWEEN OPTIMALLY WEIGHTED LEAST SQUARES ESTIMATE AND LINEAR MINIMUM VARIANCE ESTIMATE

      Institute of Scientific and Technical Information of China (English)

      Juan ZHAO; Yunmin ZHU

      2009-01-01

      The optimally weighted least squares estimate and the linear minimum variance estimate are two of the most popular estimation methods for a linear model. In this paper, the authors make a comprehensive discussion about the relationship between the two estimates. Firstly, the authors consider the classical linear model in which the coefficient matrix of the linear model is deterministic, and the necessary and sufficient condition for equivalence of the two estimates is derived. Moreover, under certain conditions on variance matrix invertibility, the two estimates can be identical provided that they use the same a priori information of the parameter being estimated. Secondly, the authors consider the linear model with random coefficient matrix, which is called the extended linear model; under certain conditions on variance matrix invertibility, it is proved that the former outperforms the latter when using the same a priori information of the parameter.

    18. Vowel Reduction in Japanese

      Institute of Scientific and Technical Information of China (English)

      Shirai; Setsuko

      2009-01-01

      This paper reports that vowel reduction occurs in Japanese and that vowel reduction is part of language universality. Compared with English, the effect of vowel reduction in Japanese is relatively weak, possibly because of the absence of stress in Japanese. Since spectral vowel reduction occurs in Japanese, various types of research become possible.

    19. The ALHAMBRA survey: Estimation of the clustering signal encoded in the cosmic variance

      Science.gov (United States)

      López-Sanjuan, C.; Cenarro, A. J.; Hernández-Monteagudo, C.; Arnalte-Mur, P.; Varela, J.; Viironen, K.; Fernández-Soto, A.; Martínez, V. J.; Alfaro, E.; Ascaso, B.; del Olmo, A.; Díaz-García, L. A.; Hurtado-Gil, Ll.; Moles, M.; Molino, A.; Perea, J.; Pović, M.; Aguerri, J. A. L.; Aparicio-Villegas, T.; Benítez, N.; Broadhurst, T.; Cabrera-Caño, J.; Castander, F. J.; Cepa, J.; Cerviño, M.; Cristóbal-Hornillos, D.; González Delgado, R. M.; Husillos, C.; Infante, L.; Márquez, I.; Masegosa, J.; Prada, F.; Quintana, J. M.

      2015-10-01

      Aims: The relative cosmic variance (σv) is a fundamental source of uncertainty in pencil-beam surveys and, as a particular case of count-in-cell statistics, can be used to estimate the bias between galaxies and their underlying dark-matter distribution. Our goal is to test the significance of the clustering information encoded in the σv measured in the ALHAMBRA survey. Methods: We measure the cosmic variance of several galaxy populations selected with B-band luminosity at 0.35 ≤ z < 1.05 as the intrinsic dispersion in the number density distribution derived from the 48 ALHAMBRA subfields.
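
      A count-in-cell estimate of the relative cosmic variance of this kind can be sketched in Python as follows. The Poisson shot-noise subtraction is the standard one; the subfield counts are fabricated, with 48 subfields chosen only to echo the ALHAMBRA layout.

        import numpy as np

        def relative_cosmic_variance(counts):
            """sigma_v from galaxy counts in equal-area subfields: subtract the
            Poisson shot-noise term <N> from the total dispersion, leaving the
            clustering contribution (clipped at zero if consistent with none)."""
            n = np.asarray(counts, dtype=float)
            mean, var = n.mean(), n.var(ddof=1)
            sv2 = (var - mean) / mean**2
            return np.sqrt(max(sv2, 0.0))

        rng = np.random.default_rng(7)
        mu = rng.lognormal(np.log(200.0), 0.25, 48)   # field-to-field clustering
        counts = rng.poisson(mu)                      # plus shot noise
        print("sigma_v ~", round(relative_cosmic_variance(counts), 3))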

    20. Continuous-Time Mean-Variance Portfolio Selection with Random Horizon

      Energy Technology Data Exchange (ETDEWEB)

      Yu, Zhiyong, E-mail: yuzhiyong@sdu.edu.cn [Shandong University, School of Mathematics (China)

      2013-12-15

      This paper examines the continuous-time mean-variance optimal portfolio selection problem with random market parameters and random time horizon. Treating this problem as a linearly constrained stochastic linear-quadratic optimal control problem, I explicitly derive the efficient portfolios and efficient frontier in closed forms based on the solutions of two backward stochastic differential equations. Some related issues such as a minimum variance portfolio and a mutual fund theorem are also addressed. All the results are markedly different from those in the problem with deterministic exit time. A key part of my analysis involves proving the global solvability of a stochastic Riccati equation, which is interesting in its own right.

    1. Estimation of (co)variances for genomic regions of flexible sizes

      DEFF Research Database (Denmark)

      Sørensen, Lars P; Janss, Luc; Madsen, Per;

      2012-01-01

      The objective was to estimate (co)variances of mastitis resistance traits in dairy cattle using multivariate genomic models, for traits such as mammary disease traits. METHODS: Data on progeny means of six traits related to mastitis resistance in dairy cattle (general mastitis resistance and five pathogen-specific mastitis resistance traits) were analyzed using a bivariate Bayesian SNP-based genomic model, with (co)variances estimated genome-wide, per chromosome, and in regions of 100 SNP on a chromosome. RESULTS: Genomic proportions of the total variance differed between traits. Genomic correlations were lower than pedigree-based genetic correlations, and they were highest between general mastitis and pathogen-specific traits because…

    2. A Mean-Variance Hybrid-Entropy Model for Portfolio Selection with Fuzzy Returns

      Directory of Open Access Journals (Sweden)

      Rongxi Zhou

      2015-05-01

      In this paper, we define the portfolio return as fuzzy average yield and risk as hybrid-entropy and variance to deal with the portfolio selection problem with both random uncertainty and fuzzy uncertainty, and propose a mean-variance hybrid-entropy model (MVHEM). A multi-objective genetic algorithm named Non-dominated Sorting Genetic Algorithm II (NSGA-II) is introduced to solve the model. We make empirical comparisons by using the data from the Shanghai and Shenzhen stock exchanges in China. The results show that the MVHEM generally performs better than the traditional portfolio selection models.

    3. Discrete Time Mean-variance Analysis with Singular Second Moment Matrixes and an Exogenous Liability

      Institute of Scientific and Technical Information of China (English)

      Wen Cai CHEN; Zhong Xing YE

      2008-01-01

      We apply the dynamic programming methods to compute the analytical solution of the dynamic mean-variance optimization problem affected by an exogenous liability in a multi-period market model with singular second moment matrixes of the return vector of assets. We use orthogonal transformations to overcome the difficulty produced by those singular matrixes, and the analytical form of the efficient frontier is obtained. As an application, the explicit form of the optimal mean-variance hedging strategy is also obtained for our model.

    4.  Self-determination theory fails to explain additional variance in well-being

      DEFF Research Database (Denmark)

      Olesen, Martin Hammershøj; Schnieber, Anette; Tønnesvang, Jan

      2008-01-01

      This study investigates relations between the five-factor model (FFM) and self-determination theory in predicting well-being. Nine-hundred-and-sixty-four students completed e-based measures of extroversion & neuroticism (NEO-FFI), autonomous & impersonal general causality orientation (GCOS), and positive & negative affect (PANAS). Correlation analysis showed moderate positive relationships between extroversion, autonomous orientation, and positive affect, and between neuroticism, impersonal orientation, and negative affect. Regression analysis revealed that the autonomous orientation explained an additional 2% of variance in positive affect when controlling for extroversion, and that the impersonal orientation explained additional variance in negative affect when controlling for neuroticism. Self-determination theory seems inadequate in explaining additional variance in well-being, supporting an integration with the FFM.

    5. Fuzzy cross-entropy, mean, variance, skewness models for portfolio selection

      Directory of Open Access Journals (Sweden)

      Rupak Bhattacharyya

      2014-01-01

      In this paper, fuzzy stock portfolio selection models that maximize mean and skewness as well as minimize portfolio variance and cross-entropy are proposed. Because returns are typically asymmetric, in addition to typical mean and variance considerations, third-order moment skewness is also considered in generating a larger payoff. Cross-entropy is used to quantify the level of discrimination in a return for a given satisfactory return value. As returns are uncertain, stock returns are considered triangular fuzzy numbers. Stock price data from the Bombay Stock Exchange are used to illustrate the effectiveness of the proposed model. The models are solved using genetic algorithms.

    6. Effect of window shape on the detection of hyperuniformity via the local number variance

      Science.gov (United States)

      Kim, Jaeuk; Torquato, Salvatore

      2017-01-01

      Hyperuniform many-particle systems in d-dimensional space ℝ^d, which include crystals, quasicrystals, and some exotic disordered systems, are characterized by an anomalous suppression of density fluctuations at large length scales such that the local number variance within a 'spherical' observation window grows more slowly than the window volume. In usual circumstances, this direct-space condition is equivalent to the Fourier-space hyperuniformity condition that the structure factor vanishes as the wavenumber goes to zero. In this paper, we comprehensively study the effect of aspherical window shapes with characteristic size L on the direct-space condition for hyperuniform systems. For lattices, we demonstrate that the variance growth rate can depend on the shape as well as the orientation of the windows, and in some cases the growth rate can be faster than the window volume (i.e. L^d), which may lead one to falsely conclude that the system is non-hyperuniform solely according to the direct-space condition. We begin by numerically investigating the variance of two-dimensional lattices using 'superdisk' windows, whose convex shapes continuously interpolate between circles (p = 1) and squares (p → ∞), as prescribed by a deformation parameter p, when the superdisk symmetry axis is aligned with the lattice. Subsequently, we analyze the variance for lattices as a function of the window orientation, especially for two-dimensional lattices using square windows (superdisks when p → ∞). Based on this analysis, we explain why the variance for d = 2 can grow faster than the window area or even more slowly than the window perimeter (e.g. like ln(L)). We then extend the condition on the window orientation, under which the variance can grow as fast as or faster than L^d (the window volume), to the case of Bravais lattices and parallelepiped windows in ℝ^d. In the case of isotropic disordered hyperuniform systems, we…
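
      The direct-space diagnostic itself is easy to reproduce numerically. The Python sketch below estimates the local number variance with circular windows (the p = 1 superdisk) for a Poisson pattern and for a square lattice under periodic boundaries; for the lattice the variance grows roughly with the window perimeter rather than its area, the hallmark of hyperuniformity.

        import numpy as np

        def number_variance(points, box, radius, n_windows=2000, seed=0):
            """Variance of the number of points in randomly placed circular
            windows of the given radius, with periodic boundary conditions."""
            rng = np.random.default_rng(seed)
            centers = rng.uniform(0.0, box, (n_windows, 2))
            d = np.abs(points[None, :, :] - centers[:, None, :])
            d = np.minimum(d, box - d)                 # minimum-image distance
            counts = (np.hypot(d[..., 0], d[..., 1]) < radius).sum(axis=1)
            return counts.var()

        box = 32.0
        rng = np.random.default_rng(1)
        poisson = rng.uniform(0.0, box, (1024, 2))     # ideal-gas pattern
        g = np.arange(32)
        lattice = np.stack(np.meshgrid(g, g), -1).reshape(-1, 2) + 0.5

        for r in (2.0, 4.0, 8.0):
            print(f"R={r}: Poisson var={number_variance(poisson, box, r):8.1f}  "
                  f"lattice var={number_variance(lattice, box, r):8.1f}")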

    7. Simulation of Longitudinal Exposure Data with Variance-Covariance Structures Based on Mixed Models

      Science.gov (United States)

      2013-01-01

      The total variability is decomposed into that between subjects (intersubject) and that within subjects (intrasubject); we can then model several types of correlation within each subject as necessary. A model that discriminates intersubject and intrasubject variances splits the error into two terms: y_ij = μ + b_i + e_ij, with b_i ∼ N(0, σ_b²) and e_ij ∼ N(0, σ_e²), where b_i is the subject-specific random effect. The intrasubject correlation is then described by a matrix whose (j, k) entry is ρ^|j−k|, i.e. a first-order autoregressive (AR(1)) structure; of the two matrices in the covariance specification (Equation (5) of the original), the first defines the intersubject variances.
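
      A Python sketch of this simulation scheme, with a random subject intercept and an AR(1) intrasubject correlation, might look as follows (all parameter values fabricated):

        import numpy as np

        def simulate_longitudinal(n_subj, n_time, mu, sd_b, sd_e, rho, seed=0):
            """Simulate y_ij = mu + b_i + e_ij with b_i ~ N(0, sd_b**2) and
            within-subject errors whose covariance is sd_e**2 * rho**|j-k|,
            i.e. a first-order autoregressive (AR(1)) structure."""
            rng = np.random.default_rng(seed)
            lags = np.abs(np.subtract.outer(np.arange(n_time), np.arange(n_time)))
            sigma = sd_e**2 * rho**lags
            b = rng.normal(0.0, sd_b, n_subj)
            e = rng.multivariate_normal(np.zeros(n_time), sigma, size=n_subj)
            return mu + b[:, None] + e

        y = simulate_longitudinal(n_subj=500, n_time=4, mu=10.0,
                                  sd_b=2.0, sd_e=1.0, rho=0.6, seed=5)
        # Empirical covariance across subjects shows sd_b**2 plus the AR(1) part
        print(np.round(np.cov(y.T), 2))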

    8. Detection of rheumatoid arthritis by evaluation of normalized variances of fluorescence time correlation functions

      Science.gov (United States)

      Dziekan, Thomas; Weissbach, Carmen; Voigt, Jan; Ebert, Bernd; MacDonald, Rainer; Bahner, Malte L.; Mahler, Marianne; Schirner, Michael; Berliner, Michael; Berliner, Birgitt; Osel, Jens; Osel, Ilka

      2011-07-01

      Fluorescence imaging using the dye indocyanine green as a contrast agent was investigated in a prospective clinical study for the detection of rheumatoid arthritis. Normalized variances of correlated time series of fluorescence intensities describing the bolus kinetics of the contrast agent in certain regions of interest were analyzed to differentiate healthy from inflamed finger joints. These values are determined using a robust, parameter-free algorithm. We found that the normalized variance of correlation functions improves the differentiation between healthy joints of volunteers and joints with rheumatoid arthritis of patients by about 10% compared to, e.g., ratios of areas under the curves of raw data.

    9. Use of genomic models to study genetic control of environmental variance

      DEFF Research Database (Denmark)

      Yang, Ye; Christensen, Ole Fredslund; Sorensen, Daniel

      2011-01-01

      The genomic model commonly found in the literature, with marker effects affecting the mean only, is extended to investigate putative effects at the level of the environmental variance. Two classes of models are proposed, and their behaviour, studied using simulated data, indicates that they are capable of detecting genetic variation at the level of mean and variance. Implementation is via Markov chain Monte Carlo (McMC) algorithms. The models are compared in terms of a measure of global fit, in their ability to detect QTL effects, and in terms of their predictive power. The models are subsequently fitted…

    10. Variances as order parameter and complexity measure for random Boolean networks

      Energy Technology Data Exchange (ETDEWEB)

      Luque, Bartolo [Departamento de Matematica Aplicada y EstadIstica, Escuela Superior de Ingenieros Aeronauticos, Universidad Politecnica de Madrid, Plaza Cardenal Cisneros 3, Madrid 28040 (Spain); Ballesteros, Fernando J [Observatori Astronomic, Universitat de Valencia, Ed. Instituts d' Investigacio, Pol. La Coma s/n, E-46980 Paterna, Valencia (Spain); Fernandez, Manuel [Departamento de Matematica Aplicada y EstadIstica, Escuela Superior de Ingenieros Aeronauticos, Universidad Politecnica de Madrid, Plaza Cardenal Cisneros 3, Madrid 28040 (Spain)

      2005-02-04

      Several order parameters have been considered to predict and characterize the transition between ordered and disordered phases in random Boolean networks, such as the Hamming distance between replicas or the stable core, which have been successfully used. In this work, we propose a natural and clear new order parameter: the temporal variance. We compute its value analytically and compare it with the results of numerical experiments. Finally, we propose a complexity measure based on the compromise between temporal and spatial variances. This new order parameter and its related complexity measure can be easily applied to other complex systems.
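
      The temporal variance is simple to probe with a small Python simulation of a random Boolean network. The implementation below (synchronous updates, random wiring, random truth tables) is a generic RBN rather than the authors' exact setup; K = 2 is the classical critical connectivity.

        import numpy as np

        def rbn_temporal_variance(n=500, k=2, t_transient=200, t_sample=200, seed=0):
            """Mean temporal variance of node states in a random Boolean network
            with n nodes, each driven by k random inputs and a random rule."""
            rng = np.random.default_rng(seed)
            inputs = rng.integers(0, n, (n, k))        # wiring diagram
            tables = rng.integers(0, 2, (n, 2**k))     # one truth table per node
            powers = 2 ** np.arange(k)
            state = rng.integers(0, 2, n)

            history = []
            for t in range(t_transient + t_sample):
                idx = (state[inputs] * powers).sum(axis=1)  # encode each node's inputs
                state = tables[np.arange(n), idx]           # synchronous update
                if t >= t_transient:
                    history.append(state.copy())
            return np.var(np.array(history), axis=0).mean()

        for k in (1, 2, 3):   # ordered, near-critical, disordered regimes
            print(f"K={k}: temporal variance ~ {rbn_temporal_variance(k=k, seed=2):.4f}")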

    11. Vector FIGARCH process, its persistence and co-persistence in variance

      Institute of Scientific and Technical Information of China (English)

      LI Song-chen; ZHANG Shi-ying

      2006-01-01

      In this paper, the definition of the vector FIGARCH process is established, and the stationarity and some properties of the process are discussed. According to the stationarity and the results of Du and Zhang [1], we verify the persistence in variance of the vector FIGARCH process, and finally establish the sufficient and necessary condition for the co-persistence in variance of the process and also discuss the constant related vector FIGARCH(p,d,q) process as a special case.

    12. A Realized Variance for the Whole Day Based on Intermittent High-Frequency Data

      DEFF Research Database (Denmark)

      Hansen, Peter Reinhard; Lunde, Asger

      2005-01-01

      We consider the problem of deriving an empirical measure of daily integrated variance (IV) in the situation where high-frequency price data are unavailable for part of the day. We study three estimators in this context and characterize the assumptions that justify their use.

    13. Infinitary Combinatory Reduction Systems: Normalising Reduction Strategies

      NARCIS (Netherlands)

      Ketema, Jeroen; Simonsen, Jakob Grue

      2010-01-01

      We study normalising reduction strategies for infinitary Combinatory Reduction Systems (iCRSs). We prove that all fair, outermost-fair, and needed-fair strategies are normalising for orthogonal, fully-extended iCRSs. These facts properly generalise a number of results on normalising strategies in finitary rewriting.

    14. Acute reduction of serum 8-iso-PGF2-alpha and advanced oxidation protein products in vivo by a polyphenol-rich beverage; a pilot clinical study with phytochemical and in vitro antioxidant characterization

      Directory of Open Access Journals (Sweden)

      DiSilvestro Robert

      2011-06-01

      Background: Measuring the effects of the acute intake of natural products on human biomarker concentrations, such as those related to oxidation and inflammation, can be an advantageous strategy for early clinical research on an ingredient or product. Methods: 31 total healthy subjects were randomized in a double-blinded, placebo-controlled, acute pilot study with post-hoc subgroup analysis on 20 of the subjects. The study examined the effects of a single dose of a polyphenol-rich beverage (PRB, commercially marketed as "SoZo®") on serum anti-inflammatory and antioxidant markers. In addition, phytochemical analyses of PRB and in vitro antioxidant capacity were also performed. Results: At 1 hour post-intake, serum values for 8-iso-PGF2-alpha and advanced oxidation protein products decreased significantly by 40% and 39%, respectively. Additionally, there was a trend toward decreased C-reactive protein and increased nitric oxide levels. Both placebo and PRB treatment resulted in statistically significant increases in hydroxyl radical antioxidant capacity (HORAC) compared to baseline; the PRB group showed a higher percent change (55-75% versus 23-74% in the placebo group), but the two groups did not differ significantly from each other. Conclusions: PRB produced statistically significant changes in several blood biomarkers related to antioxidant/anti-inflammatory effects. Future studies are justified to verify these results and test for cumulative effects of repeated intakes of PRB. The study demonstrates the potential utility of acute biomarker measurements for evaluating antioxidant/anti-inflammatory effects of natural products.

    15. A conceptual framework for noise reduction

      CERN Document Server

      Benesty, Jacob

      2015-01-01

      Though noise reduction and speech enhancement problems have been studied for at least five decades, advances in our understanding and the development of reliable algorithms are more important than ever, as they support the design of tailored solutions for clearly defined applications. In this work, the authors propose a conceptual framework that can be applied to the many different aspects of noise reduction, offering a uniform approach to monaural and binaural noise reduction problems, in the time domain and in the frequency domain, and involving a single or multiple microphones. Moreover, the derivation of optimal filters is simplified, as are the performance measures used for their evaluation.

    16. Research Advances in Carbon Emission Reduction in Dairy Production from 2010 to 2011

      Institute of Scientific and Technical Information of China (English)

      王笑笑; 高腾云; 秦雯霄

      2012-01-01

      Carbon emission is a general or shortened term for greenhouse gas emission. In the context of global climate change, carbon emission space will gradually become a resource constraint on economic development, and low-carbon agriculture will become imperative. In the inventory of greenhouse gas emissions released by the Intergovernmental Panel on Climate Change (IPCC), CH4 emissions from ruminant gastrointestinal fermentation and the CH4 and N2O emissions from manure management systems are the major agricultural sources of greenhouse gases. Dairy production is one of the main forms of ruminant production and also generates large amounts of manure, so reducing greenhouse gas emissions in dairy production is essential to its environment-friendly development. To help dairy production move toward a low-carbon, ecological model, this review summarizes research on reducing carbon emissions in dairy production from 2010 to 2011, covering nutritional regulation, reduction of carbon emissions from faeces, carbon emission mitigation models for the dairy industry, and greenhouse gas emission reduction technologies and rules.

    17. Advances in research on bronchoscopic lung volume reduction surgery for obstructive airway diseases

      Institute of Scientific and Technical Information of China (English)

      谢栓栓; 王昌惠

      2013-01-01

      Obstructive pulmonary disease comprises a variety of diseases, all of which can narrow the airways through inflammation and thereby increase the work of breathing. Its incidence and mortality are high, and it seriously affects patients' working capacity and quality of life. Optimal treatment strategies for asthma, chronic bronchitis, and emphysema differ across patient groups and should be multifaceted; for high-risk patients with emphysema they include pharmacological and non-pharmacological methods as well as surgery. Over the past decade, the goal of bronchoscopic intervention has been to better control asthma symptoms and to relieve symptoms in patients with emphysema who are not suitable for lung volume reduction surgery. New bronchoscopic techniques are therefore expected to be of great help in treating obstructive airway disease in the future.

    18. Advanced calculus

      CERN Document Server

      Nickerson, HK; Steenrod, NE

      2011-01-01

      ""This book is a radical departure from all previous concepts of advanced calculus,"" declared the Bulletin of the American Mathematics Society, ""and the nature of this departure merits serious study of the book by everyone interested in undergraduate education in mathematics."" Classroom-tested in a Princeton University honors course, it offers students a unified introduction to advanced calculus. Starting with an abstract treatment of vector spaces and linear transforms, the authors introduce a single basic derivative in an invariant form. All other derivatives - gradient, divergent, curl,

    19. A comparison of vertical velocity variance measurements from wind profiling radars and sonic anemometers

      Science.gov (United States)

      McCaffrey, Katherine; Bianco, Laura; Johnston, Paul; Wilczak, James M.

      2017-03-01

      Observations of turbulence in the planetary boundary layer are critical for developing and evaluating boundary layer parameterizations in mesoscale numerical weather prediction models. These observations, however, are expensive and rarely profile the entire boundary layer. Using optimized configurations for 449 and 915 MHz wind profiling radars during the eXperimental Planetary boundary layer Instrumentation Assessment (XPIA), improvements have been made to the historical methods of measuring vertical velocity variance through the time series of vertical velocity, as well as the Doppler spectral width. Using six heights of sonic anemometers mounted on a 300 m tower, correlations of up to R² = 0.74 are seen in measurements of the large-scale variances from the radar time series and R² = 0.79 in measurements of small-scale variance from radar spectral widths. The total variance, measured as the sum of the small and large scales, agrees well with sonic anemometers, with R² = 0.79. Correlation is higher in daytime convective boundary layers than in nighttime stable conditions when turbulence levels are smaller. With the good agreement with the in situ measurements, highly resolved profiles up to 2 km can be accurately observed from the 449 MHz radar and 1 km from the 915 MHz radar. This optimized configuration will provide unique observations for the verification and improvement of boundary layer parameterizations in mesoscale models.

    20. Within-generation mutation variance for litter size in inbred mice.

      Science.gov (United States)

      Casellas, Joaquim; Medrano, Juan F

      2008-08-01

      The mutational input of genetic variance per generation (σ²m) is the lower limit of the genetic variability in inbred strains of mice, although greater values could be expected due to the accumulation of new mutations in successive generations. A mixed-model analysis using Bayesian methods was applied to estimate σ²m and the across-generation accumulated genetic variability on litter size in 46 generations of a C57BL/6J inbred strain. This allowed for a separate inference on σ²m and on the additive genetic variance in the base population (σ²a). The additive genetic variance in the base generation was 0.151 and quickly decreased to almost null estimates in generation 10. On the other hand, σ²m was moderate (0.035) and the within-generation mutational variance increased up to generation 14, then oscillating between 0.102 and 0.234 in the remaining generations. This pattern suggested the existence of a continuous uploading of genetic variability for litter size (h² = 0.045). Relevant genetic drift was not detected in this population. In conclusion, our approach allowed for separate estimation of σ²a and σ²m within the mixed-model framework, and the heritability obtained highlighted the significant and continuous influence of new genetic variability affecting the genetic stability of inbred strains.

    1. Missing Data and Multiple Imputation in the Context of Multivariate Analysis of Variance

      Science.gov (United States)

      Finch, W. Holmes

      2016-01-01

      Multivariate analysis of variance (MANOVA) is widely used in educational research to compare means on multiple dependent variables across groups. Researchers faced with the problem of missing data often use multiple imputation of values in place of the missing observations. This study compares the performance of 2 methods for combining p values in…

    2. 40 CFR 260.31 - Standards and criteria for variances from classification as a solid waste.

      Science.gov (United States)

      2010-07-01

      Title 40, Protection of Environment; Environmental Protection Agency (continued); Solid Wastes (continued); Hazardous Waste Management System: General; Rulemaking Petitions. § 260.31 Standards and criteria for variances from classification as a solid waste.

    3. A Bias and Variance Analysis for Multistep-Ahead Time Series Forecasting.

      Science.gov (United States)

      Ben Taieb, Souhaib; Atiya, Amir F

      2016-01-01

      Multistep-ahead forecasts can either be produced recursively by iterating a one-step-ahead time series model or directly by estimating a separate model for each forecast horizon. In addition, there are other strategies; some of them combine aspects of both aforementioned concepts. In this paper, we present a comprehensive investigation into the bias and variance behavior of multistep-ahead forecasting strategies. We provide a detailed review of the different multistep-ahead strategies. Subsequently, we perform a theoretical study that derives the bias and variance for a number of forecasting strategies. Finally, we conduct a Monte Carlo experimental study that compares and evaluates the bias and variance performance of the different strategies. From the theoretical and the simulation studies, we analyze the effect of different factors, such as the forecast horizon and the time series length, on the bias and variance components, and on the different multistep-ahead strategies. Several lessons are learned, and recommendations are given concerning the advantages, disadvantages, and best conditions of use of each strategy.
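
      The recursive-versus-direct contrast can be made concrete with a small Monte Carlo experiment in Python on an AR(1) process. Note that the reported bias and variance are those of the forecast errors, which also contain the irreducible future noise; the setup (coefficient, horizon, sample size) is fabricated for illustration.

        import numpy as np

        rng = np.random.default_rng(0)
        phi, h, t_len, n_mc = 0.8, 5, 200, 2000   # AR coeff, horizon, length, trials

        rec_err, dir_err = [], []
        for _ in range(n_mc):
            # Simulate an AR(1) series y_t = phi * y_{t-1} + eps_t
            y = np.zeros(t_len + h)
            for t in range(1, t_len + h):
                y[t] = phi * y[t - 1] + rng.standard_normal()
            train, future = y[:t_len], y[t_len + h - 1]

            # Recursive: fit a one-step model, iterate it h times
            phi1 = np.dot(train[1:], train[:-1]) / np.dot(train[:-1], train[:-1])
            rec_err.append(phi1**h * train[-1] - future)

            # Direct: regress y_{t+h} on y_t, forecast in a single step
            phih = np.dot(train[h:], train[:-h]) / np.dot(train[:-h], train[:-h])
            dir_err.append(phih * train[-1] - future)

        for name, e in [("recursive", np.array(rec_err)), ("direct", np.array(dir_err))]:
            print(f"{name:>9}: bias={e.mean():+.4f}  variance={e.var(ddof=1):.4f}")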

    4. CAIXA: a catalogue of AGN in the XMM-Newton archive III. Excess Variance Analysis

      CERN Document Server

      Ponti, Gabriele; Bianchi, Stefano; Guainazzi, Matteo; Matt, Giorgio; Uttley, Phil; Bonilla, Fonseca; Nuria,

      2011-01-01

      We report on the results of the first XMM systematic "excess variance" study of all the radio-quiet, X-ray unobscured AGN. The entire sample consists of 161 sources observed by XMM for more than 10 ks in pointed observations, which is the largest sample used so far to study AGN X-ray variability on time scales of less than a day. We compute the excess variance for all AGN, on different time-scales (10, 20, 40 and 80 ks) and in different energy bands (0.3-0.7, 0.7-2 and 2-10 keV). We observe a highly significant and tight (~0.7 dex) correlation between excess variance and MBH. The subsample of reverberation-mapped AGN shows an even smaller scatter (~0.45 dex), comparable to the one induced by the MBH uncertainties. This implies that X-ray variability can be used as an accurate tool to measure MBH, and this method is more accurate than the ones based on single-epoch optical spectra. The excess variance vs. accretion rate dependence is weaker than expected based on the PSD break frequency scaling, suggesting that both...
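
      The normalized excess variance used in studies of this kind is straightforward to compute. A Python sketch following the usual definition (e.g. Nandra et al. 1997) is shown below on a fabricated light curve.

        import numpy as np

        def excess_variance(rates, errors):
            """Normalized excess variance: the raw variance of the count rates
            minus the mean squared measurement error, over the squared mean."""
            rates, errors = np.asarray(rates), np.asarray(errors)
            return (rates.var(ddof=1) - (errors**2).mean()) / rates.mean()**2

        rng = np.random.default_rng(11)
        intrinsic = 5.0 * (1.0 + 0.10 * rng.standard_normal(256))  # 10% rms
        err = np.full(256, 0.2)
        observed = intrinsic + err * rng.standard_normal(256)
        print("sigma2_NXS ~", round(excess_variance(observed, err), 4))  # ~0.01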

    5. Quantitative milk genomics: estimation of variance components and prediction of fatty acids in bovine milk

      DEFF Research Database (Denmark)

      Krag, Kristian

      The composition of bovine milk fat, used for human consumption, is far from the recommendations for human fat nutrition. The aim of this PhD was to describe the variance components and prediction probabilities of individual fatty acids (FA) in bovine milk, and to evaluate the possibilities...

    6. A Mean-Variance Explanation of FDI Flows to Developing Countries

      DEFF Research Database (Denmark)

      Sunesen, Eva Rytter

      …from one country to another. This will have implications for the way investors evaluate the return and risk of investing abroad. This paper utilises a simple mean-variance optimisation framework where global and regional factors capture the interdependence between countries. The model implies that FDI is driven...

    7. A zero-variance based scheme for Monte Carlo criticality simulations

      NARCIS (Netherlands)

      Christoforou, S.

      2010-01-01

      The ability of the Monte Carlo method to solve particle transport problems by simulating the particle behaviour makes it a very useful technique in nuclear reactor physics. However, the statistical nature of Monte Carlo implies that there will always be a variance associated with the estimate obtained.

    8. Structure analysis of interstellar clouds: I. Improving the Delta-variance method

      CERN Document Server

      Ossenkopf, V; Stutzki, J

      2008-01-01

      The Delta-variance analysis has proven to be an efficient and accurate method of characterising the power spectrum of interstellar turbulence. The implementation presently in use, however, has several shortcomings. We propose and test an improved Delta-variance algorithm for two-dimensional data sets, which is applicable to maps with variable error bars and which can be quickly computed in Fourier space. We calibrate the spatial resolution of the Delta-variance spectra. The new Delta-variance algorithm is based on an appropriate filtering of the data in Fourier space. It allows us to distinguish the influence of variable noise from the actual small-scale structure in the maps and it helps for dealing with the boundary problem in non-periodic and/or irregularly bounded maps. We try several wavelets and test their spatial sensitivity using artificial maps with well known structure sizes. It turns out that different wavelets show different strengths with respect to detecting characteristic structures and spectral features.

    9. 76 FR 18921 - Land Disposal Restrictions: Nevada and California; Site Specific Treatment Variances for...

      Science.gov (United States)

      2011-04-06

      Land Disposal Restrictions treatment variances are issued under sections 3004(d) through (g) of the Resource Conservation and Recovery Act. The notice concerns selenium-bearing wastes and the treatment practiced in the United States, selenium being a non-renewable resource; contact: … Division, Office of Resource Conservation and Recovery (MC 5304 P), U.S. Environmental Protection Agency.

    10. Generalized Forecast Error Variance Decomposition for Linear and Nonlinear Multivariate Models

      DEFF Research Database (Denmark)

      Lanne, Markku; Nyberg, Henri

      We propose a new generalized forecast error variance decomposition with the property that the proportions of the impact accounted for by innovations in each variable sum to unity. Our decomposition is based on the well-established concept of the generalized impulse response function. The use...

    11. Variance of Fluctuating Radar Echoes from Thermal Noise and Randomly Distributed Scatterers

      Directory of Open Access Journals (Sweden)

      Marco Gabella

      2014-02-01

      In several cases (e.g., thermal noise, weather echoes, …), the incoming signal to a radar receiver can be assumed to be Rayleigh distributed. When estimating the mean power from the inherently fluctuating Rayleigh signals, it is necessary to average either the echo power intensities or the echo logarithmic levels. Until now, it has been accepted that averaging the echo intensities provides smaller variance values for the same number of independent samples. This has been known for decades as the implicit consequence of two works that were presented in the open literature. The present note deals with the derivation of analytical expressions for the variance of the two typical estimators of the mean echo power, based on echo intensities and on echo logarithmic levels. The derived expressions explicitly show that the variance associated with an average of the echo intensities is lower than that associated with an average of logarithmic levels. Consequently, it is better to average echo intensities rather than logarithms. With the availability of digital IF receivers, which facilitate the averaging of echo power, the result has a practical value. As a practical example, the variance obtained from two sets of noise samples is compared with that predicted by the analytical expression derived in this note (Section 3); the measurements and theory show good agreement.
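
      The comparison can be reproduced with a short Python experiment. The log-based estimator below is bias-corrected with the Euler-Mascheroni constant (valid for exponentially distributed power, i.e. Rayleigh amplitudes; dB averaging is the same up to a 10/ln 10 factor), and its variance comes out larger than that of plain intensity averaging, as the note concludes.

        import numpy as np

        rng = np.random.default_rng(0)
        p_true, k, n_trials = 1.0, 25, 50_000   # mean power, samples per average

        # Echo power of a Rayleigh-amplitude signal is exponentially distributed
        power = rng.exponential(p_true, (n_trials, k))

        # Estimator 1: average the linear intensities (unbiased)
        est_lin = power.mean(axis=1)

        # Estimator 2: average the log levels, convert back, correct the bias
        # (for exponential samples, E[ln X] = ln p - Euler gamma)
        est_log = np.exp(np.log(power).mean(axis=1) + np.euler_gamma)

        for name, est in [("intensity averaging", est_lin),
                          ("log-level averaging", est_log)]:
            print(f"{name}: mean={est.mean():.3f}  variance={est.var(ddof=1):.5f}")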

    12. Variance estimators in critical branching processes with non-homogeneous immigration

      CERN Document Server

      Rahimov, Ibrahim

      2012-01-01

      The asymptotic normality of conditional least squares estimators for the offspring variance in critical branching processes with non-homogeneous immigration is established, under moment assumptions on both reproduction and immigration. The proofs use martingale techniques and weak convergence results in Skorokhod spaces.

    13. The concordance correlation coefficient for repeated measures estimated by variance components.

      Science.gov (United States)

      Carrasco, Josep L; King, Tonya S; Chinchilli, Vernon M

      2009-01-01

      The concordance correlation coefficient (CCC) is an index that is commonly used to assess the degree of agreement between observers on measuring a continuous characteristic. Here, a CCC for longitudinal repeated measurements is developed through the appropriate specification of the intraclass correlation coefficient from a variance components linear mixed model. A case example and the results of a simulation study are provided.
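
      For reference, the basic cross-sectional CCC, which the repeated-measures version generalizes through variance components, can be computed as in the Python sketch below on fabricated paired data.

        import numpy as np

        def ccc(x, y):
            """Lin's concordance correlation coefficient for paired data:
            2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
            x, y = np.asarray(x, float), np.asarray(y, float)
            sxy = np.cov(x, y, ddof=1)[0, 1]
            return 2.0 * sxy / (x.var(ddof=1) + y.var(ddof=1)
                                + (x.mean() - y.mean())**2)

        rng = np.random.default_rng(4)
        truth = rng.normal(50.0, 10.0, 200)
        obs_a = truth + rng.normal(0.0, 2.0, 200)          # observer A
        obs_b = truth + 1.0 + rng.normal(0.0, 2.0, 200)    # observer B, offset
        print("CCC ~", round(ccc(obs_a, obs_b), 3))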

    14. Selection for uniformity in livestock by exploiting genetic heterogeneity of environmental variance

      NARCIS (Netherlands)

      Mulder, H.A.; Bijma, P.; Hill, W.G.

      2008-01-01

      In some situations, it is worthwhile to change not only the mean, but also the variability of traits by selection. Genetic variation in residual variance may be utilised to improve uniformity in livestock populations by selection. The objective was to investigate the effects of genetic parameters…

    15. A Program to Perform Analyses of Variance for Data from Round-robin Experiments

      Science.gov (United States)

      Gleason, John R.

      1976-01-01

      A round-robin experiment involves observation of all possible pairs of subjects within each experimental condition. A program is described which performs analyses of variance for such data. Output includes an ANOVA summary table, exact or quasi-F statistics for tests of various hypotheses, and least squares estimates of relevant parameters.…

    16. Evolution of Robustness and Plasticity under Environmental Fluctuation: Formulation in Terms of Phenotypic Variances

      Science.gov (United States)

      Kaneko, Kunihiko

      2012-09-01

      The characterization of plasticity, robustness, and evolvability, an important issue in biology, is studied in terms of phenotypic fluctuations. By numerically evolving gene regulatory networks, the proportionality between the phenotypic variances of epigenetic and genetic origins is confirmed. The former is given by the variance of the phenotypic fluctuation due to noise in the developmental process; and the latter, by the variance of the phenotypic fluctuation due to genetic mutation. The relationship suggests a link between robustness to noise and to mutation, since robustness can be defined by the sharpness of the distribution of the phenotype. Next, the proportionality between the variances is demonstrated to also hold over expressions of different genes (phenotypic traits) when the system acquires robustness through the evolution. Then, evolution under environmental variation is numerically investigated and it is found that both the adaptability to a novel environment and the robustness are made compatible when a certain degree of phenotypic fluctuations exists due to noise. The highest adaptability is achieved at a certain noise level at which the gene expression dynamics are near the critical state to lose the robustness. Based on our results, we revisit Waddington's canalization and genetic assimilation with regard to the two types of phenotypic fluctuations.

    17. Use of hypotheses for analysis of variance models: challenging the current practice

      NARCIS (Netherlands)

      van Wesel, F.; Boeije, H.R.; Hoijtink, H.

      2013-01-01

      In social science research, hypotheses about group means are commonly tested using analysis of variance. While deemed to be formulated as specifically as possible to test social science theory, they are often defined in general terms. In this article we use two studies to explore the current practice.

    18. 40 CFR 260.32 - Variances to be classified as a boiler.

      Science.gov (United States)

      2010-07-01

      Title 40, Protection of Environment. § 260.32 Variances to be classified as a boiler. In accordance with the standards and criteria in § 260.10 (definition of "boiler") and the procedures in § 260.33, the Administrator may determine on a case-by-case basis...

    19. A Note on the Mean-Variance Criteria for Discrete Time Financial Markets

      Institute of Scientific and Technical Information of China (English)

      Xin-hua Liu

      2005-01-01

      It was shown in Xia[3] that for incomplete markets with continuous assets' price processes and for complete markets the mean-variance portfolio selection can be viewed as expected utility maximization with non-negative marginal utility. In this paper we show that for discrete time incomplete markets this result is not true.

    20. Rapid Divergence of Genetic Variance-Covariance Matrix within a Natural Population

      NARCIS (Netherlands)

      Doroszuk, A.; Wojewodzic, M.W.; Gort, G.; Kammenga, J.E.

      2008-01-01

      The matrix of genetic variances and covariances (G matrix) represents the genetic architecture of multiple traits sharing developmental and genetic processes and is central for predicting phenotypic evolution. These predictions require that the G matrix be stable. Yet the timescale and conditions promoting its stability remain unclear.

    1. Testing for homogeneity of variance in time series: Long memory, wavelets, and the Nile River

      Science.gov (United States)

      Whitcher, B.; Byers, S. D.; Guttorp, P.; Percival, D. B.

      2002-05-01

      We consider the problem of testing for homogeneity of variance in a time series with long memory structure. We demonstrate that a test whose null hypothesis is designed to be white noise can, in fact, be applied, on a scale by scale basis, to the discrete wavelet transform of long memory processes. In particular, we show that evaluating a normalized cumulative sum of squares test statistic using critical levels for the null hypothesis of white noise yields approximately the same null hypothesis rejection rates when applied to the discrete wavelet transform of samples from a fractionally differenced process. The point at which the test statistic, using a nondecimated version of the discrete wavelet transform, achieves its maximum value can be used to estimate the time of the unknown variance change. We apply our proposed test statistic on five time series derived from the historical record of Nile River yearly minimum water levels covering 622-1922 A.D., each series exhibiting various degrees of serial correlation including long memory. In the longest subseries, spanning 622-1284 A.D., the test confirms an inhomogeneity of variance at short time scales and identifies the change point around 720 A.D., which coincides closely with the construction of a new device around 715 A.D. for measuring the Nile River. The test also detects a change in variance for a record of only 36 years.
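
      The normalized cumulative sum of squares statistic, applied scale by scale to wavelet coefficients, can be sketched in Python as follows. The decimated Haar transform used here is a simple stand-in for the non-decimated transform of the paper, and the change point is read off as the location of the maximum.

        import numpy as np

        def cusum_sq_stat(w):
            """Statistic D = max_k |P_k - k/N| with P_k the normalized
            cumulative sum of squares; also returns the argmax index."""
            w = np.asarray(w, float)
            p = np.cumsum(w**2) / np.sum(w**2)
            k = np.arange(1, w.size + 1) / w.size
            d = np.abs(p - k)
            return d.max(), int(d.argmax())

        def haar_details(x, level):
            """Haar wavelet detail coefficients at a given level (decimated)."""
            for _ in range(level):
                x = x[: 2 * (x.size // 2)]
                d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # details at this level
                x = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # smooth for next level
            return d

        # Series whose standard deviation doubles halfway through
        rng = np.random.default_rng(9)
        x = np.concatenate([rng.normal(0, 1.0, 512), rng.normal(0, 2.0, 512)])
        for level in (1, 2, 3):
            d_max, k = cusum_sq_stat(haar_details(x, level))
            print(f"level {level}: D={d_max:.3f}, change near coefficient {k}")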

    2. Analysis of Variance with Summary Statistics in Microsoft® Excel®

      Science.gov (United States)

      Larson, David A.; Hsu, Ko-Cheng

      2010-01-01

      Students regularly are asked to solve Single Factor Analysis of Variance problems given only the sample summary statistics (number of observations per category, category means, and corresponding category standard deviations). Most undergraduate students today use Excel for data analysis of this type. However, Excel, like all other statistical…
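
      Although the record discusses Excel, the same summary-statistics computation is easy to express in Python; the group sizes, means, and standard deviations below are fabricated.

        import numpy as np
        from scipy.stats import f as f_dist

        def anova_from_summary(ns, means, sds):
            """One-way ANOVA given only group sizes, means and standard
            deviations: rebuild the between/within sums of squares."""
            ns, means, sds = map(np.asarray, (ns, means, sds))
            n_tot, k = ns.sum(), ns.size
            grand = (ns * means).sum() / n_tot
            ss_between = (ns * (means - grand) ** 2).sum()
            ss_within = ((ns - 1) * sds**2).sum()
            df_b, df_w = k - 1, n_tot - k
            f_stat = (ss_between / df_b) / (ss_within / df_w)
            return f_stat, f_dist.sf(f_stat, df_b, df_w)

        f_stat, p = anova_from_summary(ns=[12, 15, 11],
                                       means=[5.1, 6.0, 5.4],
                                       sds=[0.9, 1.1, 1.0])
        print(f"F = {f_stat:.3f}, p = {p:.4f}")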

    3. Adding a Parameter Increases the Variance of an Estimated Regression Function

      Science.gov (United States)

      Withers, Christopher S.; Nadarajah, Saralees

      2011-01-01

      The linear regression model is one of the most popular models in statistics. It is also one of the simplest models in statistics. It has received applications in almost every area of science, engineering and medicine. In this article, the authors show that adding a predictor to a linear model increases the variance of the estimated regression…

    4. Asymmetries in conditional mean and variance: Modelling stock returns by asMA-asQGARCH

      NARCIS (Netherlands)

      Brännäs, K.; de Gooijer, J.G.

      2000-01-01

      The asymmetric moving average model (asMA) is extended to allow for asymmetric quadratic conditional heteroskedasticity (asQGARCH). The asymmetric parametrization of the conditional variance encompasses the quadratic GARCH model of Sentana (1995). We introduce a framework for testing asymmetries in the conditional mean and variance.

    5. Spectrally-Corrected Estimation for High-Dimensional Markowitz Mean-Variance Optimization

      NARCIS (Netherlands)

      Z. Bai (Zhidong); H. Li (Hua); M.J. McAleer (Michael); W-K. Wong (Wing-Keung)

      2016-01-01

      This paper considers the portfolio problem for high-dimensional data when the dimension and size are both large. We analyze the traditional Markowitz mean-variance (MV) portfolio by large-dimension matrix theory, and find that the spectral distribution of the sample covariance is the main factor.

    6. Mean-Coherent Risk and Mean-Variance Approaches in Portfolio Selection : An Empirical Comparison

      NARCIS (Netherlands)

      Polbennikov, S.Y.; Melenberg, B.

      2005-01-01

      We empirically analyze the implementation of coherent risk measures in portfolio selection. First, we compare optimal portfolios obtained through mean-coherent risk optimization with corresponding mean-variance portfolios. We find that, even for a typical portfolio of equities, the outcomes can be statistically different.

    7. Asymmetries in conditional mean variance: modelling stock returns by asMA-asQGARCH

      NARCIS (Netherlands)

      de Gooijer, J.G.; Brännäs, K.

      2004-01-01

      We propose a nonlinear time series model where both the conditional mean and the conditional variance are asymmetric functions of past information. The model is particularly useful for analysing financial time series where it has been noted that there is an asymmetric impact of good news and bad news.

    8. The modified Black-Scholes model via constant elasticity of variance for stock options valuation

      Science.gov (United States)

      Edeki, S. O.; Owoloko, E. A.; Ugbebor, O. O.

      2016-02-01

      In this paper, the classical Black-Scholes option pricing model is visited. We present a modified version of the Black-Scholes model via the application of the constant elasticity of variance model (CEVM); in this case, the volatility of the stock price is shown to be a non-constant function unlike the assumption of the classical Black-Scholes model.
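
      A minimal Monte Carlo sketch of the CEV dynamics in Python follows. Setting gamma = 1 recovers the constant-volatility Black-Scholes case; for gamma = 0.5 the scale is chosen (an assumption for illustration) so that both models start from the same instantaneous volatility.

        import numpy as np

        def simulate_cev(s0, r, sigma, gamma, t, n_steps, n_paths, seed=0):
            """Euler-Maruyama paths of dS = r*S dt + sigma*S**gamma dW."""
            rng = np.random.default_rng(seed)
            dt = t / n_steps
            s = np.full(n_paths, float(s0))
            for _ in range(n_steps):
                z = rng.standard_normal(n_paths)
                s = s + r * s * dt + sigma * s**gamma * np.sqrt(dt) * z
                s = np.maximum(s, 0.0)           # absorb paths at zero
            return s

        s0, strike, r, t = 100.0, 100.0, 0.05, 1.0
        for gamma, sigma in [(1.0, 0.20), (0.5, 0.20 * s0**0.5)]:
            st = simulate_cev(s0, r, sigma, gamma, t, 252, 100_000, seed=3)
            price = np.exp(-r * t) * np.maximum(st - strike, 0.0).mean()
            print(f"gamma={gamma}: call price ~ {price:.2f}")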

    9. GSEVM v.2: MCMC software to analyse genetically structured environmental variance models

      DEFF Research Database (Denmark)

      Ibáñez-Escriche, N; Garcia, M; Sorensen, D

      2010-01-01

      This note provides a description of software that allows one to fit Bayesian genetically structured variance models using Markov chain Monte Carlo (MCMC). The gsevm v.2 program was written in Fortran 90. The DOS and Unix executable programs, the user's guide, and some example files are freely available.

    10. Multi-period mean–variance portfolio optimization based on Monte-Carlo simulation

      NARCIS (Netherlands)

      Cong, F.; Oosterlee, C.W.

      2016-01-01

      We propose a simulation-based approach for solving the constrained dynamic mean-variance portfolio management problem. For this dynamic optimization problem, we first consider a sub-optimal strategy, called the multi-stage strategy, which can be utilized in a forward fashion. Then, based on this…

    11. A Test for Mean-Variance Efficiency of a given Portfolio under Restrictions

      NARCIS (Netherlands)

      G.T. Post (Thierry)

      2005-01-01

      This study proposes a test for mean-variance efficiency of a given portfolio under general linear investment restrictions. We introduce a new definition of pricing error or "alpha" and, as an efficiency measure, we propose to use the largest positive alpha for any vertex of the portfolio…

    12. A Demonstration of the Analysis of Variance Using Physical Movement and Space

      Science.gov (United States)

      Owen, William J.; Siakaluk, Paul D.

      2011-01-01

      Classroom demonstrations help students better understand challenging concepts. This article introduces an activity that demonstrates the basic concepts involved in analysis of variance (ANOVA). Students who physically participated in the activity had a better understanding of ANOVA concepts (i.e., higher scores on an exam question answered 2…

    13. Teaching Principles of One-Way Analysis of Variance Using M&M's Candy

      Science.gov (United States)

      Schwartz, Todd A.

      2013-01-01

      I present an active learning classroom exercise illustrating essential principles of one-way analysis of variance (ANOVA) methods. The exercise is easily conducted by the instructor and is instructive (as well as enjoyable) for the students. This is conducive for demonstrating many theoretical and practical issues related to ANOVA and lends itself…

    14. Pairwise Comparison Procedures for One-Way Analysis of Variance Designs. Research Report.

      Science.gov (United States)

      Zwick, Rebecca

      Research in the behavioral and health sciences frequently involves the application of one-factor analysis of variance models. The goal may be to compare several independent groups of subjects on a quantitative dependent variable or to compare measurements made on a single group of subjects on different occasions or under different conditions. In…

    15. The Evolution of Human Intelligence and the Coefficient of Additive Genetic Variance in Human Brain Size

      Science.gov (United States)

      Miller, Geoffrey F.; Penke, Lars

      2007-01-01

      Most theories of human mental evolution assume that selection favored higher intelligence and larger brains, which should have reduced genetic variance in both. However, adult human intelligence remains highly heritable, and is genetically correlated with brain size. This conflict might be resolved by estimating the coefficient of additive genetic…
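      The coefficient referenced in the title is, in its standard (Houle-style) form, the following; this rendering is an assumption for orientation, not a quotation from the paper:

      ```latex
      CV_A \;=\; 100 \times \frac{\sqrt{V_A}}{\bar{x}}
      % V_A: additive genetic variance, \bar{x}: trait mean. Unlike
      % heritability, CV_A scales usable genetic variation by the mean, so a
      % trait can be highly heritable yet have a small CV_A, or vice versa.
      ```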

    16. Variance component estimations and allocation of resources for breeding sweetpotato under East African conditions

      NARCIS (Netherlands)

      Grüneberg, W.J.; Abidin, P.E.; Ndolo, P.; Pereira, C.A.; Hermann, M.

      2004-01-01

      In Africa, average sweetpotato storage root yields are low and breeding is considered to be an important factor in increasing production. The objectives of this study were to obtain variance component estimations for sweetpotato in this region of the world and then use these to determine the efficie
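      A hedged sketch of the kind of decomposition such studies estimate, on an entry-mean basis with $E$ environments and $r$ replications per environment (the design symbols are our assumptions, not the paper's):

      ```latex
      \sigma^2_{\bar{P}} \;=\; \sigma^2_G + \frac{\sigma^2_{GE}}{E} + \frac{\sigma^2_e}{rE},
      \qquad
      H^2 \;=\; \frac{\sigma^2_G}{\sigma^2_{\bar{P}}}
      % \sigma^2_G: genotypic variance, \sigma^2_{GE}: genotype-by-environment
      % variance, \sigma^2_e: error variance; the relative sizes guide how to
      % split breeding resources between more environments and more replicates.
      ```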

    17. Predicting Risk Sensitivity in Humans and Lower Animals: Risk as Variance or Coefficient of Variation

      Science.gov (United States)

      Weber, Elke U.; Shafir, Sharoni; Blais, Ann-Renee

      2004-01-01

      This article examines the statistical determinants of risk preference. In a meta-analysis of animal risk preference (foraging birds and insects), the coefficient of variation (CV), a measure of risk per unit of return, predicts choices far better than outcome variance, the risk measure of normative models. In a meta-analysis of human risk…
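      A toy calculation with invented gambles, showing how variance and the coefficient of variation can rank the same two options oppositely:

      ```python
      import numpy as np

      def risk_measures(outcomes):
          mean = outcomes.mean()
          var = outcomes.var()
          cv = np.sqrt(var) / mean  # risk per unit of expected return
          return mean, var, cv

      # Invented gambles: B has the larger variance but the smaller CV,
      # so the two risk measures rank the options differently.
      a = np.array([1.0, 9.0])    # mean 5,  var 16, cv 0.8
      b = np.array([20.0, 30.0])  # mean 25, var 25, cv 0.2
      for name, g in [("A", a), ("B", b)]:
          print(name, "mean=%.1f var=%.1f cv=%.2f" % risk_measures(g))
      ```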

    18. The ALHAMBRA survey : Estimation of the clustering signal encoded in the cosmic variance

      CERN Document Server

      López-Sanjuan, C; Hernández-Monteagudo, C; Arnalte-Mur, P; Varela, J; Viironen, K; Fernández-Soto, A; Martínez, V J; Alfaro, E; Ascaso, B; del Olmo, A; Díaz-García, L A; Hurtado-Gil, Ll; Moles, M; Molino, A; Perea, J; Pović, M; Aguerri, J A L; Aparicio-Villegas, T; Benítez, N; Broadhurst, T; Cabrera-Caño, J; Castander, F J; Cepa, J; Cerviño, M; Cristóbal-Hornillos, D; Delgado, R M González; Husillos, C; Infante, L; Márquez, I; Masegosa, J; Prada, F; Quintana, J M

      2015-01-01

      The relative cosmic variance ($\sigma_v$) is a fundamental source of uncertainty in pencil-beam surveys and, as a particular case of count-in-cell statistics, can be used to estimate the bias between galaxies and their underlying dark-matter distribution. Our goal is to test the significance of the clustering information encoded in the $\sigma_v$ measured in the ALHAMBRA survey. We measure the cosmic variance of several galaxy populations selected with $B$-band luminosity at $0.35 \leq z < 1.05$ as the intrinsic dispersion in the number density distribution derived from the 48 ALHAMBRA subfields. We compare the observational $\sigma_v$ with the cosmic variance of the dark matter expected from the theory, $\sigma_{v,\mathrm{dm}}$. This provides an estimation of the galaxy bias $b$. The galaxy bias from the cosmic variance is in excellent agreement with the bias estimated by two-point correlation function analysis in ALHAMBRA. This holds for different redshift bins, for red and blue subsamples, and for several ...
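      In the linear-bias picture the abstract describes, the estimate reduces to a ratio (our rendering of the relation, not a quotation from the paper):

      ```latex
      b \;=\; \frac{\sigma_v}{\sigma_{v,\mathrm{dm}}}
      % \sigma_v: relative cosmic variance of the galaxy number densities
      % measured across the 48 ALHAMBRA subfields;
      % \sigma_{v,\mathrm{dm}}: the value expected for the dark matter.
      ```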

    19. Non-destructive X-ray Computed Tomography (XCT) Analysis of Sediment Variance in Marine Cores

      Science.gov (United States)

      Oti, E.; Polyak, L. V.; Dipre, G.; Sawyer, D.; Cook, A.

      2015-12-01

      Benthic activity within marine sediments can alter the physical properties of the sediment as well as indicate nutrient flux and ocean temperatures. We examine burrowing features in sediment cores from the western Arctic Ocean collected during the 2005 Healy-Oden TransArctic Expedition (HOTRAX) and from the Gulf of Mexico Integrated Ocean Drilling Program (IODP) Expedition 308. While traditional methods for studying bioturbation require physical dissection of the cores, we assess burrowing using an X-ray computed tomography (XCT) scanner. XCT noninvasively images the sediment cores in three dimensions and produces density-sensitive images suitable for quantitative analysis. XCT units are recorded as Hounsfield Units (HU), where -999 is air, 0 is water, and 4000-5000 would be a higher-density mineral, such as pyrite. We rely on the fundamental assumption that sediments are deposited horizontally, and we analyze the variance over each flat-lying slice. The variance describes the spread of pixel values over a slice. When sediments are reworked, drawing higher- and lower-density matrix into a layer, the variance increases. Examples of this can be seen in two slices in core 19H-3A from Site U1324 of IODP Expedition 308. The first slice, located 165.6 meters below sea floor, consists of relatively undisturbed sediment; the majority of the sediment values fall between 1406 and 1497 HU, giving the slice a comparatively small variance of 819.7. The second slice, located 166.1 meters below sea floor, features a lower-density sediment matrix disturbed by burrow tubes and the inclusion of a high-density mineral. The Hounsfield Units of this slice have a larger variance of 1,197.5, reflecting sediment matrix values that range from 1220 to 1260 HU, the high-density mineral value of 1920 HU, and burrow tubes that range from 1300 to 1410 HU. Analyzing this variance allows us to observe changes in the sediment matrix and more specifically capture
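      A minimal sketch of the per-slice variance computation described above, run on a synthetic volume (all values invented; `slice_variance` is a hypothetical helper, not the authors' code):

      ```python
      import numpy as np

      def slice_variance(volume_hu):
          """Variance of Hounsfield-unit values in each horizontal slice.

          volume_hu: 3-D array (n_slices, ny, nx) of XCT values. Under the
          abstract's assumption of horizontal deposition, a variance spike in
          a slice flags reworked sediment (burrows, dense mineral inclusions).
          """
          return volume_hu.reshape(volume_hu.shape[0], -1).var(axis=1)

      # Synthetic stand-in for a scanned core: uniform matrix around 1450 HU
      # with one disturbed slice containing a dense (pyrite-like) inclusion.
      rng = np.random.default_rng(0)
      core = rng.normal(1450, 25, size=(100, 64, 64))
      core[40, 10:20, 10:20] = 1920  # invented high-density inclusion
      variances = slice_variance(core)
      print("undisturbed ~%.0f, disturbed slice %.0f" % (variances[0], variances[40]))
      ```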

    20. Study on Evaluation Method of Advanced and Appropriate Technologies for Energy Saving and Emission Reduction of Copper Smelting Industry

      Institute of Scientific and Technical Information of China (English)

      赵吝加; 曾维华; 许乃中; 温宗国

      2012-01-01

      To improve on current environmental technology evaluation practice in China, which relies mainly on qualitative expert judgment and lacks quantitative evaluation tools, a combined qualitative and quantitative method for evaluating advanced and appropriate energy-saving and emission-reduction technologies in the copper smelting industry was established on the basis of the analytic hierarchy process (AHP) and fuzzy comprehensive evaluation. The method comprises setting up an indicator system for energy saving and emission reduction, determining the indicator weights, and constructing the evaluation factor set and its membership functions, and it was used to evaluate copper smelting technologies comprehensively. Applying the method to six technologies (Flash Smelting, Oxygen-Enriched Side-Blowing Bath Smelting, Isasmelt/Ausmelt Smelting, Oxygen Bottom-Blowing Smelting, Baiyin Copper Smelting, and Noranda Copper Smelting), the first four were selected as the advanced and appropriate energy-saving and emission-reduction technologies to be prioritized for wider adoption in the copper smelting industry.
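      A minimal sketch of the generic fuzzy comprehensive evaluation step, $B = W \circ R$, with AHP-style weights; all weights, indicators, grades, and memberships below are invented for illustration and are not the paper's indicator system:

      ```python
      import numpy as np

      # Invented AHP weights for three indicators
      # (energy use, emissions, cost), summing to 1.
      w = np.array([0.5, 0.3, 0.2])

      # Fuzzy membership matrix R: rows = indicators, columns = rating grades
      # ("excellent", "good", "fair"); entries are illustrative memberships.
      R = np.array([
          [0.6, 0.3, 0.1],
          [0.4, 0.4, 0.2],
          [0.2, 0.5, 0.3],
      ])

      # Weighted-average composition B = W . R (one common choice of fuzzy
      # operator); the overall grade is the column with the largest score.
      B = w @ R
      grades = ["excellent", "good", "fair"]
      print(dict(zip(grades, np.round(B, 3))), "->", grades[int(np.argmax(B))])
      ```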