Advanced Variance Reduction Strategies for Optimizing Mesh Tallies in MAVRIC
International Nuclear Information System (INIS)
More often than in the past, Monte Carlo methods are being used to compute fluxes or doses over large areas using mesh tallies (a set of region tallies defined on a mesh that overlays the geometry). For problems that demand that the uncertainty in each mesh cell be less than some set maximum, computation time is controlled by the cell with the largest uncertainty. This issue becomes quite troublesome in deep-penetration problems, and advanced variance reduction techniques are required to obtain reasonable uncertainties over large areas. The CADIS (Consistent Adjoint Driven Importance Sampling) methodology has been shown to very efficiently optimize the calculation of a response (flux or dose) for a single point or a small region using weight windows and a biased source based on the adjoint of that response. This has been incorporated into codes such as ADVANTG (based on MCNP) and the new sequence MAVRIC, which will be available in the next release of SCALE. In an effort to compute lower uncertainties everywhere in the problem, Larsen's group has also developed several methods to help distribute particles more evenly, based on forward estimates of flux. This paper focuses on the use of a forward estimate to weight the placement of the source in the adjoint calculation used by CADIS, which we refer to as a forward-weighted CADIS (FW-CADIS).
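The source-biasing idea behind CADIS can be illustrated with a minimal numpy sketch. This is not the MAVRIC implementation: the cell count, the exponential attenuation standing in for the adjoint, and the next-event-style scoring are all invented for illustration. Sampling the source from a density proportional to (source x importance), with a compensating weight, is the core of the technique.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D shielding problem: a particle born in cell i reaches the detector
# with probability p_reach[i]; we score that probability directly (a
# next-event-style estimator), so only the source position is random.
n_cells = 10
source = np.full(n_cells, 1.0 / n_cells)        # analog source pdf
p_reach = np.exp(-0.8 * np.arange(n_cells))     # stand-in "adjoint" importance
true_response = np.sum(source * p_reach)

def estimate(biased, n=100_000):
    q = source * p_reach if biased else source.copy()
    q /= q.sum()                                # CADIS-style biased source pdf
    cells = rng.choice(n_cells, size=n, p=q)
    # the weight source/q keeps the tally unbiased under the biased source
    scores = (source[cells] / q[cells]) * p_reach[cells]
    return scores.mean(), scores.std(ddof=1)

analog_mean, analog_sd = estimate(False)
biased_mean, biased_sd = estimate(True)
```

With deterministic scoring, the biased source is the textbook zero-variance estimator (every history scores the same value); real transport only approaches this ideal, which is what CADIS approximates with deterministic adjoint solutions.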
MCNP variance reduction overview
International Nuclear Information System (INIS)
The MCNP code is rich in variance reduction features. Standard variance reduction methods found in most Monte Carlo codes are available, as well as a number of methods unique to MCNP. We discuss the variance reduction features presently in MCNP as well as new ones under study for possible inclusion in future versions of the code.
Mira Antonietta; Tenconi Paolo; Bressanini Dario
2003-01-01
We propose a general-purpose variance reduction technique for MCMC estimators. The idea is obtained by combining standard variance reduction principles known for regular Monte Carlo simulations (Ripley, 1987) and the zero-variance principle introduced in the physics literature (Assaraf and Caffarel, 1999). The potential of the new idea is illustrated with some toy examples and an application to Bayesian estimation.
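The "standard variance reduction principles" the abstract refers to include control variates, which can be sketched in a few lines. The integrand E[e^U] for U ~ Uniform(0,1) and the choice of U itself as the control are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
u = rng.random(n)
y = np.exp(u)                      # crude samples; E[Y] = e - 1

plain = y.mean()

# Control variate: U itself, whose mean 1/2 is known exactly.
# beta is the estimated optimal coefficient Cov(Y, U) / Var(U).
beta = np.cov(y, u)[0, 1] / u.var(ddof=1)
cv = y - beta * (u - 0.5)          # same mean as Y, much lower variance
controlled = cv.mean()
```

Because e^U and U are almost perfectly correlated, the controlled estimator's standard deviation drops by roughly an order of magnitude at no extra sampling cost.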
Advanced Variance Reduction for Global k-Eigenvalue Simulations in MCNP
Energy Technology Data Exchange (ETDEWEB)
Edward W. Larsen
2008-06-01
The "criticality" or k-eigenvalue of a nuclear system determines whether the system is critical (k=1), or the extent to which it is subcritical (k<1) or supercritical (k>1). Calculations of k are frequently performed at nuclear facilities to determine the criticality of nuclear reactor cores, spent nuclear fuel storage casks, and other fissile systems. These calculations can be expensive, and current Monte Carlo methods have certain well-known deficiencies. In this project, we have developed and tested a new "functional Monte Carlo" (FMC) method that overcomes several of these deficiencies. The current state-of-the-art Monte Carlo k-eigenvalue method estimates the fission source for a sequence of fission generations (cycles), during each of which M particles per cycle are processed. After a series of "inactive" cycles during which the fission source "converges," a series of "active" cycles are performed. For each active cycle, the eigenvalue and eigenfunction are estimated; after N >> 1 active cycles are performed, the results are averaged to obtain estimates of the eigenvalue and eigenfunction and their standard deviations. This method has several disadvantages: (i) the estimate of k depends on the number M of particles per cycle, (ii) for optically thick systems, the eigenfunction estimate may not converge due to undersampling of the fission source, and (iii) since the fission source in any cycle depends on the estimated fission source from the previous cycle (the fission sources in different cycles are correlated), the estimated variance in k is smaller than the real variance. For an acceptably large number M of particles per cycle, the estimate of k is nearly independent of M; this essentially takes care of item (i). Item (ii) can be addressed by taking M sufficiently large, but for optically thick systems a sufficiently large M can easily be unrealistic. Item (iii) cannot be accounted for by taking M or N sufficiently large; it is an inherent deficiency due
A Hilbert Space Approach to Variance Reduction
Szechtman, Roberto
2006-01-01
Elsevier Handbooks in Operations Research and Management Science: Simulation, pp 259-289. In this chapter we explain variance reduction techniques from the Hilbert space standpoint, in the terminating simulation context. We use projection ideas to explain how variance is reduced, and to link different variance reduction techniques. Our focus is on the methods of control variates, conditional Monte Carlo, weighted Monte Carlo, stratification, and Latin hypercube sampling.
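Of the methods listed in the abstract, stratification has the simplest one-dimensional sketch: draw one uniform point inside each of n equal-width strata instead of n unrestricted uniforms. The integrand e^u is an illustrative choice, not from the chapter.

```python
import numpy as np

rng = np.random.default_rng(2)

def plain_mc(n):
    # crude Monte Carlo estimate of E[e^U], U ~ Uniform(0, 1)
    return np.exp(rng.random(n)).mean()

def stratified(n):
    # one uniform draw inside each of n equal-width strata of [0, 1)
    u = (np.arange(n) + rng.random(n)) / n
    return np.exp(u).mean()

# compare the spread of repeated estimates from each scheme
reps = 300
plain_sd = np.std([plain_mc(100) for _ in range(reps)])
strat_sd = np.std([stratified(100) for _ in range(reps)])
```

For a smooth integrand the stratified estimator's error scales like n^(-3/2) rather than n^(-1/2), so even at n = 100 the spread of the stratified estimates is dramatically smaller. Latin hypercube sampling applies the same per-dimension stratification in higher dimensions with a random permutation per coordinate.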
Markov bridges, bisection and variance reduction
DEFF Research Database (Denmark)
Asmussen, Søren; Hobolth, Asger
In this paper we first consider the problem of generating sample paths from a continuous-time Markov chain conditioned on the endpoints, using a new algorithm based on the idea of bisection. Second, we study the potential of the bisection algorithm for variance reduction. In particular, examples are...
Discussion on variance reduction technique for shielding
Energy Technology Data Exchange (ETDEWEB)
Maekawa, Fujio [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment
1998-03-01
As part of the engineering design activity of the International Thermonuclear Experimental Reactor (ITER), a shielding experiment on 316-type stainless steel (SS316) and on the combined SS316/water system was carried out using the D-T neutron source of FNS at the Japan Atomic Energy Research Institute. In these analyses, however, enormous working and computing time was required to determine the weight-window parameters, and the weight-window variance reduction of the MCNP code proved limited and complicated. To avoid this difficulty, the effectiveness of variance reduction by the cell-importance method was investigated. The calculation conditions for all cases are shown. As results, the distribution of the fractional standard deviation (FSD) of the neutron and gamma-ray fluxes along the shield depth is reported. An optimal importance progression exists: when the importance is increased at the same rate as the attenuation of the neutron or gamma-ray flux, optimal variance reduction is achieved. (K.I.)
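The finding that cell importances should grow at the attenuation rate can be demonstrated with a toy slab: five cells, each attenuating the flux by a factor of two, with each surviving particle split in two (at half weight) per cell so the simulated population stays roughly constant with depth. The cell count and survival probability are invented for illustration; this is not the FNS analysis.

```python
import numpy as np

rng = np.random.default_rng(3)
n_cells, p_survive = 5, 0.5
true_T = p_survive ** n_cells          # transmission through 5 cells = 1/32

def analog(n_hist):
    # analog Monte Carlo: a history scores 1 only if it survives every cell
    alive = rng.random((n_hist, n_cells)) < p_survive
    scores = alive.all(axis=1).astype(float)
    return scores.mean(), scores.std(ddof=1)

def split(n_hist):
    # cell importance doubles per cell (matching the attenuation), so each
    # surviving particle is split in two at half weight
    scores = np.empty(n_hist)
    for h in range(n_hist):
        weights = [1.0]
        for _ in range(n_cells):
            survivors = [w for w in weights if rng.random() < p_survive]
            weights = [0.5 * w for w in survivors for _ in (0, 1)]
        scores[h] = sum(weights)       # total weight transmitted
    return scores.mean(), scores.std(ddof=1)

analog_mean, analog_sd = analog(20_000)
split_mean, split_sd = split(2_000)
```

Both estimators are unbiased for the transmission, but the per-history standard deviation of the splitting scheme is much smaller than the analog Bernoulli score, which is the effect the abstract reports for the FSD along the shield depth.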
Dimension reduction based on weighted variance estimate
Institute of Scientific and Technical Information of China (English)
Anonymous
2009-01-01
In this paper, we propose a new estimate for dimension reduction, called the weighted variance estimate (WVE), which includes the sliced average variance estimate (SAVE) as a special case. A bootstrap method is used to select the best estimate from the WVE and to estimate the structure dimension, and this selected estimate usually performs better than existing methods such as sliced inverse regression (SIR) and SAVE. Methods such as SIR and SAVE usually put the same weight on each observation when estimating the central subspace (CS). By introducing a weight function, the WVE puts different weights on different observations according to their distance from the CS. The weight function gives the WVE very good performance in general and in complicated situations, for example when the distribution of the regressor deviates severely from the elliptical distribution that underlies many methods such as SIR. Compared with many existing methods, the WVE is insensitive to the distribution of the regressor. The consistency of the WVE is established, and simulations comparing the performance of the WVE with other existing methods confirm its advantage.
Monte Carlo variance reduction approaches for non-Boltzmann tallies
International Nuclear Information System (INIS)
Quantities that depend on the collective effects of groups of particles cannot be obtained from the standard Boltzmann transport equation. Monte Carlo estimates of these quantities are called non-Boltzmann tallies and have become increasingly important recently. Standard Monte Carlo variance reduction techniques were designed for tallies based on individual particles rather than groups of particles. Experience with non-Boltzmann tallies and analog Monte Carlo has demonstrated the severe limitations of analog Monte Carlo for many non-Boltzmann tallies. In fact, many calculations absolutely require variance reduction methods to achieve practical computation times. Three different approaches to variance reduction for non-Boltzmann tallies are described and shown to be unbiased. The advantages and disadvantages of each of the approaches are discussed.
Methods for variance reduction in Monte Carlo simulations
Bixler, Joel N.; Hokr, Brett H.; Winblad, Aidan; Elpers, Gabriel; Zollars, Byron; Thomas, Robert J.
2016-03-01
Monte Carlo simulations are widely considered to be the gold standard for studying the propagation of light in turbid media. However, due to the probabilistic nature of these simulations, large numbers of photons are often required in order to generate relevant results. Here, we present methods for reducing the variance of the dose distribution in a computational volume. The dose distribution is computed by tracing a large number of rays and tracking the absorption and scattering of the rays within the discrete voxels that comprise the volume. Variance reduction is shown here using quasi-random sampling, interaction forcing for weakly scattering media, and dose smoothing via bilateral filtering. These methods, along with the corresponding performance enhancements, are detailed here.
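Quasi-random sampling replaces pseudorandom draws with a low-discrepancy sequence so that samples fill the domain evenly. A pure-Python sketch using a two-dimensional Halton sequence (radical inverses in bases 2 and 3) to estimate pi from the area of a quarter disc illustrates the idea; the specific sequence and test integrand are illustrative choices, not taken from the paper.

```python
import math

def van_der_corput(i, base):
    """Radical inverse of integer i in the given base; low-discrepancy in [0, 1)."""
    x, denom = 0.0, 1.0
    while i > 0:
        denom *= base
        i, rem = divmod(i, base)
        x += rem / denom
    return x

def pi_estimate(n):
    # Halton points (bases 2 and 3) in the unit square; count hits
    # inside the quarter disc x^2 + y^2 < 1
    hits = sum(van_der_corput(i, 2) ** 2 + van_der_corput(i, 3) ** 2 < 1.0
               for i in range(1, n + 1))
    return 4.0 * hits / n

est = pi_estimate(4096)
```

For smooth integrands the quasi-random error decays nearly as 1/n rather than the 1/sqrt(n) of pseudorandom sampling; for indicator functions like this one the gain is smaller but still substantial.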
Stochastic Variance Reduction Methods for Saddle-Point Problems
Balamurugan, P.; Bach, Francis
2016-01-01
We consider convex-concave saddle-point problems where the objective functions may be split in many components, and extend recent stochastic variance reduction methods (such as SVRG or SAGA) to provide the first large-scale linearly convergent algorithms for this class of problems which is common in machine learning. While the algorithmic extension is straightforward, it comes with challenges and opportunities: (a) the convex minimization analysis does not apply and we use the notion of monot...
Fringe biasing: A variance reduction technique for optically thick meshes
Energy Technology Data Exchange (ETDEWEB)
Smedley-Stevenson, R. P. [AWE PLC, Aldermaston Reading, Berkshire, RG7 4PR (United Kingdom)
2013-07-01
Fringe biasing is a stratified sampling scheme applicable to Monte Carlo thermal radiation transport codes. The thermal emission source in optically thick cells is partitioned into separate contributions from the cell interiors (where the likelihood of the particles escaping the cells is virtually zero) and the 'fringe' regions close to the cell boundaries. Thermal emission in the cell interiors can now be modelled with fewer particles, the remaining particles being concentrated in the fringes so that they are more likely to contribute to the energy exchange between cells. Unlike other techniques for improving the efficiency in optically thick regions (such as random walk and discrete diffusion treatments), fringe biasing has the benefit of simplicity, as the associated changes are restricted to the sourcing routines with the particle tracking routines being unaffected. This paper presents an analysis of the potential for variance reduction achieved from employing the fringe biasing technique. The aim of this analysis is to guide the implementation of this technique in Monte Carlo thermal radiation codes, specifically in order to aid the choice of the fringe width and the proportion of particles allocated to the fringe (which are interrelated) in multi-dimensional simulations, and to confirm that the significant levels of variance reduction achieved in simulations can be understood by studying the behaviour for simple test cases. The variance reduction properties are studied for a single cell in a slab geometry purely absorbing medium, investigating the accuracy of the scalar flux and current tallies on one of the interfaces with the surrounding medium. (authors)
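The fringe-biasing idea can be reduced to a one-dimensional toy: in a purely absorbing cell of optical depth tau, a particle emitted at depth x below the surface escapes with probability e^(-x), so deep-interior emission contributes almost nothing. Stratifying the emission depth into a "fringe" and an "interior" stratum, with most particles allocated to the fringe, cuts the variance of the cell's escape tally. The depth, fringe width, and allocation fraction below are illustrative choices, not the paper's recommendations.

```python
import numpy as np

rng = np.random.default_rng(4)
tau = 10.0                               # optical depth of the cell
true_escape = (1.0 - np.exp(-tau)) / tau # mean escape probability

def analog(n):
    x = rng.random(n) * tau              # uniform emission depth
    s = np.exp(-x)                       # expected escape score from depth x
    return s.mean(), s.std(ddof=1) / np.sqrt(n)

def fringe_biased(n, w=3.0, frac=0.8):
    nf = int(n * frac)                   # most particles go to the fringe [0, w]
    ni = n - nf
    sf = np.exp(-rng.random(nf) * w)                 # fringe stratum
    si = np.exp(-(w + rng.random(ni) * (tau - w)))   # interior stratum
    # combine strata with their true probability masses (unbiased)
    est = (w / tau) * sf.mean() + ((tau - w) / tau) * si.mean()
    se = np.sqrt((w / tau) ** 2 * sf.var(ddof=1) / nf
                 + ((tau - w) / tau) ** 2 * si.var(ddof=1) / ni)
    return est, se

analog_est, analog_se = analog(100_000)
fringe_est, fringe_se = fringe_biased(100_000)
```

The stratified estimator remains unbiased because each stratum mean is weighted by its exact probability mass; the variance gain comes from spending few samples where the score is essentially zero.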
AVATAR -- Automatic variance reduction in Monte Carlo calculations
Energy Technology Data Exchange (ETDEWEB)
Van Riper, K.A.; Urbatsch, T.J.; Soran, P.D.; et al.
1997-05-01
AVATAR (Automatic Variance And Time of Analysis Reduction), accessed through the graphical user interface application Justine, is a superset of MCNP that automatically invokes THREEDANT for a three-dimensional deterministic adjoint calculation on a mesh independent of the Monte Carlo geometry, calculates weight windows, and runs MCNP. Computational efficiency increases by a factor of 2 to 5 for a three-detector oil well logging tool model. Human efficiency increases dramatically, since AVATAR eliminates the need for deep intuition and hours of tedious handwork.
A comparison of variance reduction techniques for radar simulation
Divito, A.; Galati, G.; Iovino, D.
Importance sampling, the extreme value technique (EVT), and its generalization (G-EVT) were compared with respect to reducing the variance of radar simulation estimates. Importance sampling has a greater potential for including a priori information in the simulation experiment, and thereby reducing the estimation errors; this feature is paid for by a lack of generality of the simulation procedure. The EVT is valid only when a probability tail is to be estimated (false-alarm problems) and requires, as the only a priori information, that the variate considered belong to the exponential class. The G-EVT, which introduces a shape parameter to be estimated (when unknown), attains smaller estimation errors than the EVT. The G-EVT and, to a greater extent, the EVT lead to a straightforward and general simulation procedure for probability tail estimation.
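Importance sampling for a probability tail, the false-alarm setting the abstract describes, has a classic sketch: to estimate P(Z > 4) for standard normal Z, sample from a normal shifted to the threshold and reweight by the density ratio. The threshold and shift are illustrative; this is not the radar model of the paper.

```python
import math
import numpy as np

rng = np.random.default_rng(5)
t = 4.0
true_p = 0.5 * math.erfc(t / math.sqrt(2))   # P(Z > 4), about 3.2e-5

n = 200_000
z = rng.normal(loc=t, scale=1.0, size=n)     # sample from the shifted density N(t, 1)
# likelihood ratio N(0,1)-pdf / N(t,1)-pdf = exp(-t z + t^2 / 2)
w = np.exp(-t * z + t * t / 2)
est = np.mean(w * (z > t))
```

Half the shifted samples land in the rare region, so the estimator achieves a relative error of a fraction of a percent here; crude Monte Carlo would need on the order of billions of samples for the same accuracy.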
Calculation of Scale of Fluctuation and Variance Reduction Function
Institute of Scientific and Technical Information of China (English)
Yan Shuwang; Guo Linping
2015-01-01
The scale of fluctuation is one of the vital parameters for applying random field theory to the reliability analysis of geotechnical engineering. In the present study, the fluctuation function method and the weighted curve fitting method are presented to make the calculation simpler and more accurate. The vertical scales of fluctuation of typical soil layers of Tianjin Port were calculated from a large body of engineering geotechnical investigation data and can serve as guidance for other projects in this area. Meanwhile, the influences of the sample interval and of the type of soil index on the scale of fluctuation were analyzed, from which the principle for determining the scale of fluctuation when the sample interval changes was defined. Since the scale of fluctuation is a basic attribute reflecting the spatial variability of soil, the scales of fluctuation calculated from different soil indexes should be essentially the same. The non-correlation distance method was improved, and the principle of determining the variance reduction function was also discussed.
Problems of variance reduction in the simulation of random variables
International Nuclear Information System (INIS)
The definition of the uniform linear generator is given, and some of the most commonly used tests for evaluating the uniformity and independence of the obtained determinations are listed. The problem of calculating, through simulation, some moment W of a function of a random variable is considered. The Monte Carlo method enables the moment W to be estimated and the estimator variance to be obtained. Some techniques for the construction of other estimators of W with reduced variance are introduced.
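Among the classic reduced-variance estimators for a moment like E[f(U)] are antithetic variates: pair each uniform draw u with its mirror 1-u, so that for a monotone f the two scores are negatively correlated. The integrand e^U is an illustrative choice, not taken from the abstract.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 100_000
u = rng.random(n)

plain = np.exp(u)                           # crude samples of E[e^U] = e - 1
# antithetic pairs: average f(u) with f(1 - u); negative correlation
# between the two scores cancels most of the variance (two f-evaluations
# per pair, so the fair comparison is variance per function evaluation)
anti = 0.5 * (np.exp(u) + np.exp(1.0 - u))
```

For e^U the pairwise covariance is strongly negative, so the antithetic estimator's standard deviation is several times smaller than the crude one even after accounting for the doubled evaluation cost.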
Variance reduction techniques in the simulation of Markov processes
International Nuclear Information System (INIS)
We study a functional r of the stationary distribution of a homogeneous Markov chain. Since the analytical calculation of r is often difficult or impossible, it is reasonable to estimate r by simulation. A consistent estimator r(n) of r is obtained for a chain with a countable state space. By suitably modifying the estimator r(n), one obtains a new consistent estimator with a smaller variance than r(n). The same result is obtained in the case of a finite state space.
Deflation as a Method of Variance Reduction for Estimating the Trace of a Matrix Inverse
Gambhir, Arjun Singh; Orginos, Kostas
2016-01-01
Many fields require computing the trace of the inverse of a large, sparse matrix. The typical method used for such computations is the Hutchinson method, a Monte Carlo (MC) average over matrix quadratures. To improve its convergence, several variance reduction techniques have been proposed. In this paper, we study the effects of deflating the near-null singular value space. We make two main contributions. First, we analyze the variance of the Hutchinson method as a function of the deflated singular values and vectors. Although this provides good intuition in general, by additionally assuming that the singular vectors are random unitary matrices we arrive at concise formulas for the deflated variance that involve only the variance and mean of the singular values. We make the remarkable observation that deflation may increase variance for Hermitian matrices but not for non-Hermitian ones. This is a rare, if not unique, property where non-Hermitian matrices outperform Hermitian ones. The theory can b...
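A dense-matrix sketch of the Hutchinson estimator with deflation: handle the smallest eigenpairs of a symmetric positive-definite matrix exactly (they dominate tr(A^-1)) and estimate only the remainder stochastically. The matrix construction, probe count, and use of an explicit inverse are illustrative simplifications; the paper targets large sparse matrices where A^-1 z comes from a solver, and, as the abstract notes, deflation is not guaranteed to reduce variance in the Hermitian case.

```python
import numpy as np

rng = np.random.default_rng(7)

# symmetric positive-definite test matrix with a few very small eigenvalues
n, k, m = 60, 4, 400
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
eigs = np.concatenate([np.full(k, 1e-3), rng.uniform(1.0, 2.0, n - k)])
A = (Q * eigs) @ Q.T
Ainv = np.linalg.inv(A)                  # stand-in for a linear solve
true_trace = np.trace(Ainv)

vals, vecs = np.linalg.eigh(A)
U = vecs[:, :k]                          # deflate the k smallest eigenpairs
deflated_part = np.sum(1.0 / vals[:k])   # their exact contribution to tr(A^-1)

# Hutchinson on the deflated complement: E[z^T P A^-1 P z] = tr(P A^-1 P)
Z = rng.choice([-1.0, 1.0], size=(n, m)) # Rademacher probe vectors
P = np.eye(n) - U @ U.T                  # projector onto the remaining space
PZ = P @ Z
samples = np.einsum('ij,ij->j', PZ, Ainv @ PZ)
est = deflated_part + samples.mean()
```

The deflated subspace contributes almost all of the trace here, and that part is computed exactly, so the stochastic remainder has both a small mean and a small variance.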
Zoubair, M.; El Bardouni, T.; El Gonnouni, L.; Boulaich, Y.; El Bakkari, B.; El Younoussi, C.
2012-01-01
Computation time is an important and problematic parameter in Monte Carlo simulations: it is inversely related to the statistical error, which motivates the use of variance reduction techniques. These techniques play an important role in reducing uncertainties and improving the statistical results. Several variance reduction techniques have been developed; the best known are transport cutoffs, interaction forcing, bremsstrahlung splitting, and Russian roulette. The use of a phase space also appears appropriate for greatly reducing the computing time. In this work, we applied these techniques to a linear accelerator (LINAC) using the MCNPX Monte Carlo code, which offers a rich palette of variance reduction techniques. We investigated the various cards related to the variance reduction techniques provided by MCNPX. The parameter values found in this study can be used efficiently in the MCNPX code. Final calculations are performed in two steps linked by a phase space. Results show that, compared with direct simulation (with neither variance reduction nor a phase space), the adopted method improves the simulation efficiency by a factor greater than 700.
Medina, Juan Camilo
This dissertation offers computational and theoretical advances for optimization under uncertainty problems that utilize a probabilistic framework for addressing such uncertainties, and adopt a probabilistic performance as objective function. Emphasis is placed on applications that involve potentially complex numerical and probability models. A generalized approach is adopted, treating the system model as a "black-box" and relying on stochastic simulation for evaluating the probabilistic performance. This approach can impose, though, an elevated computational cost, and two of the advances offered in this dissertation aim at decreasing the computational burden associated with stochastic simulation when integrated with optimization applications. The first one develops an adaptive implementation of importance sampling (a popular variance reduction technique) by sharing information across the iterations of the numerical optimization algorithm. The system model evaluations from the current iteration are utilized to formulate importance sampling densities for subsequent iterations with only a small additional computational effort. The characteristics of these densities as well as the specific model parameters these densities span are explicitly optimized. The second advancement focuses on adaptive tuning of a kriging metamodel to replace the computationally intensive system model. A novel implementation is considered, establishing a metamodel with respect to both the uncertain model parameters as well as the design variables, offering significant computational savings. Additionally, the adaptive selection of certain characteristics of the metamodel, such as support points or order of basis functions, is considered by utilizing readily available information from the previous iteration of the optimization algorithm. The third advancement extends to a different application and considers the assessment of the appropriateness of different candidate robust designs. A novel
Larsen, Ryan J.; Newman, Michael; Nikolaidis, Aki
2016-11-01
Multiple methods have been proposed for using Magnetic Resonance Spectroscopy Imaging (MRSI) to measure representative metabolite concentrations of anatomically-defined brain regions. Generally these methods require spectral analysis, quantitation of the signal, and reconciliation with anatomical brain regions. However, to simplify processing pipelines, it is practical to only include those corrections that significantly improve data quality. Of particular importance for cross-sectional studies is knowledge about how much each correction lowers the inter-subject variance of the measurement, thereby increasing statistical power. Here we use a data set of 72 subjects to calculate the reduction in inter-subject variance produced by several corrections that are commonly used to process MRSI data. Our results demonstrate that significant reductions of variance can be achieved by performing water scaling, accounting for tissue type, and integrating MRSI data over anatomical regions rather than simply assigning MRSI voxels with anatomical region labels.
Maucec, M
2005-01-01
Monte Carlo simulations for nuclear logging applications are considered to be highly demanding transport problems. In this paper, the implementation of weight-window variance reduction schemes in a 'manual' fashion to improve the efficiency of calculations for a neutron logging tool is presented. Th
Automatic variance reduction for Monte Carlo simulations via the local importance function transform
Energy Technology Data Exchange (ETDEWEB)
Turner, S.A.
1996-02-01
The author derives a transformed transport problem that can be solved theoretically by analog Monte Carlo with zero variance. However, the Monte Carlo simulation of this transformed problem cannot be implemented in practice, so he develops a method for approximating it. The approximation to the zero-variance method consists of replacing the continuous adjoint transport solution in the transformed transport problem by a piecewise continuous approximation containing local biasing parameters obtained from a deterministic calculation. He uses the transport and collision processes of the transformed problem to bias distance-to-collision and selection of post-collision energy groups and trajectories in a traditional Monte Carlo simulation of "real" particles. He refers to the resulting variance reduction method as the Local Importance Function Transform (LIFT) method. He demonstrates the efficiency of the LIFT method for several 3-D, linearly anisotropic scattering, one-group, and multigroup problems. In these problems the LIFT method is shown to be more efficient than the AVATAR scheme, which is one of the best variance reduction techniques currently available in a state-of-the-art Monte Carlo code. For most of the problems considered, the LIFT method produces higher figures of merit than AVATAR, even when the LIFT method is used as a "black box". There are some problems that cause trouble for most variance reduction techniques, and the LIFT method is no exception. For example, the author demonstrates that problems with voids, or low density regions, can cause a reduction in the efficiency of the LIFT method. However, the LIFT method still performs better than survival biasing and AVATAR in these difficult cases.
International Nuclear Information System (INIS)
Numerous variance reduction techniques, such as splitting/Russian roulette, weight windows, and the exponential transform, exist for improving the efficiency of Monte Carlo transport calculations. Typically, however, while these methods reduce the variance in the problem area of interest, they tend to increase the variance in other, presumably less important, regions. As such, they tend not to be as effective in Monte Carlo calculations that require the minimization of the variance everywhere. Recently, "Local" Exponential Transform (LET) methods have been developed as a means of approximating the zero-variance solution. A numerical solution to the adjoint diffusion equation is used, along with an exponential representation of the adjoint flux in each cell, to determine "local" biasing parameters. These parameters are then used to bias the forward Monte Carlo transport calculation in a manner similar to the conventional exponential transform, but such that the transform parameters are local in space and energy, not global. Results have shown that the Local Exponential Transform often offers a significant improvement over conventional geometry splitting/Russian roulette with weight windows. Since the biasing parameters for the Local Exponential Transform were determined from a low-order solution to the adjoint transport problem, the LET has been applied in problems where it was desirable to minimize the variance in a detector region. The purpose of this paper is to show that by basing the LET method upon a low-order solution to the forward transport problem, one can instead obtain biasing parameters which will minimize the maximum variance in a Monte Carlo transport calculation.
Use experiences of MCNP in nuclear energy study. 2. Review of variance reduction techniques
Energy Technology Data Exchange (ETDEWEB)
Sakurai, Kiyoshi; Yamamoto, Toshihiro (eds.) [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment]
1998-03-01
The 'MCNP Use Experience' Working Group was established in 1996 under the Special Committee on Nuclear Code Evaluation. This year's main activity of the working group has been focused on the review of variance reduction techniques in Monte Carlo calculations. The working group dealt with the variance reduction techniques for (1) neutron and gamma-ray transport calculation of a fusion reactor system, (2) concept design of a nuclear transmutation system using an accelerator, (3) JMTR core calculation, (4) calculation of the prompt neutron decay constant, (5) neutron and gamma-ray transport calculation for exposure evaluation, (6) neutron and gamma-ray transport calculation of a shielding system, etc. Furthermore, the working group started an activity to compile a 'Guideline of Monte Carlo Calculation', which will become a standard in the future. The appendices of this report include this 'Guideline', the use experience of MCNP 4B, and examples of Monte Carlo calculations of high-energy charged particles. The 11 papers are indexed individually. (J.P.N.)
PWR Facility Dose Modeling Using MCNP5 and the CADIS/ADVANTG Variance-Reduction Methodology
Energy Technology Data Exchange (ETDEWEB)
Blakeman, Edward D [ORNL; Peplow, Douglas E. [ORNL; Wagner, John C [ORNL; Murphy, Brian D [ORNL; Mueller, Don [ORNL
2007-09-01
The feasibility of modeling a pressurized-water-reactor (PWR) facility and calculating dose rates at all locations within the containment and adjoining structures using MCNP5 with mesh tallies is presented. Calculations of dose rates resulting from neutron and photon sources from the reactor (operating and shut down for various periods) and the spent fuel pool, as well as for the photon source from the primary coolant loop, were all of interest. Identification of the PWR facility, development of the MCNP-based model and automation of the run process, calculation of the various sources, and development of methods for visually examining mesh tally files and extracting dose rates were all a significant part of the project. Advanced variance reduction, which was required because of the size of the model and the large amount of shielding, was performed via the CADIS/ADVANTG approach. This methodology uses an automatically generated three-dimensional discrete ordinates model to calculate adjoint fluxes from which MCNP weight windows and source bias parameters are generated. Investigative calculations were performed using a simple block model and a simplified full-scale model of the PWR containment, in which the adjoint source was placed in various regions. In general, it was shown that placement of the adjoint source on the periphery of the model provided adequate results for regions reasonably close to the source (e.g., within the containment structure for the reactor source). A modification to the CADIS/ADVANTG methodology was also studied in which a global adjoint source is weighted by the reciprocal of the dose response calculated by an earlier forward discrete ordinates calculation. This method showed improved results over those using the standard CADIS/ADVANTG approach, and its further investigation is recommended for future efforts.
Adewunmi, Adrian; Byrne, Mike
2008-01-01
This paper investigates the reduction of variance associated with a simulation output performance measure, using the sequential sampling method with a minimum number of simulation replications, for a class of JIT (just-in-time) warehousing system called crossdocking. We initially used the sequential sampling method to attain a desired 95% confidence interval half-width of ±0.5 for our chosen performance measure (total usage cost, given a mean maximum level of 157,000 pounds and a mean minimum level of 149,000 pounds). From our results, we achieved a 95% confidence interval half-width of ±2.8 for this measure (total usage cost, with an average mean value of 115,000 pounds). The sequential sampling method, however, requires a huge number of simulation replications to reduce the variance of our simulation output to the target level. Arena (version 11) simulation software was used to conduct this study.
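The sequential sampling procedure itself, keep drawing replications until the confidence-interval half-width of the mean falls below a target, can be sketched in a few lines. This is an illustrative stand-alone loop with an invented Gaussian output, not the Arena crossdocking model.

```python
import math
import random

random.seed(8)

def sequential_mean(sample, target_hw, z=1.96, n0=30, n_max=1_000_000):
    """Draw replications until the z-level half-width of the mean <= target_hw."""
    data = [sample() for _ in range(n0)]      # pilot replications
    while True:
        n = len(data)
        mean = sum(data) / n
        var = sum((x - mean) ** 2 for x in data) / (n - 1)
        hw = z * math.sqrt(var / n)           # CI half-width of the sample mean
        if hw <= target_hw or n >= n_max:
            return mean, hw, n
        data.append(sample())                 # one more replication

# toy simulation output: N(10, 2^2); target half-width 0.1
mean, hw, n = sequential_mean(lambda: random.gauss(10.0, 2.0), target_hw=0.1)
```

The required replication count grows as (z * sigma / target half-width)^2, which is why the abstract's tight ±0.5 target demanded "a huge number of simulation replications".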
Directory of Open Access Journals (Sweden)
Vincenza Di Stefano
2009-11-01
The Multicomb variance reduction technique has been introduced into direct Monte Carlo simulation of submicrometric semiconductor devices. The method has been implemented for bulk silicon. The simulations show that the statistical variance of hot electrons is reduced at some computational cost. The method is efficient and easy to implement in existing device simulators.
International Nuclear Information System (INIS)
The ant colony method is used to control the application of variance reduction techniques to the simulation of clinical electron linear accelerators used in cancer therapy. In particular, splitting and Russian roulette, two standard variance reduction methods, are considered. The approach can be applied to any accelerator in a straightforward way and, in addition, permits investigation of the 'hot' regions of the accelerator, information that is essential for developing a source model of this therapy tool.
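Splitting and Russian roulette can be summarized in a short, generic weight-window routine. The window bounds and survival weight below are illustrative choices, not values from the accelerator study:

```python
import random

def apply_weight_window(particles, w_low=0.25, w_high=4.0, rng=None):
    """Unbiased splitting / Russian roulette on (weight, state) pairs.

    Splitting divides a heavy particle into copies carrying equal shares
    of its weight; roulette kills a light particle with probability
    1 - w / w_surv and boosts survivors to w_surv. Either way the
    expected total weight is preserved, so the tally stays unbiased.
    """
    rng = rng or random.Random(0)
    w_surv = 1.0   # survival weight after roulette (illustrative choice)
    out = []
    for w, state in particles:
        if w > w_high:                        # split
            n = int(w / w_high) + 1
            out.extend([(w / n, state)] * n)
        elif w < w_low:                       # Russian roulette
            if rng.random() < w / w_surv:
                out.append((w_surv, state))
        else:
            out.append((w, state))
    return out
```

Roulette spends less time on unimportant histories while splitting devotes more samples to important ones; the ant colony layer in the paper decides where in the accelerator geometry these two moves are applied.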
Variance reduction techniques for 14 MeV neutron streaming problem in rectangular annular bent duct
Energy Technology Data Exchange (ETDEWEB)
Ueki, Kotaro [Ship Research Inst., Mitaka, Tokyo (Japan)
1998-03-01
The Monte Carlo method is a powerful technique for solving a wide range of radiation transport problems. Its features are that it can solve the Boltzmann transport equation almost without approximation, and that the complexity of the systems to be treated rarely becomes a problem. However, Monte Carlo calculations are always accompanied by statistical errors called variance. In shielding calculations, the standard deviation or fractional standard deviation (FSD) is used frequently. The expression for the FSD is shown. Radiation shielding problems are roughly divided into transmission through deep layers and streaming problems. In streaming problems, the large differences in particle weights, which depend on the particle histories, worsen the FSD of the Monte Carlo calculation. The streaming experiment in a 14 MeV neutron rectangular annular bent duct, a typical streaming benchmark experiment carried out at the OKTAVIAN facility of Osaka University, was analyzed with MCNP 4B, and reduction of the variance, or FSD, was attempted. The experimental system is shown. The analysis model for MCNP 4B, the input data, and the results of the analysis are reported, and a comparison with the experimental results is examined. (K.I.)
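The abstract notes only that the expression for the FSD is shown; the standard MCNP-style definition, reproduced here for reference from common usage, is:

```latex
\mathrm{FSD} = \frac{S_{\bar{x}}}{\bar{x}}, \qquad
\bar{x} = \frac{1}{N}\sum_{i=1}^{N} x_i, \qquad
S^2 = \frac{1}{N-1}\sum_{i=1}^{N}\left(x_i - \bar{x}\right)^2, \qquad
S_{\bar{x}}^2 = \frac{S^2}{N},
```

where \(x_i\) is the score of history \(i\) and \(N\) is the number of histories. The FSD therefore falls off only as \(1/\sqrt{N}\), which is why variance reduction rather than brute force is needed in streaming problems.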
Importance Sampling Variance Reduction for the Fokker-Planck Rarefied Gas Particle Method
Collyer, Benjamin; Lockerby, Duncan
2015-01-01
Models and methods that are able to accurately and efficiently predict the flows of low-speed rarefied gases are in high demand, due to the increasing ability to manufacture devices at micro and nano scales. One such model and method is a Fokker-Planck approximation to the Boltzmann equation, which can be solved numerically by a stochastic particle method. The stochastic nature of this method leads to noisy estimates of the thermodynamic quantities one wishes to sample when the signal is small in comparison to the thermal velocity of the gas. Recently, Gorji et al. have proposed a method that is able to greatly reduce the variance of the estimators by creating a correlated stochastic process which acts as a control variate for the noisy estimates. However, there are potential difficulties when the geometry of the problem is complex, as the method requires the density to be solved for independently. Importance sampling is a variance reduction technique that has already been shown to successfully redu...
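The control-variate idea, using a correlated quantity with a known mean to cancel noise, can be illustrated on a toy integrand. This is a generic sketch, not the Fokker-Planck scheme of Gorji et al.:

```python
import math
import random
import statistics

def control_variate_demo(n=20000, seed=0):
    """Estimate E[exp(U)] for U ~ Uniform(0,1) using U itself as the
    control variate, whose mean 1/2 is known exactly. beta is the
    variance-optimal coefficient Cov(X, C) / Var(C), estimated from
    the same sample."""
    rng = random.Random(seed)
    xs, cs = [], []
    for _ in range(n):
        u = rng.random()
        xs.append(math.exp(u))   # noisy estimator X
        cs.append(u)             # correlated control C with known mean
    mx, mc = statistics.mean(xs), statistics.mean(cs)
    cov = sum((x - mx) * (c - mc) for x, c in zip(xs, cs)) / (n - 1)
    beta = cov / statistics.variance(cs)
    corrected = [x - beta * (c - 0.5) for x, c in zip(xs, cs)]
    return (statistics.mean(corrected),
            statistics.variance(corrected),
            statistics.variance(xs))

est, var_cv, var_plain = control_variate_demo()
```

Because exp(U) and U are highly correlated, the corrected estimator keeps the same mean (e - 1) while its variance drops by more than an order of magnitude; the rarefied-gas method exploits the same mechanism with a correlated particle process in place of U.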
Variance reduction techniques for a quantitative understanding of the \\Delta I = 1/2 rule
Endress, Eric
2012-01-01
The role of the charm quark in the dynamics underlying the \Delta I = 1/2 rule for kaon decays can be understood by studying the dependence of kaon decay amplitudes on the charm quark mass, using an effective \Delta S = 1 weak Hamiltonian in which the charm is kept as an active degree of freedom. Overlap fermions are employed in order to avoid renormalization problems, as well as to allow access to the deep chiral regime. Quenched results in the GIM limit have shown that a significant part of the enhancement is purely due to low-energy QCD effects; variance reduction techniques based on low-mode averaging were instrumental in determining the relevant weak effective low-energy couplings in this case. Moving away from the GIM limit requires the computation of diagrams containing closed quark loops. We report on our progress in employing a combination of low-mode averaging and stochastic volume sources in order to control these contributions. Results showing a significant improvement in the statistical signal are pre...
Application of variance reduction technique to nuclear transmutation system driven by accelerator
Energy Technology Data Exchange (ETDEWEB)
Sasa, Toshinobu [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment
1998-03-01
In Japan, the basic policy is to dispose of the high-level radioactive waste arising from spent nuclear fuel in stable deep strata after vitrification. If the useful elements in the waste can be separated and utilized, resources are used effectively, and high economic efficiency and safety of geological disposal can be expected. The Japan Atomic Energy Research Institute proposed a hybrid-type transmutation system, in which a high-intensity proton accelerator and a subcritical fast core are combined, or a nuclear reactor optimized exclusively for transmutation. The tungsten target, minor actinide nitride fuel transmutation system and the molten minor actinide chloride salt target fuel transmutation system are outlined. Conceptual figures of both systems are shown. As the method of analysis, Version 2.70 of the Lahet Code System, developed by Los Alamos National Laboratory in the USA, was adopted. When carrying out the analysis of an accelerator-driven subcritical core in the energy range below 20 MeV, variance reduction techniques must be applied. (K.I.)
Advanced digital signal processing and noise reduction
Vaseghi, Saeed V
2008-01-01
Digital signal processing plays a central role in the development of modern communication and information processing systems. The theory and application of signal processing is concerned with the identification, modelling and utilisation of patterns and structures in a signal process. The observation signals are often distorted, incomplete and noisy and therefore noise reduction, the removal of channel distortion, and replacement of lost samples are important parts of a signal processing system. The fourth edition of Advanced Digital Signal Processing and Noise Reduction updates an
International Nuclear Information System (INIS)
We develop a 'Local' Exponential Transform method which distributes the particles nearly uniformly across the system in Monte Carlo transport calculations. An exponential approximation to the continuous transport equation is used in each mesh cell to formulate biasing parameters. The biasing parameters, which resemble those of the conventional exponential transform, tend to produce a uniform sampling of the problem geometry when applied to a forward Monte Carlo calculation, and thus they help to minimize the maximum variance of the flux. Unlike the conventional exponential transform, the biasing parameters are spatially dependent, and are automatically determined from a forward diffusion calculation. We develop two versions of the forward Local Exponential Transform method, one with spatial biasing only, and one with spatial and angular biasing. The method is compared to conventional geometry splitting/Russian roulette for several sample one-group problems in X-Y geometry. The forward Local Exponential Transform method with angular biasing is found to produce better results than geometry splitting/Russian roulette in terms of minimizing the maximum variance of the flux. (orig.)
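The spirit of the exponential transform, stretching the sampling distribution toward the region of interest and carrying a compensating weight, can be shown on the simplest possible case, a purely absorbing slab. This is a toy sketch with a single hand-picked biasing parameter; the paper's method instead uses spatially dependent parameters derived from a forward diffusion calculation:

```python
import math
import random
import statistics

def transmission_scores(sigma, thickness, sigma_star, n, seed=0):
    """Score the transmission probability exp(-sigma * thickness) through
    a purely absorbing slab, sampling free paths from a stretched cross
    section sigma_star <= sigma. Each escaping particle carries the
    likelihood-ratio weight exp(-(sigma - sigma_star) * thickness);
    sigma_star == sigma recovers the analog game."""
    rng = random.Random(seed)
    w_escape = math.exp(-(sigma - sigma_star) * thickness)
    scores = []
    for _ in range(n):
        s = -math.log(rng.random()) / sigma_star   # biased free path
        scores.append(w_escape if s > thickness else 0.0)
    return scores

analog = transmission_scores(1.0, 5.0, 1.0, 100000)
biased = transmission_scores(1.0, 5.0, 0.2, 100000, seed=1)
```

Both estimators have mean exp(-5), but the transformed one turns a rare unit score into a frequent small-weight score, cutting the variance by well over an order of magnitude in this configuration.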
Advanced sludge reduction and phosphorous removal process
Institute of Scientific and Technical Information of China (English)
Anonymous
2006-01-01
An advanced sludge reduction process, i.e. a combined sludge reduction and phosphorus removal process, was developed. The results show that excellent sludge reduction and biological phosphorus removal can be achieved in this system. When the chemical oxygen demand ρ(COD) is 332-420 mg/L, the ammonia concentration ρ(NH3-N) is 30-40 mg/L and the total phosphorus concentration ρ(TP) is 6.0-9.0 mg/L in the influent, the system still ensures ρ(COD) < 23 mg/L, ρ(NH3-N) < 3.2 mg/L and ρ(TP) < 0.72 mg/L in the effluent. Besides, when the dissolved oxygen concentration ρ(DO) is around 1.0 mg/L, sludge production is less than 0.140 g per 1 g of COD consumed, and phosphorus removal exceeds 91%. Also, 48.4% of total nitrogen is removed by simultaneous nitrification and denitrification.
Milias-Argeitis, Andreas; Lygeros, John; Khammash, Mustafa
2014-07-01
We address the problem of estimating steady-state quantities associated to systems of stochastic chemical kinetics. In most cases of interest, these systems are analytically intractable, and one has to resort to computational methods to estimate stationary values of cost functions. In this work, we introduce a novel variance reduction algorithm for stochastic chemical kinetics, inspired by related methods in queueing theory, in particular the use of shadow functions. Using two numerical examples, we demonstrate the efficiency of the method for the calculation of steady-state parametric sensitivities and evaluate its performance in comparison to other estimation methods.
Institute of Scientific and Technical Information of China (English)
Anonymous
2010-01-01
A new noise reduction method for nonlinear signals based on maximum variance unfolding (MVU) is proposed. The noisy signal is first embedded into a high-dimensional phase space based on phase space reconstruction theory, and then the manifold learning algorithm MVU is used to perform nonlinear dimensionality reduction on the phase-space data in order to separate the low-dimensional manifold representing the attractor from the noise subspace. Finally, the noise-reduced signal is obtained by reconstructing the low-dimensional manifold. Simulation results for the Lorenz system show that the proposed MVU-based noise reduction method outperforms the KPCA-based method and has the advantages of simple parameter estimation and low parameter sensitivity. The proposed method is applied to fault detection in a vibration signal from the rotor-stator of an aero engine with a slight rubbing fault. The denoised results show that the slight rubbing features overwhelmed by noise can be effectively extracted by the proposed noise reduction method.
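The first step of the pipeline, phase space reconstruction by delay embedding, is easy to sketch. The embedding dimension and delay below are illustrative choices, and the MVU dimensionality-reduction step itself is not reproduced here:

```python
import math

def delay_embed(signal, dim, tau):
    """Takens-style delay embedding: map a scalar time series into
    dim-dimensional phase space vectors [x(t), x(t+tau), ..., x(t+(dim-1)tau)].
    This sketches only the reconstruction step that precedes MVU."""
    n = len(signal) - (dim - 1) * tau
    return [[signal[i + j * tau] for j in range(dim)] for i in range(n)]

x = [math.sin(0.04 * k) for k in range(500)]
X = delay_embed(x, dim=3, tau=5)   # 490 phase-space vectors of length 3
```

In the paper's method, a manifold-learning step (MVU) is then run on rows of X to isolate the attractor, and the cleaned signal is read back off the low-dimensional manifold.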
Energy Technology Data Exchange (ETDEWEB)
Sampson, Andrew; Le Yi; Williamson, Jeffrey F. [Department of Radiation Oncology, Virginia Commonwealth University, Richmond, Virginia 23298 (United States)
2012-02-15
heterogeneous doses. On an AMD 1090T processor, computing times of 38 and 21 sec were required to achieve an average statistical uncertainty of 2% within the prostate (1 x 1 x 1 mm³) and breast (0.67 x 0.67 x 0.8 mm³) CTVs, respectively. Conclusions: CMC supports an additional 38- to 60-fold improvement in average efficiency relative to conventional uncorrelated MC techniques, although some voxels experience no gain or even efficiency losses. However, for the two investigated case studies, the maximum variance within clinically significant structures was always reduced (on average by a factor of 6) in the therapeutic dose range. CMC takes only seconds to produce an accurate, high-resolution, low-uncertainty dose distribution for the low-energy PSB implants investigated in this study.
Doi, Suhail A R; Barendregt, Jan J; Khan, Shahjahan; Thalib, Lukman; Williams, Gail M
2015-11-01
This article examines an improved alternative to the random effects (RE) model for meta-analysis of heterogeneous studies. It is shown that the known issues of underestimation of the statistical error and spuriously overconfident estimates with the RE model can be resolved by the use of an estimator under the fixed effect model assumption with a quasi-likelihood based variance structure - the IVhet model. Extensive simulations confirm that this estimator retains a correct coverage probability and a lower observed variance than the RE model estimator, regardless of heterogeneity. When the proposed IVhet method is applied to the controversial meta-analysis of intravenous magnesium for the prevention of mortality after myocardial infarction, the pooled OR is 1.01 (95% CI 0.71-1.46) which not only favors the larger studies but also indicates more uncertainty around the point estimate. In comparison, under the RE model the pooled OR is 0.71 (95% CI 0.57-0.89) which, given the simulation results, reflects underestimation of the statistical error. Given the compelling evidence generated, we recommend that the IVhet model replace both the FE and RE models. To facilitate this, it has been implemented into free meta-analysis software called MetaXL which can be downloaded from www.epigear.com.
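A minimal sketch of the idea, a fixed-effect (inverse-variance) point estimate paired with a heterogeneity-inflated variance, is given below. The DerSimonian-Laird tau² estimator and the exact form of the quasi-likelihood variance are assumptions made for illustration and may differ in detail from the MetaXL implementation:

```python
import math

def ivhet_pool(effects, variances):
    """Pool per-study effect estimates with fixed-effect weights, but
    report an uncertainty inflated for between-study heterogeneity
    (an IVhet-style estimator; exact formulas are a sketch)."""
    w = [1.0 / v for v in variances]
    sw = sum(w)
    pooled = sum(wi * y for wi, y in zip(w, effects)) / sw
    # DerSimonian-Laird moment estimate of between-study variance tau^2
    q = sum(wi * (y - pooled) ** 2 for wi, y in zip(w, effects))
    c = sw - sum(wi * wi for wi in w) / sw
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)
    # normalized FE weights applied to heterogeneity-inflated variances
    var = sum((wi / sw) ** 2 * (vi + tau2)
              for wi, vi in zip(w, variances))
    return pooled, math.sqrt(var)

est, se = ivhet_pool([0.10, 0.50, 0.90], [0.01, 0.02, 0.05])
```

Unlike the RE model, the point estimate here still favors the larger (low-variance) studies, while the standard error widens whenever tau² > 0, matching the qualitative behavior the abstract reports for the magnesium meta-analysis.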
Golosio, Bruno; Schoonjans, Tom; Brunetti, Antonio; Oliva, Piernicola; Masala, Giovanni Luca
2014-03-01
The simulation of X-ray imaging experiments is often performed using deterministic codes, which can be relatively fast and easy to use. However, such codes are generally not suitable for the simulation of even slightly more complex experimental conditions, involving, for instance, first-order or higher-order scattering, X-ray fluorescence emissions, or more complex geometries, particularly for experiments that combine spatial resolution with spectral information. In such cases, simulations are often performed using codes based on the Monte Carlo method. In a simple Monte Carlo approach, the interaction position of an X-ray photon and the state of the photon after an interaction are obtained simply according to the theoretical probability distributions. This approach may be quite inefficient because the final channels of interest may include only a limited region of space or photons produced by a rare interaction, e.g., fluorescent emission from elements with very low concentrations. In the field of X-ray fluorescence spectroscopy, this problem has been solved by combining the Monte Carlo method with variance reduction techniques, which can reduce the computation time by several orders of magnitude. In this work, we present a C++ code for the general simulation of X-ray imaging and spectroscopy experiments, based on the application of the Monte Carlo method in combination with variance reduction techniques, with a description of sample geometry based on quadric surfaces. We describe the benefits of the object-oriented approach in terms of code maintenance, the flexibility of the program for the simulation of different experimental conditions and the possibility of easily adding new modules. Sample applications in the fields of X-ray imaging and X-ray spectroscopy are discussed. Catalogue identifier: AERO_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AERO_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland
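A representative variance-reduction trick in this field is interaction forcing: every photon is forced to interact inside the sample by drawing its depth from a truncated exponential and carrying the true interaction probability as a weight, so rare events (such as fluorescence from trace elements) are sampled every history instead of almost never. The sketch below is generic and hypothetical, not the actual XRMC code:

```python
import math
import random

def forced_interaction(mu, length, n, seed=0):
    """Force each photon to interact within a target of thickness `length`
    (attenuation coefficient mu) by sampling the interaction depth from
    the exponential distribution truncated to [0, length). Each history
    carries the weight p_int = 1 - exp(-mu * length), the true probability
    of interacting at all, which keeps the estimator unbiased."""
    rng = random.Random(seed)
    p_int = 1.0 - math.exp(-mu * length)
    depths, weights = [], []
    for _ in range(n):
        u = rng.random()
        depths.append(-math.log(1.0 - u * p_int) / mu)  # truncated sample
        weights.append(p_int)
    return depths, weights
```

For an optically thin target (say mu * length = 0.01), an analog simulation would waste about 99% of its histories on photons that pass straight through; here every history scores, with a weight of about 0.01.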
Cycle update : advanced fuels and technologies for emissions reduction
Energy Technology Data Exchange (ETDEWEB)
Smallwood, G. [National Research Council of Canada, Ottawa, ON (Canada)
2009-07-01
This paper provided a summary of key achievements of the Program of Energy Research and Development advanced fuels and technologies for emissions reduction (AFTER) program over the funding cycle from fiscal year 2005/2006 to 2008/2009. The purpose of the paper was to inform interested parties of recent advances in knowledge and in science and technology capacities in a concise manner. The paper discussed the high level research and development themes of the AFTER program through the following 4 overarching questions: how could advanced fuels and internal combustion engine designs influence emissions; how could emissions be reduced through the use of engine hardware including aftertreatment devices; how do real-world duty cycles and advanced technology vehicles operating on Canadian fuels compare with existing technologies, models and estimates; and what are the health risks associated with transportation-related emissions. It was concluded that the main issues regarding the use of biodiesel blends in current technology diesel engines are the lack of consistency in product quality; shorter shelf life of biodiesel due to poorer oxidative stability; and a need to develop characterization methods for the final oxygenated product because most standard methods are developed for hydrocarbons and are therefore inadequate. 2 tabs., 13 figs.
Advanced MMIS Toward Substantial Reduction in Human Errors in NPPs
International Nuclear Information System (INIS)
This paper aims to give an overview of methods to inherently prevent human errors and to effectively mitigate the consequences of such errors by securing defense-in-depth during plant management through the advanced man-machine interface system (MMIS). It is needless to stress the significance of human error reduction during an accident in nuclear power plants (NPPs). Unexpected shutdowns caused by human errors not only threaten nuclear safety but also severely undermine public acceptance of nuclear power. We must recognize that human errors remain possible, since humans are not inherently perfect, particularly under stressful conditions. However, we have the opportunity to improve this situation through advanced information and communication technologies, on the basis of lessons learned from our experiences. As important lessons, the authors explain key issues associated with automation, the man-machine interface, operator support systems, and procedures. Upon this investigation, we outline the concept and technical factors needed to develop advanced automation, operation and maintenance support systems, and computer-based procedures using wired/wireless technology. It should be noted that the ultimate responsibility for nuclear safety obviously belongs to humans, not to machines. Therefore, safety culture, including education and training, which is a kind of organizational factor, should be emphasized as well. With regard to safety culture for human error reduction, several issues that we are facing these days are described. We expect the ideas of the advanced MMIS proposed in this paper to guide the future direction of related research and ultimately supplement the safety of NPPs
Potential for Landing Gear Noise Reduction on Advanced Aircraft Configurations
Thomas, Russell H.; Nickol, Craig L.; Burley, Casey L.; Guo, Yueping
2016-01-01
The potential of significantly reducing aircraft landing gear noise is explored for aircraft configurations with engines installed above the wings or the fuselage. An innovative concept is studied that does not alter the main gear assembly itself but does shorten the main strut and integrates the gear in pods whose interior surfaces are treated with acoustic liner. The concept is meant to achieve maximum noise reduction so that main landing gears can be eliminated as a major source of airframe noise. By applying this concept to an aircraft configuration with 2025 entry-into-service technology levels, it is shown that compared to noise levels of current technology, the main gear noise can be reduced by 10 EPNL dB, bringing the main gear noise close to a floor established by other components such as the nose gear. The assessment of the noise reduction potential accounts for design features for the advanced aircraft configuration and includes the effects of local flow velocity in and around the pods, gear noise reflection from the airframe, and reflection and attenuation from acoustic liner treatment on pod surfaces and doors. A technical roadmap for maturing this concept is discussed, and the possible drag increase at cruise due to the addition of the pods is identified as a challenge, which needs to be quantified and minimized possibly with the combination of detailed design and application of drag reduction technologies.
Advanced Acoustic Blankets for Improved Aircraft Interior Noise Reduction Project
National Aeronautics and Space Administration — In this project advanced acoustic blankets for improved low frequency interior noise control in aircraft will be developed and demonstrated. The improved...
Advanced Acoustic Blankets for Improved Aircraft Interior Noise Reduction Project
National Aeronautics and Space Administration — The objective of the proposed Phase II research effort is to develop heterogeneous (HG) blankets for improved sound reduction in aircraft structures. Phase I...
Peter Carr; Liuren Wu
2004-01-01
We propose a direct and robust method for quantifying the variance risk premium on financial assets. We theoretically and numerically show that the risk-neutral expected value of the return variance, also known as the variance swap rate, is well approximated by the value of a particular portfolio of options. Ignoring the small approximation error, the difference between the realized variance and this synthetic variance swap rate quantifies the variance risk premium. Using a large options data...
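The options-portfolio approximation has a standard discretized form (the VIX-style log-contract replication). The sketch below prices that portfolio on an idealized Black-Scholes surface with constant volatility, where the synthetic swap rate should recover σ²; the strike grid and the zero-interest-rate assumption are illustrative simplifications:

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_price(spot, strike, expiry, vol, call):
    """Black-Scholes price with zero rates and dividends."""
    d1 = (math.log(spot / strike) + 0.5 * vol * vol * expiry) / (vol * math.sqrt(expiry))
    d2 = d1 - vol * math.sqrt(expiry)
    if call:
        return spot * norm_cdf(d1) - strike * norm_cdf(d2)
    return strike * norm_cdf(-d2) - spot * norm_cdf(-d1)

def synthetic_swap_rate(spot, expiry, vol, k_lo=20.0, k_hi=500.0, dk=0.1):
    """Discretized log-contract replication of the variance swap rate:
    2/T * sum over out-of-the-money options of dK / K^2 * price(K)."""
    total, k = 0.0, k_lo
    while k <= k_hi:
        total += dk / (k * k) * bs_price(spot, k, expiry, vol, call=(k >= spot))
        k += dk
    return 2.0 * total / expiry

rate = synthetic_swap_rate(100.0, 0.5, 0.20)
```

On this flat-volatility surface the portfolio value comes out close to 0.04 = σ², i.e. the approximation error the authors describe as small is indeed small; on real option data the same sum gives the risk-neutral expected variance against which realized variance is compared.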
Recent Advances in Electrical Resistance Preheating of Aluminum Reduction Cells
Ali, Mohamed Mahmoud; Kvande, Halvor
2016-06-01
There are two main preheating methods in use today for aluminum reduction cells. One is based on electrical resistance preheating with a thin bed of small coke and/or graphite particles between the anodes and the cathode carbon blocks. The other is flame preheating, where two or more gas or oil burners are used. Electrical resistance preheating is the oldest method, but is still frequently used by different aluminum producers. Many improvements have been made to this method by different companies over the last decade. In this paper, important points pertaining to the preparation and preheating of these cells, as well as measurements made during the preheating process and evaluation of the preheating performance, are illustrated. The preheating times of these cells were found to be between 36 h and 96 h for cell currents between 176 kA and 406 kA, while the resistance bed thickness was between 13 mm and 60 mm. The average cathode surface temperature at the end of the preheating was usually between 800°C and 950°C. The effect of the preheating methods on cell life is unclear and no quantifiable conclusions can be drawn. Some work carried out in the area of mathematical modeling is also discussed. It is concluded that more studies of real preheated-cell situations, based on actual measurements, are needed. The expected development of electrical resistance preheating of aluminum reduction cells is also summarized.
Energy Technology Data Exchange (ETDEWEB)
Blazquez, J.; Montalvo, C.; Balbas, M.; Garcia-Berrocal, A.
2011-07-01
Traditional techniques for the propagation of variance are not very reliable, because the uncertainties can reach 100% in relative value. For this reason, less conventional methods are used, such as the Beta distribution, fuzzy logic, and the Monte Carlo method.
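A minimal Monte Carlo propagation of variance, of the kind contrasted here with the traditional linear formula, looks like this. The model f(x, y) = x/y and the input distributions are invented for illustration:

```python
import math
import random

def propagate(n=100000, seed=0):
    """Monte Carlo propagation of uncertainty for the toy model
    f(x, y) = x / y with x ~ N(10, 2^2) and y ~ N(10, 1^2): sample the
    inputs, push each draw through the model, and read the output mean
    and standard deviation directly from the sample."""
    rng = random.Random(seed)
    vals = []
    for _ in range(n):
        x = rng.gauss(10.0, 2.0)
        y = rng.gauss(10.0, 1.0)
        vals.append(x / y)
    m = sum(vals) / n
    var = sum((v - m) ** 2 for v in vals) / (n - 1)
    return m, math.sqrt(var)

mean_f, sd_f = propagate()
```

The first-order (traditional) formula gives sd ≈ sqrt((σx/ȳ)² + (x̄σy/ȳ²)²) ≈ 0.22 here; the sampled spread is close but also captures the nonlinear contribution that the linearized propagation misses, which is the kind of discrepancy that motivates the Monte Carlo approach when relative uncertainties are large.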
Ian Martin
2011-01-01
The large asset price jumps that took place during 2008 and 2009 disrupted volatility derivatives markets and caused the single-name variance swap market to dry up completely. This paper defines and analyzes a simple variance swap, a relative of the variance swap that in several respects has more desirable properties. First, simple variance swaps are robust: they can be easily priced and hedged even if prices can jump. Second, simple variance swaps supply a more accurate measure of market-imp...
Byrne, Vicky; Orndoff, Evelyne; Poritz, Darwin; Schlesinger, Thilini
2013-01-01
All human space missions require significant logistical mass and volume that will become an excessive burden for long duration missions beyond low Earth orbit. The goal of the Advanced Exploration Systems (AES) Logistics Reduction & Repurposing (LRR) project is to bring new ideas and technologies that will enable human presence in farther regions of space. The LRR project has five tasks: 1) Advanced Clothing System (ACS) to reduce clothing mass and volume, 2) Logistics to Living (L2L) to repurpose existing cargo, 3) Heat Melt Compactor (HMC) to reprocess materials in space, 4) Trash to Gas (TTG) to extract useful gases from trash, and 5) Systems Engineering and Integration (SE&I) to integrate these logistical components. The current International Space Station (ISS) crew wardrobe has already evolved not only to reduce some of the logistical burden but also to address crew preference. The ACS task is to find ways to further reduce this logistical burden while examining human response to different types of clothes. The ACS task has been broken into a series of studies on length of wear of various garments: 1) three small studies conducted through other NASA projects (MMSEV, DSH, HI-SEAS) focusing on length of wear of garments treated with an antimicrobial finish; 2) a ground study, which is the subject of this report, addressing both length of wear and subject perception of various types of garments worn during aerobic exercise; and 3) an ISS study replicating the ground study, and including every day clothing to collect information on perception in reduced gravity in which humans experience physiological changes. The goal of the ground study is first to measure how long people can wear the same exercise garment, depending on the type of fabric and the presence of antimicrobial treatment, and second to learn why. Human factors considerations included in the study consist of the Institutional Review Board approval, test protocol and participants' training, and a web
Variance bounding Markov chains
Roberts, Gareth O.; Jeffrey S. Rosenthal
2008-01-01
We introduce a new property of Markov chains, called variance bounding. We prove that, for reversible chains at least, variance bounding is weaker than, but closely related to, geometric ergodicity. Furthermore, variance bounding is equivalent to the existence of usual central limit theorems for all L2 functionals. Also, variance bounding (unlike geometric ergodicity) is preserved under the Peskun order. We close with some applications to Metropolis–Hastings algorithms.
Institute of Scientific and Technical Information of China (English)
王宏健; 王晶; 曲丽萍; 刘振业
2013-01-01
A FastSLAM algorithm based on variance reduction of particle weights is presented in order to counteract the loss of estimation accuracy in AUV (autonomous underwater vehicle) localization caused by particle degeneracy and by the sample impoverishment that results from resampling in standard FastSLAM. The variance of the particle weights is decreased by generating an adaptive exponential fading factor, derived from the idea of the cooling function in simulated annealing. The effective particle number is increased by applying FastSLAM with simulated-annealing variance reduction to the navigation and localization of the AUV, replacing the resampling step of standard FastSLAM. The kinematic model of the AUV, the feature model and the measurement models of the sensors are established, and feature extraction is performed with the Hough transform. An AUV simultaneous localization and mapping experiment using the simulated-annealing variance reduction FastSLAM was based on sea trial data. The results indicate that the described method maintains the diversity of the particles while weakening degeneracy, and at the same time enhances the accuracy and stability of the AUV navigation and localization system.
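The core trick, flattening the particle weights with a fading (tempering) factor so that fewer particles are effectively wasted, can be sketched generically. The factor below is fixed for illustration, whereas the paper generates it adaptively from an annealing-style cooling schedule:

```python
def temper_weights(weights, lam):
    """Flatten normalized particle weights by raising them to a fading
    factor lam in (0, 1] and renormalizing. lam < 1 lowers the weight
    variance and so raises the effective particle number, reducing
    degeneracy without resampling (a generic sketch, not the paper's
    exact adaptive schedule)."""
    tempered = [w ** lam for w in weights]
    s = sum(tempered)
    return [t / s for t in tempered]

def effective_n(weights):
    """Effective particle number 1 / sum(w_i^2) for normalized weights."""
    return 1.0 / sum(w * w for w in weights)
```

For a skewed weight set such as [0.7, 0.1, 0.1, 0.05, 0.05], tempering with lam = 0.5 nearly doubles the effective particle number, which is the mechanism by which the method avoids the impoverishment introduced by repeated resampling.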
Wang, Zhi-hua; Zhou, Jun-hu; Zhang, Yan-wei; Lu, Zhi-min; Fan, Jian-ren; Cen, Ke-fa
2005-01-01
Pulverized coal reburning, ammonia injection and advanced reburning in a pilot-scale drop tube furnace were investigated. A premix of petroleum gas, air and NH3 was burned in a porous gas burner to generate the needed flue gas. Four kinds of pulverized coal were fed as reburning fuel at a constant rate of 1 g/min. The coal reburning process parameters, including 15%-25% reburn heat input, a temperature range from 1100 °C to 1400 °C, and also the carbon in fly ash, coal fineness, reburn zone stoichiometric ratio, etc., were investigated. At 25% reburn heat input, a maximum of 47% NO reduction with Yanzhou coal was obtained by pure coal reburning. The optimal temperature for reburning is about 1300 °C and a fuel-rich stoichiometric ratio is essential; greater coal fineness can slightly enhance the reburning ability. The temperature window for ammonia injection is about 700 °C to 1100 °C. CO can improve the NH3 reduction ability at lower temperatures. During advanced reburning, 72.9% NO reduction was measured. To achieve more than 70% NO reduction, selective non-catalytic NOx reduction (SNCR) alone would need an NH3/NO stoichiometric ratio larger than 5, while advanced reburning uses only the common dose of ammonia found in conventional SNCR technology. A mechanism study shows that the oxidation of CO can improve the decomposition of H2O, which enriches the radical pools, igniting the overall reactions at lower temperatures. PMID:15682503
Chabuda, Krzysztof; Leroux, Ian; Demkowicz-Dobrzanski, Rafal
2016-01-01
In atomic clocks, the frequency of a local oscillator is stabilized based on the feedback signal obtained by periodically interrogating an atomic reference system. The instability of the clock is characterized by the Allan variance, a measure widely used to describe the noise of frequency standards. We provide an explicit method to find the ultimate bound on the Allan variance of an atomic clock in the most general scenario where N atoms are prepared in an arbitrarily entangled state and arbi...
Institute of Scientific and Technical Information of China (English)
ZAYAS Pérez Teresa; GEISSLER Gunther; HERNANDEZ Fernando
2007-01-01
The removal of the natural organic matter present in coffee processing wastewater through chemical coagulation-flocculation and advanced oxidation processes (AOP) has been studied. The effectiveness of the removal of natural organic matter using commercial flocculants and UV/H2O2, UV/O3 and UV/H2O2/O3 processes was determined under acidic conditions. For each of these processes, different operational conditions were explored to optimize the treatment efficiency of the coffee wastewater. Coffee wastewater is characterized by a high chemical oxygen demand (COD) and low total suspended solids. The outcomes of coffee wastewater treatment using coagulation-flocculation and photodegradation processes were assessed in terms of the reduction of COD, color, and turbidity. It was found that a reduction in COD of 67% could be realized when the coffee wastewater was treated by chemical coagulation-flocculation with lime and coagulant T-1. When coffee wastewater was treated by coagulation-flocculation in combination with UV/H2O2, a COD reduction of 86% was achieved, although only after prolonged UV irradiation. Of the three advanced oxidation processes considered, UV/H2O2, UV/O3 and UV/H2O2/O3, the treatment with UV/H2O2/O3 was the most effective, with an efficiency of color, turbidity and further COD removal of 87% when applied to the flocculated coffee wastewater.
Zayas Pérez, Teresa; Geissler, Gunther; Hernandez, Fernando
2007-01-01
The removal of the natural organic matter present in coffee processing wastewater through chemical coagulation-flocculation and advanced oxidation processes (AOP) has been studied. The effectiveness of the removal of natural organic matter using commercial flocculants and UV/H2O2, UV/O3 and UV/H2O2/O3 processes was determined under acidic conditions. For each of these processes, different operational conditions were explored to optimize the treatment efficiency of the coffee wastewater. Coffee wastewater is characterized by a high chemical oxygen demand (COD) and low total suspended solids. The outcomes of coffee wastewater treatment using coagulation-flocculation and photodegradation processes were assessed in terms of reduction of COD, color, and turbidity. It was found that a reduction in COD of 67% could be realized when the coffee wastewater was treated by chemical coagulation-flocculation with lime and coagulant T-1. When coffee wastewater was treated by coagulation-flocculation in combination with UV/H2O2, a COD reduction of 86% was achieved, although only after prolonged UV irradiation. Of the three advanced oxidation processes considered, UV/H2O2, UV/O3 and UV/H2O2/O3, we found that the treatment with UV/H2O2/O3 was the most effective, with an efficiency of color, turbidity and further COD removal of 87%, when applied to the flocculated coffee wastewater. PMID:17918591
Impacts of natural organic matter on perchlorate removal by an advanced reduction process.
Duan, Yuhang; Batchelor, Bill
2014-01-01
Perchlorate can be destroyed by Advanced Reduction Processes (ARPs) that combine chemical reductants (e.g., sulfite) with activating methods (e.g., UV light) in order to produce highly reactive reducing free radicals that are capable of rapid and effective perchlorate reduction. However, natural organic matter (NOM) exists widely in the environment and has the potential to influence perchlorate reduction by ARPs that use UV light as the activating method. Batch experiments were conducted to obtain data on the impacts of NOM and wavelength of light on destruction of perchlorate by the ARPs that use sulfite activated by UV light produced by low-pressure mercury lamps (UV-L) or by KrCl excimer lamps (UV-KrCl). The results indicate that NOM strongly inhibits perchlorate removal by both ARPs, because it competes with sulfite for UV light. Even though the absorbance of sulfite is much higher at 222 nm than at 254 nm, a smaller amount of perchlorate was removed with the UV-KrCl lamp (222 nm) than with the UV-L lamp (254 nm). The results of this study will help to develop the proper way to apply the ARPs as practical water treatment processes. PMID:24521418
Recent Advances in Inorganic Heterogeneous Electrocatalysts for Reduction of Carbon Dioxide.
Zhu, Dong Dong; Liu, Jin Long; Qiao, Shi Zhang
2016-05-01
In view of the climate changes caused by the continuously rising levels of atmospheric CO2 , advanced technologies associated with CO2 conversion are highly desirable. In recent decades, electrochemical reduction of CO2 has been extensively studied since it can reduce CO2 to value-added chemicals and fuels. Considering the sluggish reaction kinetics of the CO2 molecule, efficient and robust electrocatalysts are required to promote this conversion reaction. Here, recent progress and opportunities in inorganic heterogeneous electrocatalysts for CO2 reduction are discussed, from the viewpoint of both experimental and computational aspects. Based on elemental composition, the inorganic catalysts presented here are classified into four groups: metals, transition-metal oxides, transition-metal chalcogenides, and carbon-based materials. However, despite encouraging accomplishments made in this area, substantial advances in CO2 electrolysis are still needed to meet the criteria for practical applications. Therefore, in the last part, several promising strategies, including surface engineering, chemical modification, nanostructured catalysts, and composite materials, are proposed to facilitate the future development of CO2 electroreduction. PMID:26996295
Minimum variance geographic sampling
Terrell, G. R. (Principal Investigator)
1980-01-01
Resource inventories require samples with geographical scatter, sometimes not as widely spaced as would be hoped. A simple model of correlation over distance is used to construct a minimum variance unbiased estimate of population means. The fitting procedure is illustrated from data used to estimate Missouri corn acreage.
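The correlation-over-distance idea in this abstract can be illustrated with a generalized least squares mean. This is a minimal sketch assuming a hypothetical exponential correlation model, Corr(i, j) = exp(-d_ij / rho); the abstract does not specify the model's actual form, and the function names are illustrative:

```python
import numpy as np

def gls_mean(y, coords, rho=1.0):
    """Minimum variance unbiased (GLS) estimate of the mean under an
    assumed exponential correlation-over-distance model:
    Corr(i, j) = exp(-d_ij / rho)."""
    y = np.asarray(y, dtype=float)
    coords = np.asarray(coords, dtype=float)
    # pairwise distances between sample locations
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    S = np.exp(-d / rho)                     # correlation matrix
    Sinv = np.linalg.inv(S)
    ones = np.ones(len(y))
    w = Sinv @ ones / (ones @ Sinv @ ones)   # GLS weights, sum to 1
    return float(w @ y)
```

For widely separated samples the correlations vanish and the estimate reduces to the ordinary sample mean; closely spaced, correlated samples are downweighted, which is the source of the variance reduction.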
Conversations across Meaning Variance
Cordero, Alberto
2013-01-01
Progressive interpretations of scientific theories have long been denounced as naive, because of the inescapability of meaning variance. The charge reportedly applies to recent realist moves that focus on theory-parts rather than whole theories. This paper considers the question of what "theory-parts" of epistemic significance (if any) relevantly…
Braun, W. John
2012-01-01
The Analysis of Variance is often taught in introductory statistics courses, but it is not clear that students really understand the method. This is because the derivation of the test statistic and p-value requires a relatively sophisticated mathematical background which may not be well-remembered or understood. Thus, the essential concept behind…
International Nuclear Information System (INIS)
Purpose: To investigate the prognostic significance of tumor volume reduction rate (TVRR) after preoperative chemoradiotherapy (CRT) in locally advanced rectal cancer (LARC). Methods and Materials: In total, 430 primary LARC (cT3–4) patients who were treated with preoperative CRT and curative radical surgery between May 2002 and March 2008 were analyzed retrospectively. Pre- and post-CRT tumor volumes were measured using three-dimensional region-of-interest MR volumetry. Tumor volume reduction rate was determined using the equation TVRR (%) = (pre-CRT tumor volume − post-CRT tumor volume) × 100/pre-CRT tumor volume. The median follow-up period was 64 months (range, 27–99 months) for survivors. Endpoints were disease-free survival (DFS) and overall survival (OS). Results: The median TVRR was 70.2% (mean, 64.7% ± 22.6%; range, 0–100%). Downstaging (ypT0–2N0M0) occurred in 183 patients (42.6%). The 5-year DFS and OS rates were 77.7% and 86.3%, respectively. In the analysis that included pre-CRT and post-CRT tumor volumes and TVRR as continuous variables, only TVRR was an independent prognostic factor. Tumor volume reduction rate was categorized according to a cutoff value of 45% and included with clinicopathologic factors in the multivariate analysis; ypN status, circumferential resection margin, and TVRR were significant prognostic factors for both DFS and OS. Conclusions: Tumor volume reduction rate was a significant prognostic factor in LARC patients receiving preoperative CRT. Tumor volume reduction rate data may be useful for tailoring surgery and postoperative adjuvant therapy after preoperative CRT.
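The TVRR formula and the 45% cutoff used in the multivariate analysis translate directly into code. A minimal sketch (function names are illustrative):

```python
def tvrr(pre_volume, post_volume):
    """Tumor volume reduction rate (%) as defined in the abstract:
    TVRR = (pre-CRT volume - post-CRT volume) * 100 / pre-CRT volume."""
    if pre_volume <= 0:
        raise ValueError("pre-CRT tumor volume must be positive")
    return (pre_volume - post_volume) * 100.0 / pre_volume

def tvrr_group(pre_volume, post_volume, cutoff=45.0):
    """Dichotomize TVRR at the 45% cutoff reported in the study."""
    return "high" if tvrr(pre_volume, post_volume) >= cutoff else "low"
```

For example, a tumor shrinking from 100 cm3 to 30 cm3 gives a TVRR of 70%, placing the patient in the favorable (high-TVRR) group.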
Institute of Scientific and Technical Information of China (English)
王宏健; 王晶; 曲丽萍; 刘振业
2013-01-01
If the system is nonlinear and the noise is non-Gaussian, the navigation accuracy of dead reckoning (DR) based on extended Kalman filtering (EKF) decreases seriously. To avoid this, a new dead reckoning method based on particle filtering with variance reduction of the weights is presented. The nonlinear kinematic model of an unmanned underwater vehicle (UUV) and the measurement models of its sensors are formulated. The variance of the particles' weights is reduced with an adaptive exponential fading factor produced by the cooling function of the simulated annealing algorithm, which increases the number of effective particles. This method replaces the resampling procedure in the standard particle filtering algorithm. Simulation results with trial data show that, compared with EKF-based dead reckoning, the proposed method avoids the influence of model linearization and non-Gaussian noise, and compared with dead reckoning based on standard particle filtering, it reduces the degree of particle impoverishment caused by resampling, ultimately enhancing the stability and accuracy of the UUV's navigation system.
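The weight-variance reduction step can be sketched as weight tempering. Since the abstract does not give the exact cooling function, the sketch below assumes a hypothetical exponential schedule T(step) = t0 * alpha**step as the fading factor; the effective sample size measures the resulting increase in useful particles:

```python
import numpy as np

def effective_sample_size(w):
    """N_eff = 1 / sum(w_i^2) for normalized particle weights."""
    w = np.asarray(w, dtype=float)
    w = w / w.sum()
    return 1.0 / np.sum(w ** 2)

def temper_weights(w, step, t0=1.0, alpha=0.95):
    """Flatten particle weights with an exponential fading factor
    lam in (0, 1] taken from an ASSUMED annealing cooling schedule
    T(step) = t0 * alpha**step (the paper's schedule is not given).
    Raising weights to lam < 1 shrinks their variance, so more
    particles carry non-negligible weight."""
    w = np.asarray(w, dtype=float)
    lam = t0 * alpha ** step        # cooling function as fading factor
    lam = min(max(lam, 1e-3), 1.0)  # keep the exponent usable
    tw = w ** lam
    return tw / tw.sum()
```

With a degenerate weight set dominated by one particle, tempering visibly raises the effective sample size, which is the effect the abstract uses in place of resampling.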
Update on Risk Reduction Activities for a Liquid Advanced Booster for NASA's Space Launch System
Crocker, Andrew M.; Doering, Kimberly B; Meadows, Robert G.; Lariviere, Brian W.; Graham, Jerry B.
2015-01-01
The stated goals of NASA's Research Announcement for the Space Launch System (SLS) Advanced Booster Engineering Demonstration and/or Risk Reduction (ABEDRR) are to reduce risks leading to an affordable Advanced Booster that meets the evolved capabilities of SLS, and to enable competition by mitigating targeted Advanced Booster risks to enhance SLS affordability. For NASA's SLS ABEDRR procurement, Dynetics, Inc. and Aerojet Rocketdyne (AR) formed a team to offer a wide-ranging set of risk reduction activities and full-scale, system-level hardware demonstrations for an affordable booster approach that meets the evolved capabilities of the SLS. To establish a basis for the risk reduction activities, the Dynetics Team developed a booster design that takes advantage of the flight-proven Apollo-Saturn F-1. Using NASA's vehicle assumptions for the SLS Block 2, a two-engine, F-1-based booster design delivers 150 mT (331 klbm) of payload to LEO, 20 mT (44 klbm) above NASA's requirements. This margin enables a low-cost, robust approach to structural design. During the ABEDRR effort, the Dynetics Team has modified proven Apollo-Saturn components and subsystems to improve affordability and reliability (e.g., reducing parts counts and touch labor, and using lower cost manufacturing processes and materials). The team has built hardware to validate production costs and completed tests to demonstrate that it can meet performance requirements. State-of-the-art manufacturing and processing techniques have been applied to the heritage F-1, resulting in a low recurring cost engine while retaining the benefits of Apollo-era experience. NASA test facilities have been used to perform low-cost risk-reduction engine testing. In early 2014, NASA and the Dynetics Team agreed to move additional large liquid oxygen/kerosene engine work under Dynetics' ABEDRR contract. Also led by AR, the
Spectral Ambiguity of Allan Variance
Greenhall, C. A.
1996-01-01
We study the extent to which knowledge of Allan variance and other finite-difference variances determines the spectrum of a random process. The variance of first differences is known to determine the spectrum. We show that, in general, the Allan variance does not. A complete description of the ambiguity is given.
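The finite-difference definition underlying the Allan variance can be sketched as follows. This is the simple non-overlapping estimator, 0.5 * mean((ybar_{k+1} - ybar_k)^2), applied to fractional-frequency data; the paper's analysis of spectral ambiguity is more general:

```python
import numpy as np

def allan_variance(y, m=1):
    """Non-overlapping Allan variance of fractional-frequency data y
    at averaging factor m: 0.5 * <(ybar_{k+1} - ybar_k)^2>, where ybar
    are successive non-overlapping m-sample averages of y."""
    y = np.asarray(y, dtype=float)
    n = (len(y) // m) * m               # drop the ragged tail
    ybar = y[:n].reshape(-1, m).mean(axis=1)
    d = np.diff(ybar)                   # first differences of averages
    return 0.5 * np.mean(d ** 2)
```

Because the statistic depends only on first differences of cluster averages, distinct spectra can produce identical Allan variance curves, which is the ambiguity the abstract describes.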
Biclustering with heterogeneous variance.
Chen, Guanhua; Sullivan, Patrick F; Kosorok, Michael R
2013-07-23
In cancer research, as in all of medicine, it is important to classify patients into etiologically and therapeutically relevant subtypes to improve diagnosis and treatment. One way to do this is to use clustering methods to find subgroups of homogeneous individuals based on genetic profiles together with heuristic clinical analysis. A notable drawback of existing clustering methods is that they ignore the possibility that the variance of gene expression profile measurements can be heterogeneous across subgroups, and methods that do not consider heterogeneity of variance can lead to inaccurate subgroup prediction. Research has shown that hypervariability is a common feature among cancer subtypes. In this paper, we present a statistical approach that can capture both mean and variance structure in genetic data. We demonstrate the strength of our method in both synthetic data and in two cancer data sets. In particular, our method confirms the hypervariability of methylation level in cancer patients, and it detects clearer subgroup patterns in lung cancer data. PMID:23836637
Ambiguity Aversion and Variance Premium
Jianjun Miao; Bin Wei; Hao Zhou
2012-01-01
This paper offers an ambiguity-based interpretation of the variance premium - the difference between risk-neutral and objective expectations of market return variance - as a compounding effect of both belief distortion and variance differential regarding the uncertain economic regimes. Our approach endogenously generates the variance premium without imposing exogenous stochastic volatility or jumps in the consumption process. Such a framework can reasonably match the mean variance premium as well a...
DEMONSTRATION OF AN ADVANCED INTEGRATED CONTROL SYSTEM FOR SIMULTANEOUS EMISSIONS REDUCTION
Energy Technology Data Exchange (ETDEWEB)
Suzanne Shea; Randhir Sehgal; Ilga Celmins; Andrew Maxson
2002-02-01
The primary objective of the project titled "Demonstration of an Advanced Integrated Control System for Simultaneous Emissions Reduction" was to demonstrate at proof-of-concept scale the use of an online software package, the "Plant Environmental and Cost Optimization System" (PECOS), to optimize the operation of coal-fired power plants by economically controlling all emissions simultaneously. It combines physical models, neural networks, and fuzzy logic control to provide both optimal least-cost boiler setpoints to the boiler operators in the control room, as well as optimal coal blending recommendations designed to reduce fuel costs and fuel-related derates. The goal of the project was to demonstrate that use of PECOS would enable coal-fired power plants to make more economic use of U.S. coals while reducing emissions.
Noise Reduction Potential of Large, Over-the-Wing Mounted, Advanced Turbofan Engines
Berton, Jeffrey J.
2000-01-01
As we look to the future, increasingly stringent civilian aviation noise regulations will require the design and manufacture of extremely quiet commercial aircraft. Indeed, the noise goal for NASA's Aeronautics Enterprise calls for technologies that will help to provide a 20 EPNdB reduction relative to today's levels by the year 2022. Further, the large fan diameters of modern, increasingly higher bypass ratio engines pose a significant packaging and aircraft installation challenge. One design approach that addresses both of these challenges is to mount the engines above the wing. In addition to allowing the performance trend towards large, ultra high bypass ratio cycles to continue, this over-the-wing design is believed to offer noise shielding benefits to observers on the ground. This paper describes the analytical certification noise predictions of a notional, long haul, commercial quadjet transport with advanced, high bypass engines mounted above the wing.
Materials selection of surface coatings in an advanced size reduction facility
International Nuclear Information System (INIS)
A materials selection test program was conducted to characterize optimum interior surface coatings for an advanced size reduction facility. The equipment to be processed by this facility consists of stainless steel apparatus (e.g., glove boxes, piping, and tanks) used for the chemical recovery of plutonium. Test results showed that a primary requirement for a satisfactory coating is ease of decontamination. A closely related concern is the resistance of paint films to nitric acid - plutonium environments. A vinyl copolymer base paint was the only coating, of eight paints tested, with properties that permitted satisfactory decontamination of plutonium and also performed equal to or better than the other paints in the chemical resistance, radiation stability, and impact tests
Advancing Development and Greenhouse Gas Reductions in Vietnam's Wind Sector
Energy Technology Data Exchange (ETDEWEB)
Bilello, D.; Katz, J.; Esterly, S.; Ogonowski, M.
2014-09-01
Clean energy development is a key component of Vietnam's Green Growth Strategy, which establishes a target to reduce greenhouse gas (GHG) emissions from domestic energy activities by 20-30 percent by 2030 relative to a business-as-usual scenario. Vietnam has significant wind energy resources, which, if developed, could help the country reach this target while providing ancillary economic, social, and environmental benefits. Given Vietnam's ambitious clean energy goals and the relatively nascent state of wind energy development in the country, this paper seeks to fulfill two primary objectives: to distill timely and useful information to provincial-level planners, analysts, and project developers as they evaluate opportunities to develop local wind resources; and, to provide insights to policymakers on how coordinated efforts may help advance large-scale wind development, deliver near-term GHG emission reductions, and promote national objectives in the context of a low emission development framework.
EPA RREL's mobile volume reduction unit advances soil washing at four Superfund sites
International Nuclear Information System (INIS)
Research testing of the U.S. Environmental Protection Agency (EPA) Risk Reduction Engineering Laboratory's (RREL) Volume Reduction Unit (VRU) produced data helping advance soil washing as a remedial technology for contaminated soils. Based on research at four Superfund sites, each with a different matrix of organic contaminants, EPA evaluated the soil washing technology and provided information to forecast realistic, full-scale remediation costs. Primarily a research tool, the VRU is RREL's mobile test unit for investigating the breadth of this technology. During a Superfund Innovative Technology Evaluation (SITE) Demonstration at the Escambia Wood Treating Company Site, Pensacola, FL, the VRU treated soil contaminated with pentachlorophenol (PCP) and polynuclear aromatic hydrocarbon-laden creosote (PAH). At the Montana Pole and Treatment Plant Site, Butte, MT, the VRU treated soil containing PCP mixed with diesel oil (measured as total petroleum hydrocarbons) and a trace of dioxin. At the Dover Air Force Base Site, Dover, DE, the VRU treated soil containing JP-4 jet fuel, measured as TPHC. At the Sand Creek Site, Commerce City, CO, the feed soil was contaminated with two pesticides: heptachlor and dieldrin. Less than 10 percent of these pesticides remained in the treated coarse soil fractions.
Saiyed, Naseem H.
2000-01-01
Contents of this presentation include: Advanced Subsonic Technology (AST) goals and general information; Nozzle nomenclature; Nozzle schematics; Photograph of all baselines; Configurations tests and types of data acquired; and Engine cycle and plug geometry impact on EPNL.
Nominal analysis of "variance".
Weiss, David J
2009-08-01
Nominal responses are the natural way for people to report actions or opinions. Because nominal responses do not generate numerical data, they have been underutilized in behavioral research. On those occasions in which nominal responses are elicited, the responses are customarily aggregated over people or trials so that large-sample statistics can be employed. A new analysis is proposed that directly associates differences among responses with particular sources in factorial designs. A pair of nominal responses either matches or does not; when responses do not match, they vary. That analogue to variance is incorporated in the nominal analysis of "variance" (NANOVA) procedure, wherein the proportions of matches associated with sources play the same role as do sums of squares in an ANOVA. The NANOVA table is structured like an ANOVA table. The significance levels of the N ratios formed by comparing proportions are determined by resampling. Fictitious behavioral examples featuring independent groups and repeated measures designs are presented. A Windows program for the analysis is available.
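The match/vary idea with resampling can be sketched for the simplest case of two independent groups. This is a minimal illustration, not the full factorial NANOVA table, and the statistic and names are hypothetical simplifications of the procedure the abstract outlines:

```python
import itertools
import random

def match_proportion(responses):
    """Proportion of matching pairs among all pairs of nominal
    responses -- the NANOVA analogue of a sum of squares."""
    pairs = list(itertools.combinations(responses, 2))
    return sum(a == b for a, b in pairs) / len(pairs)

def nanova_two_group(g1, g2, n_resamples=2000, seed=0):
    """Resampling p-value for a group effect on nominal responses.
    Statistic: mean within-group match proportion minus the pooled
    match proportion (positive when groups respond more consistently
    within themselves than overall)."""
    rng = random.Random(seed)

    def stat(a, b):
        return 0.5 * (match_proportion(a) + match_proportion(b)) \
               - match_proportion(a + b)

    observed = stat(g1, g2)
    pooled = list(g1) + list(g2)
    n1 = len(g1)
    hits = 0
    for _ in range(n_resamples):
        rng.shuffle(pooled)              # permute group labels
        if stat(pooled[:n1], pooled[n1:]) >= observed:
            hits += 1
    return observed, hits / n_resamples
```

When the two groups give perfectly distinct responses, the within-group match proportions are 1, the pooled proportion is lower, and the permutation p-value is small.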
Directory of Open Access Journals (Sweden)
Christelle Pau Ping Wong
2015-10-01
Full Text Available Textile industries consume large volumes of water for dye processing, leading to undesirable toxic dyes in water bodies. Dyestuffs are harmful to human health and aquatic life, and such illnesses as cholera, dysentery, hepatitis A, and hinder the photosynthetic activity of aquatic plants. To overcome this environmental problem, the advanced oxidation process is a promising technique to mineralize a wide range of dyes in water systems. In this work, reduced graphene oxide (rGO was prepared via an advanced chemical reduction route, and its photocatalytic activity was tested by photodegrading Reactive Black 5 (RB5 dye in aqueous solution. rGO was synthesized by dispersing the graphite oxide into the water to form a graphene oxide (GO solution followed by the addition of hydrazine. Graphite oxide was prepared using a modified Hummers’ method by using potassium permanganate and concentrated sulphuric acid. The resulted rGO nanoparticles were characterized using ultraviolet-visible spectrophotometry (UV-Vis, X-ray powder diffraction (XRD, Raman, and Scanning Electron Microscopy (SEM to further investigate their chemical properties. A characteristic peak of rGO-48 h (275 cm−1 was observed in the UV spectrum. Further, the appearance of a broad peak (002, centred at 2θ = 24.1°, in XRD showing that graphene oxide was reduced to rGO. Based on our results, it was found that the resulted rGO-48 h nanoparticles achieved 49% photodecolorization of RB5 under UV irradiation at pH 3 in 60 min. This was attributed to the high and efficient electron transport behaviors of rGO between aromatic regions of rGO and RB5 molecules.
Zhang, Yingying; Zhuang, Yao; Geng, Jinju; Ren, Hongqiang; Xu, Ke; Ding, Lili
2016-04-15
This study investigated the reduction of antibiotic resistance genes (ARGs), intI1, and 16S rRNA genes by advanced oxidation processes (AOPs), namely Fenton oxidation (Fe(2+)/H2O2) and the UV/H2O2 process. The ARGs include sul1, tetX, and tetG from municipal wastewater effluent. The results indicated that the Fenton oxidation and UV/H2O2 process could reduce the selected ARGs effectively. Oxidation by the Fenton process was slightly better than that of the UV/H2O2 method. Particularly, for the Fenton oxidation, under the optimal condition wherein Fe(2+)/H2O2 had a molar ratio of 0.1 and a H2O2 concentration of 0.01 mol L(-1) with a pH of 3.0 and a reaction time of 2 h, 2.58-3.79 logs of target genes were removed. Under the initial effluent pH condition (pH = 7.0), the removal was 2.26-3.35 logs. For the UV/H2O2 process, when the pH was 3.5 with a H2O2 concentration of 0.01 mol L(-1) accompanied by 30 min of UV irradiation, all ARGs could achieve a reduction of 2.8-3.5 logs, and 1.55-2.32 logs at a pH of 7.0. The Fenton oxidation and UV/H2O2 process followed the first-order reaction kinetic model. The removal of target genes was affected by many parameters, including the initial Fe(2+)/H2O2 molar ratio, H2O2 concentration, solution pH, and reaction time. Among these factors, reagent concentrations and pH values are the most important during AOPs. PMID:26815295
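The first-order kinetic model reported here relates an observed log10 removal to a pseudo-first-order rate constant via C/C0 = 10^(-log_removal) = exp(-k t). A small sketch of that conversion (helper names are illustrative):

```python
import math

def first_order_k(log_removal, minutes):
    """Pseudo-first-order rate constant from an observed log10 removal:
    C/C0 = 10**(-log_removal) = exp(-k * t)  =>
    k = ln(10) * log_removal / t."""
    return math.log(10) * log_removal / minutes

def log_removal_at(k, minutes):
    """Predicted log10 removal after `minutes` under the same model."""
    return k * minutes / math.log(10)
```

Under first-order kinetics the log removal scales linearly with time, so a 3-log reduction in 30 min implies roughly a 1.5-log reduction in 15 min at the same rate constant.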
Introduction to variance estimation
Wolter, Kirk M
2007-01-01
We live in the information age. Statistical surveys are used every day to determine or evaluate public policy and to make important business decisions. Correct methods for computing the precision of the survey data and for making inferences to the target population are absolutely essential to sound decision making. Now in its second edition, Introduction to Variance Estimation has for more than twenty years provided the definitive account of the theory and methods for correct precision calculations and inference, including examples of modern, complex surveys in which the methods have been used successfully. The book provides instruction on the methods that are vital to data-driven decision making in business, government, and academe. It will appeal to survey statisticians and other scientists engaged in the planning and conduct of survey research, and to those analyzing survey data and charged with extracting compelling information from such data. It will appeal to graduate students and university faculty who...
Spectral variance of aeroacoustic data
Rao, K. V.; Preisser, J. S.
1981-01-01
An asymptotic technique for estimating the variance of power spectra is applied to aircraft flyover noise data. The results are compared with directly estimated variances and they are in reasonable agreement. The basic time series need not be Gaussian for asymptotic theory to apply. The asymptotic variance formulae can be useful tools both in the design and analysis phase of experiments of this type.
Maximum Variance Hashing via Column Generation
Directory of Open Access Journals (Sweden)
Lei Luo
2013-01-01
item search. Recently, a number of data-dependent methods have been developed, reflecting the great potential of learning for hashing. Inspired by the classic nonlinear dimensionality reduction algorithm—maximum variance unfolding, we propose a novel unsupervised hashing method, named maximum variance hashing, in this work. The idea is to maximize the total variance of the hash codes while preserving the local structure of the training data. To solve the derived optimization problem, we propose a column generation algorithm, which directly learns the binary-valued hash functions. We then extend it using anchor graphs to reduce the computational cost. Experiments on large-scale image datasets demonstrate that the proposed method outperforms state-of-the-art hashing methods in many cases.
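As a simplified illustration of the variance-maximizing objective, a PCA-style hashing sketch (taking signs of maximum-variance projections) can stand in. This is explicitly NOT the paper's column-generation algorithm over anchor graphs, only the underlying "maximize the total variance of the hash codes" idea:

```python
import numpy as np

def variance_hash(X, n_bits=2):
    """Illustrative variance-maximizing hashing: project the data onto
    the top principal directions (directions of maximum variance) and
    take signs as bits. A simple PCA-hashing stand-in for the paper's
    column-generation method."""
    X = np.asarray(X, dtype=float)
    Xc = X - X.mean(axis=0)
    # eigen-decomposition of the covariance matrix; eigh returns
    # eigenvalues in ascending order, so reverse for top directions
    cov = Xc.T @ Xc / len(Xc)
    vals, vecs = np.linalg.eigh(cov)
    W = vecs[:, ::-1][:, :n_bits]
    return (Xc @ W > 0).astype(int)
```

Points on the same side of a maximum-variance direction share a bit, so nearby points tend to collide while well-separated points receive different codes.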
Mechanisms of advanced oxidation processing on bentonite consumption reduction in foundry.
Wang, Yujue; Cannon, Fred S; Komarneni, Sridhar; Voigt, Robert C; Furness, J C
2005-10-01
Prior full-scale foundry data have shown that when an advanced oxidation (AO) process is employed in a green sand system, the foundry needs 20-35% less makeup bentonite clay than when AO is not employed. We herein sought to explore the mechanism of this enhancement and found that AO water displaced the carbon coating of pyrolyzed carbonaceous condensates that otherwise accumulated on the bentonite surface. This was discerned by surface elemental analysis. This AO treatment restored the clay's capacity to adsorb methylene blue (as a measure of its surface charge) and water vapor (as a reflection of its hydrophilic character). In full-scale foundries, these parameters have been tied to improved green compressive strength and mold performance. When baghouse dust from a full-scale foundry received ultrasonic treatment in the lab, 25-30% of the dust classified into the clay-size fraction, whereas only 7% classified this way without ultrasonics. Also, the ultrasonication caused a size reduction of the bentonite due to the delamination of bentonite particles. The average bentonite particle diameter decreased from 4.6 to 3 microm, while the light-scattering surface area increased over 50% after 20 min ultrasonication. This would greatly improve the bonding efficiency of the bentonite according to the classical clay bonding mechanism. As a combined result of these mechanisms, the reduced bentonite consumption in full-scale foundries could be accounted for. PMID:16245849
Noise-Reduction Benefits Analyzed for Over-the-Wing-Mounted Advanced Turbofan Engines
Berton, Jeffrey J.
2000-01-01
As we look to the future, increasingly stringent civilian aviation noise regulations will require the design and manufacture of extremely quiet commercial aircraft. Also, the large fan diameters of modern engines with increasingly higher bypass ratios pose significant packaging and aircraft installation challenges. One design approach that addresses both of these challenges is to mount the engines above the wing. In addition to allowing the performance trend towards large diameters and high bypass ratio cycles to continue, this approach allows the wing to shield much of the engine noise from people on the ground. The Propulsion Systems Analysis Office at the NASA Glenn Research Center at Lewis Field conducted independent analytical research to estimate the noise reduction potential of mounting advanced turbofan engines above the wing. Certification noise predictions were made for a notional long-haul commercial quadjet transport. A large quad was chosen because, even under current regulations, such aircraft sometimes experience difficulty in complying with certification noise requirements with a substantial margin. Also, because of its long wing chords, a large airplane would receive the greatest advantage of any noise-shielding benefit.
Kim, Min-Suk; Won, Hwa-Yeon; Jeong, Jong-Mun; Böcker, Paul; Vergaij-Huizer, Lydia; Kupers, Michiel; Jovanović, Milenko; Sochal, Inez; Ryan, Kevin; Sun, Kyu-Tae; Lim, Young-Wan; Byun, Jin-Moo; Kim, Gwang-Gon; Suh, Jung-Joon
2016-03-01
In order to optimize yield in DRAM semiconductor manufacturing for 2x nodes and beyond, the (processing induced) overlay fingerprint towards the edge of the wafer needs to be reduced. Traditionally, this is achieved by acquiring denser overlay metrology at the edge of the wafer, to feed field-by-field corrections. Although field-by-field corrections can be effective in reducing localized overlay errors, the requirement for dense metrology to determine the corrections can become a limiting factor due to a significant increase of metrology time and cost. In this study, a more cost-effective solution has been found in extending the regular correction model with an edge-specific component. This new overlay correction model can be driven by an optimized, sparser sampling especially at the wafer edge area, and also allows for a reduction of noise propagation. Lithography correction potential has been maximized, with significantly less metrology needs. Evaluations have been performed, demonstrating the benefit of edge models in terms of on-product overlay performance, as well as cell based overlay performance based on metrology-to-cell matching improvements. Performance can be increased compared to POR modeling and sampling, which can contribute to (overlay based) yield improvement. Based on advanced modeling including edge components, metrology requirements have been optimized, enabling integrated metrology which drives down overall metrology fab footprint and lithography cycle time.
Advances of Ag, Cu, and Ag-Cu alloy nanoparticles synthesized via chemical reduction route
Energy Technology Data Exchange (ETDEWEB)
Tan, Kim Seah; Cheong, Kuan Yew, E-mail: cheong@eng.usm.my [Universiti Sains Malaysia, Electronic Materials Research Group, School of Materials and Mineral Resources Engineering (Malaysia)
2013-04-15
Silver (Ag) and copper (Cu) nanoparticles have shown great potential in a variety of applications due to their excellent electrical and thermal properties, resulting in high market demand. Decreasing their size to the nanometer scale distinctly improves these inherent properties because of the larger surface-to-volume ratio. Ag and Cu nanoparticles also show higher surface reactivity and are therefore used to improve interfacial and catalytic processes. Their melting points are also dramatically lower than those of the bulk metals, so they can be processed at relatively low temperatures. In addition, alloying Ag into Cu to create Ag-Cu alloy nanoparticles can be used to mitigate the rapid oxidation of Cu nanoparticles. A variety of methods has been reported for the synthesis of Ag, Cu, and Ag-Cu alloy nanoparticles. This review covers chemical reduction routes to the synthesis of those nanoparticles. Advances in this technique utilizing different reagents, namely metal salt precursors, reducing agents, and stabilizers, as well as their effects on the respective nanoparticles, are systematically reviewed. Other parameters, such as pH and temperature, which are considered important factors influencing the quality of those nanoparticles, are also reviewed thoroughly.
Saedi, Mehdi; Wolk, Jared
2012-01-01
This paper compares a standard GARCH model with a Constant Elasticity of Variance GARCH model across three major currency pairs and the S&P 500 index. We discuss the advantages and disadvantages of using a more sophisticated model designed to estimate the variance of variance instead of assuming it to be a linear function of the conditional variance. The current stochastic volatility and GARCH analogues rest upon this linear assumption. We are able to confirm through empirical estimation ...
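The linear assumption at issue above can be made concrete with a minimal GARCH(1,1) recursion, in which the conditional variance is a linear function of the previous squared return and the previous conditional variance. This is an illustrative sketch with arbitrary parameter values, not the paper's estimator:

```python
# Minimal GARCH(1,1) conditional-variance recursion (illustrative only):
#   sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}
def garch_variance(returns, omega=0.1, alpha=0.1, beta=0.8):
    # Start at the unconditional variance omega / (1 - alpha - beta).
    sigma2 = [omega / (1.0 - alpha - beta)]
    for r in returns[:-1]:
        sigma2.append(omega + alpha * r * r + beta * sigma2[-1])
    return sigma2
```

A CEV-style extension would instead let the variance of variance depend nonlinearly on the level of the conditional variance, relaxing the linear recursion shown here.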
Wang, Zhi-Hua; Zhou, Jun-Hu; Zhang, Yan-Wei; Lu, Zhi-Min; Fan, Jian-Ren; Cen, Ke-Fa
2005-03-01
Pulverized coal reburning, ammonia injection and advanced reburning were investigated in a pilot-scale drop tube furnace. A premix of petroleum gas, air and NH3 was burned in a porous gas burner to generate the needed flue gas. Four kinds of pulverized coal were fed as reburning fuel at a constant rate of 1 g/min. The coal reburning process parameters, including 15%-25% reburn heat input, a temperature range from 1100 degrees C to 1400 degrees C, as well as carbon in fly ash, coal fineness, reburn zone stoichiometric ratio, etc., were investigated. At 25% reburn heat input, a maximum 47% NO reduction was obtained with Yanzhou coal by pure coal reburning. The optimal temperature for reburning is about 1300 degrees C, a fuel-rich stoichiometric ratio is essential, and finer coal can slightly enhance the reburning ability. The temperature window for ammonia injection is about 700-1100 degrees C. CO can improve the NH3 effectiveness at lower temperature. During advanced reburning, 72.9% NO reduction was measured. To achieve more than 70% NO reduction, Selective Non-Catalytic NO(x) Reduction (SNCR) alone would need an NH3/NO stoichiometric ratio larger than 5, whereas advanced reburning uses only the common dose of ammonia of conventional SNCR technology. Mechanism study shows that the oxidation of CO promotes the decomposition of H2O, which enriches the radical pools that ignite the overall reactions at lower temperatures. PMID:15682503
Volatility investing with variance swaps
Härdle, Wolfgang Karl; Silyakova, Elena
2010-01-01
Traditionally, volatility is viewed as a measure of variability, or risk, of an underlying asset. Recently, however, investors have begun to look at volatility from a different angle, owing to the emergence of a market for new derivative instruments - variance swaps. In this paper we first introduce the general idea of volatility trading using variance swaps. Then we describe valuation and hedging methodology for vanilla variance swaps as well as for the 3rd-generation volatility derivativ...
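The payoff of the vanilla variance swap discussed above is simply notional times the difference between annualized realized variance and the variance strike. A minimal sketch, using illustrative conventions (real contracts pin down day counts, caps, and the annualization factor):

```python
import math

def realized_variance(prices, periods_per_year=252):
    # Annualized realized variance from log returns of a price path.
    logret = [math.log(p1 / p0) for p0, p1 in zip(prices, prices[1:])]
    return periods_per_year * sum(r * r for r in logret) / len(logret)

def variance_swap_payoff(prices, strike_var, notional=1.0):
    # Long variance swap: receive realized variance, pay the strike.
    return notional * (realized_variance(prices) - strike_var)
```

A flat price path realizes zero variance, so a long position simply loses the strike times notional.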
Budget variance analysis using RVUs.
Berlin, M F; Budzynski, M R
1998-01-01
This article details the use of variance analysis as a management tool to evaluate the financial health of the practice. A common financial tool for administrators has been a simple calculation measuring the difference between actual and budgeted financials. Standard cost accounting provides a methodology known as variance analysis to better understand the actual vs. budgeted financial streams. The standard variance analysis has been modified by applying relative value units (RVUs) as standards for the practice. PMID:10387247
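The arithmetic behind an RVU-based variance analysis can be sketched as splitting the total budget variance into a volume component (more or fewer RVUs than budgeted, priced at the budgeted cost per RVU) and a cost component (actual cost versus budgeted cost at the actual RVU volume). The function and figures below are hypothetical illustrations, not taken from the article:

```python
def rvu_variance(actual_rvus, budget_rvus, actual_cost, budget_cost_per_rvu):
    # Volume variance: RVU volume difference valued at the budgeted rate.
    volume_var = (actual_rvus - budget_rvus) * budget_cost_per_rvu
    # Cost (rate) variance: actual cost vs. budgeted cost at actual volume.
    cost_var = actual_cost - actual_rvus * budget_cost_per_rvu
    # Total variance: actual cost vs. the original budget.
    total_var = actual_cost - budget_rvus * budget_cost_per_rvu
    return volume_var, cost_var, total_var
```

By construction the volume and cost components sum to the total variance, which is the decomposition property that makes the analysis useful for isolating causes.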
Fixed effects analysis of variance
Fisher, Lloyd; Birnbaum, Z W; Lukacs, E
1978-01-01
Fixed Effects Analysis of Variance covers the mathematical theory of the fixed effects analysis of variance. The book discusses the theoretical ideas and some applications of the analysis of variance. The text then describes topics such as the t-test; the two-sample t-test; the k-sample comparison of means (one-way analysis of variance); the balanced two-way factorial design without interaction; estimation and factorial designs; and the Latin square. Confidence sets, simultaneous confidence intervals, and multiple comparisons; orthogonal and nonorthogonal designs; and multiple regression analysis
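The k-sample comparison of means mentioned above reduces to a single F statistic: the between-group mean square divided by the within-group mean square. A minimal illustrative sketch:

```python
def one_way_anova_F(groups):
    # One-way (fixed effects) ANOVA: F = MS_between / MS_within.
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    # Between-group sum of squares: group means vs. grand mean.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares: observations vs. their group mean.
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

A large F indicates that at least two group means differ by more than within-group noise would suggest; the p-value comes from the F distribution with (k-1, n-k) degrees of freedom.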
Directory of Open Access Journals (Sweden)
Park TS
2015-07-01
Tai Sun Park,1 Yoonki Hong,2 Jae Seung Lee,1 Sang Young Oh,3 Sang Min Lee,3 Namkug Kim,3 Joon Beom Seo,3 Yeon-Mok Oh,1 Sang-Do Lee,1 Sei Won Lee1 1Department of Pulmonary and Critical Care Medicine and Clinical Research Center for Chronic Obstructive Airway Diseases, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea; 2Department of Internal Medicine, College of Medicine, Kangwon National University, Chuncheon, Korea; 3Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea Purpose: Endobronchial valve (EBV) therapy is increasingly being seen as a therapeutic option for advanced emphysema, but its clinical utility in Asian populations, who may have different phenotypes to other ethnic populations, has not been assessed. Patients and methods: This prospective open-label single-arm clinical trial examined the clinical efficacy and safety of EBV in 43 consecutive patients (mean age 68.4±7.5; forced expiratory volume in 1 second [FEV1] 24.5%±10.7% predicted; residual volume 208.7%±47.9% predicted) with severe emphysema, complete fissure, and no collateral ventilation in a tertiary referral hospital in Korea. Results: Compared to baseline, the patients exhibited significant improvements 6 months after EBV therapy in terms of FEV1 (from 0.68±0.26 L to 0.92±0.40 L; P<0.001), 6-minute walk distance (from 233.5±114.8 m to 299.6±87.5 m; P=0.012), modified Medical Research Council dyspnea scale (from 3.7±0.6 to 2.4±1.2; P<0.001), and St George's Respiratory Questionnaire (from 65.59±13.07 to 53.76±11.40; P=0.028). Nine patients (20.9%) had a tuberculosis scar, but these scars did not affect target lobe volume reduction or pneumothorax frequency. Thirteen patients had adverse events; ten (23.3%) developed pneumothorax, including one death due to tension pneumothorax. Conclusion: EBV therapy was as effective and safe in Korean
Variance Adjusted Actor Critic Algorithms
Tamar, Aviv; Mannor, Shie
2013-01-01
We present an actor-critic framework for MDPs where the objective is the variance-adjusted expected return. Our critic uses linear function approximation, and we extend the concept of compatible features to the variance-adjusted setting. We present an episodic actor-critic algorithm and show that it converges almost surely to a locally optimal point of the objective function.
Boyages, John; Kastanias, Katrina; Koelmeyer, Louise A.; Winch, Caleb J.; Lam, Thomas C.; Sherman, Kerry A.; Munnoch, David Alex; Brorson, Håkan; Ngo, Quan D.; Heydon-White, Asha; Magnussen, John S.; Mackie, Helen
2015-01-01
Purpose This research describes and evaluates a liposuction surgery and multidisciplinary rehabilitation approach for advanced lymphedema of the upper and lower extremities. Methods A prospective clinical study was conducted at an Advanced Lymphedema Assessment Clinic (ALAC) comprised of specialists in plastic surgery, rehabilitation, imaging, oncology, and allied health, at Macquarie University, Australia. Between May 2012 and 31 May 2014, a total of 104 patients attended the ALAC. Eligibili...
Advanced Glycation End Products in Foods and a Practical Guide to Their Reduction in the Diet
URIBARRI, JAIME; WOODRUFF, SANDRA; Goodman, Susan; Cai, Weijing; Chen, Xue; Pyzik, Renata; YONG, ANGIE; STRIKER, GARY E.; Vlassara, Helen
2010-01-01
Modern diets are largely heat-processed and as a result contain high levels of advanced glycation end products (AGEs). Dietary advanced glycation end products (dAGEs) are known to contribute to increased oxidant stress and inflammation, which are linked to the recent epidemics of diabetes and cardiovascular disease. This report significantly expands the available dAGE database, validates the dAGE testing methodology, compares cooking procedures and inhibitory agents on new dAGE formation, and...
Modelling volatility by variance decomposition
DEFF Research Database (Denmark)
Amado, Cristina; Teräsvirta, Timo
In this paper, we propose two parametric alternatives to the standard GARCH model. They allow the variance of the model to have a smooth time-varying structure of either additive or multiplicative type. The suggested parameterisations describe both nonlinearity and structural change in the conditional and unconditional variances, where the transition between regimes over time is smooth. The main focus is on the multiplicative decomposition that decomposes the variance into an unconditional and a conditional component. A modelling strategy for the time-varying GARCH model based on the multiplicative decomposition of the variance is developed. It is heavily dependent on Lagrange multiplier type misspecification tests. Finite-sample properties of the strategy and tests are examined by simulation. An empirical application to daily stock returns and another one to daily exchange rate returns...
Variance approximation under balanced sampling
Deville, Jean-Claude; Tillé, Yves
2016-01-01
A balanced sampling design has the interesting property that Horvitz–Thompson estimators of totals for a set of balancing variables are equal to the totals we want to estimate, therefore the variance of Horvitz–Thompson estimators of variables of interest are reduced in function of their correlations with the balancing variables. Since it is hard to derive an analytic expression for the joint inclusion probabilities, we derive a general approximation of variance based on a residual technique....
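The Horvitz–Thompson estimator referred to above weights each sampled value by the inverse of its inclusion probability; under a balanced design, applying it to a balancing variable reproduces that variable's known total exactly. A minimal illustrative sketch:

```python
def horvitz_thompson_total(y, pi):
    # Horvitz-Thompson estimator of a population total:
    # each sampled value is expanded by 1 / (its inclusion probability).
    return sum(yi / p for yi, p in zip(y, pi))
```

For example, a sample of values 2 and 4 drawn each with inclusion probability 0.5 expands to an estimated total of 12; the variance of the estimator over repeated samples is what the balancing constraints reduce.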
Trotter, Michael A; Hopkins, Peter M
2014-11-01
Advanced chronic obstructive pulmonary disease (COPD) is a significant cause of morbidity. Treatment options beyond conventional medical therapies are limited to a minority of patients. Lung volume reduction surgery (LVRS) although effective in selected subgroups of patients is not commonly undertaken. Morbidity associated with the procedure has contributed to this low utilisation. In response to this, less invasive bronchoscopic lung volume techniques are being developed to attempt to mitigate some of the risks and costs associated with surgery. Of these, endobronchial valve therapy is the most comprehensively studied although the presence of collateral ventilation in a significant proportion of patients has compromised its widespread utility. Bronchial thermal vapour ablation and lung volume reduction (LVR) coils are not dependent on collateral ventilation. These techniques have shown promise in early clinical trials; ongoing work will establish whether they have a role in the management of advanced COPD. Lung transplantation, although effective in selected patients for palliation of symptoms and improving survival, is limited by donor organ availability and economic constraint. Reconditioning marginal organs previously declined for transplantation with ex vivo lung perfusion (EVLP) is one potential strategy in improving the utilisation of donor organs. By increasing the donor pool, it is hoped lung transplantation might be more accessible for patients with advanced COPD into the future. PMID:25478204
Advanced RF-KO slow-extraction method for the reduction of spill ripple
Noda, K; Shibuya, S; Uesugi, T; Muramatsu, M; Kanazawa, M; Takada, E; Yamada, S
2002-01-01
Two advanced RF-knockout (RF-KO) slow-extraction methods have been developed at HIMAC in order to reduce the spill ripple for accurate heavy-ion cancer therapy: the dual frequency modulation (FM) method and the separated function method. Simulations and experiments verified that these advanced methods considerably reduce the spill ripple compared with the ordinary RF-KO method. The dual FM method and the separated function method keep the spill ripple within standard deviations of around 25% and 15%, respectively, during beam extraction of around 2 s, in good agreement with the simulation results.
Energy Technology Data Exchange (ETDEWEB)
Sorge, J.N. [Southern Co. Services, Inc., Birmingham, AL (United States); Menzies, B. [Radian Corp., Austin, TX (United States); Smouse, S.M. [USDOE Pittsburgh Energy Technology Center, PA (United States); Stallings, J.W. [Electric Power Research Inst., Palo Alto, CA (United States)
1995-09-01
This paper reports on a technology project demonstrating advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NOx) emissions from coal-fired boilers. The primary objective of the demonstration is to determine the long-term NOx reduction performance of advanced overfire air (AOFA), low-NOx burners (LNB), and advanced digital control/optimization methodologies applied in a stepwise fashion to a 500 MW boiler. The focus of this paper is to report (1) on the installation of three on-line carbon-in-ash monitors and (2) the design and results to date from the advanced digital control/optimization phase of the project.
Chandrashekar, Anand; Chen, Feng; Lin, Jasmine; Humayun, Raashina; Wongsenakhum, Panya; Chang, Sean; Danek, Michal; Itou, Takamasa; Nakayama, Tomoo; Kariya, Atsushi; Kawaguchi, Masazumi; Hizume, Shunichi
2010-09-01
This paper describes electrical testing results of new tungsten chemical vapor deposition (CVD-W) process concepts that were developed to address the W contact and bitline scaling issues on 55 nm node devices. Contact resistance (Rc) measurements in complementary metal oxide semiconductor (CMOS) devices indicate that the new CVD-W process for sub-32 nm and beyond - consisting of an advanced pulsed nucleation layer (PNL) combined with low resistivity tungsten (LRW) initiation - produces a 20-30% drop in Rc for diffused NiSi contacts. From cross-sectional bright field and dark field transmission electron microscopy (TEM) analysis, such Rc improvement can be attributed to improved plugfill and larger in-feature W grain size with the advanced PNL+LRW process. More experiments that measured contact resistance for different feature sizes point to favorable Rc scaling with the advanced PNL+LRW process. Finally, 40% improvement in line resistance was observed with this process as tested on 55 nm embedded dynamic random access memory (DRAM) devices, confirming that the advanced PNL+LRW process can be an effective metallization solution for sub-32 nm devices.
Advanced airflow distribution methods for reduction of personal exposure to indoor pollutants
DEFF Research Database (Denmark)
Cao, Guangyu; Kosonen, Risto; Melikov, Arsen;
2016-01-01
The main objective of this study is to recognize possible airflow distribution methods to protect the occupants from exposure to various indoor pollutants. The fact of the increasing exposure of occupants to various indoor pollutants shows that there is an urgent need to develop advanced airflow ...
Mesoscale Gravity Wave Variances from AMSU-A Radiances
Wu, Dong L.
2004-01-01
A variance analysis technique is developed here to extract gravity wave (GW) induced temperature fluctuations from NOAA AMSU-A (Advanced Microwave Sounding Unit-A) radiance measurements. By carefully removing the instrument/measurement noise, the algorithm can produce reliable GW variances with the minimum detectable value as small as 0.1 K2. Preliminary analyses with AMSU-A data show GW variance maps in the stratosphere have very similar distributions to those found with the UARS MLS (Upper Atmosphere Research Satellite Microwave Limb Sounder). However, the AMSU-A offers better horizontal and temporal resolution for observing regional GW variability, such as activity over sub-Antarctic islands.
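The noise-removal step described above amounts to subtracting the known instrument/measurement noise variance from the raw radiance variance, leaving the geophysical (gravity wave) contribution. A minimal illustrative sketch (the actual AMSU-A algorithm involves channel weighting and scan-dependent corrections not shown here):

```python
def gw_variance(radiances, noise_var):
    # Estimate gravity-wave-induced variance by removing instrument noise
    # variance from the raw variance; clip at zero so a noise estimate
    # larger than the raw variance cannot yield a negative result.
    n = len(radiances)
    mean = sum(radiances) / n
    raw_var = sum((x - mean) ** 2 for x in radiances) / n
    return max(raw_var - noise_var, 0.0)
```

Careful characterization of `noise_var` is what sets the minimum detectable variance: the smaller and better-known the noise floor, the weaker the wave signal that can be separated from it.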
Energy Technology Data Exchange (ETDEWEB)
Littleton, Harry; Griffin, John
2011-07-31
This project was a subtask of the Energy Saving Melting and Revert Reduction Technology (Energy SMARRT) Program. Through this project, technologies such as computer modeling, pattern quality control, casting quality control and marketing tools were developed to advance the Lost Foam Casting process and provide greater energy savings. These technologies have improved (1) production efficiency, (2) mechanical properties, and (3) marketability of lost foam castings. All three reduce energy consumption in the metals casting industry. This report summarizes the work done on all tasks in the period of January 1, 2004 through June 30, 2011. The current (2011) annual energy saving estimate, based on commercial introduction in 2011 and a market penetration of 97% by 2020, is 5.02 trillion BTU/year, or 6.46 trillion BTU/year with 100% market penetration by 2023. Along with these energy savings, reduction of scrap and improvement in casting yield will reduce the environmental emissions associated with the melting and pouring of the metal. The average annual estimate of CO2 reduction through 2020 is 0.03 Million Metric Tons of Carbon Equivalent (MM TCE).
Variance Risk Premiums and Predictive Power of Alternative Forward Variances in the Corn Market
Zhiguang Wang; Scott W. Fausti; Qasmi, Bashir A.
2010-01-01
We propose a fear index for corn using the variance swap rate synthesized from out-of-the-money call and put options as a measure of implied variance. Previous studies estimate implied variance based on Black (1976) model or forecast variance using the GARCH models. Our implied variance approach, based on variance swap rate, is model independent. We compute the daily 60-day variance risk premiums based on the difference between the realized variance and implied variance for the period from 19...
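The model-free implied variance referred to above can be synthesized from a strip of out-of-the-money option prices weighted by 1/K²; the variance risk premium is then the realized variance minus this implied variance. The sketch below is a simplified discrete version of the standard static-replication formula, omitting the forward-centering correction term and using an invented option strip:

```python
import math

def implied_variance(strikes, otm_prices, r, T):
    # Simplified discrete model-free implied variance:
    #   (2 * e^{rT} / T) * sum_i Q(K_i) / K_i^2 * dK_i
    # where Q(K_i) is the OTM option price at strike K_i.
    total = 0.0
    for i, (K, Q) in enumerate(zip(strikes, otm_prices)):
        lo = strikes[i - 1] if i > 0 else K
        hi = strikes[i + 1] if i + 1 < len(strikes) else K
        dK = (hi - lo) / 2.0  # central strike spacing
        total += Q / (K * K) * dK
    return 2.0 * math.exp(r * T) * total / T

def variance_risk_premium(realized_var, implied_var):
    # VRP: realized variance minus the (swap-rate) implied variance.
    return realized_var - implied_var
```

With this convention a persistently negative premium means option buyers pay more (in variance terms) than is subsequently realized, which is the usual sign of a "fear" component.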
External Magnetic Field Reduction Techniques for the Advanced Stirling Radioisotope Generator
Niedra, Janis M.; Geng, Steven M.
2013-01-01
Linear alternators coupled to high efficiency Stirling engines are strong candidates for thermal-to-electric power conversion in space. However, the magnetic field emissions, both AC and DC, of these permanent magnet excited alternators can interfere with sensitive instrumentation onboard a spacecraft. Effective methods to mitigate the AC and DC electromagnetic interference (EMI) from solenoidal type linear alternators (like that used in the Advanced Stirling Convertor) have been developed for potential use in the Advanced Stirling Radioisotope Generator. The methods developed avoid the complexity and extra mass inherent in data extraction from multiple sensors or the use of shielding. This paper discusses these methods, and also provides experimental data obtained during breadboard testing of both AC and DC external magnetic field devices.
Energy Technology Data Exchange (ETDEWEB)
Krakowski, R.A.; Bathke, C.G.
1997-12-31
The potential for reducing plutonium inventories in the civilian nuclear fuel cycle through recycle in LWRs of a variety of mixed-oxide forms is examined by means of a cost-based plutonium flow systems model. This model emphasizes: (1) the minimization of separated plutonium; (2) the long-term reduction of spent fuel plutonium; (3) the optimum utilization of uranium resources; and (4) the reduction of (relative) proliferation risks. This parametric systems study utilizes a globally aggregated, long-term (approx. 100 years) nuclear energy model that interprets scenario consequences in terms of material inventories, energy costs, and relative proliferation risks associated with the civilian fuel cycle. The impact of introducing nonfertile fuels (NFF, e.g., plutonium oxide in an oxide matrix that contains no uranium) into conventional (LWR) reactors to reduce net plutonium generation, to increase plutonium burnup, and to reduce exo-reactor plutonium inventories is also examined.
Development of Head-end Pyrochemical Reduction Process for Advanced Oxide Fuels
Energy Technology Data Exchange (ETDEWEB)
Park, B. H.; Seo, C. S.; Hur, J. M.; Jeong, S. M.; Hong, S. S.; Choi, I. K.; Choung, W. M.; Kwon, K. C.; Lee, I. W. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2008-12-15
The development of an electrolytic reduction technology for spent fuels in oxide form is essential for introducing LWR spent fuels into pyroprocessing. In this research, the technology was investigated with a view to scaling up a reactor, the electrochemical behaviors of fission products (FPs) were studied to understand the process, and reaction rate data using U{sub 3}O{sub 8} were obtained with a bench-scale reactor. In a 20 kgHM/batch reactor, U{sub 3}O{sub 8} and Simfuel were successfully reduced into metals. Electrochemical characteristics of LiBr, LiI and Li{sub 2}Se were measured in a bench-scale reactor, and an electrolytic reduction cell was modeled with a computational tool.
Sarhadi, Ali; Burn, Donald H.; Yang, Ge; Ghodsi, Ali
2016-05-01
One of the main challenges in climate change studies is accurate projection of the global warming impacts on the probabilistic behaviour of hydro-climate processes. Due to the complexity of climate-associated processes, identification of predictor variables from high-dimensional atmospheric variables is considered a key factor for improvement of climate change projections in statistical downscaling approaches. For this purpose, the present paper adopts a new approach of supervised dimensionality reduction, called "Supervised Principal Component Analysis (Supervised PCA)", for regression-based statistical downscaling. This method is a generalization of PCA, extracting a sequence of principal components of the atmospheric variables that have maximal dependence on the response hydro-climate variable. To capture the nonlinear variability between hydro-climatic response variables and predictors, a kernelized version of Supervised PCA is also applied for nonlinear dimensionality reduction. The effectiveness of the Supervised PCA methods, in comparison with some state-of-the-art algorithms for dimensionality reduction, is evaluated in the statistical downscaling of precipitation at a specific site using two nonlinear machine learning methods, Support Vector Regression and Relevance Vector Machine. The results demonstrate a significant improvement with the Supervised PCA methods in terms of accuracy.
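One common formulation of Supervised PCA (this sketch follows the eigen-decomposition flavor with a linear target kernel; it is an illustration, not necessarily the exact variant used in the paper) extracts the directions of the predictors that maximize dependence on the response: the leading eigenvectors of X H L H Xᵀ, where L = yyᵀ and H is the centering matrix.

```python
import numpy as np

def supervised_pca(X, y, k):
    # Supervised PCA sketch: leading eigenvectors of X H L H X^T,
    # with L = y y^T a linear target kernel and H the centering matrix.
    # X: (d, n) predictors as columns, y: (n,) response, k: # components.
    n = X.shape[1]
    H = np.eye(n) - np.ones((n, n)) / n
    L = np.outer(y, y)
    Q = X @ H @ L @ H @ X.T
    w, V = np.linalg.eigh(Q)                # ascending eigenvalues
    U = V[:, np.argsort(w)[::-1][:k]]       # top-k eigenvectors
    return U.T @ X                          # projected data, shape (k, n)
```

Replacing L with a nonlinear kernel of y gives the kernelized version mentioned in the abstract, capturing nonlinear response dependence.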
International Nuclear Information System (INIS)
The purpose of Advanced Alarm Processing (AAP) is to extract only the most important and most relevant data out of a large amount of available information. It should be noted that the integrity of the knowledge base is the most critical factor in developing a reliable AAP. This paper proposes a new approach to AAP using Event-Condition-Action (ECA) rules that can be automatically triggered by an active database. It also proposes a knowledge acquisition method using data mining techniques to ensure the integrity of the alarm knowledge
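An ECA rule pairs an event with a guard condition and an action fired only when both match. The sketch below is a hypothetical illustration of that structure (the event names, priority field, and suppression action are invented, not from the paper, and a real active database would trigger these rules itself):

```python
# Hypothetical Event-Condition-Action rule: fires the action only when
# the event matches and the condition holds, otherwise returns None.
def make_eca_rule(event, condition, action):
    def trigger(alarm):
        if alarm["event"] == event and condition(alarm):
            return action(alarm)
        return None
    return trigger

# Example rule: suppress low-priority sensor alarms (illustrative names).
suppress_low_priority = make_eca_rule(
    event="SENSOR_ALARM",
    condition=lambda a: a["priority"] < 3,   # condition: low priority
    action=lambda a: ("SUPPRESS", a["id"]),  # action: filter it out
)
```

The appeal of the ECA form is that each rule is an independently auditable unit, which is what makes knowledge-base integrity checkable.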
Armor Possibilities and Radiographic Blur Reduction for The Advanced Hydrotest Facility
Energy Technology Data Exchange (ETDEWEB)
Hackett, M
2001-09-01
Currently at Lawrence Livermore National Laboratory (LLNL) a composite firing vessel is under development for the Advanced Hydrotest Facility (AHF) to study high explosives. This vessel requires a shrapnel mitigating layer to protect the vessel during experiments. The primary purpose of this layer is to protect the vessel, yet the material must be transparent to proton radiographs. Presented here are methods available to collect data needed before selection, along with a comparison tool developed to aid in choosing a material that offers the best of ballistic protection while allowing for clear radiographs.
Explaining the Variance of Price Dividend Ratios
Cochrane, John H.
1989-01-01
This paper presents a bound on the variance of the price-dividend ratio and a decomposition of the variance of the price-dividend ratio into components that reflect variation in expected future discount rates and variation in expected future dividend growth. Unobserved discount rates needed to make the variance bound and variance decomposition hold are characterized, and the variance bound and variance decomposition are tested for several discount rate models, including the consumption based ...
Advances of Model Order Reduction Research in Large-scale System Simulation
Institute of Scientific and Technical Information of China (English)
无
2002-01-01
Model Order Reduction (MOR) has recently played a more and more important role in complex system simulation, design and control. For example, for large space structures, VLSI and MEMS (Micro-Electro-Mechanical Systems), etc., reduced-order models must be constructed in order to shorten development cost, increase system control accuracy and reduce the complexity of controllers. Even in Virtual Reality (VR), where simulation and display must run in real time, the model order must be red...
Analysis of Variance: Variably Complex
Drummond, Gordon B.; Vowler, Sarah L.
2012-01-01
These authors have previously described how to use the "t" test to compare two groups. In this article, they describe the use of a different test, analysis of variance (ANOVA) to compare more than two groups. ANOVA is a test of group differences: do at least two of the means differ from each other? ANOVA assumes (1) normal distribution of…
Comprehensive Study on the Estimation of the Variance Components of Traverse Nets
Institute of Scientific and Technical Information of China (English)
无
2003-01-01
This paper advances a new simplified formula for estimating variance components; sums up the basic law for calculating the weights of observed values, together with a circulation method using the increments of weights, when estimating the variance components of traverse nets; advances the characteristic roots method for estimating the variance components of traverse nets; and presents a practical method for simultaneously diagonalizing two real symmetric matrices.
López-Gatius, F; Hunter, R H F
2005-01-01
Twin pregnancies represent a management problem in dairy cattle since the risk of pregnancy loss increases, and the profitability of the herd diminishes drastically as the frequency of twin births increases. The aim of this study was to monitor the development of 211 twin pregnancies in high-producing dairy cows in order to determine the best time for an embryo reduction approach. Pregnancy was diagnosed by transrectal ultrasonography between 36 and 42 days after insemination. Animals were then subjected to weekly ultrasound examination until Day 90 of gestation or until pregnancy loss. Viability was determined by monitoring the embryonic/fetal heartbeat until Day 50 of pregnancy, and then by heartbeat or fetal movement detection. Eighty-six cows (40.8%) bore bilateral and 125 (59.2%) unilateral twin pregnancies. Embryo death was registered in one of the two embryos in 35 cows (16.6%), 33 of them at pregnancy diagnosis. Pregnancy loss occurred in 22 of these cows between 1 and 4 weeks later. Thus, 13 cows (6.2% of the total), each carrying one dead embryo of the two, maintained gestation. Total pregnancy loss before Day 90 of pregnancy (mean 69 +/- 14 days) was registered in 51 (24.2%) cows: 7 (8%) of bilateral pregnancies and 44 (35.2%) of unilateral pregnancies, and it was higher (P = 0.0001) for both right (32.4%, 24/74) and left (39.2%, 20/51) unilateral than for bilateral (8.1%, 7/86) twin pregnancies. The single embryo death rate was significantly (P = 0.02) lower for cows with bilateral twins (9.3%, 8/86) than for all cows with unilateral twins (21.6%, 27/125). By way of overall conclusion, embryo reduction can occur in dairy cattle, and the practical perspective remains that most embryonic mortality in twins (one of the two embryos) occurs around Days 35-40 of gestation, the period when pregnancy diagnosis is generally performed and when embryo reduction could be tried.
Energy Technology Data Exchange (ETDEWEB)
Kawatoko, Toshiharu; Murai, Koichiro; Ibayashi, Setsurou; Tsuji, Hiroshi; Nomiyama, Kensuke; Sadoshima, Seizo; Fujishima, Masatoshi; Kuwabara, Yasuo; Ichiya, Yuichi (Kyushu Univ., Fukuoka (Japan). Faculty of Medicine)
1992-01-01
Regional cerebral blood flow (rCBF), cerebral metabolic rate of oxygen (rCMRO{sub 2}), and oxygen extraction fraction (rOEF) were measured using positron emission tomography (PET) in four patients with cirrhosis (two males and two females, aged 57 to 69 years) in comparison with those in five age-matched controls with previous transient global amnesia. PET studies were carried out when the patients were fully alert and oriented after the episodes of encephalopathy. In the patients, rCBF tended to be lower, while rCMRO{sub 2} was significantly lowered in almost all hemispheric cortices, most markedly in the frontal cortex. Our results suggest that brain oxygen metabolism is diffusely impaired in patients with advanced cirrhosis, and the frontal cortex seems to be more susceptible to the systemic metabolic derangements induced by chronic liver disease. (author).
Gate Leakage Current Reduction With Advancement of Graded Barrier AlGaN/GaN HEMT
Directory of Open Access Journals (Sweden)
Palash Das
2011-01-01
The issue of gate leakage current in AlGaN/GaN HEMT devices is addressed in this paper by compositional grading of the AlGaN barrier layer. This work also takes into account the critical thickness limitation of heterostructure material growth; hence, the calculation of the critical thickness of AlGaN on GaN is given special attention. A 1D Schrodinger-Poisson solver was used to calculate the 2DEG concentration and its effective location, which were then used in the ATLAS device simulator for the predictions. The proposed Al0.50Ga0.50N/Al0.35Ga0.65N/Al0.20Ga0.80N/GaN HEMT structure exhibits a leakage current of the order of 15 nA/mm at a gate voltage of 1 V.
Wu, Renbing; Xue, Yanhong; Liu, Bo; Zhou, Kun; Wei, Jun; Chan, Siew Hwa
2016-10-01
Highly efficient and cost-effective electrocatalyst for the oxygen reduction reaction (ORR) is crucial for a variety of renewable energy applications. Herein, strongly coupled hybrid composites composed of cobalt diselenide (CoSe2) nanoparticles embedded within graphitic carbon polyhedra (GCP) as high-performance ORR catalyst have been rationally designed and synthesized. The catalyst is fabricated by a convenient method, which involves the simultaneous pyrolysis and selenization of preformed Co-based zeolitic imidazolate framework (ZIF-67). Benefiting from the unique structural features, the resulting CoSe2/GCP hybrid catalyst shows high stability and excellent electrocatalytic activity towards ORR (the onset and half-wave potentials are 0.935 and 0.806 V vs. RHE, respectively), which is superior to the state-of-the-art commercial Pt/C catalyst (0.912 and 0.781 V vs. RHE, respectively).
ADVANCED BYPRODUCT RECOVERY: DIRECT CATALYTIC REDUCTION OF SO2 TO ELEMENTAL SULFUR
Energy Technology Data Exchange (ETDEWEB)
Robert S. Weber
1999-05-01
Arthur D. Little, Inc., together with its commercialization partner, Engelhard Corporation, and its university partner Tufts, investigated a single-step process for direct, catalytic reduction of sulfur dioxide from regenerable flue gas desulfurization processes to the more valuable elemental sulfur by-product. This development built on recently demonstrated SO{sub 2}-reduction catalyst performance at Tufts University on a DOE-sponsored program and is, in principle, applicable to processing of regenerator off-gases from all regenerable SO{sub 2}-control processes. In this program, laboratory-scale catalyst optimization work at Tufts was combined with supported catalyst formulation work at Engelhard, bench-scale supported catalyst testing at Arthur D. Little and market assessments, also by Arthur D. Little. Objectives included identification and performance evaluation of a catalyst which is robust and flexible with regard to choice of reducing gas. The catalyst formulation was improved significantly over the course of this work owing to the identification of a number of underlying phenomena that tended to reduce catalyst selectivity. The most promising catalysts discovered in the bench-scale tests at Tufts were transformed into monolith-supported catalysts at Engelhard. These catalyst samples were tested at larger scale at Arthur D. Little, where the laboratory-scale results were confirmed, namely that the catalysts do effectively reduce sulfur dioxide to elemental sulfur when operated under appropriate levels of conversion and in conditions that do not contain too much water or hydrogen. Ways to overcome those limitations were suggested by the laboratory results. Nonetheless, at the end of Phase I, the catalysts did not exhibit the very stringent levels of activity or selectivity that would have permitted ready scale-up to pilot or commercial operation. Therefore, we chose not to pursue Phase II of this work which would have included further bench-scale testing
Practice reduces task relevant variance modulation and forms nominal trajectory
Osu, Rieko; Morishige, Ken-Ichi; Nakanishi, Jun; Miyamoto, Hiroyuki; Kawato, Mitsuo
2015-12-01
Humans are capable of achieving complex tasks with redundant degrees of freedom. Much attention has been paid to task relevant variance modulation as an indication of online feedback control strategies to cope with motor variability. Meanwhile, it has been discussed that the brain learns internal models of environments to realize feedforward control with nominal trajectories. Here we examined trajectory variance in both spatial and temporal domains to elucidate the relative contribution of these control schemas. We asked subjects to learn reaching movements with multiple via-points, and found that hand trajectories converged to stereotyped trajectories with the reduction of task relevant variance modulation as learning proceeded. Furthermore, variance reduction was not always associated with task constraints but was highly correlated with the velocity profile. A model assuming noise both on the nominal trajectory and motor command was able to reproduce the observed variance modulation, supporting an expression of nominal trajectories in the brain. The learning-related decrease in task-relevant modulation revealed a reduction in the influence of optimal feedback around the task constraints. After practice, the major part of computation seems to be taken over by the feedforward controller around the nominal trajectory with feedback added only when it becomes necessary.
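The distinction drawn above between variance that affects the task and variance that does not can be illustrated with a minimal uncontrolled-manifold-style decomposition. This is a sketch, not the authors' analysis: the linear task x = J q, the Jacobian, and the joint covariance below are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy redundant task: two "joints" q drive a one-dimensional hand position
# x = J @ q, so one direction of joint space (the null space of J) is
# task-irrelevant.
J = np.array([[1.0, 1.0]])                               # hypothetical task Jacobian
cov = np.array([[1.0, -0.8], [-0.8, 1.0]])               # negatively coupled joint noise
q = rng.multivariate_normal([0.0, 0.0], cov, size=5000)  # simulated trials

t_dir = J[0] / np.linalg.norm(J[0])      # task-relevant direction
n_dir = np.array([-t_dir[1], t_dir[0]])  # orthogonal, task-irrelevant direction

var_task = np.var(q @ t_dir)  # variance that moves the hand
var_null = np.var(q @ n_dir)  # variance with no task consequence
```

With negatively correlated joint noise, most variance is channeled into the task-irrelevant direction (var_null greatly exceeds var_task), which is the signature usually read as task-relevant variance modulation.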
2014 U.S. Offshore Wind Market Report: Industry Trends, Technology Advancement, and Cost Reduction
Energy Technology Data Exchange (ETDEWEB)
Smith, Aaron; Stehly, Tyler; Walter Musial
2015-09-29
2015 has been an exciting year for the U.S. offshore wind market. After more than 15 years of development work, the U.S. has finally hit a crucial milestone; Deepwater Wind began construction on the 30 MW Block Island Wind Farm (BIWF) in April. A number of other promising projects, however, have run into economic, legal, and political headwinds, generating much speculation about the future of the industry. This slow, and somewhat painful, start to the industry is not without precedent; each country in northern Europe began with pilot-scale, proof-of-concept projects before eventually moving to larger commercial scale installations. Now, after more than a decade of commercial experience, the European industry is set to achieve a new deployment record, with more than 4 GW expected to be commissioned in 2015, with demonstrable progress towards industry-wide cost reduction goals. DWW is leveraging 25 years of European deployment experience; the BIWF combines state-of-the-art technologies such as the Alstom 6 MW turbine with U.S. fabrication and installation competencies. The successful deployment of the BIWF will provide a concrete showcase that will illustrate the potential of offshore wind to contribute to state, regional, and federal goals for clean, reliable power and lasting economic development. It is expected that this initial project will launch the U.S. industry into a phase of commercial development that will position offshore wind to contribute significantly to the electric systems in coastal states by 2030.
Luo, Yuehao; Yuan, Lu; Li, Jianhua; Wang, Jianshe
2015-12-01
Nature has supplied inexhaustible resources for mankind and, at the same time, has progressively developed into a school for scientists and engineers. Through more than four billion years of rigorous and stringent evolution, different creatures in nature have gradually developed their own special and fascinating biological functional surfaces. For example, sharkskin has a drag-reducing effect in turbulence, the lotus leaf possesses self-cleaning and anti-fouling functions, gecko feet have controllable super-adhesive surfaces, and the flexible skin of the dolphin can accelerate its swimming velocity. Great benefits have been achieved by applying biological functional surfaces in daily life, industry, transportation, and agriculture, and much attention from all over the world has been focused on this field. In this overview, the bio-inspired drag-reducing mechanism derived from sharkskin is explained and explored comprehensively from different aspects, and the main applications in different areas of fluid engineering are demonstrated in brief. This overview will improve comprehension of the drag-reduction mechanism of the sharkskin surface and of its recent applications in fluid engineering. PMID:26348428
Advanced oxidation and reduction processes: Closed-loop applications for mixed waste
International Nuclear Information System (INIS)
At Los Alamos we are engaged in applying innovative oxidation and reduction technologies to the destruction of hazardous organics. Non-thermal plasmas and relativistic electron beams both involve the generation of free radicals and are applicable to a wide variety of mixed waste, as closed-loop designs can be easily engineered. Silent discharge plasmas (SDP), long used for the generation of ozone, have been demonstrated in the laboratory to be effective in destroying hazardous organic compounds and offer an alternative to existing post-incineration and off-gas treatments. SDP generates very energetic electrons which efficiently create reactive free radicals, without adding the enthalpy associated with very high gas temperatures. An SDP cell has been used as a second stage to a LANL-designed packed-bed reactor (PBR) and has demonstrated DREs as high as 99.9999% for a variety of combustible liquid and gas-based waste streams containing scintillation fluids, nitrates, PCB surrogates, and both chlorinated and fluorinated solvents. Radiolytic treatment of waste using electron beams and/or bremsstrahlung can be applied to a wide range of waste media (liquids, sludges, and solids). The efficacy and economy of these systems has been demonstrated for aqueous waste through both laboratory and pilot-scale studies. We will present recent experimental and theoretical results for systems using stand-alone SDP, combined PBR/SDP, and electron-beam treatment methods.
Advanced and developmental technologies for treatment and volume reduction of dry active wastes
International Nuclear Information System (INIS)
The nuclear power industry processes Dry Active Wastes (DAW) to achieve cost-effective volume reduction and/or to produce a residue that is more compatible with final disposal criteria. The two principal processes currently used by the industry are compaction and incineration. Although incineration is often considered the process of choice, capital and operating costs are often high, and in some countries public opposition and lengthy permitting processes result in expensive delays in bringing the process to operation. Therefore, alternative treatment options (mechanical, thermal, chemical, and biological) are being investigated to provide timely, cost-effective options for industry use. An overview of those developmental processes considered applicable to processing DAW is presented. In each category, "established" processes are mentioned and/or referenced, but the focus is on "potential" technologies and the status of their development. The emphasis is on processing DAW; therefore, developmental processes that primarily treat solids in aqueous streams, and melting/sintering technologies, both of lesser applicability to nuclear utility wastes, have been omitted. Included are those developmental technologies that appear to have potential for radioactive waste application based on development or demonstration programs.
2010-07-01
... 29 Labor 7 2010-07-01 2010-07-01 false Variances. 1920.2 Section 1920.2 Labor Regulations Relating...' COMPENSATION ACT § 1920.2 Variances. (a) Variances from standards in parts 1915 through 1918 of this chapter may be granted in the same circumstances in which variances may be granted under sections 6(b)...
10 CFR 851.31 - Variance process.
2010-01-01
... 10 Energy 4 2010-01-01 2010-01-01 false Variance process. 851.31 Section 851.31 Energy DEPARTMENT OF ENERGY WORKER SAFETY AND HEALTH PROGRAM Variances § 851.31 Variance process. (a) Application. Contractors desiring a variance from a safety and health standard, or portion thereof, may submit a...
2010-07-01
... 40 Protection of Environment 5 2010-07-01 2010-07-01 false Variance. 59.106 Section 59.106... Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated entity... control may apply in writing to the Administrator for a temporary variance. The variance application...
2010-07-01
... 40 Protection of Environment 5 2010-07-01 2010-07-01 false Variances. 59.206 Section 59.206... Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who cannot... control may apply in writing to the Administrator for a variance. The variance application shall...
Variance decomposition in stochastic simulators
Le Maître, O. P.; Knio, O. M.; Moraes, A.
2015-06-01
This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
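The Sobol-Hoeffding variance decomposition on which this approach relies can be sketched with a standard Monte Carlo "pick-freeze" estimator of first-order sensitivity indices. The test function and sample size below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Hypothetical test function: two additive terms plus an interaction
    # between inputs 1 and 3 (inputs uniform on [0, 1]).
    return x[:, 0] + 2.0 * x[:, 1] + x[:, 0] * x[:, 2]

n, d = 200_000, 3
a = rng.uniform(size=(n, d))
b = rng.uniform(size=(n, d))
fa, fb = model(a), model(b)
total_var = fa.var()

sobol = []
for i in range(d):
    ab = b.copy()
    ab[:, i] = a[:, i]  # "pick-freeze": column i copied from A into B
    # First-order index S_i = Var(E[f | x_i]) / Var(f), estimated by correlation.
    sobol.append(np.mean(fa * (model(ab) - fb)) / total_var)
```

For this function the exact indices are roughly S1 = 0.34, S2 = 0.61, S3 = 0.04, with the small remainder carried by the x1-x3 interaction, mirroring the paper's split between individual channels and channel interactions.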
Variance based OFDM frame synchronization
Directory of Open Access Journals (Sweden)
Z. Fedra
2012-04-01
The paper deals with a new frame synchronization scheme for OFDM systems and calculates the complexity of this scheme. The scheme is based on computing the variance of a detection window. The variance is computed at two delayed times, so a modified early-late loop is used for frame position detection. The proposed algorithm handles different variants of OFDM parameters, including the guard interval and cyclic prefix, and has good properties regarding the choice of the algorithm's parameters, since they may be chosen within a wide range without strongly influencing system performance. The functionality of the proposed algorithm has been verified in a development environment using universal software radio peripheral (USRP) hardware.
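The core metric of such a scheme, a sliding detection-window variance, can be sketched as follows. The toy frame layout (a low-power guard region embedded in noise-like samples) and all parameters are assumptions for illustration; the modified early-late loop itself is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

def sliding_variance(x, w):
    # Variance of every length-w window, computed in O(n) via cumulative sums.
    c1 = np.cumsum(np.insert(x, 0, 0.0))
    c2 = np.cumsum(np.insert(x * x, 0, 0.0))
    mean = (c1[w:] - c1[:-w]) / w
    mean_sq = (c2[w:] - c2[:-w]) / w
    return mean_sq - mean ** 2

# Toy frame: noise, then a 64-sample low-power guard region, then data.
guard = 0.05 * rng.standard_normal(64)
frame = np.concatenate([rng.standard_normal(100), guard, rng.standard_normal(256)])

w = 64
v = sliding_variance(frame, w)
start = int(np.argmin(v))  # minimum-variance window locates the guard region
```

Evaluating the variance metric at two slightly delayed positions around `start` would then drive an early-late tracking loop, as in the paper's scheme.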
Estimating the Modified Allan Variance
Greenhall, Charles
1995-01-01
The third-difference approach to modified Allan variance (MVAR) leads to a tractable formula for a measure of MVAR estimator confidence, the equivalent degrees of freedom (edf), in the presence of power-law phase noise. The effect of estimation stride on edf is tabulated. A simple approximation for edf is given, and its errors are tabulated. A theorem allowing conservative estimates of edf in the presence of compound noise processes is given.
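A minimal sample estimator of the modified Allan variance from phase data, using the standard third-difference (phase-averaging) form that the paper's edf analysis builds on, might look like this; the function name and interface are assumptions, and the edf confidence machinery itself is not reproduced.

```python
import numpy as np

def mod_allan_var(x, m, tau0):
    """Modified Allan variance at averaging time m*tau0 from phase samples x,
    via the standard averaged second-difference estimator."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if n < 3 * m + 1:
        raise ValueError("need at least 3*m + 1 phase samples")
    d = x[2 * m:] - 2.0 * x[m:-m] + x[:-2 * m]  # lag-m second differences
    c = np.cumsum(np.insert(d, 0, 0.0))
    inner = c[m:] - c[:-m]                      # sums of m consecutive differences
    return np.mean(inner ** 2) / (2.0 * m ** 4 * tau0 ** 2)
```

As a sanity check, for white phase noise of unit variance sampled at tau0 = 1 the estimator should approach 3/m^3, reflecting the well-known tau^-3 slope that distinguishes MVAR from the ordinary Allan variance for white PM.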
Neutrino mass without cosmic variance
LoVerde, Marilena
2016-05-01
Measuring the absolute scale of the neutrino masses is one of the most exciting opportunities available with near-term cosmological data sets. Two quantities that are sensitive to neutrino mass, scale-dependent halo bias b (k ) and the linear growth parameter f (k ) inferred from redshift-space distortions, can be measured without cosmic variance. Unlike the amplitude of the matter power spectrum, which always has a finite error, the error on b (k ) and f (k ) continues to decrease as the number density of tracers increases. This paper presents forecasts for statistics of galaxy and lensing fields that are sensitive to neutrino mass via b (k ) and f (k ). The constraints on neutrino mass from the auto- and cross-power spectra of spectroscopic and photometric galaxy samples are weakened by scale-dependent bias unless a very high density of tracers is available. In the high-density limit, using multiple tracers allows cosmic variance to be beaten, and the forecasted errors on neutrino mass shrink dramatically. In practice, beating the cosmic-variance errors on neutrino mass with b (k ) will be a challenge, but this signal is nevertheless a new probe of neutrino effects on structure formation that is interesting in its own right.
Vertical velocity variances and Reynold stresses at Brookhaven
DEFF Research Database (Denmark)
Busch, Niels E.; Brown, R.M.; Frizzola, J.A.
1970-01-01
Results of wind tunnel tests of the Brookhaven annular bivane are presented. The energy transfer functions describing the instrument response and the numerical filter employed in the data reduction process have been used to obtain corrected values of the normalized variance of the vertical wind velocity component.
Energy Technology Data Exchange (ETDEWEB)
Sorge, J.N.; Larrimore, C.L.; Slatsky, M.D.; Menzies, W.R.; Smouse, S.M.; Stallings, J.W.
1997-12-31
This paper discusses the technical progress of a US Department of Energy Innovative Clean Coal Technology project demonstrating advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NOx) emissions from coal-fired boilers. The primary objective of the demonstration is to determine the long-term NOx reduction performance of advanced overfire air (AOFA), low NOx burners (LNB), and advanced digital control optimization methodologies applied in a stepwise fashion to a 500 MW boiler. The focus of this paper is to report (1) the installation of three on-line carbon-in-ash monitors and (2) the design and results to date from the advanced digital control/optimization phase of the project.
Analytic variance estimates of Swank and Fano factors
Energy Technology Data Exchange (ETDEWEB)
Gutierrez, Benjamin; Badano, Aldo; Samuelson, Frank, E-mail: frank.samuelson@fda.hhs.gov [US Food and Drug Administration, Silver Spring, Maryland 20993 (United States)
2014-07-15
Purpose: Variance estimates for detector energy resolution metrics can be used as stopping criteria in Monte Carlo simulations for the purpose of ensuring a small uncertainty of those metrics and for the design of variance reduction techniques. Methods: The authors derive an estimate for the variance of two energy resolution metrics, the Swank factor and the Fano factor, in terms of statistical moments that can be accumulated without significant computational overhead. The authors examine the accuracy of these two estimators and demonstrate how the estimates of the coefficient of variation of the Swank and Fano factors behave with data from a Monte Carlo simulation of an indirect x-ray imaging detector. Results: The authors' analyses suggest that the accuracy of their variance estimators is appropriate for estimating the actual variances of the Swank and Fano factors for a variety of distributions of detector outputs. Conclusions: The variance estimators derived in this work provide a computationally convenient way to estimate the error or coefficient of variation of the Swank and Fano factors during Monte Carlo simulations of radiation imaging systems.
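The two metrics themselves are simple moment ratios of the detector-output distribution. A sketch of their sample estimates, with a generic bootstrap standing in for the paper's closed-form moment-based variance estimators, could look like this; the Poisson output model is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def swank_factor(x):
    # Swank factor from detector-output samples: <x>^2 / <x^2>.
    return np.mean(x) ** 2 / np.mean(x * x)

def fano_factor(x):
    # Fano factor: variance-to-mean ratio of the output.
    return np.var(x) / np.mean(x)

def bootstrap_var(stat, x, b=200):
    # Generic resampling estimate of the variance of a statistic
    # (a stand-in for the analytic estimators derived in the paper).
    return np.var([stat(rng.choice(x, size=len(x))) for _ in range(b)])

# Illustrative output model: Poisson with mean 50, for which the Fano
# factor is 1 and the Swank factor is 50/51.
x = rng.poisson(50.0, size=20_000).astype(float)
```

Monitoring `bootstrap_var(swank_factor, x)` as samples accumulate gives exactly the kind of stopping criterion the Purpose section describes, at the cost of resampling rather than accumulated moments.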
Levine's guide to SPSS for analysis of variance
Braver, Sanford L; Page, Melanie
2003-01-01
A greatly expanded and heavily revised second edition, this popular guide provides instructions and clear examples for running analyses of variance (ANOVA) and several other related statistical tests of significance with SPSS. No other guide offers the program statements required for the more advanced tests in analysis of variance. All of the programs in the book can be run using any version of SPSS, including versions 11 and 11.5. A table at the end of the preface indicates where each type of analysis (e.g., simple comparisons) can be found for each type of design (e.g., mixed two-factor design).
A Broadband Beamformer Using Controllable Constraints and Minimum Variance
DEFF Research Database (Denmark)
Karimian-Azari, Sam; Benesty, Jacob; Jensen, Jesper Rindom;
2014-01-01
The minimum variance distortionless response (MVDR) and the linearly constrained minimum variance (LCMV) beamformers are two optimal approaches in the sense of noise reduction. The LCMV beamformer can also reject interferers using linear constraints, at the expense of reducing the degrees of freedom available with a limited number of microphones. However, it may magnify noise, causing a lower output signal-to-noise ratio (SNR) than the MVDR beamformer. Conversely, the MVDR beamformer suffers from interference in its output. In this paper, we propose a controllable LCMV (C-LCMV) beamformer based on the principles of both the MVDR and LCMV beamformers.
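The MVDR beamformer referenced above has a standard closed form, w = R^{-1} d / (d^H R^{-1} d), which keeps unit gain toward the steering vector d while minimizing output power. A minimal narrowband sketch follows; the array geometry, angles, and covariance model are illustrative assumptions, and the proposed C-LCMV design is not reproduced.

```python
import numpy as np

def steering(n_mics, spacing, wavelength, theta):
    # Narrowband steering vector of a uniform linear array for angle theta (rad).
    k = 2.0 * np.pi / wavelength
    return np.exp(-1j * k * spacing * np.arange(n_mics) * np.sin(theta))

def mvdr_weights(R, d):
    # w = R^{-1} d / (d^H R^{-1} d): distortionless toward d, minimum output power.
    rinv_d = np.linalg.solve(R, d)
    return rinv_d / (d.conj() @ rinv_d)

n_mics = 8
d_sig = steering(n_mics, 0.05, 0.10, np.deg2rad(0.0))   # desired source at broadside
d_int = steering(n_mics, 0.05, 0.10, np.deg2rad(40.0))  # interferer at 40 degrees
R = 10.0 * np.outer(d_int, d_int.conj()) + 0.1 * np.eye(n_mics)  # interference + noise
w = mvdr_weights(R, d_sig)
```

The weights keep exactly unit response toward the desired source while strongly attenuating the interferer; an LCMV design would instead impose the null on the interferer explicitly as an extra linear constraint.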
Variance analysis. Part I, Extending flexible budget variance analysis to acuity.
Finkler, S A
1991-01-01
The author reviews the concepts of flexible budget variance analysis, including the price, quantity, and volume variances generated by that technique. He also introduces the concept of acuity variance and provides direction on how such a variance measure can be calculated. Part II in this two-part series on variance analysis will look at how personal computers can be useful in the variance analysis process. PMID:1870002
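The price, quantity, and volume variances reviewed here follow a standard textbook decomposition that reconciles actual cost against the static budget. A sketch with hypothetical figures (not from the article, which extends the idea further to acuity) is:

```python
# Hypothetical figures: budgeted 1,000 patient days at 2.0 nursing hours per
# day and $20/hour; actual 1,100 days, 2,300 hours, $22/hour.
budget_price, budget_qty_per_unit, budget_volume = 20.0, 2.0, 1000.0
actual_price, actual_qty, actual_volume = 22.0, 2300.0, 1100.0

flexible_qty = budget_qty_per_unit * actual_volume  # hours allowed for actual volume

price_variance = (actual_price - budget_price) * actual_qty     # rate actually paid
quantity_variance = (actual_qty - flexible_qty) * budget_price  # usage vs flexible budget
volume_variance = budget_qty_per_unit * budget_price * (actual_volume - budget_volume)

# The three pieces reconcile actual cost against the original static budget.
total_variance = actual_price * actual_qty - budget_price * budget_qty_per_unit * budget_volume
```

An acuity variance, as introduced in the article, would further split the quantity or volume piece by the expected resource intensity of the actual patient mix.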
Variance risk premia in energy commodities
Trolle, Anders; Eduardo S. Schwartz
2010-01-01
This paper investigates variance risk premia in energy commodities, particularly crude oil and natural gas, using a robust model-independent approach. Over a period of 11 years, we find that the average variance risk premia are significantly negative for both energy commodities. However, it is difficult to explain the level and variation in energy variance risk premia with systematic or commodity specific factors. The return profile of a natural gas variance swap resembles that of a call opti...
Mei, S; Tonry, J L; Jordan, A; Peng, E W; Côté, P; Ferrarese, L; Merritt, D; Milosavljevic, M; West, M J; Mei, Simona; Blakeslee, John P.; Tonry, John L.; Jordan, Andres; Peng, Eric W.; Cote, Patrick; Ferrarese, Laura; Merritt, David; Milosavljevic, Milos; West, Michael J.
2005-01-01
The Advanced Camera for Surveys (ACS) Virgo Cluster Survey is a large program to image 100 early-type Virgo galaxies using the F475W and F850LP bandpasses of the Wide Field Channel of the ACS instrument on the Hubble Space Telescope (HST). The scientific goals of this survey include an exploration of the three-dimensional structure of the Virgo Cluster and a critical examination of the usefulness of the globular cluster luminosity function as a distance indicator. Both of these issues require accurate distances for the full sample of 100 program galaxies. In this paper, we describe our data reduction procedures and examine the feasibility of accurate distance measurements using the method of surface brightness fluctuations (SBF) applied to the ACS Virgo Cluster Survey F850LP imaging. The ACS exhibits significant geometrical distortions due to its off-axis location in the HST focal plane; correcting for these distortions by resampling the pixel values onto an undistorted frame results in pixel correlations tha...
Warped functional analysis of variance.
Gervini, Daniel; Carter, Patrick A
2014-09-01
This article presents an Analysis of Variance model for functional data that explicitly incorporates phase variability through a time-warping component, allowing for a unified approach to estimation and inference in the presence of amplitude and time variability. The focus is on single-random-factor models, but the approach can be easily generalized to more complex ANOVA models. The behavior of the estimators is studied by simulation, and an application to the analysis of growth curves of flour beetles is presented. Although the model assumes a smooth latent process behind the observed trajectories, smoothness of the observed data is not required; the method can be applied to irregular time grids, which are common in longitudinal studies.
40 CFR 142.41 - Variance request.
2010-07-01
... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Variance request. 142.41 Section 142...) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances Issued by the Administrator Under Section 1415(a) of the Act § 142.41 Variance request. A supplier of water may request the granting of...
2010-04-01
... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Variances. 654.402 Section 654.402 Employees... EMPLOYMENT SERVICE SYSTEM Housing for Agricultural Workers Purpose and Applicability § 654.402 Variances. (a) An employer may apply for a permanent, structural variance from a specific standard(s) in...
40 CFR 52.2183 - Variance provision.
2010-07-01
... 40 Protection of Environment 4 2010-07-01 2010-07-01 false Variance provision. 52.2183 Section 52...) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS (CONTINUED) South Dakota § 52.2183 Variance provision. The revisions to the variance provisions in Chapter 74:26:01:31.01 of the South Dakota Air...
Natural Exponential Families with Quadratic Variance Functions
Morris, Carl N.
1982-01-01
The normal, Poisson, gamma, binomial, and negative binomial distributions are univariate natural exponential families with quadratic variance functions (the variance is at most a quadratic function of the mean). Only one other such family exists. Much theory is unified for these six natural exponential families by appeal to their quadratic variance property, including infinite divisibility, cumulants, orthogonal polynomials, large deviations, and limits in distribution.
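The quadratic variance property V(mu) can be checked empirically by comparing sample variances against V evaluated at the sample mean. The families, parameters, and sample size below are illustrative choices (normal and negative binomial are omitted for brevity):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500_000

# Quadratic variance functions: Poisson V(mu) = mu; binomial with n = 10
# trials V(mu) = mu - mu^2/10; gamma with shape 5 V(mu) = mu^2/5.
checks = {
    "poisson":  (rng.poisson(4.0, n),      lambda m: m),
    "binomial": (rng.binomial(10, 0.3, n), lambda m: m - m * m / 10.0),
    "gamma":    (rng.gamma(5.0, 2.0, n),   lambda m: m * m / 5.0),
}
ratios = {name: x.var() / v(x.mean()) for name, (x, v) in checks.items()}
```

Each ratio should be close to 1, confirming that the variance is the stated quadratic function of the mean for these members of the six NEF-QVF families.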
Variance optimal stopping for geometric Levy processes
DEFF Research Database (Denmark)
Gad, Kamille Sofie Tågholt; Pedersen, Jesper Lund
2015-01-01
The main result of this paper is the solution to the optimal stopping problem of maximizing the variance of a geometric Lévy process. We call this problem the variance problem. We show that, for some geometric Lévy processes, we achieve higher variances by allowing randomized stopping. Furthermore...
Global variance reduction for Monte Carlo reactor physics calculations
International Nuclear Information System (INIS)
Over the past few decades, hybrid Monte-Carlo-Deterministic (MC-DT) techniques have mostly focused on the development of techniques primarily with shielding applications in mind, i.e. problems featuring a limited number of responses. This paper focuses on the application of a new hybrid MC-DT technique, the SUBSPACE method, to reactor analysis calculations. The SUBSPACE method is designed to overcome the lack of efficiency that hampers the application of MC methods in routine analysis calculations on the assembly level, where typically one needs to execute the flux solver on the order of 10{sup 3}-10{sup 5} times. It places a high premium on attaining high computational efficiency for reactor analysis applications by identifying and capitalizing on the existing correlations between responses of interest. This paper places particular emphasis on using the SUBSPACE method for preparing homogenized few-group cross section sets on the assembly level for subsequent use in full-core diffusion calculations. A BWR assembly model is employed to calculate homogenized few-group cross sections for different burn-up steps. It is found that significant speedup can be achieved with the SUBSPACE method over the state-of-the-art FW-CADIS method. While the presented speed-up alone is not sufficient to render the MC method competitive with the DT method, we believe this work is a major step toward leveraging the accuracy of MC calculations for assembly calculations. (authors)
ADVANTG An Automated Variance Reduction Parameter Generator, Rev. 1
Energy Technology Data Exchange (ETDEWEB)
Mosher, Scott W. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Johnson, Seth R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Bevill, Aaron M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Ibrahim, Ahmad M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Daily, Charles R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Evans, Thomas M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Wagner, John C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Johnson, Jeffrey O. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Grove, Robert E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
2015-08-01
The primary objective of ADVANTG is to reduce both the user effort and the computational time required to obtain accurate and precise tally estimates across a broad range of challenging transport applications. ADVANTG has been applied to simulations of real-world radiation shielding, detection, and neutron activation problems. Examples of shielding applications include material damage and dose rate analyses of the Oak Ridge National Laboratory (ORNL) Spallation Neutron Source and High Flux Isotope Reactor (Risner and Blakeman 2013) and the ITER Tokamak (Ibrahim et al. 2011). ADVANTG has been applied to a suite of radiation detection, safeguards, and special nuclear material movement detection test problems (Shaver et al. 2011). ADVANTG has also been used in the prediction of activation rates within light water reactor facilities (Pantelias and Mosher 2013). In these projects, ADVANTG was demonstrated to significantly increase the tally figure of merit (FOM) relative to an analog MCNP simulation. The ADVANTG-generated parameters were also shown to be more effective than manually generated geometry splitting parameters.
Analysis of Variance Components for Genetic Markers with Unphased Genotypes.
Wang, Tao
2016-01-01
An ANOVA-type general multi-allele (GMA) model was proposed in Wang (2014) for the analysis of variance components for quantitative trait loci or genetic markers with phased or unphased genotypes. In this study, by applying the GMA model, we further examine estimation of the genetic variance components for genetic markers with unphased genotypes based on a random sample from a study population. In the one-locus and two-locus cases, we first derive the least squares estimates (LSE) of the model parameters in fitting the GMA model. Then we construct estimators of the genetic variance components for one marker locus in a Hardy-Weinberg disequilibrium population and for two marker loci in an equilibrium population. Meanwhile, we explore the difference between the classical general linear model (GLM) and GMA-based approaches in association analysis of genetic markers with quantitative traits. We show that the GMA model can retain the same partition of the genetic variance components as the traditional Fisher's ANOVA model, while the GLM cannot. We clarify that the standard F-statistics based on the partial reductions in sums of squares from the GLM for testing the fixed allelic effects can be inadequate for testing the existence of the variance component when allelic interactions are present. We point out that the GMA model can reduce the confounding between the allelic effects and allelic interactions, at least for independent alleles. As a result, the GMA model could be more beneficial than the GLM for detecting allelic interactions.
2010-02-08
... Paperwork Reduction Act of 1995 (44 U.S.C. 3506 et seq.) and Secretary of Labor's Order No. 5-2007 (72 FR... Occupational Safety and Health Administration Information Collection Requirements for the Variance Regulations..., experimental, permanent, and national defense variances. DATES: Comments must be submitted...
Linear Minimum variance estimation fusion
Institute of Scientific and Technical Information of China (English)
ZHU Yunmin; LI Xianrong; ZHAO Juan
2004-01-01
This paper shows that general multisensor unbiased linearly weighted estimation fusion is essentially linear minimum variance (LMV) estimation with a linear equality constraint, and the general estimation fusion formula is developed by extending Gauss-Markov estimation to the random parameter case of distributed estimation fusion in the LMV setting. First, in this setting, the fused estimator is a weighted sum of the local estimates, obtained from a matrix quadratic optimization problem subject to a convex linear equality constraint. Second, we present a unique solution to this optimization problem, which depends only on the covariance matrix C_k. Third, when a priori information (the expectation and covariance) of the estimated quantity is unknown, we present a necessary and sufficient condition for the LMV fusion to become the best unbiased LMV estimation with known prior information. We also discuss the generality and usefulness of the LMV fusion formulas developed. Finally, we provide an off-line recursion of C_k for a class of multisensor linear systems with coupled measurement noises.
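In the scalar case with two independent, unbiased local estimators, the LMV fused estimator reduces to a weighted sum with weights inversely proportional to the local variances and constrained to sum to one. A minimal sketch with illustrative numbers (not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
x_true = 5.0                       # quantity being estimated
var1, var2 = 4.0, 1.0              # variances of the two local estimators

n = 200_000
est1 = x_true + rng.normal(0.0, np.sqrt(var1), n)   # local estimate, sensor 1
est2 = x_true + rng.normal(0.0, np.sqrt(var2), n)   # local estimate, sensor 2

# LMV weights for independent sensors: w_i proportional to 1/var_i,
# with w1 + w2 = 1 (the scalar form of the linear equality constraint).
w1 = (1 / var1) / (1 / var1 + 1 / var2)
w2 = (1 / var2) / (1 / var1 + 1 / var2)
fused = w1 * est1 + w2 * est2

# Theoretical fused variance: 1 / (1/var1 + 1/var2) = 0.8
print(fused.mean(), np.var(fused))
```

The fused variance beats the best single sensor (0.8 < 1.0), which is the point of the fusion rule.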
Generalized analysis of molecular variance.
Directory of Open Access Journals (Sweden)
Caroline M Nievergelt
2007-04-01
Many studies in the fields of genetic epidemiology and applied population genetics are predicated on, or require, an assessment of the genetic background diversity of the individuals chosen for study. A number of strategies have been developed for assessing genetic background diversity. These strategies typically focus on genotype data collected on the individuals in the study, based on a panel of DNA markers. However, many of these strategies are either rooted in cluster analysis techniques, and hence suffer from problems inherent in assigning biological and statistical meaning to the resulting clusters, or have formulations that do not permit easy and intuitive extensions. We describe a very general approach to the problem of assessing genetic background diversity that extends the analysis of molecular variance (AMOVA) strategy introduced by Excoffier and colleagues some time ago. As in the original AMOVA strategy, the proposed approach, termed generalized AMOVA (GAMOVA), requires a genetic similarity matrix constructed from the allelic profiles of individuals under study and/or allele frequency summaries of the populations from which the individuals have been sampled. The proposed strategy can be used either to estimate the fraction of genetic variation explained by grouping factors such as country of origin, race, or ethnicity, or to quantify the strength of the relationship of the observed genetic background variation to quantitative measures collected on the subjects, such as blood pressure levels or anthropometric measures. Since the formulation of our test statistic is rooted in multivariate linear models, sets of variables can be related to genetic background in multiple regression-like contexts. GAMOVA can also be used to complement graphical representations of genetic diversity such as tree diagrams (dendrograms) or heatmaps. We examine features, advantages, and power of the proposed procedure and showcase its flexibility by…
Seasonal variance in P system models for metapopulations
Institute of Scientific and Technical Information of China (English)
Daniela Besozzi; Paolo Cazzaniga; Dario Pescini; Giancarlo Mauri
2007-01-01
Metapopulations are ecological models describing the interactions and the behavior of populations living in fragmented habitats. In this paper, metapopulations are modelled by means of dynamical probabilistic P systems, where additional structural features have been defined (e.g., a weighted graph associated with the membrane structure and the reduction of maximal parallelism). In particular, we investigate the influence of stochastic and periodic resource feeding processes, owing to seasonal variance, on emergent metapopulation dynamics.
2010-04-01
... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Variances. 1010.4 Section 1010.4 Food and Drugs... PERFORMANCE STANDARDS FOR ELECTRONIC PRODUCTS: GENERAL General Provisions § 1010.4 Variances. (a) Criteria for... shall modify the tag, label, or other certification required by § 1010.2 to state: (1) That the...
Analysis of variance for model output
Jansen, M.J.W.
1999-01-01
A scalar model output Y is assumed to depend deterministically on a set of stochastically independent input vectors of different dimensions. The composition of the variance of Y is considered; variance components of particular relevance for uncertainty analysis are identified. Several analysis of va
On testing variance components in ANOVA models
Hartung, Joachim; Knapp, Guido
2000-01-01
In this paper we derive asymptotic χ²-tests for general linear hypotheses on variance components using repeated variance components models. In two examples, the two-way nested classification model and the two-way crossed classification model with interaction, we explicitly investigate the properties of the asymptotic tests for small sample sizes.
2010-01-01
... such an action) DOE shall document the emergency actions in accordance with NEPA procedures at 10 CFR... 10 Energy 4 2010-01-01 2010-01-01 false Variances. 1022.16 Section 1022.16 Energy DEPARTMENT OF... Procedures for Floodplain and Wetland Reviews § 1022.16 Variances. (a) Emergency actions. DOE may...
2010-01-01
... Procedures § 1021.343 Variances. (a) Emergency actions. DOE may take an action without observing all provisions of this part or the CEQ Regulations, in accordance with 40 CFR 1506.11, in emergency situations... 10 Energy 4 2010-01-01 2010-01-01 false Variances. 1021.343 Section 1021.343 Energy DEPARTMENT...
2010-04-01
... 18 Conservation of Power and Water Resources 2 2010-04-01 2010-04-01 false Variances. 1304.408 Section 1304.408 Conservation of Power and Water Resources TENNESSEE VALLEY AUTHORITY APPROVAL OF... § 1304.408 Variances. The Vice President or the designee thereof is authorized, following...
Nonlinear Epigenetic Variance: Review and Simulations
Kan, Kees-Jan; Ploeger, Annemie; Raijmakers, Maartje E. J.; Dolan, Conor V.; van Der Maas, Han L. J.
2010-01-01
We present a review of empirical evidence that suggests that a substantial portion of phenotypic variance is due to nonlinear (epigenetic) processes during ontogenesis. The role of such processes as a source of phenotypic variance in human behaviour genetic studies is not fully appreciated. In addition to our review, we present simulation studies…
Portfolio optimization with mean-variance model
Hoe, Lam Weng; Siew, Lam Weng
2016-06-01
Investors wish to achieve a target rate of return at the minimum level of risk in their investments. Portfolio optimization is an investment strategy that can be used to minimize portfolio risk while achieving the target rate of return. The mean-variance model has been proposed for portfolio optimization; it is an optimization model that aims to minimize the portfolio risk, measured as the portfolio variance. The objective of this study is to construct the optimal portfolio using the mean-variance model. The data of this study consist of weekly returns of 20 component stocks of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI). The results of this study show that the composition of the optimal portfolio differs across the stocks, and that investors can obtain their target return at the minimum level of risk with the constructed optimal mean-variance portfolio.
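The minimum-risk corner of the mean-variance model has a closed form: the global minimum-variance weights are w = Σ⁻¹1 / (1ᵀΣ⁻¹1). A minimal sketch with simulated returns (the study itself used 20 FBMKLCI stocks; the four assets here are hypothetical stand-ins):

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulated weekly returns for 4 hypothetical stocks with different volatilities.
returns = rng.normal(0.002, 0.02, size=(260, 4)) * np.array([1.0, 1.5, 0.8, 1.2])

cov = np.cov(returns, rowvar=False)          # sample covariance matrix
ones = np.ones(cov.shape[0])

# Global minimum-variance weights: w = inv(Sigma) 1 / (1' inv(Sigma) 1)
inv = np.linalg.inv(cov)
w = inv @ ones / (ones @ inv @ ones)

port_var = w @ cov @ w                       # portfolio variance at the optimum
print(w.round(3), port_var)
```

Adding the target-return constraint used in the paper turns this into a two-constraint quadratic program, but the structure of the solution is the same.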
Influence of Family Structure on Variance Decomposition
DEFF Research Database (Denmark)
Edwards, Stefan McKinnon; Sarup, Pernille Merete; Sørensen, Peter
Partitioning genetic variance by sets of randomly sampled genes for complex traits in D. melanogaster and B. taurus has revealed that population structure can affect variance decomposition. In fruit flies, we found that a high likelihood ratio is correlated with a high proportion of explained genetic variance. However, in Holstein cattle, a group of genes that explained close to none of the genetic variance could also have a high likelihood ratio. This is still a good separation of signal and noise, but instead of capturing the genetic signal in the marker set being tested, we are capturing pure noise. Therefore it is necessary to use both criteria, a high likelihood ratio in favor of a more complex genetic model and the proportion of genetic variance explained, to identify biologically important gene groups.
Variance components for body weight in Japanese quails (Coturnix japonica)
Directory of Open Access Journals (Sweden)
RO Resende
2005-03-01
The objective of this study was to estimate the variance components for body weight in Japanese quails by Bayesian procedures. The body weight at hatch (BWH) and at 7 (BW07), 14 (BW14), 21 (BW21) and 28 days of age (BW28) of 3,520 quails was recorded from August 2001 to June 2002. A multiple-trait animal model with additive genetic, maternal environment and residual effects was implemented by Gibbs sampling methodology. A single Gibbs sampling chain with 80,000 rounds was generated by the program MTGSAM (Multiple Trait Gibbs Sampling in Animal Model). Normal and inverted Wishart distributions were used as prior distributions for the random effects and the variance components, respectively. Variance components were estimated based on the 500 samples that were left after elimination of 30,000 rounds in the burn-in period and 100 rounds of each thinning interval. The posterior means of the additive genetic variance components were 0.15, 4.18, 14.62, 27.18 and 32.68; the posterior means of the maternal environment variance components were 0.23, 1.29, 2.76, 4.12 and 5.16; and the posterior means of the residual variance components were 0.084, 6.43, 22.66, 31.21 and 30.85, at hatch, 7, 14, 21 and 28 days old, respectively. The posterior means of heritability were 0.33, 0.35, 0.36, 0.43 and 0.47 at hatch, 7, 14, 21 and 28 days old, respectively. These results indicate that heritability increased with age. On the other hand, after hatch there was a marked reduction in the maternal environment proportion of the phenotypic variance, whose estimates were 0.50, 0.11, 0.07, 0.07 and 0.08 for BWH, BW07, BW14, BW21 and BW28, respectively. The genetic correlation between weights at different ages was high, except for the estimates between BWH and weights at other ages. Changes in body weight of quails can be efficiently achieved by selection.
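The reported heritabilities are consistent with the ratio h² = σ²a / (σ²a + σ²m + σ²e) applied to the posterior means above; small discrepancies are expected, since the paper reports posterior means of the ratios rather than ratios of posterior means. A quick check:

```python
va = [0.15, 4.18, 14.62, 27.18, 32.68]   # additive genetic variance
vm = [0.23, 1.29, 2.76, 4.12, 5.16]      # maternal environment variance
ve = [0.084, 6.43, 22.66, 31.21, 30.85]  # residual variance

for age, a, m, e in zip(["hatch", "7d", "14d", "21d", "28d"], va, vm, ve):
    vp = a + m + e                        # phenotypic variance
    # heritability and maternal-environment share of phenotypic variance
    print(age, round(a / vp, 2), round(m / vp, 2))
```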
Functional analysis of variance for association studies.
Vsevolozhskaya, Olga A; Zaykin, Dmitri V; Greenwood, Mark C; Wei, Changshuai; Lu, Qing
2014-01-01
While progress has been made in identifying common genetic variants associated with human diseases, for most common complex diseases the identified genetic variants account for only a small proportion of heritability. Challenges remain in finding additional unknown genetic variants predisposing to complex diseases. With the advance of next-generation sequencing technologies, sequencing studies have become commonplace in genetic research. The ongoing exome-sequencing and whole-genome-sequencing studies generate a massive amount of sequencing variants and allow researchers to comprehensively investigate their role in human diseases. The discovery of new disease-associated variants can be enhanced by utilizing powerful and computationally efficient statistical methods. In this paper, we propose a functional analysis of variance (FANOVA) method for testing the association of sequence variants in a genomic region with a qualitative trait. FANOVA has a number of advantages: (1) it tests for a joint effect of gene variants, including both common and rare; (2) it fully utilizes linkage disequilibrium and genetic position information; and (3) it allows for either protective or risk-increasing causal variants. Through simulations, we show that FANOVA outperforms two popular methods, SKAT and a previously proposed method based on functional linear models (FLM), especially when the sample size of a study is small and/or the sequence variants have low to moderate effects. We conduct an empirical study by applying the three methods (FANOVA, SKAT and FLM) to sequencing data from the Dallas Heart Study. While SKAT and FLM detected ANGPTL4 and ANGPTL3, respectively, as associated with obesity, FANOVA was able to identify both genes as associated with obesity. PMID:25244256
Portfolio optimization using median-variance approach
Wan Mohd, Wan Rosanisah; Mohamad, Daud; Mohamed, Zulkifli
2013-04-01
Optimization models have been applied in many decision-making problems, particularly in portfolio selection. Since the introduction of Markowitz's theory of portfolio selection, various approaches based on mathematical programming have been introduced, such as mean-variance, mean-absolute deviation, mean-variance-skewness and conditional value-at-risk (CVaR), mainly to maximize return and minimize risk. However, most of these approaches assume that the distribution of the data is normal, which is not generally true. As an alternative, in this paper we employ the median-variance approach to improve portfolio optimization. This approach caters for both normal and non-normal data distributions. With this representation, we analyze and compare the rate of return and risk between mean-variance and median-variance based portfolios consisting of 30 stocks from Bursa Malaysia. The results of this study show that the median-variance approach produces a lower risk for each level of return earned compared with the mean-variance approach.
On Normal Variance-Mean Mixtures
Yu, Yaming
2011-01-01
Normal variance-mean mixtures encompass a large family of useful distributions such as the generalized hyperbolic distribution, which itself includes the Student t, Laplace, hyperbolic, normal inverse Gaussian, and variance gamma distributions as special cases. We study shape properties of normal variance-mean mixtures, in both the univariate and multivariate cases, and determine conditions for unimodality and log-concavity of the density functions. This leads to a short proof of the unimodality of all generalized hyperbolic densities. We also interpret such results in practical terms and discuss discrete analogues.
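A normal variance-mean mixture has the representation X = μ + βV + √V·Z, with Z ~ N(0,1) independent of the positive mixing variable V; a Gamma mixing law gives the variance gamma case mentioned above. A quick simulation with illustrative parameter values checks the moment identities E[X] = μ + βE[V] and Var[X] = E[V] + β²Var[V]:

```python
import numpy as np

rng = np.random.default_rng(2)

def nvm_sample(mu, beta, v, rng):
    """Draw X = mu + beta*V + sqrt(V)*Z given draws V of the mixing variable."""
    return mu + beta * v + np.sqrt(v) * rng.normal(size=v.shape)

n = 100_000
v = rng.gamma(shape=2.0, scale=0.5, size=n)   # Gamma mixing -> variance gamma X
x = nvm_sample(mu=0.0, beta=1.0, v=v, rng=rng)

# Here E[V] = 1.0 and Var[V] = 0.5, so E[X] = 1.0 and Var[X] = 1.0 + 0.5 = 1.5.
print(x.mean(), x.var())
```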
A Variance Based Active Learning Approach for Named Entity Recognition
Hassanzadeh, Hamed; Keyvanpour, Mohammadreza
The cost of manually annotating corpora is one of the significant issues in many text-based tasks such as text mining, semantic annotation and, more generally, information extraction. Active learning is an approach that deals with the reduction of labeling costs. In this paper we propose an effective active learning approach based on minimal variance that reduces manual annotation cost by using a small number of manually labeled examples. In our approach we use a confidence measure based on the model's variance that achieves considerable accuracy in annotating entities. A conditional random field (CRF) is chosen as the underlying learning model due to its promising performance in many sequence labeling tasks. The experiments show that the proposed method needs considerably fewer manually labeled samples to produce a desirable result.
The term structure of variance swap rates and optimal variance swap investments
Egloff, Daniel; Leippold, Markus; Liuren WU
2010-01-01
This paper performs specification analysis on the term structure of variance swap rates on the S&P 500 index and studies the optimal investment decision on the variance swaps and the stock index. The analysis identifies two stochastic variance risk factors, which govern the short and long end of the variance swap term structure variation, respectively. The highly negative estimate for the market price of variance risk makes it optimal for an investor to take short positions in a short-term va...
Bierbaum, S; Öller, H-J; Kersten, A; Klemenčič, A Krivograd
2014-01-01
Ozone (O3) has been used successfully in advanced wastewater treatment in paper mills, in other sectors and in municipalities. To solve the water problems of regions lacking fresh water, wastewater treated by advanced oxidation processes (AOPs) can substitute for fresh water in highly water-consuming industries. The results of this study have shown that, when reusing paper mill wastewater, paper strength properties are not impaired and whiteness is only slightly impaired. Furthermore, organic trace compounds are becoming an issue in the German paper industry. The results of this study have shown that AOPs are capable of improving wastewater quality by reducing organic load, colour and organic trace compounds.
2010-01-01
... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Variances. 307.22 Section 307.22 Business Credit and Assistance ECONOMIC DEVELOPMENT ADMINISTRATION, DEPARTMENT OF COMMERCE ECONOMIC... Federal, State and local law....
Grammatical and lexical variance in English
Quirk, Randolph
2014-01-01
Written by one of Britain's most distinguished linguists, this book is concerned with the phenomenon of variance in English grammar and vocabulary across regional, social, stylistic and temporal space.
DEFF Research Database (Denmark)
Niebuhr, Oliver
2016-01-01
Managing and, ideally, explaining phonetic variation has ever since been a key issue in the speech sciences. In this context, the major contribution of Lindblom's H&H theory was to replace the futile search for invariance by an explainable variance based on the tug-of-war metaphor. Recent empirical evidence on articulatory prosodies and the involvement of reduction in conveying communication functions both suggest the next steps along the line of argument opened up by Lindblom. Specifically, we need to supplement Lindblom's explanatory framework and revise the speaker-listener conflict that lies…
Variational bayesian method of estimating variance components.
Arakawa, Aisaku; Taniguchi, Masaaki; Hayashi, Takeshi; Mikawa, Satoshi
2016-07-01
We developed a Bayesian analysis approach by using a variational inference method, a so-called variational Bayesian method, to determine the posterior distributions of variance components. This variational Bayesian method and an alternative Bayesian method using Gibbs sampling were compared in estimating genetic and residual variance components from both simulated data and publicly available real pig data. In the simulated data set, we observed strong bias toward overestimation of genetic variance for the variational Bayesian method in the case of low heritability and small population size, and less bias was detected with larger population sizes in both methods examined. No differences in the estimates of variance components between the variational Bayesian method and Gibbs sampling were found in the real pig data. However, the posterior distributions of the variance components obtained with the variational Bayesian method had shorter tails than those obtained with Gibbs sampling. Consequently, the posterior standard deviations of the genetic and residual variances from the variational Bayesian method were lower than those from Gibbs sampling. The computing time required was much shorter with the variational Bayesian method than with Gibbs sampling.
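For a single variance component, the Gibbs-sampling side of such a comparison reduces to exact conjugate updates: with normal data and an inverse-gamma prior, the full conditional of the variance is itself inverse-gamma. A toy sketch with hypothetical numbers (not the paper's multi-component animal model):

```python
import numpy as np

rng = np.random.default_rng(3)
y = rng.normal(0.0, np.sqrt(2.0), size=500)   # data with true variance 2.0

a0, b0 = 0.01, 0.01                            # vague inverse-gamma prior
a_n = a0 + y.size / 2                          # posterior shape
b_n = b0 + 0.5 * np.sum(y**2)                  # posterior rate

# Gibbs-style draws: here the full conditional is the exact posterior,
# sampled as the reciprocal of a Gamma variate.
draws = 1.0 / rng.gamma(shape=a_n, scale=1.0 / b_n, size=20_000)
print(draws.mean(), draws.std())               # posterior mean near 2.0
```

A variational treatment would instead fit a parametric approximation to this posterior, which is where the shorter tails reported above come from.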
International Nuclear Information System (INIS)
The spatial variation of radon concentration within the preparatory-year building located in Riyadh was studied. Nuclear track detectors (CR-39) were used to measure radon concentration over two consecutive six-month periods in more than 40 rooms of the surveyed building. The coefficient of variation (CV) was calculated as a measure of the relative variation of radon concentration between floors and between rooms on the same floor. Floor mean ratios, with the ground floor as a reference level, were also calculated in order to study the correlation between radon concentration and floor level when advanced Italian granite building material is used. The results of this study were compared with those for the usual Indian granite building material, and it was found that the building is a healthy workplace, which may be due to the use of advanced building materials.
Singh, R.; Mahajan, V.
2014-07-01
In the present work, surface hardness investigations have been made on acrylonitrile butadiene styrene (ABS) pattern based investment castings after advancements in shell moulding for the replication of biomedical implants. For the present study, a hip joint, made of ABS material, was fabricated as a master pattern by fused deposition modelling (FDM). After preparation of the master pattern, a mould was prepared by deposition of primary (1°), secondary (2°) and tertiary (3°) coatings with the addition of nylon fibre (1-2 cm in length of 1.5D). This study outlines the surface hardness mechanism for a cast component prepared from an ABS master pattern after advancement in shell moulding. The results of the study highlight that, during shell production, fibre-modified shells have a much reduced drain time. Further, the results are supported by cooling rate and microstructure analysis of the casting.
[ADVANCE-ON Trial; How to Achieve Maximum Reduction of Mortality in Patients With Type 2 Diabetes].
Kanorskiĭ, S G
2015-01-01
Of 10,261 patients with type 2 diabetes who survived to the end of the randomized ADVANCE trial, 83% were included in the ADVANCE-ON project for observation for 6 years. The difference in the level of blood pressure achieved during 4.5 years of within-trial treatment with a fixed perindopril/indapamide combination quickly vanished, but the significant decrease in total and cardiovascular mortality in the group of patients treated with this combination for 4.5 years was sustained during the 6 years of post-trial follow-up. The results can be related to a gradually weakening protective effect of the perindopril/indapamide combination on the cardiovascular system, and they indicate the expedience of long-term use of this antihypertensive therapy for maximal lowering of mortality in patients with diabetes. PMID:26164995
Kalman filtering techniques for reducing variance of digital speckle displacement measurement noise
Institute of Scientific and Technical Information of China (English)
Donghui Li; Li Guo
2006-01-01
Target dynamics are assumed to be known in measuring digital speckle displacement. Use is made of a simple measurement equation, where the measurement noise represents the effect of disturbances introduced in the measurement process. From these assumptions, a Kalman filter can be designed to reduce the variance of the measurement noise. An optical and analysis system was set up, with which experiments on object motion with constant displacement and with constant velocity were carried out to verify the validity of Kalman filtering techniques for reducing the measurement noise variance.
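A minimal sketch of the idea with a constant-velocity target and noisy displacement measurements; all parameter values are illustrative, not those of the experimental system:

```python
import numpy as np

rng = np.random.default_rng(4)
dt, n = 1.0, 200
F = np.array([[1.0, dt], [0.0, 1.0]])      # known constant-velocity dynamics
H = np.array([[1.0, 0.0]])                 # we measure displacement only
R = np.array([[4.0]])                      # measurement-noise variance
Q = 1e-6 * np.eye(2)                       # small process noise

x_true = np.array([0.0, 0.5])              # true position and velocity
x_est, P = np.zeros(2), np.eye(2) * 10.0   # initial estimate and covariance

errs_raw, errs_kf = [], []
for _ in range(n):
    x_true = F @ x_true
    z = H @ x_true + rng.normal(0.0, np.sqrt(R[0, 0]), 1)
    # predict
    x_est = F @ x_est
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_est = x_est + (K @ (z - H @ x_est)).ravel()
    P = (np.eye(2) - K @ H) @ P
    errs_raw.append(float(z[0] - x_true[0]))
    errs_kf.append(float(x_est[0] - x_true[0]))

# Filtered position-error variance is far below the raw measurement variance.
print(np.var(errs_raw), np.var(errs_kf))
```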
Modality-Driven Classification and Visualization of Ensemble Variance
Energy Technology Data Exchange (ETDEWEB)
Bensema, Kevin; Gosink, Luke J.; Obermaier, Harald; Joy, Kenneth
2016-10-01
Advances in computational power now enable domain scientists to address conceptual and parametric uncertainty by running simulations multiple times in order to sufficiently sample the uncertain input space. While this approach helps address conceptual and parametric uncertainties, the ensemble datasets produced by this technique present a special challenge to visualization researchers as the ensemble dataset records a distribution of possible values for each location in the domain. Contemporary visualization approaches that rely solely on summary statistics (e.g., mean and variance) cannot convey the detailed information encoded in ensemble distributions that are paramount to ensemble analysis; summary statistics provide no information about modality classification and modality persistence. To address this problem, we propose a novel technique that classifies high-variance locations based on the modality of the distribution of ensemble predictions. Additionally, we develop a set of confidence metrics to inform the end-user of the quality of fit between the distribution at a given location and its assigned class. We apply a similar method to time-varying ensembles to illustrate the relationship between peak variance and bimodal or multimodal behavior. These classification schemes enable a deeper understanding of the behavior of the ensemble members by distinguishing between distributions that can be described by a single tendency and distributions which reflect divergent trends in the ensemble.
Estimating Predictive Variance for Statistical Gas Distribution Modelling
Lilienthal, Achim J.; Asadi, Sahar; Reggente, Matteo
2009-05-01
Recent publications in statistical gas distribution modelling have proposed algorithms that model the mean and variance of a distribution. This paper argues that estimating the predictive concentration variance is not merely a gradual improvement but rather a significant step in advancing the field. This is, first, because such models much better fit the particular structure of gas distributions, which exhibit strong fluctuations with considerable spatial variations as a result of the intermittent character of gas dispersal. Second, because estimating the predictive variance allows one to evaluate the model quality in terms of the data likelihood. This offers a solution to the problem of ground-truth evaluation, which has always been a critical issue for gas distribution modelling. It also enables solid comparisons of different modelling approaches, and provides the means to learn meta-parameters of the model, to determine when the model should be updated or re-initialised, or to suggest new measurement locations based on the current model. We also point out directions of related ongoing or potential future research work.
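The core of a mean-plus-variance gas distribution model can be sketched with a kernel-weighted estimate in one dimension. This is a simplified stand-in with hypothetical data and bandwidth, not the authors' exact formulation:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical 1-D gas concentration readings at known positions:
# a Gaussian plume centered at x = 4 plus intermittent positive fluctuations.
pos = rng.uniform(0, 10, 300)
conc = np.exp(-(pos - 4.0) ** 2) + rng.gamma(2.0, 0.05, 300)

def kernel_mean_var(x, pos, conc, h=0.5):
    """Kernel-weighted predictive mean and variance at query points x."""
    w = np.exp(-0.5 * ((x[:, None] - pos[None, :]) / h) ** 2)
    wsum = w.sum(axis=1)
    mean = (w * conc).sum(axis=1) / wsum
    var = (w * (conc - mean[:, None]) ** 2).sum(axis=1) / wsum
    return mean, var

x = np.linspace(0, 10, 5)
mean, var = kernel_mean_var(x, pos, conc)
print(mean.round(3), var.round(4))
```

With a predictive variance available, held-out readings can be scored by their likelihood under the local mean/variance, which is the evaluation route the abstract advocates.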
Reduced Variance for Material Sources in Implicit Monte Carlo
Energy Technology Data Exchange (ETDEWEB)
Urbatsch, Todd J. [Los Alamos National Laboratory
2012-06-25
Implicit Monte Carlo (IMC), a time-implicit method due to Fleck and Cummings, is used for simulating supernovae and inertial confinement fusion (ICF) systems where x-rays tightly and nonlinearly interact with hot material. The IMC algorithm represents absorption and emission within a timestep as an effective scatter. Similarly, the IMC time-implicitness splits off a portion of a material source directly into the radiation field. We have found that some of our variance reduction and particle management schemes will allow large variances in the presence of small, but important, material sources, as in the case of ICF hot electron preheat sources. We propose a modification of our implementation of the IMC method in the Jayenne IMC Project. Instead of battling the sampling issues associated with a small source, we bypass the IMC implicitness altogether and simply deterministically update the material state with the material source if the temperature of the spatial cell is below a user-specified cutoff. We describe the modified method and present results on a test problem that show the elimination of variance for small sources.
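The proposed cutoff logic can be sketched as follows; the function names and the constant heat capacity are hypothetical stand-ins, not the Jayenne API:

```python
def heat_capacity(temp):
    # Hypothetical constant heat capacity, just for the sketch.
    return 1.0

def update_cell(temp, source, dt, cutoff, sample_imc):
    """Sketch of the modified IMC material-source treatment."""
    if temp < cutoff:
        # Deterministic update: deposit the small material source directly,
        # bypassing the high-variance Monte Carlo sampling entirely.
        return temp + source * dt / heat_capacity(temp)
    sample_imc(source, dt)   # usual IMC effective-scatter path
    return temp

sampled = []
t_cold = update_cell(0.1, source=0.01, dt=1.0, cutoff=0.5,
                     sample_imc=lambda s, dt: sampled.append(s))
t_hot = update_cell(2.0, source=0.01, dt=1.0, cutoff=0.5,
                    sample_imc=lambda s, dt: sampled.append(s))
print(t_cold, len(sampled))  # cold cell updated deterministically; hot cell sampled
```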
42 CFR 456.522 - Content of request for variance.
2010-10-01
... 42 Public Health 4 2010-10-01 2010-10-01 false Content of request for variance. 456.522 Section..., and Variances for Hospitals and Mental Hospitals Ur Plan: Remote Facility Variances from Time Requirements § 456.522 Content of request for variance. The agency's request for a variance must include—...
Inhomogeneity-induced variance of cosmological parameters
Wiegand, Alexander
2011-01-01
Modern cosmology relies on the assumption of large-scale isotropy and homogeneity of the Universe. However, locally the Universe is inhomogeneous and anisotropic. So, how can local measurements (at the 100 Mpc scale) be used to determine global cosmological parameters (defined at the 10 Gpc scale)? We use Buchert's averaging formalism and determine a set of locally averaged cosmological parameters in the context of the flat Lambda cold dark matter model. We calculate their ensemble means (i.e. their global values) and variances (i.e. their cosmic variances). We apply our results to typical survey geometries and focus on the study of the effects of local fluctuations of the curvature parameter. By this means we show that, in the linear regime, cosmological backreaction and averaging can be reformulated as the issue of cosmic variance. The cosmic variance is found to be largest for the curvature parameter, and we discuss some of its consequences. We further propose to use the observed variance of cosmological parameters t...
Guimarães, José Roberto; Franco, Regina Maura Bueno; Guadagnini, Regiane Aparecida; Santos, Luciana Urbano dos
2014-01-01
This study evaluated the effect of peroxidation assisted by ultraviolet radiation (H2O2/UV), an advanced oxidation process (AOP), on Giardia duodenalis cysts. The cysts were inoculated in synthetic and surface water using a concentration of 12 g H2O2 L−1 and a UV dose (λ = 254 nm) of 5,480 mJ cm−2. The aqueous solutions were concentrated using membrane filtration, and the organisms were observed using a direct immunofluorescence assay (IFA). The AOP was effective in reducing the number ...
Estimating quadratic variation using realized variance
DEFF Research Database (Denmark)
Barndorff-Nielsen, Ole Eiler; Shephard, N.
2002-01-01
This paper looks at some recent work on estimating quadratic variation using realized variance (RV) - that is, sums of M squared returns. This econometrics has been motivated by the advent of the common availability of high-frequency financial return data. When the underlying process is a rather general SV model - which is a special case of the semimartingale model - QV is integrated variance, and we can derive the asymptotic distribution of the RV and its rate of convergence. These results do not require us to specify a model for either the drift or volatility functions, although we have to impose some weak regularity assumptions. We illustrate the use of the limit theory on some exchange rate data and some stock data. We show that even with large values of M the RV is sometimes a quite noisy estimator of integrated variance. Copyright © 2002 John Wiley & Sons, Ltd.
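The abstract's last point, that RV can be noisy even for large M, is easy to see in the constant-volatility special case: returns are i.i.d. N(0, σ²/M), RV is unbiased for the integrated variance σ², and its standard error shrinks only like σ²√(2/M). A small illustration with made-up numbers:

```python
import numpy as np

rng = np.random.default_rng(6)
sigma2 = 0.04                     # integrated variance over the interval

for M in (10, 100, 10_000):
    r = rng.normal(0.0, np.sqrt(sigma2 / M), size=M)  # M intraday returns
    rv = np.sum(r**2)             # realized variance: sum of squared returns
    print(M, round(rv, 5))        # scatters around 0.04, tighter as M grows
```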
Expected Stock Returns and Variance Risk Premia
DEFF Research Database (Denmark)
Bollerslev, Tim; Zhou, Hao
We find that the difference between implied and realized variation, or the variance risk premium, is able to explain more than fifteen percent of the ex-post time series variation in quarterly excess returns on the market portfolio over the 1990 to 2005 sample period, with high (low) premia...... risk premium with the P/E ratio results in an R2 for the quarterly returns of more than twenty-five percent. The results depend crucially on the use of "model-free", as opposed to standard Black-Scholes, implied variances, and realized variances constructed from high-frequency intraday, as opposed...... to daily, data. Our findings suggest that temporal variation in both risk-aversion and volatility-risk play an important role in determining stock market returns....
Integrating Variances into an Analytical Database
Sanchez, Carlos
2010-01-01
For this project, I enrolled in numerous SATERN courses that taught the basics of database programming. These include: Basic Access 2007 Forms, Introduction to Database Systems, Overview of Database Design, and others. My main job was to create an analytical database that can handle many stored forms and make it easy to interpret and organize them. Additionally, I helped improve an existing database and populate it with information. These databases were designed to be used with data from Safety Variances and DCR forms. The research consisted of analyzing the database and comparing the data to find out which entries were repeated the most. If an entry happened to be repeated several times in the database, that would mean that the rule or requirement targeted by that variance has been bypassed many times already, and so the requirement may not really be needed but rather should be changed to allow the variance's conditions permanently. This project was not restricted to the design and development of the database system; it also involved exporting the data from the database to a different format (e.g. Excel or Word) so it could be analyzed in a simpler fashion. Thanks to the change in format, the data was organized in a spreadsheet that made it possible to sort the data by categories or types and helped speed up searches. Once my work with the database was done, the records of variances could be arranged so that they were displayed in numerical order, or one could search for a specific document targeted by the variances and restrict the search to only include variances that modified a specific requirement. A great part of my learning came from SATERN, NASA's resource for education. Thanks to the SATERN online courses I took over the summer, I was able to learn many new things about computers and databases and also go more in depth into topics I already knew about.
Inhomogeneity-induced variance of cosmological parameters
Wiegand, A.; Schwarz, D. J.
2012-02-01
Context. Modern cosmology relies on the assumption of large-scale isotropy and homogeneity of the Universe. However, locally the Universe is inhomogeneous and anisotropic. This raises the question of how local measurements (at the ~10² Mpc scale) can be used to determine the global cosmological parameters (defined at the ~10⁴ Mpc scale). Aims: We connect the questions of cosmological backreaction, cosmic averaging and the estimation of cosmological parameters and show how they relate to the problem of cosmic variance. Methods: We used Buchert's averaging formalism and determined a set of locally averaged cosmological parameters in the context of the flat Λ cold dark matter model. We calculated their ensemble means (i.e. their global value) and variances (i.e. their cosmic variance). We applied our results to typical survey geometries and focused on the study of the effects of local fluctuations of the curvature parameter. Results: We show that in the context of standard cosmology at large scales (larger than the homogeneity scale and in the linear regime), the question of cosmological backreaction and averaging can be reformulated as the question of cosmic variance. The cosmic variance is found to be highest in the curvature parameter. We propose to use the observed variance of cosmological parameters to measure the growth factor. Conclusions: Cosmological backreaction and averaging are real effects that have been measured already for a long time, e.g. by the fluctuations of the matter density contrast averaged over spheres of a certain radius. Backreaction and averaging effects from scales in the linear regime, as considered in this work, are shown to be important for the precise measurement of cosmological parameters.
Genomic prediction of breeding values using previously estimated SNP variances
Calus, M.P.L.; Schrooten, C.; Veerkamp, R.F.
2014-01-01
Background: Genomic prediction requires estimation of variances of effects of single nucleotide polymorphisms (SNPs), which is computationally demanding, and uses these variances for prediction. We have developed models with separate estimation of SNP variances, which can be applied infrequently, and
DEFF Research Database (Denmark)
Pitkänen, Timo; Mäntysaari, Esa A; Nielsen, Ulrik Sander;
2013-01-01
variance correction is developed for the same observations. As automated milking systems are becoming more popular the current evaluation model needs to be enhanced to account for the different measurement error variances of observations from automated milking systems. In this simulation study different...
A Simple Algorithm for Approximating Confidence on the Modified Allan Variance and the Time Variance
Weiss, Marc A.; Greenhall, Charles A.
1996-01-01
An approximating algorithm for computing equivalent degrees of freedom of the Modified Allan Variance and its square root, the Modified Allan Deviation (MVAR and MDEV), and the Time Variance and Time Deviation (TVAR and TDEV) is presented, along with an algorithm for approximating the inverse chi-square distribution.
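For orientation, the plain (overlapping) Allan variance that MVAR and TVAR refine can be computed from phase data with the standard second-difference formula; the sketch below is a generic textbook implementation, not the approximating confidence algorithm of the paper:

```python
import numpy as np

def allan_variance(phase, m, tau0=1.0):
    """Overlapping Allan variance from phase data x (seconds) at averaging
    time tau = m * tau0. This is the standard textbook formula; the Modified
    Allan variance additionally averages the phase over m adjacent samples."""
    x = np.asarray(phase, dtype=float)
    tau = m * tau0
    d2 = x[2 * m:] - 2.0 * x[m:-m] + x[:-2 * m]   # overlapping second differences
    return np.mean(d2 ** 2) / (2.0 * tau ** 2)

# White frequency noise: phase is a random walk, so AVAR should fall as ~1/tau.
rng = np.random.default_rng(1)
x = np.cumsum(rng.standard_normal(100_000)) * 1e-9
av1 = allan_variance(x, m=1)
av10 = allan_variance(x, m=10)
print(av1, av10)
```

For this white-FM test signal the ratio av1/av10 should be close to 10, consistent with the 1/τ slope expected for that noise type.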
Energy Technology Data Exchange (ETDEWEB)
O'Connor, Patrick [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Rugani, Kelsey [Kearns & West, Inc., San Francisco, CA (United States); West, Anna [Kearns & West, Inc., San Francisco, CA (United States)
2016-03-01
On behalf of the U.S. Department of Energy (DOE) Wind and Water Power Technology Office (WWPTO), Oak Ridge National Laboratory (ORNL) hosted a day-and-a-half-long workshop on November 5 and 6, 2015 in the Washington, D.C. metro area to discuss cost reduction opportunities in the development of hydropower projects. The workshop had a further targeted focus on the costs of small, low-head facilities at both non-powered dams (NPDs) and along undeveloped stream reaches (also known as New Stream-Reach Development or “NSD”). Workshop participants included a cross-section of seasoned experts, including project owners and developers, engineering and construction experts, conventional and next-generation equipment manufacturers, and others to identify the most promising ways to reduce costs and achieve improvements for hydropower projects.
The Variance of Language in Different Contexts
Institute of Scientific and Technical Information of China (English)
申一宁
2012-01-01
Language can be quite different (here referring to the meaning) in different contexts. There are three categories of context: the culture, the situation and the co-text. In this article, we analyse the variance of language in each of these three aspects. This article is written for the purpose of helping people better understand the meaning of a language in a specific context.
Formative Use of Intuitive Analysis of Variance
Trumpower, David L.
2013-01-01
Students' informal inferential reasoning (IIR) is often inconsistent with the normative logic underlying formal statistical methods such as Analysis of Variance (ANOVA), even after instruction. In two experiments reported here, students' IIR was assessed using an intuitive ANOVA task at the beginning and end of a statistics course. In…
Strengthened Chernoff-type variance bounds
Afendras, G.; Papadatos, N.
2014-01-01
Let $X$ be an absolutely continuous random variable from the integrated Pearson family and assume that $X$ has finite moments of any order. Using some properties of the associated orthonormal polynomial system, we provide a class of strengthened Chernoff-type variance bounds.
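For context, the classical (Gaussian) special case of the Chernoff-type variance bound being strengthened reads as follows (standard textbook statement, not the paper's integrated-Pearson version):

```latex
% Chernoff's variance bound: for $X \sim N(0,1)$ and any absolutely
% continuous $g$ with $\mathbb{E}[g'(X)^2] < \infty$,
\operatorname{Var} g(X) \;\le\; \mathbb{E}\!\left[ g'(X)^{2} \right],
% with equality if and only if $g$ is affine.
```

The paper's results replace the normal law by members of the integrated Pearson family and sharpen the right-hand side using the associated orthonormal polynomials.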
Decomposition of variance for spatial Cox processes
DEFF Research Database (Denmark)
Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus
Spatial Cox point processes provide a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models with additive...
LOCAL MEDIAN ESTIMATION OF VARIANCE FUNCTION
Institute of Scientific and Technical Information of China (English)
杨瑛
2004-01-01
This paper considers local median estimation in fixed design regression problems. The proposed method is employed to estimate the median function and the variance function of a heteroscedastic regression model. Strong convergence rates of the proposed estimators are obtained. Simulation results are given to show the performance of the proposed methods.
ROBUST ESTIMATION OF VARIANCE COMPONENTS MODEL
Institute of Scientific and Technical Information of China (English)
无
1999-01-01
Classical least squares estimation consists of minimizing the sum of the squared residuals of the observations. Many authors have produced more robust versions of this estimation by replacing the square by something else, such as the absolute value. These approaches have been generalized, and their robust estimations and influence functions of variance components have been presented. The results may have wide practical and theoretical value.
Linear transformations of variance/covariance matrices
Parois, P.J.A.; Lutz, M.
2011-01-01
Many applications in crystallography require the use of linear transformations on parameters and their standard uncertainties. While the transformation of the parameters is textbook knowledge, the transformation of the standard uncertainties is more complicated and needs the full variance/covariance
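The propagation rule in question is standard: for a linear transformation y = Ax, the variance/covariance matrix transforms as Σ_y = A Σ_x Aᵀ. A minimal sketch with illustrative (made-up) values:

```python
import numpy as np

# Covariance of refined parameters x (illustrative values, not from the paper).
cov_x = np.array([[0.04, 0.01],
                  [0.01, 0.09]])

# Linear transformation y = A @ x, e.g. a change of basis of cell parameters.
A = np.array([[1.0, 1.0],
              [0.0, 2.0]])

# Standard propagation of uncertainty for a linear map: cov_y = A cov_x A^T.
cov_y = A @ cov_x @ A.T

# Standard uncertainties of the transformed parameters: sqrt of the diagonal.
sigma_y = np.sqrt(np.diag(cov_y))
print(cov_y)
print(sigma_y)
```

Note that transforming only the standard uncertainties (the diagonal) is wrong whenever the off-diagonal covariances are non-zero, which is why the full variance/covariance matrix is needed.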
Decomposition of variance for spatial Cox processes
DEFF Research Database (Denmark)
Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus
2013-01-01
Spatial Cox point processes provide a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models with addit...
Lorenz Dominance and the Variance of Logarithms.
Ok, Efe A.; Foster, James
1997-01-01
The variance of logarithms is a widely used inequality measure which is well known to disagree with the Lorenz criterion. Up to now, the extent and likelihood of this inconsistency were thought to be vanishingly small. We find that this view is mistaken: the extent of the disagreement can be extremely large; the likelihood is far from negligible.
Variance Component Testing in Multilevel Models
Berkhof, J.; Snijders, T.A.B.
2001-01-01
Available variance component tests are reviewed and three new score tests are presented. In the first score test, the asymptotic normal distribution of the test statistic is used as a reference distribution. In the other two score tests, a Satterthwaite approximation is used for the null distribution
Bias and variance in continuous EDA
Teytaud, Fabien; Teytaud, Olivier
2009-01-01
Estimation of Distribution Algorithms are based on statistical estimates. We show that when combining classical tools from statistics, namely bias/variance decomposition, reweighting and quasi-randomization, we can strongly improve the convergence rate. All modifications are easy, compliant with most algorithms, and experimentally very efficient, in particular in the parallel case (large offsprings).
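The bias/variance decomposition invoked above is the standard one for squared error (stated here in the usual supervised-learning notation, which is an assumption of this sketch, not the paper's):

```latex
% Expected squared error of an estimator \hat f of f at a point x,
% with irreducible noise variance \sigma^2:
\mathbb{E}\big[(\hat f(x) - y)^2\big]
  \;=\; \underbrace{\big(\mathbb{E}[\hat f(x)] - f(x)\big)^{2}}_{\text{bias}^2}
  \;+\; \underbrace{\operatorname{Var}\big(\hat f(x)\big)}_{\text{variance}}
  \;+\; \sigma^{2}.
```

Reweighting and quasi-randomization, as used in the abstract, act on the variance term without changing the bias term.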
Directory of Open Access Journals (Sweden)
Derk-Jan Dijk
Full Text Available BACKGROUND: The phase and amplitude of rhythms in physiology and behavior are generated by circadian oscillators and entrained to the 24-h day by exposure to the light-dark cycle and feedback from the sleep-wake cycle. The extent to which the phase and amplitude of multiple rhythms are similarly affected during altered timing of light exposure and the sleep-wake cycle has not been fully characterized. METHODOLOGY/PRINCIPAL FINDINGS: We assessed the phase and amplitude of the rhythms of melatonin, core body temperature, cortisol, alertness, performance and sleep after a perturbation of entrainment by a gradual advance of the sleep-wake schedule (10 h in 5 days) and associated light-dark cycle in 14 healthy men. The light-dark cycle consisted either of moderate intensity 'room' light (∼90-150 lux) or moderate light supplemented with bright light (∼10,000 lux) for 5 to 8 hours following sleep. After the advance of the sleep-wake schedule in moderate light, no significant advance of the melatonin rhythm was observed, whereas after bright light supplementation the phase advance was 8.1 h (SEM 0.7 h). Individual differences in phase shifts correlated across variables. The amplitude of the melatonin rhythm assessed under constant conditions was reduced after moderate light by 54% (range 17-94%) and after bright light by 52% (range 12-84%), as compared to the amplitude at baseline in the presence of a sleep-wake cycle. Individual differences in amplitude reduction of the melatonin rhythm correlated with the amplitude of body temperature, cortisol and alertness. CONCLUSIONS/SIGNIFICANCE: Alterations in the timing of the sleep-wake cycle and associated bright or moderate light exposure can lead to changes in phase and reduction of circadian amplitude which are consistent across multiple variables but differ between individuals. These data have implications for our understanding of circadian organization and the negative health outcomes associated with shift
A study on effect of point-of-use filters on defect reduction for advanced 193nm processes
Vitorino, Nelson; Wolfer, Elizabeth; Cao, Yi; Lee, DongKwan; Wu, Aiwen
2009-03-01
Bottom Anti-Reflective Coatings (BARCs) have been widely used in the lithography process for decades. BARCs play important roles in controlling reflections and therefore improving swing ratios, CD variations, reflective notching, and standing waves. The implementation of BARC processes in 193nm dry and immersion lithography has been accompanied by defect reduction challenges on fine patterns. Point-of-Use filters are well known to be among the most critical components on a track tool for ensuring low wafer defects by providing particle-free coatings on wafers. The filters must have very good particle retention to remove defect-causing particulates and gels while not altering the delicate chemical formulation of photochemical materials. This paper describes a comparative study of the efficiency and performance of various Point-of-Use filters in reducing defects observed in BARC materials. Multiple filter types with a variety of pore sizes, membrane materials, and filter designs were installed on an Entegris Intelligent(R) Mini dispense pump which is integrated in the coating module of a clean track. An AZ(R) 193nm organic BARC material was spin-coated on wafers through various filter media. Lithographic performance of the filtered BARCs was examined and wafer defect analysis was performed. Through this study, the effect of filter properties on BARC process-related defects can be learned, and the optimum filter media and design can be selected for a BARC material to yield the lowest defects on a coated wafer.
Zhang, Wei; Xu, Zhenyu; Lours, Michel; Boudot, Rodolphe; Kersalé, Yann; Luiten, Andre N; Le Coq, Yann; Santarelli, Giorgio
2011-05-01
We report what we believe to be the lowest phase noise optical-to-microwave frequency division using fiber-based femtosecond optical frequency combs: a residual phase noise of -120 dBc/Hz at 1 Hz offset from an 11.55 GHz carrier frequency. Furthermore, we report a detailed investigation into the fundamental noise sources which affect the division process itself. Two frequency combs with quasi-identical configurations are referenced to a common ultrastable cavity laser source. To identify each of the limiting effects, we implement an ultra-low noise carrier-suppression measurement system, which avoids the detection and amplification noise of more conventional techniques. This technique suppresses these unwanted sources of noise to very low levels. In the Fourier frequency range of ∼200 Hz to 100 kHz, a feed-forward technique based on a voltage-controlled phase shifter delivers a further noise reduction of 10 dB. For lower Fourier frequencies, optical power stabilization is implemented to reduce the relative intensity noise which causes unwanted phase noise through power-to-phase conversion in the detector. We implement and compare two possible control schemes based on an acousto-optical modulator and comb pump current. We also present wideband measurements of the relative intensity noise of the fiber comb. PMID:21622045
Energy Technology Data Exchange (ETDEWEB)
NONE
1997-12-31
The team of Arthur D. Little, Tufts University and Engelhard Corporation are conducting Phase 1 of a four-and-a-half-year, two-phase effort to develop and scale up an advanced byproduct recovery technology that is a direct, single-stage, catalytic process for converting sulfur dioxide to elemental sulfur. This catalytic process reduces SO₂ over a fluorite-type oxide (such as ceria and zirconia). The catalytic activity can be significantly promoted by active transition metals, such as copper. More than 95% elemental sulfur yield, corresponding to almost complete sulfur dioxide conversion, was obtained over a Cu-Ce-O oxide catalyst as part of an on-going DOE-sponsored University Coal Research Program. This type of mixed metal oxide catalyst has stable activity, high selectivity for sulfur production, and is resistant to water and carbon dioxide poisoning. Tests with CO and CH₄ reducing gases indicate that the catalyst has the potential for flexibility with regard to the composition of the reducing gas, making it attractive for utility use. The performance of the catalyst is consistently good over a range of SO₂ inlet concentrations (0.1 to 10%), indicating its flexibility in treating SO₂ tail gases as well as high-concentration streams. The principal objective of the Phase 1 program is to identify and evaluate the performance of a catalyst which is robust and flexible with regard to the choice of reducing gas. In order to achieve this goal, the authors have planned a structured program including: market/process/cost evaluation; lab-scale catalyst preparation/optimization studies; lab-scale bulk/supported catalyst kinetic studies; bench-scale catalyst/process studies; and utility review. Progress is reported from all three organizations.
10 CFR 851.32 - Action on variance requests.
2010-01-01
... 10 Energy 4 2010-01-01 2010-01-01 false Action on variance requests. 851.32 Section 851.32 Energy DEPARTMENT OF ENERGY WORKER SAFETY AND HEALTH PROGRAM Variances § 851.32 Action on variance requests. (a... approval of a variance application, the Chief Health, Safety and Security Officer must forward to the...
2010-07-01
... 41 Public Contracts and Property Management 1 2010-07-01 2010-07-01 true Variances. 50-204.1a... and Application § 50-204.1a Variances. (a) Variances from standards in this part may be granted in the same circumstances in which variances may be granted under sections 6(b)(6)(A) or 6(d) of the...
A Critical Note on the Forecast Error Variance Decomposition
Seymen, Atilim
2008-01-01
The paper questions the reasonableness of using forecast error variance decompositions for assessing the role of different structural shocks in business cycle fluctuations. It is shown that the forecast error variance decomposition is related to a dubious definition of the business cycle. A historical variance decomposition approach is proposed to overcome the problems related to the forecast error variance decomposition.
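For reference, the object under critique is the standard forecast error variance decomposition; in the usual VAR notation (assumed here, not taken from the paper), the share of the h-step forecast error variance of variable i attributed to structural shock j is:

```latex
% With the structural moving-average representation
% $y_t = \sum_{\ell \ge 0} \Psi_\ell \varepsilon_{t-\ell}$ and orthonormal
% structural shocks $\varepsilon_t$,
\omega_{ij}(h) =
  \frac{\sum_{\ell=0}^{h-1} \left( e_i' \Psi_\ell e_j \right)^2}
       {\sum_{k} \sum_{\ell=0}^{h-1} \left( e_i' \Psi_\ell e_k \right)^2 } ,
% where $e_i$ is the $i$-th unit vector.
```

The historical variance decomposition mentioned in the abstract instead attributes the realized path of each variable over the sample to the realized shocks, rather than to hypothetical forecast errors.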
42 CFR 456.525 - Request for renewal of variance.
2010-10-01
... 42 Public Health 4 2010-10-01 2010-10-01 false Request for renewal of variance. 456.525 Section..., and Variances for Hospitals and Mental Hospitals UR Plan: Remote Facility Variances from Time Requirements § 456.525 Request for renewal of variance. (a) The agency must submit a request for renewal of...
42 CFR 456.521 - Conditions for granting variance requests.
2010-10-01
... 42 Public Health 4 2010-10-01 2010-10-01 false Conditions for granting variance requests. 456.521..., and Variances for Hospitals and Mental Hospitals UR Plan: Remote Facility Variances from Time Requirements § 456.521 Conditions for granting variance requests. (a) Except as described under paragraph...
Realized Variance and Market Microstructure Noise
DEFF Research Database (Denmark)
Hansen, Peter R.; Lunde, Asger
2006-01-01
We study market microstructure noise in high-frequency data and analyze its implications for the realized variance (RV) under a general specification for the noise. We show that kernel-based estimators can unearth important characteristics of market microstructure noise and that a simple kernel......-based estimator dominates the RV for the estimation of integrated variance (IV). An empirical analysis of the Dow Jones Industrial Average stocks reveals that market microstructure noise is time-dependent and correlated with increments in the efficient price. This has important implications for volatility...... estimation based on high-frequency data. Finally, we apply cointegration techniques to decompose transaction prices and bid-ask quotes into an estimate of the efficient price and noise. This framework enables us to study the dynamic effects on transaction prices and quotes caused by changes in the efficient...
Analysis of variance of microarray data.
Ayroles, Julien F; Gibson, Greg
2006-01-01
Analysis of variance (ANOVA) is an approach used to identify differentially expressed genes in complex experimental designs. It is based on testing for the significance of the magnitude of effect of two or more treatments taking into account the variance within and between treatment classes. ANOVA is a highly flexible analytical approach that allows investigators to simultaneously assess the contributions of multiple factors to gene expression variation, including technical (dye, batch) effects and biological (sex, genotype, drug, time) ones, as well as interactions between factors. This chapter provides an overview of the theory of linear mixture modeling and the sequence of steps involved in fitting gene-specific models and discusses essential features of experimental design. Commercial and open-source software for performing ANOVA is widely available. PMID:16939792
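A minimal sketch of the one-way ANOVA computation described above, applied to hypothetical expression levels of a single gene under three treatments (values, group sizes and treatment names are invented for illustration):

```python
import numpy as np

def one_way_anova_F(groups):
    """One-way ANOVA F statistic: between-group mean square divided by
    within-group mean square (a toy re-implementation for illustration)."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = np.concatenate(groups).mean()
    ss_between = sum(len(g) * (np.mean(g) - grand) ** 2 for g in groups)
    ss_within = sum(((np.asarray(g) - np.mean(g)) ** 2).sum() for g in groups)
    ms_between = ss_between / (k - 1)   # k - 1 degrees of freedom
    ms_within = ss_within / (n - k)     # n - k degrees of freedom
    return ms_between / ms_within

# Hypothetical log-expression levels of one gene under three treatments.
rng = np.random.default_rng(2)
control   = rng.normal(5.0, 0.5, size=8)
treated_a = rng.normal(5.1, 0.5, size=8)   # small shift
treated_b = rng.normal(7.0, 0.5, size=8)   # large shift

F = one_way_anova_F([control, treated_a, treated_b])
print(F)
```

In practice the chapter's mixed-model approach fits a model per gene with multiple factors and interactions; this one-way F statistic is only the simplest building block.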
The Theory of Variances in Equilibrium Reconstruction
Energy Technology Data Exchange (ETDEWEB)
Zakharov, Leonid E.; Lewandowski, Jerome; Foley, Elizabeth L.; Levinton, Fred M.; Yuh, Howard Y.; Drozdov, Vladimir; McDonald, Darren
2008-01-14
The theory of variances of equilibrium reconstruction is presented. It complements existing practices with information regarding what kind of plasma profiles can be reconstructed, how accurately, and what remains beyond the abilities of diagnostic systems. The σ-curves, introduced by the present theory, give a quantitative assessment of the effectiveness of diagnostic systems in constraining equilibrium reconstructions. The theory also suggests a method for aligning the accuracy of measurements of different physical nature.
Systems Engineering Programmatic Estimation Using Technology Variance
Mog, Robert A.
2000-01-01
Unique and innovative system programmatic estimation is conducted using the variance of the packaged technologies. Covariance analysis is performed on the subsystems and components comprising the system of interest. Technological "returns" and "variation" parameters are estimated. These parameters are combined with the model error to arrive at a measure of system development stability. The resulting estimates provide valuable information concerning the potential cost growth of the system under development.
Fundamentals of exploratory analysis of variance
Hoaglin, David C; Tukey, John W
2009-01-01
The analysis of variance is presented as an exploratory component of data analysis, while retaining the customary least squares fitting methods. Balanced data layouts are used to reveal key ideas and techniques for exploration. The approach emphasizes both the individual observations and the separate parts that the analysis produces. Most chapters include exercises, and the appendices give selected percentage points of the Gaussian, t, F, chi-squared and studentized range distributions.
Fractional constant elasticity of variance model
Ngai Hang Chan; Chi Tim Ng
2007-01-01
This paper develops a European option pricing formula for fractional market models. Although there exist option pricing results for a fractional Black-Scholes model, they are established without accounting for stochastic volatility. In this paper, a fractional version of the Constant Elasticity of Variance (CEV) model is developed. A European option pricing formula similar to that of the classical CEV model is obtained and a volatility skew pattern is revealed.
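For context, the classical CEV dynamics and the fractional variant sketched by the abstract can be written as follows (a hedged reconstruction in standard notation; the paper's exact parameterization may differ):

```latex
% Classical CEV dynamics ($\gamma = 1$ recovers Black-Scholes); the fractional
% version replaces the Brownian motion by a fractional Brownian motion $B^H$
% with Hurst exponent $H$ ($H = 1/2$ recovers the classical case):
dS_t = \mu\, S_t \, dt + \sigma\, S_t^{\gamma} \, dB_t^{H},
\qquad 0 < H < 1 .
```

The elasticity parameter γ controls how local volatility responds to the price level, which is what generates the volatility skew mentioned in the abstract.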
High-dimensional regression with unknown variance
Giraud, Christophe; Verzelen, Nicolas
2011-01-01
We review recent results for high-dimensional sparse linear regression in the practical case of unknown variance. Different sparsity settings are covered, including coordinate-sparsity, group-sparsity and variation-sparsity. The emphasis is put on non-asymptotic analyses and feasible procedures. In addition, a small numerical study compares the practical performance of three schemes for tuning the Lasso estimator, and some references are collected for some more general models, including multivariate regression and nonparametric regression.
The Theory of Variances in Equilibrium Reconstruction
International Nuclear Information System (INIS)
The theory of variances of equilibrium reconstruction is presented. It complements existing practices with information regarding what kind of plasma profiles can be reconstructed, how accurately, and what remains beyond the abilities of diagnostic systems. The σ-curves, introduced by the present theory, give a quantitative assessment of the effectiveness of diagnostic systems in constraining equilibrium reconstructions. The theory also suggests a method for aligning the accuracy of measurements of different physical nature
Directional variance analysis of annual rings
Kumpulainen, P.; Marjanen, K.
2010-07-01
The wood quality measurement methods are of increasing importance in the wood industry. The goal is to produce more high-quality products, with higher market value than is produced today. One of the key factors for increasing the market value is to provide better measurements for increased information to support the decisions made later in the product chain. Strength and stiffness are important properties of the wood. They are related to the mean annual ring width and its deviation. These indicators can be estimated from images taken from the log ends by two-dimensional power spectrum analysis. The spectrum analysis has been used successfully for images of pine. However, the annual rings in birch, for example, are less distinguishable, and the basic spectrum analysis method does not give reliable results. A novel method for local log end variance analysis based on the Radon transform is proposed. The directions and the positions of the annual rings can be estimated from local minimum and maximum variance estimates. Applying the spectrum analysis to the maximum local variance estimate instead of the original image produces a more reliable estimate of the annual ring width. The proposed method is not limited to log end analysis only. It is usable in other two-dimensional random signal and texture analysis tasks.
Radtke, Gregg A; Hadjiconstantinou, Nicolas G
2009-05-01
We present an efficient variance-reduced particle simulation technique for solving the linearized Boltzmann transport equation in the relaxation-time approximation used for phonon, electron, and radiative transport, as well as for kinetic gas flows. The variance reduction is achieved by simulating only the deviation from equilibrium. We show that in the limit of small deviation from equilibrium of interest here, the proposed formulation achieves low relative statistical uncertainty that is also independent of the magnitude of the deviation from equilibrium, in stark contrast to standard particle simulation methods. Our results demonstrate that a space-dependent equilibrium distribution improves the variance reduction achieved, especially in the collision-dominated regime where local equilibrium conditions prevail. We also show that by exploiting the physics of relaxation to equilibrium inherent in the relaxation-time approximation, a very simple collision algorithm with a clear physical interpretation can be formulated. PMID:19518597
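The core idea, simulating only the (small) deviation from a known equilibrium, can be illustrated on a scalar toy problem (everything below is an invented illustration, not the paper's Boltzmann solver):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy analogue of "simulate only the deviation from equilibrium":
# estimate the mean of f = f_eq + eps * g, where f_eq has a known mean.
# Standard MC samples f directly; the variance-reduced estimator samples
# only the small deviation eps * g and adds back the known equilibrium mean.
eps = 1e-3       # small deviation from equilibrium
mean_eq = 1.0    # known equilibrium contribution

x = rng.standard_normal(10_000)
f_eq = mean_eq + x                    # equilibrium part (mean 1, unit variance)
g = np.sin(x)                         # shape of the deviation
f = f_eq + eps * g

naive = f.mean()                      # standard MC estimate
reduced = mean_eq + eps * g.mean()    # deviation-only estimate

# The statistical error of `reduced` scales with eps, independent of the
# O(1) equilibrium fluctuations that dominate the naive estimator.
print(naive, reduced)
```

This mirrors the abstract's observation that the relative statistical uncertainty of the deviational estimator is independent of the magnitude of the deviation from equilibrium.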
The Parabolic variance (PVAR), a wavelet variance based on least-square fit
Vernotte, F; Bourgeois, P -Y; Rubiola, E
2015-01-01
The Allan variance (AVAR) is one option among the wavelet variances. However, although AVAR is a milestone in the analysis of frequency fluctuations and in the long-term stability of clocks, and certainly the most widely used option, it is not suitable when fast noise processes show up, chiefly because of its poor rejection of white phase noise. The modified Allan variance (MVAR) features high resolution in the presence of white PM noise, but it is poorer for slow phenomena because the wavelet spans over 50% longer time. This article introduces the Parabolic Variance (PVAR), a wavelet variance similar to the Allan variance, based on the Linear Regression (LR) of phase data. The PVAR relates to the Omega frequency counter, which is the topic of a companion article [the reference to the article, or to the ArXiv manuscript, will be provided later]. The PVAR wavelet spans over 2 tau, the same as the AVAR wavelet. After setting the theoretical framework, we analyze the degrees of freedom and the detection of weak noise processes in...
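For reference, the Allan variance underlying this comparison can be written in terms of phase data as follows (standard definition; MVAR additionally averages the phase over the interval τ, and PVAR replaces the two-point phase differences by linear-regression estimates):

```latex
% Allan variance at averaging time \tau, in terms of phase data x(t):
\sigma_y^2(\tau) = \frac{1}{2\tau^2}\,
  \mathbb{E}\big[\big(x(t+2\tau) - 2\,x(t+\tau) + x(t)\big)^2\big].
```

All three variances are quadratic forms in a 2τ-wide wavelet applied to the phase; they differ only in the shape of that wavelet, which determines the rejection of white phase noise.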
Energy Technology Data Exchange (ETDEWEB)
NONE
1995-09-01
This quarterly report discusses the technical progress of an Innovative Clean Coal Technology (ICCT) demonstration of advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NOx) emissions from coal-fired boilers. The project is being conducted at Georgia Power Company's Plant Hammond Unit 4 located near Rome, Georgia. The primary goal of this project is the characterization of the low NOx combustion equipment through the collection and analysis of long-term emissions data. A target of achieving fifty percent NOx reduction using combustion modifications has been established for the project. The project provides a stepwise retrofit of an advanced overfire air (AOFA) system followed by low NOx burners (LNB). During each test phase of the project, diagnostic, performance, long-term, and verification testing will be performed. These tests are used to quantify the NOx reductions of each technology and evaluate the effects of those reductions on other combustion parameters. Results are described.
Visual SLAM Using Variance Grid Maps
Howard, Andrew B.; Marks, Tim K.
2011-01-01
An algorithm denoted Gamma-SLAM performs further processing, in real time, of preprocessed digitized images acquired by a stereoscopic pair of electronic cameras aboard an off-road robotic ground vehicle to build accurate maps of the terrain and determine the location of the vehicle with respect to the maps. Part of the name of the algorithm reflects the fact that the process of building the maps and determining the location with respect to them is denoted simultaneous localization and mapping (SLAM). Most prior real-time SLAM algorithms have been limited in applicability to (1) systems equipped with scanning laser range finders as the primary sensors in (2) indoor environments (or relatively simply structured outdoor environments). The few prior vision-based SLAM algorithms have been feature-based and not suitable for real-time applications and, hence, not suitable for autonomous navigation on irregularly structured terrain. The Gamma-SLAM algorithm incorporates two key innovations: Visual odometry (in contradistinction to wheel odometry) is used to estimate the motion of the vehicle. An elevation variance map (in contradistinction to an occupancy or an elevation map) is used to represent the terrain. The Gamma-SLAM algorithm makes use of a Rao-Blackwellized particle filter (RBPF) from Bayesian estimation theory for maintaining a distribution over poses and maps. The core idea of the RBPF approach is that the SLAM problem can be factored into two parts: (1) finding the distribution over robot trajectories, and (2) finding the map conditioned on any given trajectory. The factorization involves the use of a particle filter in which each particle encodes both a possible trajectory and a map conditioned on that trajectory. The base estimate of the trajectory is derived from visual odometry, and the map conditioned on that trajectory is a Cartesian grid of elevation variances. In comparison with traditional occupancy or elevation grid maps, the grid elevation variance
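The elevation-variance map at the heart of the abstract above can be illustrated with a minimal per-cell online variance update (Welford's algorithm). This class is an illustrative sketch of the data structure only, not the Gamma-SLAM implementation:

```python
import numpy as np

class VarianceGridMap:
    """Per-cell online mean and variance of elevation samples (Welford's algorithm)."""
    def __init__(self, rows, cols):
        self.n = np.zeros((rows, cols))      # sample count per cell
        self.mean = np.zeros((rows, cols))   # running mean elevation
        self.m2 = np.zeros((rows, cols))     # running sum of squared deviations

    def add(self, r, c, z):
        """Fold one elevation measurement z into cell (r, c)."""
        self.n[r, c] += 1
        d = z - self.mean[r, c]
        self.mean[r, c] += d / self.n[r, c]
        self.m2[r, c] += d * (z - self.mean[r, c])

    def variance(self, r, c):
        """Population variance of the samples seen so far (inf if the cell is empty)."""
        return self.m2[r, c] / self.n[r, c] if self.n[r, c] > 0 else np.inf
```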
A relation between information entropy and variance
Pandey, Biswajit
2016-01-01
We obtain an analytic relation between the information entropy and the variance of a distribution in the regime of small fluctuations. We use a set of Monte Carlo simulations of different homogeneous and inhomogeneous distributions to verify the relation and also test it in a set of cosmological N-body simulations. We find that the relation is in excellent agreement with the simulations and is independent of number density and the nature of the distributions. The relation would help us to relate entropy to other conventional measures and widen its scope.
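In the same spirit as the abstract above, an entropy-variance relation for small fluctuations about a uniform distribution follows from a second-order Taylor expansion: for p_i = (1+ε_i)/N with Σε_i = 0, S ≈ ln N − (1/(2N)) Σε_i². The sketch below (our own toy construction, not the paper's cosmological setting) verifies this numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000
eps = 1e-3 * rng.standard_normal(N)
eps -= eps.mean()                 # enforce normalization: sum(p) = 1
p = (1.0 + eps) / N               # small fluctuations about the uniform distribution

entropy = -np.sum(p * np.log(p))
# second-order expansion: S ~ ln N - (1/(2N)) * sum(eps^2)
approx = np.log(N) - 0.5 * np.mean(eps ** 2)
```

The entropy deficit relative to ln N is proportional to the variance of the fluctuations, which is the regime the paper exploits.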
The value of travel time variance
Fosgerau, Mogens; Engelson, Leonid
2010-01-01
This paper considers the value of travel time variability under scheduling preferences that are defined in terms of linearly time-varying utility rates associated with being at the origin and at the destination. The main result is a simple expression for the value of travel time variability that does not depend on the shape of the travel time distribution. The related measure of travel time variability is the variance of travel time. These conclusions apply equally to travellers who can free...
Thomas, Reju George; Moon, Myeong Ju; Kim, Jo Heon; Lee, Jae Hyuk; Jeong, Yong Yeon
2015-01-01
Advanced hepatic fibrosis therapy using drug-delivering nanoparticles is a relatively unexplored area. Angiotensin type 1 (AT1) receptor blockers such as losartan can be delivered to hepatic stellate cells (HSC), blocking their activation and thereby reducing fibrosis progression in the liver. In our study, we analyzed the possibility of utilizing drug-loaded vehicles such as hyaluronic acid (HA) micelles carrying losartan to attenuate HSC activation. Losartan, which exhibits inherent lipophilicity, was loaded into the hydrophobic core of HA micelles with a 19.5% drug loading efficiency. An advanced liver fibrosis model was developed using C3H/HeN mice subjected to 20 weeks of prolonged TAA/ethanol weight-adapted treatment. The cytocompatibility and cell uptake profile of losartan-HA micelles were studied in murine fibroblast cells (NIH3T3), human hepatic stellate cells (hHSC) and FL83B cells (hepatocyte cell line). The ability of these nanoparticles to attenuate HSC activation was studied in activated HSC cells based on alpha smooth muscle actin (α-sma) expression. Mice treated with oral losartan or losartan-HA micelles were analyzed for serum enzyme levels (ALT/AST, CK and LDH) and collagen deposition (hydroxyproline levels) in the liver. The accumulation of HA micelles was observed in fibrotic livers, which suggests increased delivery of losartan compared to normal livers and specific uptake by HSC. Active reduction of α-sma was observed in hHSC and the liver sections of losartan-HA micelle-treated mice. The serum enzyme levels and collagen deposition of losartan-HA micelle-treated mice were reduced significantly compared to the oral losartan group. Losartan-HA micelles demonstrated significant attenuation of hepatic fibrosis via an HSC-targeting mechanism in our in vitro and in vivo studies. These nanoparticles can be considered as an alternative therapy for liver fibrosis.
A Mean-variance Problem in the Constant Elasticity of Variance (CEV) Model
Institute of Scientific and Technical Information of China (English)
Hou Ying-li; Liu Guo-xin; Jiang Chun-lan
2015-01-01
In this paper, we focus on a constant elasticity of variance (CEV) model and want to find its optimal strategies for a mean-variance problem under two constrained controls: reinsurance/new business and investment (no-shorting). First, a Lagrange multiplier is introduced to simplify the mean-variance problem, and the corresponding Hamilton-Jacobi-Bellman (HJB) equation is established. Via a power transformation technique and a variable change method, the optimal strategies with the Lagrange multiplier are obtained. Finally, based on the Lagrange duality theorem, the optimal strategies and optimal value for the original problem (i.e., the efficient strategies and efficient frontier) are derived explicitly.
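The Lagrange-multiplier step mentioned in the abstract follows a standard pattern (the embedding technique used for continuous-time mean-variance problems). As a hedged sketch, with X_T the terminal wealth, z the target mean, and λ the multiplier:

```latex
\min_{\pi}\ \mathrm{Var}(X_T)\quad \text{s.t.}\quad \mathbb{E}[X_T]=z
\;\longrightarrow\;
\min_{\pi}\ \mathbb{E}\!\left[X_T^2\right]-z^2-2\lambda\left(\mathbb{E}[X_T]-z\right)
=\min_{\pi}\ \mathbb{E}\!\left[(X_T-\lambda)^2\right]-(\lambda-z)^2 .
```

The inner quadratic-loss problem is the one attacked with the HJB equation; the multiplier is then recovered by maximizing over λ, which is the Lagrange-duality step the abstract refers to.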
Estimators for variance components in structured stair nesting models
Monteiro, Sandra; Fonseca, Miguel; Carvalho, Francisco
2016-06-01
The purpose of this paper is to present the estimation of the components of variance in structured stair nesting models. The relationship between the canonical variance components and the original ones, will be very important in obtaining that estimators.
Realized range-based estimation of integrated variance
DEFF Research Database (Denmark)
Christensen, Kim; Podolskij, Mark
We provide a set of probabilistic laws for estimating the quadratic variation of continuous semimartingales with realized range-based variance - a statistic that replaces every squared return of realized variance with a normalized squared range. If the entire sample path of the process is available...... variance. Our findings suggest that the empirical path of quadratic variation is also estimated better with the realized range-based variance....
Inheritance beyond plain heritability : variance controlling genes in Arabidopsis thaliana
Xia Shen; Mats Pettersson; Lars Rönnegård; Örjan Carlborg
2012-01-01
Author Summary The most well-studied effects of genes are those leading to different phenotypic means for alternative genotypes. A less well-explored type of genetic control is that resulting in a heterogeneity in variance between genotypes. Here, we reanalyze a publicly available Arabidopsis thaliana GWAS dataset to detect genetic effects on the variance heterogeneity, and our results indicate that the environmental variance is under extensive genetic control by a large number of variance-co...
Capturing Option Anomalies with a Variance-Dependent Pricing Kernel
DEFF Research Database (Denmark)
Christoffersen, Peter; Heston, Steven; Jacobs, Kris
2013-01-01
We develop a GARCH option model with a new pricing kernel allowing for a variance premium. While the pricing kernel is monotonic in the stock return and in variance, its projection onto the stock return is nonmonotonic. A negative variance premium makes it U shaped. We present new semiparametric...... implied volatility puzzle, the overreaction of long-term options to changes in short-term variance, and the fat tails of the risk-neutral return distribution relative to the physical distribution....
The VIX, the Variance Premium and Stock Market Volatility
Bekaert, Geert; Hoerova, Marie
2013-01-01
We decompose the squared VIX index, derived from US S&P 500 options prices, into the conditional variance of stock returns and the equity variance premium. We evaluate a plethora of state-of-the-art volatility forecasting models to produce an accurate measure of the conditional variance. We then examine the predictive power of the VIX and its two components for stock market returns, economic activity and financial instability. The variance premium predicts stock returns while the conditional ...
The value of travel time variance
DEFF Research Database (Denmark)
Fosgerau, Mogens; Engelson, Leonid
2011-01-01
This paper considers the value of travel time variability under scheduling preferences that are defined in terms of linearly time varying utility rates associated with being at the origin and at the destination. The main result is a simple expression for the value of travel time variability...... that does not depend on the shape of the travel time distribution. The related measure of travel time variability is the variance of travel time. These conclusions apply equally to travellers who can freely choose departure time and to travellers who use a scheduled service with fixed headway. Depending...... on parameters, travellers may be risk averse or risk seeking and the value of travel time may increase or decrease in the mean travel time....
Power Estimation in Multivariate Analysis of Variance
Directory of Open Access Journals (Sweden)
Jean François Allaire
2007-09-01
Power is often overlooked in designing multivariate studies, for the simple reason that it is believed to be too complicated. In this paper, it is shown that power estimation in multivariate analysis of variance (MANOVA) can be approximated using an F distribution for the three popular statistics (Hotelling-Lawley trace, Pillai-Bartlett trace, Wilks' likelihood ratio). Consequently, the same procedure as in any statistical test can be used: computation of the critical F value, computation of the noncentrality parameter (as a function of the effect size), and finally estimation of power using a noncentral F distribution. Various numerical examples are provided which help to understand and to apply the method. Problems related to post hoc power estimation are discussed.
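The critical-value/noncentrality recipe in the abstract can be mimicked with a Monte Carlo sketch using only numpy (the paper would use a noncentral-F CDF directly; the function below and its defaults are illustrative):

```python
import numpy as np

def mc_power(dfn, dfd, nc, alpha=0.05, n=200_000, seed=0):
    """Monte Carlo approximation of F-test power with noncentrality nc
    (assumes dfn >= 2 for the chi-square split below)."""
    rng = np.random.default_rng(seed)
    # step 1: critical value from the central F distribution (empirical quantile)
    f0 = (rng.chisquare(dfn, n) / dfn) / (rng.chisquare(dfd, n) / dfd)
    fcrit = np.quantile(f0, 1.0 - alpha)
    # step 2: noncentral chi-square = one shifted normal squared + central chi-square
    ncx2 = (rng.standard_normal(n) + np.sqrt(nc)) ** 2 + rng.chisquare(dfn - 1, n)
    # step 3: power = P(noncentral F exceeds the critical value)
    f1 = (ncx2 / dfn) / (rng.chisquare(dfd, n) / dfd)
    return float(np.mean(f1 > fcrit))
```

With nc = 0 the power collapses to the significance level, and it grows with the noncentrality parameter, mirroring the procedure described above.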
The variance of the adjusted Rand index.
Steinley, Douglas; Brusco, Michael J; Hubert, Lawrence
2016-06-01
For 30 years, the adjusted Rand index has been the preferred method for comparing 2 partitions (e.g., clusterings) of a set of observations. Although the index is widely used, little is known about its variability. Herein, the variance of the adjusted Rand index (Hubert & Arabie, 1985) is provided and its properties are explored. It is shown that a normal approximation is appropriate across a wide range of sample sizes and varying numbers of clusters. Further, it is shown that confidence intervals based on the normal distribution have desirable levels of coverage and accuracy. Finally, the first power analysis evaluating the ability to detect differences between 2 different adjusted Rand indices is provided. PMID:26881693
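For reference, the adjusted Rand index itself (Hubert & Arabie, 1985) is a short computation over the contingency table of the two partitions; a self-contained sketch:

```python
from math import comb
from collections import Counter

def adjusted_rand_index(a, b):
    """Adjusted Rand index between two partitions given as label sequences."""
    n = len(a)
    # pair counts from the contingency table and the two marginals
    sum_ij = sum(comb(c, 2) for c in Counter(zip(a, b)).values())
    sum_a = sum(comb(c, 2) for c in Counter(a).values())
    sum_b = sum(comb(c, 2) for c in Counter(b).values())
    expected = sum_a * sum_b / comb(n, 2)      # chance-expected index
    max_index = (sum_a + sum_b) / 2
    return (sum_ij - expected) / (max_index - expected)
```

Identical partitions score exactly 1, and the index is invariant to relabeling the clusters.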
Variance-based interaction index measuring heteroscedasticity
Ito, Keiichi; Couckuyt, Ivo; Poles, Silvia; Dhaene, Tom
2016-06-01
This work is motivated by the need to deal with models with high-dimensional input spaces of real variables. One way to tackle high-dimensional problems is to identify interaction or non-interaction among input parameters. We propose a new variance-based sensitivity interaction index that can detect and quantify interactions among the input variables of mathematical functions and computer simulations. The computation is very similar to first-order sensitivity indices by Sobol'. The proposed interaction index can quantify the relative importance of input variables in interaction. Furthermore, detection of non-interaction for screening can be done with as low as 4 n + 2 function evaluations, where n is the number of input variables. Using the interaction indices based on heteroscedasticity, the original function may be decomposed into a set of lower dimensional functions which may then be analyzed separately.
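The abstract notes that the computation resembles Sobol' first-order sensitivity indices. As background, here is a pick-freeze Monte Carlo estimate of a first-order Sobol index (this is the classical index, not the paper's heteroscedasticity-based interaction index; the function is our own sketch):

```python
import numpy as np

def sobol_first_order(f, d, i, n=200_000, seed=0):
    """Pick-freeze Monte Carlo estimate of the first-order Sobol index S_i
    for a function f of d independent U(0,1) inputs."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, d))
    C = rng.random((n, d))
    C[:, i] = A[:, i]          # freeze coordinate i, resample all the others
    fa, fc = f(A), f(C)
    # S_i = Cov(f(A), f(C)) / Var(f)
    return np.cov(fa, fc)[0, 1] / np.var(fa, ddof=1)
```

For a purely additive model the first-order indices sum to one; any shortfall signals interaction, which is the quantity the paper's index targets.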
40 CFR 142.43 - Disposition of a variance request.
2010-07-01
... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Disposition of a variance request. 142... PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances Issued by the Administrator Under Section 1415(a) of the Act § 142.43 Disposition of a variance request. (a) If...
40 CFR 52.1390 - Missoula variance provision.
2010-07-01
... 40 Protection of Environment 4 2010-07-01 2010-07-01 false Missoula variance provision. 52.1390... (CONTINUED) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS (CONTINUED) Montana § 52.1390 Missoula variance provision. The Missoula City-County Air Pollution Control Program's Chapter X, Variances, which was...
Semiparametric bounds of mean and variance for exotic options
Institute of Scientific and Technical Information of China (English)
LIU GuoQing; LI V.Wenbo
2009-01-01
Finding semiparametric bounds for option prices is a widely studied pricing technique. We obtain closed-form semiparametric bounds of the mean and variance for the pay-off of two exotic (Collar and Gap) call options given mean and variance information on the underlying asset price. Mathematically, we extended the domination technique by quadratic functions to bound means and variances.
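The Collar and Gap bounds are derived in the paper itself; to convey the flavor of semiparametric bounds, here is the classical distribution-free upper bound for a plain call pay-off given only the mean and variance of the asset price (attributed to Lo, 1987), checked numerically against two different distributions with the same first two moments:

```python
import numpy as np

def call_upper_bound(mu, sigma, K):
    """Upper bound on E[(S-K)+] over all distributions with mean mu and std sigma."""
    return 0.5 * ((mu - K) + np.sqrt((mu - K) ** 2 + sigma ** 2))
```

Because the bound holds for every distribution with the stated moments, any sampled distribution matched to (mu, sigma) must give an empirical mean pay-off at or below it.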
40 CFR 142.42 - Consideration of a variance request.
2010-07-01
... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Consideration of a variance request... PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances Issued by the Administrator Under Section 1415(a) of the Act § 142.42 Consideration of a variance request. (a)...
40 CFR 142.40 - Requirements for a variance.
2010-07-01
... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Requirements for a variance. 142.40... (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances Issued by the Administrator Under Section 1415(a) of the Act § 142.40 Requirements for a variance. (a) The Administrator may...
31 CFR 10.67 - Proof; variance; amendment of pleadings.
2010-07-01
... 31 Money and Finance: Treasury 1 2010-07-01 2010-07-01 false Proof; variance; amendment of... BEFORE THE INTERNAL REVENUE SERVICE Rules Applicable to Disciplinary Proceedings § 10.67 Proof; variance; amendment of pleadings. In the case of a variance between the allegations in pleadings and the...
20 CFR 901.40 - Proof; variance; amendment of pleadings.
2010-04-01
... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Proof; variance; amendment of pleadings. 901... Suspension or Termination of Enrollment § 901.40 Proof; variance; amendment of pleadings. In the case of a variance between the allegations in a pleading and the evidence adduced in support of the pleading,...
Variance gamma process simulation and its parameter estimation
Kuzmina, A. V.
2010-01-01
The variance gamma process is a three-parameter process. It is simulated as a gamma time-changed Brownian motion and as a difference of two independent gamma processes. Estimates of the simulated variance gamma process parameters are presented in this paper.
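The first of the two simulation routes mentioned above (Brownian motion subordinated by a gamma clock) can be sketched in a few lines; parameter names follow the common (σ, ν, θ) convention, and the function itself is our own illustration:

```python
import numpy as np

def variance_gamma_increments(n, dt, sigma, nu, theta, seed=0):
    """VG increments via Brownian motion with a gamma time change:
    dG ~ Gamma(mean dt, variance nu*dt);  dX = theta*dG + sigma*sqrt(dG)*Z."""
    rng = np.random.default_rng(seed)
    dG = rng.gamma(shape=dt / nu, scale=nu, size=n)   # gamma subordinator steps
    return theta * dG + sigma * np.sqrt(dG) * rng.standard_normal(n)
```

A path is obtained with `np.cumsum(...)`; the increments have mean θ·dt and variance (σ² + θ²ν)·dt, which gives a direct moment check.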
The Effect of Selection on the Phenotypic Variance
Shnol, E.E.; Kondrashov, A S
1993-01-01
We consider the within-generation changes of phenotypic variance caused by selection w(x) which acts on a quantitative trait x. If before selection the trait has Gaussian distribution, its variance decreases if the second derivative of the logarithm of w(x) is negative for all x, while if it is positive for all x, the variance increases.
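The sign criterion in the abstract is easy to check numerically for a Gaussian trait: a log-concave fitness function shrinks the variance, a log-convex one inflates it. The weighted-moment helper below is our own sketch:

```python
import numpy as np

def variance_after_selection(x, w):
    """Within-generation phenotypic variance after selection with fitness w(x)."""
    wt = w(x)
    m = np.average(x, weights=wt)
    return np.average((x - m) ** 2, weights=wt)

rng = np.random.default_rng(0)
x = rng.standard_normal(500_000)   # Gaussian trait before selection, variance 1

v_down = variance_after_selection(x, lambda t: np.exp(-0.5 * t**2))  # (ln w)'' = -1 < 0
v_up   = variance_after_selection(x, lambda t: np.exp(0.1 * t**2))   # (ln w)'' = 0.2 > 0
```

For these Gaussian-times-Gaussian cases the exact post-selection variances are 1/2 and 1.25 respectively, bracketing the original variance of 1 as the theorem predicts.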
40 CFR 59.509 - Can I get a variance?
2010-07-01
... 40 Protection of Environment 5 2010-07-01 2010-07-01 false Can I get a variance? 59.509 Section 59... Volatile Organic Compound Emission Standards for Aerosol Coatings § 59.509 Can I get a variance? (a) Any... its reasonable control may apply in writing to the Administrator for a temporary variance....
31 CFR 8.59 - Proof; variance; amendment of pleadings.
2010-07-01
... 31 Money and Finance: Treasury 1 2010-07-01 2010-07-01 false Proof; variance; amendment of... BEFORE THE BUREAU OF ALCOHOL, TOBACCO AND FIREARMS Disciplinary Proceedings § 8.59 Proof; variance; amendment of pleadings. In the case of a variance between the allegations in a pleading, the...
The Parabolic Variance (PVAR): A Wavelet Variance Based on the Least-Square Fit.
Vernotte, Francois; Lenczner, Michel; Bourgeois, Pierre-Yves; Rubiola, Enrico
2016-04-01
This paper introduces the parabolic variance (PVAR), a wavelet variance similar to the Allan variance (AVAR), based on the linear regression (LR) of phase data. The companion article arXiv:1506.05009 [physics.ins-det] details the Ω frequency counter, which implements the LR estimate. The PVAR combines the advantages of AVAR and modified AVAR (MVAR). PVAR is good for long-term analysis because the wavelet spans over 2τ, the same as the AVAR wavelet, and good for short-term analysis because the response to white and flicker PM is 1/τ³ and 1/τ², the same as the MVAR. After setting the theoretical framework, we study the degrees of freedom and the confidence interval for the most common noise types. Then, we focus on the detection of a weak noise process at the transition, or corner, where a faster process rolls off. This new perspective raises the question of which variance detects the weak process with the shortest data record. Our simulations show that PVAR is a fortunate tradeoff. PVAR is superior to MVAR in all cases, exhibits the best ability to divide between fast noise phenomena (up to flicker FM), and is almost as good as AVAR for the detection of random walk and drift.
Ozol-Godfrey, Ayca
2004-01-01
Graphical summaries are becoming important tools for evaluating designs. The need to compare designs in terms of their prediction variance properties advanced this development. A recent graphical tool, the Fraction of Design Space (FDS) plot, is useful for calculating the fraction of the design space where the scaled prediction variance (SPV) is less than or equal to a given value. In this dissertation we adapt FDS plots to study three specific design problems: robustness to model assumptions, robustn...
Characterizing nonconstant instrumental variance in emerging miniaturized analytical techniques.
Noblitt, Scott D; Berg, Kathleen E; Cate, David M; Henry, Charles S
2016-04-01
Measurement variance is a crucial aspect of quantitative chemical analysis. Variance directly affects important analytical figures of merit, including detection limit, quantitation limit, and confidence intervals. Most reported analyses for emerging analytical techniques implicitly assume constant variance (homoskedasticity) by using unweighted regression calibrations. Despite the assumption of constant variance, it is known that most instruments exhibit heteroskedasticity, where variance changes with signal intensity. Ignoring nonconstant variance results in suboptimal calibrations, invalid uncertainty estimates, and incorrect detection limits. Three techniques where homoskedasticity is often assumed were covered in this work to evaluate whether heteroskedasticity had a significant quantitative impact: naked-eye, distance-based detection using paper-based analytical devices (PADs); cathodic stripping voltammetry (CSV) with disposable carbon-ink electrode devices; and microchip electrophoresis (MCE) with conductivity detection. Despite these techniques representing a wide range of chemistries and precision, heteroskedastic behavior was confirmed for each. The general variance forms were analyzed, and recommendations for accounting for nonconstant variance are discussed. Monte Carlo simulations of instrument responses were performed to quantify the benefits of weighted regression, and the sensitivity to uncertainty in the variance function was tested. Results show that heteroskedasticity should be considered during development of new techniques; even moderate uncertainty (30%) in the variance function still results in weighted regression outperforming unweighted regressions. We recommend utilizing the power model of variance because it is easy to apply, requires little additional experimentation, and produces higher-precision results and more reliable uncertainty estimates than assuming homoskedasticity.
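The weighted-regression remedy discussed above amounts to weighting each calibration point by the inverse of its variance. A minimal sketch (the variance here follows a power-style model with noise proportional to signal; function and data are illustrative, not from the paper):

```python
import numpy as np

def wls_line(x, y, variance):
    """Weighted least-squares straight-line fit with weights 1/variance,
    solved via the normal equations."""
    w = 1.0 / np.asarray(variance)
    X = np.column_stack([np.ones_like(x), x])
    A = X.T @ (w[:, None] * X)
    b = X.T @ (w * y)
    return np.linalg.solve(A, b)   # (intercept, slope)

rng = np.random.default_rng(0)
x = np.linspace(1.0, 10.0, 500)
true = 2.0 + 3.0 * x
sd = 0.05 * true                   # heteroskedastic noise: sd proportional to signal
y = true + sd * rng.standard_normal(500)
intercept, slope = wls_line(x, y, sd ** 2)
```

Relative to an unweighted fit, the inverse-variance weights downweight the noisy high-signal points, which is exactly the benefit the Monte Carlo study in the abstract quantifies.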
Measuring past changes in ENSO variance using Mg/Ca measurements on individual planktic foraminifera
Marchitto, T. M.; Grist, H. R.; van Geen, A.
2013-12-01
Previous work in Soledad Basin, located off Baja California Sur in the eastern subtropical Pacific, supports a La Niña-like mean-state response to enhanced radiative forcing at both orbital and millennial (solar) timescales during the Holocene. Mg/Ca measurements on the planktic foraminifer Globigerina bulloides indicate cooling when insolation is higher, consistent with an 'ocean dynamical thermostat' response that shoals the thermocline and cools the surface in the eastern tropical Pacific. Some, but not all, numerical models simulate reduced ENSO variance (less frequent and/or less intense events) when the Pacific is driven into a La Niña-like mean state by radiative forcing. Hypothetically the question of ENSO variance can be examined by measuring individual planktic foraminiferal tests from within a sample interval. Koutavas et al. (2006) used δ18O on single specimens of Globigerinoides ruber from the eastern equatorial Pacific to demonstrate a 50% reduction in variance at ~6 ka compared to ~2 ka, consistent with the sense of the model predictions at the orbital scale. Here we adapt this approach to Mg/Ca and apply it to the millennial-scale question. We present Mg/Ca measured on single specimens of G. bulloides (cold season) and G. ruber (warm season) from three time slices in Soledad Basin: the 20th century, the warm interval (and solar low) at 9.3 ka, and the cold interval (and solar high) at 9.8 ka. Each interval is uniformly sampled over a ~100-yr (~10-cm or more) window to ensure that our variance estimate is not biased by decadal-scale stochastic variability. Theoretically we can distinguish between changing ENSO variability and changing seasonality: a reduction in ENSO variance would result in narrowing of both the G. bulloides and G. ruber temperature distributions without necessarily changing the distance between their two medians; while a reduction in seasonality would cause the two species' distributions to move closer together.
Cyclostationary analysis with logarithmic variance stabilisation
Borghesani, Pietro; Shahriar, Md Rifat
2016-03-01
Second order cyclostationary (CS2) components in vibration or acoustic emission signals are typical symptoms of a wide variety of faults in rotating and alternating mechanical systems. The square envelope spectrum (SES), obtained via Hilbert transform of the original signal, is at the basis of the most common indicators used for detection of CS2 components. It has been shown that the SES is equivalent to an autocorrelation of the signal's discrete Fourier transform, and that CS2 components are a cause of high correlations in the frequency domain of the signal, thus resulting in peaks in the SES. Statistical tests have been proposed to determine if peaks in the SES are likely to belong to a normal variability in the signal or if they are proper symptoms of CS2 components. Despite the need for automated fault recognition and the theoretical soundness of these tests, this approach to machine diagnostics has been mostly neglected in industrial applications. In fact, in a series of experimental applications, even with proper pre-whitening steps, it has been found that healthy machines might produce high spectral correlations and therefore result in a highly biased SES distribution which might cause a series of false positives. In this paper a new envelope spectrum is defined, with the theoretical intent of rendering the hypothesis test variance-free. This newly proposed indicator will prove unbiased in case of multiple CS2 sources of spectral correlation, thus reducing the risk of false alarms.
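The square envelope spectrum described above can be computed with an FFT-based analytic signal (the numpy equivalent of a Hilbert-transform envelope); the function below is a plain SES sketch, without the paper's logarithmic variance stabilisation:

```python
import numpy as np

def squared_envelope_spectrum(x):
    """SES: magnitude spectrum of the squared modulus of the analytic signal."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)                 # one-sided weighting -> analytic signal
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    analytic = np.fft.ifft(X * h)
    env2 = np.abs(analytic) ** 2    # squared envelope
    env2 -= env2.mean()             # drop the DC component
    return np.abs(np.fft.rfft(env2)) / n
```

For an amplitude-modulated carrier, the modulation frequency shows up as the dominant SES line, which is how CS2 fault signatures are detected.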
Cancela, Héctor; El Khadiri, Mohamed; Rubino, Gerardo; Tuffin, Bruno
2015-01-01
Exact evaluation of static network reliability parameters belongs to the NP-hard family, and Monte Carlo simulation is therefore a relevant tool to provide their estimations. The first goal of this paper is to review a Recursive Variance Reduction (RVR) estimator which approaches the unreliability by recursively reducing the graph from the random choice of the first working link on selected cuts. We show that the method does not verify the bounded relative error (BRE)...
Estimating the encounter rate variance in distance sampling
Fewster, R.M.; Buckland, S.T.; Burnham, K.P.; Borchers, D.L.; Jupp, P.E.; Laake, J.L.; Thomas, L.
2009-01-01
The dominant source of variance in line transect sampling is usually the encounter rate variance. Systematic survey designs are often used to reduce the true variability among different realizations of the design, but estimating the variance is difficult and estimators typically approximate the variance by treating the design as a simple random sample of lines. We explore the properties of different encounter rate variance estimators under random and systematic designs. We show that a design-based variance estimator improves upon the model-based estimator of Buckland et al. (2001, Introduction to Distance Sampling. Oxford: Oxford University Press, p. 79) when transects are positioned at random. However, if populations exhibit strong spatial trends, both estimators can have substantial positive bias under systematic designs. We show that poststratification is effective in reducing this bias. © 2008, The International Biometric Society.
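A common line-to-line estimator of the encounter rate variance treats the k transects as a simple random sample; the sketch below is in the spirit of the estimators the paper compares, not their exact implementation:

```python
import numpy as np

def encounter_rate_var(counts, lengths):
    """Estimate var(n/L) from per-line counts n_i and line lengths l_i,
    treating the k lines as a simple random sample."""
    counts = np.asarray(counts, dtype=float)
    lengths = np.asarray(lengths, dtype=float)
    n, L, k = counts.sum(), lengths.sum(), len(counts)
    rate = counts / lengths                       # per-line encounter rates n_i / l_i
    return k / (L**2 * (k - 1)) * np.sum(lengths**2 * (rate - n / L)**2)
```

When every line has the same encounter rate the estimate is exactly zero, as it should be.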
Variance analysis. Part II, The use of computers.
Finkler, S A
1991-09-01
This is the second in a two-part series on variance analysis. In the first article (JONA, July/August 1991), the author discussed flexible budgeting, including the calculation of price, quantity, volume, and acuity variances. In this second article, the author focuses on the use of computers by nurse managers to aid in the process of calculating, understanding, and justifying variances. PMID:1919788
Network Structure and Biased Variance Estimation in Respondent Driven Sampling.
Directory of Open Access Journals (Sweden)
Ashton M Verdery
This paper explores bias in the estimation of sampling variance in Respondent Driven Sampling (RDS). Prior methodological work on RDS has focused on its problematic assumptions and the biases and inefficiencies of its estimators of the population mean. Nonetheless, researchers have given only slight attention to the topic of estimating sampling variance in RDS, despite the importance of variance estimation for the construction of confidence intervals and hypothesis tests. In this paper, we show that the estimators of RDS sampling variance rely on a critical assumption that the network is First Order Markov (FOM) with respect to the dependent variable of interest. We demonstrate, through intuitive examples, mathematical generalizations, and computational experiments that current RDS variance estimators will always underestimate the population sampling variance of RDS in empirical networks that do not conform to the FOM assumption. Analysis of 215 observed university and school networks from Facebook and Add Health indicates that the FOM assumption is violated in every empirical network we analyze, and that these violations lead to substantially biased RDS estimators of sampling variance. We propose and test two alternative variance estimators that show some promise for reducing biases, but which also illustrate the limits of estimating sampling variance with only partial information on the underlying population social network.
40 CFR 190.11 - Variances for unusual operations.
2010-07-01
... PROTECTION PROGRAMS ENVIRONMENTAL RADIATION PROTECTION STANDARDS FOR NUCLEAR POWER OPERATIONS Environmental Standards for the Uranium Fuel Cycle § 190.11 Variances for unusual operations. The standards specified...
Energy Technology Data Exchange (ETDEWEB)
NONE
1996-07-01
This Public Design Report presents the design criteria of a DOE Innovative Clean Coal Technology (ICCT) project demonstrating advanced wall-fired combustion techniques for the reduction of NO{sub x} emissions from coal-fired boilers. The project is being conducted at Georgia Power Company's Plant Hammond Unit 4 (500 MW) near Rome, Georgia. The technologies being demonstrated at this site include Foster Wheeler Energy Corporation's advanced overfire air system and Controlled Flow/Split Flame low NO{sub x} burner. This report provides documentation on the design criteria used in the performance of this project as it pertains to the scope involved with the low NO{sub x} burners, advanced overfire systems, and digital control system.
Accounting for Variance in Hyperspectral Data Coming from Limitations of the Imaging System
Shurygin, B.; Shestakova, M.; Nikolenko, A.; Badasen, E.; Strakhov, P.
2016-06-01
Over the course of the past few years, a number of methods were developed to incorporate hyperspectral imaging specifics into generic data mining techniques traditionally used for hyperspectral data processing. Projection pursuit methods embody the largest class of methods employed for hyperspectral image data reduction; however, they all have certain drawbacks making them either hard to use or inefficient. It has been shown that hyperspectral image (HSI) statistics tend to display "heavy tails" (Manolakis, 2003; Theiler, 2005), rendering most of the projection pursuit methods hard to use. Taking into consideration the magnitude of the described deviations of observed data PDFs from the normal distribution, it is apparent that a priori knowledge of the variance in data caused by the imaging system should be employed in order to efficiently classify objects on HSIs (Kerr, 2015), especially in cases of wildly varying SNR. A number of attempts to describe this variance and compensating techniques have been made (Aiazzi, 2006); however, new data quality standards are not yet set, and accounting for the detector response is made under a large set of assumptions. The current paper addresses the issue of hyperspectral image classification in the context of different variance sources, based on the knowledge of calibration curves (both spectral and radiometric) obtained for each pixel of the imaging camera. A camera produced by ZAO NPO Lepton (Russia) was calibrated and used to obtain a test image. A priori known values of SNR and spectral channel cross-correlation were incorporated into calculating the test statistics used in dimensionality reduction and feature extraction. A modification of the Expectation-Maximization classification algorithm for a non-Gaussian model, as described by Veracini (2010), was further employed. The impact of coarsening the calibration data by ignoring non-uniformities on the false alarm rate was studied. The case study shows both regions of scene-dominated variance and sensor-dominated variance, leading
Energy Technology Data Exchange (ETDEWEB)
1992-04-21
This quarterly report discusses the technical progress of an Innovative Clean Coal Technology (ICCT) demonstration of advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NO{sub x}) emissions from coal-fired boilers. The project is being conducted at Georgia Power Company's Plant Hammond Unit 4 located near Rome, Georgia. The primary goal of this project is the characterization of the low NO{sub x} combustion equipment through the collection and analysis of long-term emissions data. A target of achieving fifty percent NO{sub x} reduction using combustion modifications has been established for the project. The project provides a stepwise retrofit of an advanced overfire air (AOFA) system followed by low NO{sub x} burners (LNB). During each test phase of the project, diagnostic, performance, long-term, and verification testing will be performed. These tests are used to quantify the NO{sub x} reductions of each technology and evaluate the effects of those reductions on other combustion parameters such as particulate characteristics and boiler efficiency.
DEFF Research Database (Denmark)
Sørensen, Anders Christian; Kristensen, Torsten Nygård; Loeschcke, Volker;
2007-01-01
The objective of this study was to test the hypothesis that the environmental variance of sternopleural bristle number in Drosophila melanogaster is partly under genetic control. We used data from 20 inbred lines and 10 control lines to test this hypothesis. Two models were used: a standard quantitative genetics model based on the infinitesimal model, and an extension of this model. In the extended model it is assumed that each individual has its own environmental variance and that this heterogeneity of variance has a genetic component. The heterogeneous variance model was favoured by the data, indicating that the environmental variance is partly under genetic control. If this heterogeneous variance model also applies to livestock, it would be possible to select for animals with a higher uniformity of products across environmental regimes. Also for evolutionary biology the results are of interest...
Marini, Federico; de Beer, Dalene; Joubert, Elizabeth; Walczak, Beata
2015-07-31
Direct application of popular approaches, e.g., Principal Component Analysis (PCA) or Partial Least Squares (PLS) to chromatographic data originating from a well-designed experimental study including more than one factor is not recommended. In the case of a well-designed experiment involving two or more factors (crossed or nested), data are usually decomposed into the contributions associated with the studied factors (and with their interactions), and the individual effect matrices are then analyzed using, e.g., PCA, as in the case of ASCA (analysis of variance combined with simultaneous component analysis). As an alternative to the ASCA method, we propose the application of PLS followed by target projection (TP), which allows a one-factor representation of the model for each column in the design dummy matrix. PLS application follows after proper deflation of the experimental matrix, i.e., to what are called the residuals under the reduced ANOVA model. The proposed approach (ANOVA-TP) is well suited for the study of designed chromatographic data of complex samples. It allows testing of statistical significance of the studied effects, 'biomarker' identification, and enables straightforward visualization and accurate estimation of between- and within-class variance. The proposed approach has been successfully applied to a case study aimed at evaluating the effect of pasteurization on the concentrations of various phenolic constituents of rooibos tea of different quality grades and its outcomes have been compared to those of ASCA.
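The ANOVA-style decomposition that both ASCA and the proposed ANOVA-TP rely on (splitting a designed data matrix into per-factor effect matrices plus residuals before component analysis) can be sketched as follows. The two-level design, simulated data, and function name are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def anova_decompose(X, factor):
    """Split centered data into a factor-effect matrix and residuals.

    Each row of the effect matrix is its group's mean (grand mean removed),
    in the spirit of ASCA/ANOVA-TP decompositions; the residuals are what
    a subsequent PCA or PLS step would analyze.
    """
    grand = X.mean(axis=0)
    centered = X - grand
    effect = np.zeros_like(centered)
    for level in np.unique(factor):
        mask = factor == level
        effect[mask] = centered[mask].mean(axis=0)
    return effect, centered - effect

rng = np.random.default_rng(0)
factor = np.repeat([0, 1], 10)        # hypothetical two-level design factor
X = rng.normal(size=(20, 5))
X[factor == 1] += 2.0                 # the factor shifts every variable
effect, resid = anova_decompose(X, factor)
```

By construction the grand mean, effect matrix, and residuals reconstruct the data exactly, so the variance attributable to the studied factor is cleanly separated from within-class variance.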
On the Endogeneity of the Mean-Variance Efficient Frontier.
Somerville, R. A.; O'Connell, Paul G. J.
2002-01-01
Explains that the endogeneity of the efficient frontier in the mean-variance model of portfolio selection is commonly obscured in portfolio selection literature and in widely used textbooks. Demonstrates endogeneity and discusses the impact of parameter changes on the mean-variance efficient frontier and on the beta coefficients of individual…
A Variance Explanation Paradox: When a Little Is a Lot.
Abelson, Robert P.
1985-01-01
Argues that percent variance explanation is a misleading index of the influence of systematic factors in cases where there are processes by which individually tiny influences cumulate to produce meaningful outcomes. An example is the computation of percentage of variance in batting performance among major league baseball players. (Author/CB)
An Analysis of Variance Framework for Matrix Sampling.
Sirotnik, Kenneth
Significant cost savings can be achieved with the use of matrix sampling in estimating population parameters from psychometric data. The statistical design is intuitively simple, using the framework of the two-way classification analysis of variance technique. For example, the mean and variance are derived from the performance of a certain grade…
29 CFR 1904.38 - Variances from the recordkeeping rule.
2010-07-01
... process your variance petition. (i) The Assistant Secretary will offer your employees and their authorized... the facts or conduct that may warrant revocation of your variance; and (ii) Provide you, your employees, and authorized employee representatives with an opportunity to participate in the...
Time variance effects and measurement error indications for MLS measurements
DEFF Research Database (Denmark)
Liu, Jiyuan
1999-01-01
Mathematical characteristics of Maximum-Length-Sequences are discussed, and effects of measuring on slightly time-varying systems with the MLS method are examined with computer simulations in MATLAB. A new coherence measure is suggested for the indication of time-variance effects. The results of the simulations show that the proposed MLS coherence can give an indication of time-variance effects.
Productive Failure in Learning the Concept of Variance
Kapur, Manu
2012-01-01
In a study with ninth-grade mathematics students on learning the concept of variance, students experienced either direct instruction (DI) or productive failure (PF), wherein they were first asked to generate a quantitative index for variance without any guidance before receiving DI on the concept. Whereas DI students relied only on the canonical…
Minimum Variance Portfolios in the Brazilian Equity Market
Directory of Open Access Journals (Sweden)
Alexandre Rubesam
2013-03-01
We investigate minimum variance portfolios in the Brazilian equity market using different methods to estimate the covariance matrix, from the simple model of using the sample covariance to multivariate GARCH models. We compare the performance of the minimum variance portfolios to those of the following benchmarks: (i) the IBOVESPA equity index, (ii) an equally-weighted portfolio, (iii) the maximum Sharpe ratio portfolio and (iv) the maximum growth portfolio. Our results show that the minimum variance portfolio has higher returns with lower risk compared to the benchmarks. We also consider long-short 130/30 minimum variance portfolios and obtain similar results. The minimum variance portfolio invests in relatively few stocks with low βs measured with respect to the IBOVESPA index, being easily replicable by individual and institutional investors alike.
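The simplest of the covariance-based constructions above, the global minimum-variance portfolio from a sample covariance matrix, has the closed form w = Σ⁻¹1 / (1'Σ⁻¹1). A minimal sketch with simulated returns (illustrative numbers, not Brazilian market data):

```python
import numpy as np

def min_variance_weights(cov):
    """Fully-invested global minimum-variance weights: w = S^{-1}1 / (1'S^{-1}1)."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)   # solve S w = 1 instead of inverting S
    return w / w.sum()

rng = np.random.default_rng(1)
returns = rng.normal(0.0005, 0.02, size=(500, 4))   # hypothetical daily returns
cov = np.cov(returns, rowvar=False)
w = min_variance_weights(cov)
mv_var = w @ cov @ w                 # in-sample portfolio variance
ew = np.full(4, 0.25)
ew_var = ew @ cov @ ew               # equal-weight benchmark variance
```

Because the equal-weight portfolio is one feasible fully-invested choice, mv_var can never exceed ew_var in-sample; out-of-sample performance depends on the covariance estimator, which is the paper's comparison.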
Confidence Intervals of Variance Functions in Generalized Linear Model
Institute of Scientific and Technical Information of China (English)
Yong Zhou; Dao-ji Li
2006-01-01
In this paper we introduce an appealing nonparametric method for estimating variance and conditional variance functions in generalized linear models (GLMs), when designs are fixed points and random variables, respectively. Bias-corrected confidence bands are proposed for the (conditional) variance by local linear smoothers. Nonparametric techniques are developed in deriving the bias-corrected confidence intervals of the (conditional) variance. The asymptotic distribution of the proposed estimator is established, and it is shown that the bias-corrected confidence bands asymptotically have the correct coverage properties. A small simulation is performed when the unknown regression parameter is estimated by nonparametric quasi-likelihood. The results are also applicable to nonparametric autoregressive time series models with heteroscedastic conditional variance.
Research on variance of subnets in network sampling
Institute of Scientific and Technical Information of China (English)
Qi Gao; Xiaoting Li; Feng Pan
2014-01-01
In the recent research of network sampling, some sampling concepts are misunderstood, and the variance of subnets is not taken into account. We propose the correct definition of the sample and sampling rate in network sampling, as well as the formula for calculating the variance of subnets. Then, three commonly used sampling strategies are applied to databases of the connecting nearest-neighbor (CNN) model, random network and small-world network to explore the variance in network sampling. As proved by the results, snowball sampling obtains the most variance of subnets, but does well in capturing the network structure. The variance of networks sampled by the hub and random strategy are much smaller. The hub strategy performs well in reflecting the property of the whole network, while random sampling obtains more accurate results in evaluating clustering coefficient.
Utility functions predict variance and skewness risk preferences in monkeys.
Genest, Wilfried; Stauffer, William R; Schultz, Wolfram
2016-07-26
Utility is the fundamental variable thought to underlie economic choices. In particular, utility functions are believed to reflect preferences toward risk, a key decision variable in many real-life situations. To assess the validity of utility representations, it is therefore important to examine risk preferences. In turn, this approach requires formal definitions of risk. A standard approach is to focus on the variance of reward distributions (variance-risk). In this study, we also examined a form of risk related to the skewness of reward distributions (skewness-risk). Thus, we tested the extent to which empirically derived utility functions predicted preferences for variance-risk and skewness-risk in macaques. The expected utilities calculated for various symmetrical and skewed gambles served to define formally the direction of stochastic dominance between gambles. In direct choices, the animals' preferences followed both second-order (variance) and third-order (skewness) stochastic dominance. Specifically, for gambles with different variance but identical expected values (EVs), the monkeys preferred high-variance gambles at low EVs and low-variance gambles at high EVs; in gambles with different skewness but identical EVs and variances, the animals preferred positively over symmetrical and negatively skewed gambles in a strongly transitive fashion. Thus, the utility functions predicted the animals' preferences for variance-risk and skewness-risk. Using these well-defined forms of risk, this study shows that monkeys' choices conform to the internal reward valuations suggested by their utility functions. This result implies a representation of utility in monkeys that accounts for both variance-risk and skewness-risk preferences.
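The link between utility curvature and variance-risk preference described above can be illustrated with expected utility: for two gambles with identical expected value, any concave utility ranks the lower-variance gamble higher (second-order stochastic dominance). The square-root utility and the payoffs below are stand-ins for illustration, not the utility functions estimated for the animals.

```python
import numpy as np

def expected_utility(outcomes, probs, u):
    """Expected utility of a discrete gamble under utility function u."""
    return float(np.dot(probs, u(np.asarray(outcomes, dtype=float))))

u = np.sqrt   # concave, i.e. risk-averse; illustration only
# Two gambles with the same expected value (5) but different variances
eu_low  = expected_utility([4.0, 6.0], [0.5, 0.5], u)   # variance 1
eu_high = expected_utility([1.0, 9.0], [0.5, 0.5], u)   # variance 16
```

Here eu_low exceeds eu_high, the ordering second-order stochastic dominance predicts for any concave utility; a convex segment at low reward levels would reverse the ranking, matching the monkeys' preference for high-variance gambles at low expected values.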
Variance After-Effects Distort Risk Perception in Humans.
Payzan-LeNestour, Elise; Balleine, Bernard W; Berrada, Tony; Pearson, Joel
2016-06-01
In many contexts, decision-making requires an accurate representation of outcome variance, otherwise known as "risk" in economics. Conventional economic theory assumes this representation to be perfect, thereby focusing on risk preferences rather than risk perception per se [1-3] (but see [4]). However, humans often misrepresent their physical environment. Perhaps the most striking of such misrepresentations are the many well-known sensory after-effects, which most commonly involve visual properties, such as color, contrast, size, and motion. For example, viewing downward motion of a waterfall induces the anomalous biased experience of upward motion during subsequent viewing of static rocks to the side [5]. Given that after-effects are pervasive, occurring across a wide range of time horizons [6] and stimulus dimensions (including properties such as face perception [7, 8], gender [9], and numerosity [10]), and that some evidence exists that neurons show adaptation to variance in the sole visual feature of motion [11], we were interested in assessing whether after-effects distort variance perception in humans. We found that perceived variance is decreased after prolonged exposure to high variance and increased after exposure to low variance within a number of different visual representations of variance. We demonstrate these after-effects occur across very different visual representations of variance, suggesting that these effects are not sensory, but operate at a high (cognitive) level of information processing. These results suggest, therefore, that variance constitutes an independent cognitive property and that prolonged exposure to extreme variance distorts risk perception, a fundamental challenge for economic theory and practice. PMID:27161500
Modeling variance structure of body shape traits of Lipizzan horses.
Kaps, M; Curik, I; Baban, M
2010-09-01
Heterogeneity of variance of growth traits over age is a common issue in estimating genetic parameters and is addressed in this study by selecting appropriate variance structure models for additive genetic and environmental variances. Modeling and partitioning those variances connected with analyzing small data sets were demonstrated on Lipizzan horses. The following traits were analyzed: withers height, chest girth, and cannon bone circumference. The measurements were taken at birth, and at approximately 6, 12, 24, and 36 mo of age of 660 Lipizzan horses born in Croatia between 1948 and 2000. The corresponding pedigree file consisted of 1,458 horses. Sex, age of dam, and stud-year-season interaction were considered fixed effects; additive genetic and permanent environment effects were defined as random. Linear adjustments of age at measuring were done within measuring groups. Maternal effects were included only for measurements taken at birth and at 6 mo. Additive genetic variance structures were modeled by using uniform structures or structures based on polynomial random regression. Environmental variance structures were modeled by using one of the following models: unstructured, exponential, Gaussian, or combinations of identity or diagonal with structures based on polynomial random regression. The parameters were estimated by using REML. Comparison and fits of the models were assessed by using Akaike and Bayesian information criteria, and by checking graphically the adequacy of the shape of the overall (phenotypic) and component (additive genetic and environmental) variance functions. The best overall fit was obtained from models with unstructured error variance. Compared with the model with uniform additive genetic variance, models with structures based on random regression only slightly improved overall fit. Exponential and Gaussian models were generally not suitable because they do not accommodate adequately heterogeneity of variance. Using the unstructured
Pricing Volatility Derivatives Under the Modified Constant Elasticity of Variance Model
Leunglung Chan; Eckhard Platen
2015-01-01
This paper studies volatility derivatives such as variance swaps, volatility swaps, and options on variance under the modified constant elasticity of variance model, using the benchmark approach. The analytical expressions of the pricing formulas for variance swaps are presented. In addition, numerical solutions for variance swaps, volatility swaps and options on variance are demonstrated.
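While pricing these contracts requires a model, the variance swap payoff itself is simple: the notional times the gap between annualized realized variance of log-returns and the strike. A sketch under that usual convention, with made-up prices and strike:

```python
import numpy as np

def variance_swap_payoff(closes, strike_var, notional, periods_per_year=252):
    """P&L at expiry: notional x (annualized realized variance - strike)."""
    r = np.diff(np.log(np.asarray(closes, dtype=float)))  # log-returns
    realized = periods_per_year * np.mean(r**2)           # annualized realized var
    return notional * (realized - strike_var)

closes = [100.0, 101.0, 99.0, 100.0]   # illustrative daily closing prices
pnl = variance_swap_payoff(closes, strike_var=0.04, notional=1000.0)
```

The long side profits when realized variance exceeds the strike; with the prices above, annualized realized variance is about 0.05, so a 0.04 strike pays off and a 0.06 strike loses.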
Sinha, Neeraj
2014-01-01
This Phase II project validated a state-of-the-art LES model, coupled with a Ffowcs Williams-Hawkings (FW-H) far-field acoustic solver, to support the development of advanced engine concepts. These concepts include innovative flow control strategies to attenuate jet noise emissions. The end-to-end LES/FW-H noise prediction model was demonstrated and validated by applying it to rectangular nozzle designs with a high aspect ratio. The model also was validated against acoustic and flow-field data from a realistic jet-pylon experiment, thereby significantly advancing the state of the art for LES.
Global Gravity Wave Variances from Aura MLS: Characteristics and Interpretation
Wu, Dong L.; Eckermann, Stephen D.
2008-01-01
The gravity wave (GW)-resolving capabilities of 118-GHz saturated thermal radiances acquired throughout the stratosphere by the Microwave Limb Sounder (MLS) on the Aura satellite are investigated and initial results presented. Because the saturated (optically thick) radiances resolve GW perturbations from a given altitude at different horizontal locations, variances are evaluated at 12 pressure altitudes between 21 and 51 km using the 40 saturated radiances found at the bottom of each limb scan. Forward modeling simulations show that these variances are controlled mostly by GWs with vertical wavelengths λz ≳ 5 km and horizontal along-track wavelengths λy ~ 100-200 km. The tilted cigar-shaped three-dimensional weighting functions yield highly selective responses to GWs of high intrinsic frequency that propagate toward the instrument. The latter property is used to infer the net meridional component of GW propagation by differencing the variances acquired from ascending (A) and descending (D) orbits. Because of improved vertical resolution and sensitivity, Aura MLS GW variances are 5-8 times larger than those from the Upper Atmosphere Research Satellite (UARS) MLS. Like UARS MLS variances, monthly-mean Aura MLS variances in January and July 2005 are enhanced when local background wind speeds are large, due largely to GW visibility effects. Zonal asymmetries in variance maps reveal enhanced GW activity at high latitudes due to forcing by flow over major mountain ranges and at tropical and subtropical latitudes due to enhanced deep convective generation as inferred from contemporaneous MLS cloud-ice data. At 21-28-km altitude (heights not measured by the UARS MLS), GW variance in the tropics is systematically enhanced and shows clear variations with the phase of the quasi-biennial oscillation, in general agreement with GW temperature variances derived from radiosonde, rocketsonde, and limb-scan vertical profiles.
Comparison of multiplicative heterogeneous variance adjustment models for genetic evaluations.
Márkus, Sz; Mäntysaari, E A; Strandén, I; Eriksson, J-Å; Lidauer, M H
2014-06-01
Two heterogeneous variance adjustment methods and two variance models were compared in a simulation study. The method used for heterogeneous variance adjustment in the Nordic test-day model, which is a multiplicative method based on Meuwissen (J. Dairy Sci., 79, 1996, 310), was compared with a restricted multiplicative method where the fixed effects were not scaled. Both methods were tested with two different variance models, one with a herd-year and the other with a herd-year-month random effect. The simulation study was built on two field data sets from Swedish Red dairy cattle herds. For both data sets, 200 herds with test-day observations over a 12-year period were sampled. For one data set, herds were sampled randomly, while for the other, each herd was required to have at least 10 first-calving cows per year. The simulations supported the applicability of both methods and models, but the multiplicative mixed model was more sensitive in the case of small strata sizes. Estimation of variance components for the variance models resulted in different parameter estimates, depending on the applied heterogeneous variance adjustment method and variance model combination. Our analyses showed that the assumption of a first-order autoregressive correlation structure between random-effect levels is reasonable when within-herd heterogeneity is modelled by year classes, but less appropriate for within-herd heterogeneity by month classes. Of the studied alternatives, the multiplicative method and a variance model with a random herd-year effect were found most suitable for the Nordic test-day model for dairy cattle evaluation.
International Nuclear Information System (INIS)
The reduction of neptunium and uranium was studied using a flow-type electrolytic cell containing a carbon-fiber column electrode. Np(VI) (10⁻³ mol·l⁻¹) in 3 mol·l⁻¹ HNO₃ solution was quantitatively reduced to Np(V) at a potential of 0.3 V vs. Ag/AgCl using the cell. Reduction of U(VI) (0.1 mol·l⁻¹) to U(IV) with co-existing Np and Tc at -0.3 V vs. Ag/AgCl in 6 mol·l⁻¹ HNO₃ solution was also demonstrated. (author)
Variance computations for functional of absolute risk estimates.
Pfeiffer, R M; Petracci, E
2011-07-01
We present a simple influence function based approach to compute the variances of estimates of absolute risk and functions of absolute risk. We apply this approach to criteria that assess the impact of changes in the risk factor distribution on absolute risk for an individual and at the population level. As an illustration we use an absolute risk prediction model for breast cancer that includes modifiable risk factors in addition to standard breast cancer risk factors. Influence function based variance estimates for absolute risk and the criteria are compared to bootstrap variance estimates.
Boyce, Lola; Lovelace, Thomas B.
1989-01-01
FORTRAN programs RANDOM3 and RANDOM4 are documented in the form of a user's manual. Both programs are based on fatigue strength reduction, using a probabilistic constitutive model. The programs predict the random lifetime of an engine component to reach a given fatigue strength. The theoretical backgrounds, input data instructions, and sample problems illustrating the use of the programs are included.
Variance estimation in neutron coincidence counting using the bootstrap method
Energy Technology Data Exchange (ETDEWEB)
Dubi, C., E-mail: chendb331@gmail.com [Physics Department, Nuclear Research Center of the Negev, P.O.B. 9001 Beer Sheva (Israel); Ocherashvilli, A.; Ettegui, H. [Physics Department, Nuclear Research Center of the Negev, P.O.B. 9001 Beer Sheva (Israel); Pedersen, B. [Nuclear Security Unit, Institute for Transuranium Elements, Via E. Fermi, 2749 JRC, Ispra (Italy)
2015-09-11
In the study, we demonstrate the implementation of the “bootstrap” method for a reliable estimation of the statistical error in Neutron Multiplicity Counting (NMC) on plutonium samples. The “bootstrap” method estimates the variance of a measurement through a re-sampling process, in which a large number of pseudo-samples are generated, from which the so-called bootstrap distribution is generated. The outline of the present study is to give a full description of the bootstrapping procedure, and to validate, through experimental results, the reliability of the estimated variance. Results indicate both a very good agreement between the measured variance and the variance obtained through the bootstrap method, and a robustness of the method with respect to the duration of the measurement and the bootstrap parameters.
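The re-sampling procedure described above is the standard nonparametric bootstrap: draw many pseudo-samples with replacement, recompute the statistic on each, and take the variance of the replicates. A minimal sketch with Poisson counts as a stand-in for neutron multiplicity data (the count model and sample size are illustrative, not from the experiment):

```python
import numpy as np

def bootstrap_variance(sample, statistic, n_boot=2000, seed=0):
    """Estimate Var[statistic(sample)] by resampling the data with replacement."""
    rng = np.random.default_rng(seed)
    sample = np.asarray(sample)
    reps = np.empty(n_boot)
    for b in range(n_boot):
        # each pseudo-sample has the same size as the original measurement
        reps[b] = statistic(rng.choice(sample, size=sample.size, replace=True))
    return reps.var(ddof=1)

rng = np.random.default_rng(42)
counts = rng.poisson(5.0, size=400)              # stand-in for count data
boot_var = bootstrap_variance(counts, np.mean)
classic_var = counts.var(ddof=1) / counts.size   # textbook variance of the mean
```

For a simple statistic like the mean the bootstrap reproduces the textbook formula; its value, as in the paper, is that the same recipe works for statistics (doubles and triples rates, fitted masses) with no closed-form variance.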
Detecting Pulsars with Interstellar Scintillation in Variance Images
Dai, S; Bell, M E; Coles, W A; Hobbs, G; Ekers, R D; Lenc, E
2016-01-01
Pulsars are the only cosmic radio sources known to be sufficiently compact to show diffractive interstellar scintillations. Images of the variance of radio signals in both time and frequency can be used to detect pulsars in large-scale continuum surveys using the next generation of synthesis radio telescopes. This technique allows a search over the full field of view while avoiding the need for expensive pixel-by-pixel high time resolution searches. We investigate the sensitivity of detecting pulsars in variance images. We show that variance images are most sensitive to pulsars whose scintillation time-scales and bandwidths are close to the subintegration time and channel bandwidth. Therefore, in order to maximise the detection of pulsars for a given radio continuum survey, it is essential to retain a high time and frequency resolution, allowing us to make variance images sensitive to pulsars with different scintillation properties. We demonstrate the technique with Murchison Widefield Array data and show th...
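The core of the technique is just a per-pixel variance over the time-frequency plane of the visibility or image cube: a scintillating pulsar is modulated in time and frequency, so its pixel carries excess variance even when its mean flux is unremarkable. A toy sketch (the cube dimensions, noise level, and sinusoidal stand-in for scintillation are all illustrative assumptions):

```python
import numpy as np

def variance_image(dynspec):
    """Per-pixel variance over the time-frequency plane.

    dynspec: array of shape (n_pix, n_time, n_chan).
    """
    return dynspec.var(axis=(1, 2))

rng = np.random.default_rng(3)
n_pix, n_t, n_f = 100, 64, 32
cube = rng.normal(1.0, 0.1, size=(n_pix, n_t, n_f))  # steady sources + noise
scint = 0.5 * np.sin(np.linspace(0.0, 8.0 * np.pi, n_t))[:, None]
cube[0] += scint                                      # modulated "pulsar" at pixel 0
vimg = variance_image(cube)                           # pulsar stands out in variance
```

The modulated pixel dominates the variance image while being invisible in a mean image, which is why the subintegration time and channel width must be matched to the scintillation scales.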
Realized range-based estimation of integrated variance
DEFF Research Database (Denmark)
Christensen, Kim; Podolskij, Mark
We provide a set of probabilistic laws for estimating the quadratic variation of continuous semimartingales with realized range-based variance - a statistic that replaces every squared return of realized variance with a normalized squared range. If the entire sample path of the process is available, and under a set of weak conditions, our statistic is consistent and has a mixed Gaussian limit, whose precision is five times greater than that of realized variance. In practice, of course, inference is drawn from discrete data and true ranges are unobserved, leading to downward bias. We solve this problem to get a consistent, mixed normal estimator, irrespective of non-trading effects. This estimator has varying degrees of efficiency over realized variance, depending on how many observations are used to construct the high-low. The methodology is applied to TAQ data and compared with realized...
Realized range-based estimation of integrated variance
DEFF Research Database (Denmark)
Christensen, Kim; Podolskij, Mark
2007-01-01
We provide a set of probabilistic laws for estimating the quadratic variation of continuous semimartingales with the realized range-based variance, a statistic that replaces every squared return of the realized variance with a normalized squared range. If the entire sample path of the process is available, and under a set of weak conditions, our statistic is consistent and has a mixed Gaussian limit, whose precision is five times greater than that of the realized variance. In practice, of course, inference is drawn from discrete data and true ranges are unobserved, leading to downward bias. We solve this problem to get a consistent, mixed normal estimator, irrespective of non-trading effects. This estimator has varying degrees of efficiency over realized variance, depending on how many observations are used to construct the high-low. The methodology is applied to TAQ data and compared...
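The two estimators compared in these papers can be sketched directly: realized variance sums squared interval returns, while the range-based statistic sums squared log-ranges normalized by λ₂ = 4 ln 2, the second moment of the range of a standard Brownian motion over a unit interval. The simulated Brownian path and interval layout below are illustrative assumptions:

```python
import numpy as np

LAMBDA_2 = 4.0 * np.log(2.0)   # E[range^2] of standard Brownian motion on [0,1]

rng = np.random.default_rng(7)
sigma2 = 0.02**2               # integrated variance to be recovered
n_int, steps = 78, 100         # intraday intervals x ticks per interval
incr = rng.normal(0.0, np.sqrt(sigma2 / (n_int * steps)), size=(n_int, steps))
paths = np.cumsum(incr, axis=1)                       # log-price within interval
paths = np.concatenate([np.zeros((n_int, 1)), paths], axis=1)  # include the open
rets = paths[:, -1]                                   # interval log-returns
ranges = paths.max(axis=1) - paths.min(axis=1)        # interval high-low ranges

rv  = np.sum(rets**2)               # realized variance
rrv = np.sum(ranges**2) / LAMBDA_2  # realized range-based variance
```

Both estimate sigma2; the range-based statistic is markedly less noisy (the roughly fivefold precision gain cited above), and its mild downward bias here comes from observing the range at discrete ticks, exactly the bias these papers correct for.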
RISK ANALYSIS, ANALYSIS OF VARIANCE: GETTING MORE FROM OUR DATA
Analysis of variance (ANOVA) and regression are common statistical techniques used to analyze agronomic experimental data and determine significant differences among yields due to treatments or other experimental factors. Risk analysis provides an alternate and complementary examination of the same...
A new definition of nonlinear statistics mean and variance
Chen, W.
1999-01-01
This note presents a new definition of nonlinear statistics mean and variance to simplify the nonlinear statistics computations. These concepts aim to provide a theoretical explanation of a novel nonlinear weighted residual methodology presented recently by the present author.
Wavelet Variance Analysis of EEG Based on Window Function
Institute of Scientific and Technical Information of China (English)
ZHENG Yuan-zhuang; YOU Rong-yi
2014-01-01
A new wavelet variance analysis method based on a window function is proposed to investigate the dynamical features of the electroencephalogram (EEG). The experimental results show that the wavelet energies of epileptic EEGs are more discrete than those of normal EEGs, and the variation of wavelet variance differs between epileptic and normal EEGs as the time-window width increases. Furthermore, it is found that the wavelet subband entropy (WSE) of epileptic EEGs is lower than that of normal EEGs.
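The quantity being windowed in the analysis above, wavelet variance at a given scale, can be sketched with the Haar wavelet: average the signal in blocks, difference adjacent block pairs, and halve the variance of the differences. This is a generic Haar implementation under stated assumptions, not the paper's specific window-function method:

```python
import numpy as np

def haar_wavelet_variance(x, scale):
    """Haar wavelet variance of x at a given scale (samples per half-window).

    Splits the signal's variance scale by scale: for unit white noise the
    wavelet variance falls off as 1/scale.
    """
    n = (len(x) // (2 * scale)) * (2 * scale)       # trim to whole pairs
    blocks = x[:n].reshape(-1, scale).mean(axis=1)  # block averages
    d = (blocks[1::2] - blocks[::2]) / np.sqrt(2.0) # normalized Haar details
    return d.var()

rng = np.random.default_rng(5)
white = rng.normal(size=4096)
v1 = haar_wavelet_variance(white, 1)   # expect ~1 for unit white noise
v8 = haar_wavelet_variance(white, 8)   # expect ~1/8
```

Comparing such per-scale variances between sliding windows of epileptic and normal EEG is the kind of contrast the abstract reports.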
Dividends and Equity Prices: The Variance Trade Off.
Margaret Bray; Giovanni Marseguerra
2002-01-01
This paper shows that standard corporate finance theory implies that there is potentially a trade off between the variances of dividends and equity prices. We show how the trade off works in a stochastic difference equation model of dividend policy, demonstrating that the solution may be unstable for plausible parameter values. At the boundary of the feasible set of price and dividend variances, prices and dividends are perfectly correlated and both follow an AR(1) process. We calculate explic...
Temperature variance study in Monte-Carlo photon transport theory
International Nuclear Information System (INIS)
We study different Monte-Carlo methods for solving radiative transfer problems, and particularly Fleck's Monte-Carlo method. We first give the different time-discretization schemes and the corresponding stability criteria. Then we write the temperature variance as a function of the variances of temperature and absorbed energy at the previous time step. Finally we obtain some stability criteria for the Monte-Carlo method in the stationary case
Wild bootstrap of the mean in the infinite variance case
Giuseppe Cavaliere; Iliyan Georgiev; Robert Taylor, A. M.
2011-01-01
It is well known that the standard i.i.d. bootstrap of the mean is inconsistent in a location model with infinite variance (α-stable) innovations. This occurs because the bootstrap distribution of a normalised sum of infinite variance random variables tends to a random distribution. Consistent bootstrap algorithms based on subsampling methods have been proposed but have the drawback that they deliver much wider confidence sets than those generated by the i.i.d. bootstrap owing to the fact ...
Global Variance Risk Premium and Forex Return Predictability
Aloosh, Arash
2014-01-01
In a long-run risk model with stochastic volatility and frictionless markets, I express expected forex returns as a function of consumption growth variances and stock variance risk premiums (VRPs)—the difference between the risk-neutral and statistical expectations of market return variation. This provides a motivation for using the forward-looking information available in stock market volatility indices to predict forex returns. Empirically, I find that stock VRPs predict forex returns at a ...
Occupancy, spatial variance, and the abundance of species
He, F.; Gaston, K J
2003-01-01
A notable and consistent ecological observation known for a long time is that spatial variance in the abundance of a species increases with its mean abundance and that this relationship typically conforms well to a simple power law (Taylor 1961). Indeed, such models can be used at a spectrum of spatial scales to describe spatial variance in the abundance of a single species at different times or in different regions and of different species across the same set of areas (Tayl...
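Taylor's power law relates spatial variance to mean abundance as V = aM^b, so it is usually fitted as a straight line on the log-log scale. A sketch with simulated counts (the species/site layout is a made-up example; Poisson abundances give slope b = 1 by construction, whereas real assemblages typically show 1 < b < 2):

```python
import numpy as np

rng = np.random.default_rng(11)
# Hypothetical abundance counts: 30 species x 50 sites
species_means = np.exp(rng.uniform(0.0, 4.0, 30))
counts = rng.poisson(species_means[:, None], size=(30, 50))
m = counts.mean(axis=1)               # mean abundance per species
v = counts.var(axis=1, ddof=1)        # spatial variance per species
# Taylor's power law V = a * M^b, fitted on the log-log scale
b, log_a = np.polyfit(np.log(m), np.log(v), 1)
```

Departures of the fitted b from 1 measure aggregation: over-dispersed (clumped) species push b toward 2, which is what makes the exponent ecologically informative.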
A characterization of Poisson-Gaussian families by generalized variance
Kokonendji, Célestin C.; Masmoudi, Afif
2006-01-01
We show that if the generalized variance of an infinitely divisible natural exponential family [math] in a [math] -dimensional linear space is of the form [math] , then there exists [math] in [math] such that [math] is a product of [math] univariate Poisson and ( [math] )-variate Gaussian families. In proving this fact, we use a suitable representation of the generalized variance as a Laplace transform and the result, due to Jörgens, Calabi and Pogorelov, that any strictly convex smooth funct...
FMRI group analysis combining effect estimates and their variances
Chen, Gang; Saad, Ziad S.; Nath, Audrey R.; Michael S Beauchamp; Cox, Robert W.
2011-01-01
Conventional functional magnetic resonance imaging (FMRI) group analysis makes two key assumptions that are not always justified. First, the data from each subject is condensed into a single number per voxel, under the assumption that within-subject variance for the effect of interest is the same across all subjects or is negligible relative to the cross-subject variance. Second, it is assumed that all data values are drawn from the same Gaussian distribution with no outliers. We propose an a...
On variance estimate for covariate adjustment by propensity score analysis.
Zou, Baiming; Zou, Fei; Shuster, Jonathan J; Tighe, Patrick J; Koch, Gary G; Zhou, Haibo
2016-09-10
Propensity score (PS) methods have been used extensively to adjust for confounding factors in the statistical analysis of observational data in comparative effectiveness research. There are four major PS-based adjustment approaches: PS matching, PS stratification, covariate adjustment by PS, and PS-based inverse probability weighting. Though covariate adjustment by PS is one of the most frequently used PS-based methods in clinical research, the conventional variance estimation of the treatment effects estimate under covariate adjustment by PS is biased. As Stampf et al. have shown, this bias in variance estimation is likely to lead to invalid statistical inference and could result in erroneous public health conclusions (e.g., food and drug safety and adverse events surveillance). To address this issue, we propose a two-stage analytic procedure to develop a valid variance estimator for the covariate adjustment by PS analysis strategy. We also carry out a simple empirical bootstrap resampling scheme. Both proposed procedures are implemented in an R function for public use. Extensive simulation results demonstrate the bias in the conventional variance estimator and show that both proposed variance estimators offer valid estimates for the true variance, and they are robust to complex confounding structures. The proposed methods are illustrated for a post-surgery pain study. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26999553
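The empirical bootstrap resampling scheme mentioned in the abstract can be illustrated with a generic sketch (this is not the authors' R function; the estimator, data, and sample sizes below are hypothetical):

```python
import numpy as np

def bootstrap_variance(estimator, data, n_boot=1000, seed=0):
    """Empirical bootstrap variance of an estimator: resample rows with
    replacement, recompute the statistic, take the variance of replicates."""
    rng = np.random.default_rng(seed)
    n = len(data)
    reps = np.array([estimator(data[rng.integers(0, n, size=n)])
                     for _ in range(n_boot)])
    return reps.var(ddof=1)

rng = np.random.default_rng(1)
x = rng.normal(loc=0.0, scale=2.0, size=200)
v_boot = bootstrap_variance(np.mean, x)
# Sanity check: for the sample mean, theory gives Var ≈ s^2 / n,
# and the bootstrap variance should agree closely.
v_theory = x.var(ddof=1) / len(x)
```

The same resampling pattern applies to a treatment-effect estimate under covariate adjustment by PS: the estimator argument would refit the PS model and outcome model on each resample.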
A Monte Carlo Study of Seven Homogeneity of Variance Tests
Directory of Open Access Journals (Sweden)
Howard B. Lee
2010-01-01
Problem statement: The decision by SPSS (now PASW) to use the unmodified Levene test to test homogeneity of variance was questioned. It was compared to six other tests. In total, seven homogeneity of variance tests used in Analysis of Variance (ANOVA) were compared on robustness and power using Monte Carlo studies. The homogeneity of variance tests were (1) Levene, (2) modified Levene, (3) Z-variance, (4) Overall-Woodward Modified Z-variance, (5) O'Brien, (6) Samiuddin Cube Root and (7) F-Max. Approach: Each test was subjected to Monte Carlo analysis through differently shaped distributions: (1) normal, (2) platykurtic, (3) leptokurtic, (4) moderately skewed and (5) highly skewed. The Levene test is the one used in all of the latest versions of SPSS. Results: The results from these studies showed that the Levene test is neither the best nor the worst in terms of robustness and power. However, the modified Levene test showed very good robustness when compared to the other tests, but lower power. The Samiuddin test is at its best in terms of robustness and power when the distribution is normal. The results of this study showed the strengths and weaknesses of the seven tests. Conclusion/Recommendations: No single test outperformed the others in terms of robustness and power. The authors recommend that kurtosis and skewness indices be presented in statistical computer program packages such as SPSS to guide the data analyst in choosing which test would provide the highest robustness and power.
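A Monte Carlo comparison of this kind can be sketched with SciPy's implementation of Levene's test (`center="mean"` gives the unmodified test; `center="median"` gives the modified Levene/Brown-Forsythe variant). The group sizes, trial counts, and distributions below are illustrative assumptions, not the study's actual design:

```python
import numpy as np
from scipy import stats

def empirical_type_i_error(dist_sampler, n_groups=3, n_per_group=30,
                           n_trials=1000, alpha=0.05, center="mean", seed=0):
    """Estimate the Type I error rate of Levene's test by Monte Carlo:
    sample groups with EQUAL variances and count false rejections."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_trials):
        groups = [dist_sampler(rng, n_per_group) for _ in range(n_groups)]
        _, p = stats.levene(*groups, center=center)
        rejections += p < alpha
    return rejections / n_trials

# Normal data, unmodified (mean-centred) Levene: the empirical rejection
# rate should sit near the nominal alpha = 0.05.
rate_normal = empirical_type_i_error(lambda rng, n: rng.normal(size=n))

# Heavy-tailed (leptokurtic) data with the modified, median-centred test,
# which is the robust variant the study found attractive.
rate_t3 = empirical_type_i_error(lambda rng, n: rng.standard_t(df=3, size=n),
                                 center="median")
```

Power curves are obtained the same way by sampling groups with unequal variances and counting correct rejections.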
Variance estimation in the analysis of microarray data
Wang, Yuedong
2009-04-01
Microarrays are one of the most widely used high throughput technologies. One of the main problems in the area is that conventional estimates of the variances that are required in the t-statistic and other statistics are unreliable owing to the small number of replications. Various methods have been proposed in the literature to overcome this lack of degrees of freedom problem. In this context, it is commonly observed that the variance increases proportionally with the intensity level, which has led many researchers to assume that the variance is a function of the mean. Here we concentrate on estimation of the variance as a function of an unknown mean in two models: the constant coefficient of variation model and the quadratic variance-mean model. Because the means are unknown and estimated with few degrees of freedom, naive methods that use the sample mean in place of the true mean are generally biased because of the errors-in-variables phenomenon. We propose three methods for overcoming this bias. The first two are variations on the theme of the so-called heteroscedastic simulation-extrapolation estimator, modified to estimate the variance function consistently. The third class of estimators is entirely different, being based on semiparametric information calculations. Simulations show the power of our methods and their lack of bias compared with the naive method that ignores the measurement error. The methodology is illustrated by using microarray data from leukaemia patients.
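The errors-in-variables bias described above is easy to reproduce in a toy setting. The sketch below simulates the constant coefficient-of-variation model and fits it naively, using the few-replicate sample mean in place of the true mean; all parameter values are hypothetical and this is not one of the paper's proposed estimators:

```python
import numpy as np

rng = np.random.default_rng(3)
n_genes, n_reps = 500, 3
cv = 0.2                                   # true constant coefficient of variation

mu = rng.uniform(5.0, 50.0, size=n_genes)  # true (unknown) mean intensities
data = rng.normal(mu[:, None], cv * mu[:, None], size=(n_genes, n_reps))

xbar = data.mean(axis=1)                   # noisy estimates of mu (few df)
s2 = data.var(axis=1, ddof=1)              # unreliable per-gene variances

# Naive least-squares fit of the constant-CV model  var = c * mean^2,
# plugging xbar in for the unknown mu. Because xbar is an error-contaminated
# proxy, this estimate of c is biased -- the errors-in-variables phenomenon
# that motivates the paper's SIMEX-style corrections.
c_naive = np.sum(s2 * xbar**2) / np.sum(xbar**4)
cv_naive = np.sqrt(c_naive)
```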
Energy Technology Data Exchange (ETDEWEB)
NONE
1998-01-01
This report presents the results of a U.S. Department of Energy (DOE) Innovative Clean Coal Technology (ICCT) project demonstrating advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NOx) emissions from coal-fired boilers. The project was conducted at Georgia Power Company's Plant Hammond Unit 4 located near Rome, Georgia. The technologies demonstrated at this site include Foster Wheeler Energy Corporation's advanced overfire air system and Controlled Flow/Split Flame low NOx burner. The primary objective of the demonstration at Hammond Unit 4 was to determine the long-term effects of commercially available wall-fired low NOx combustion technologies on NOx emissions and boiler performance. Short-term tests of each technology were also performed to provide engineering information about emissions and performance trends. A target of achieving fifty percent NOx reduction using combustion modifications was established for the project. Short-term and long-term baseline testing was conducted in an "as-found" condition from November 1989 through March 1990. Following retrofit of the AOFA system during a four-week outage in spring 1990, the AOFA configuration was tested from August 1990 through March 1991. The FWEC CF/SF low NOx burners were then installed during a seven-week outage starting on March 8, 1991 and continuing to May 5, 1991. Following optimization of the LNBs and ancillary combustion equipment by FWEC personnel, LNB testing commenced during July 1991 and continued until January 1992. Testing in the LNB+AOFA configuration was completed during August 1993. This report provides documentation on the design criteria used in the performance of this project as it pertains to the scope involved with the low NOx burners and advanced overfire systems.
Energy Technology Data Exchange (ETDEWEB)
1993-12-31
This quarterly report discusses the technical progress of an Innovative Clean Coal Technology (ICCT) demonstration being conducted at Georgia Power Company's Plant Hammond Unit 4 located near Rome, Georgia. The primary goal of this project is the characterization of the low NOx combustion equipment through the collection and analysis of long-term emissions data. A target of achieving fifty percent NOx reduction using combustion modifications has been established for the project. The project provides a stepwise retrofit of an advanced overfire air (AOFA) system followed by low NOx burners (LNB). During each test phase of the project, diagnostic, performance, long-term, and verification testing will be performed. These tests are used to quantify the NOx reductions of each technology and evaluate the effects of those reductions on other combustion parameters such as particulate characteristics and boiler efficiency. During this quarter, long-term testing of the LNB + AOFA configuration continued and no parametric testing was performed. Further full-load optimization of the LNB + AOFA system began on March 30, 1993. Following completion of this optimization, comprehensive testing in this configuration will be performed including diagnostic, performance, verification, long-term, and chemical emissions testing. These tests are scheduled to start in May 1993 and continue through August 1993. Preliminary engineering and procurement are progressing on the Advanced Low NOx Digital Controls scope addition to the wall-fired project. The primary activities during this quarter include (1) refinement of the input/output lists, (2) procurement of the distributed digital control system, (3) configuration training, and (4) revision of the schedule to accommodate the project approval cycle and a change in unit outage dates.
Institute of Scientific and Technical Information of China (English)
王增; 翁琳; 游隽; 程斌
2011-01-01
Objective: To investigate changes in serum tumor marker levels during chemotherapy with pemetrexed alone or in combination with platinum in patients with advanced non-small cell lung cancer (NSCLC). Methods: 102 advanced NSCLC patients who received more than 2 cycles of chemotherapy with single-agent pemetrexed or pemetrexed plus platinum were retrospectively analyzed. Changes in CEA, CA125, CYFRA21-1, NSE and SCC, together with changes on chest CT scans, were recorded before and after chemotherapy. Results: After chemotherapy, the tumor markers CEA, CA125, CYFRA21-1, NSE and SCC decreased; CEA, CA125 and CYFRA21-1 fell by 19.3% (P<0.05), 24.8% (P<0.05) and 18.5% (P<0.05), respectively. The correlation between tumor marker response (TMR) and imaging-based response (IBR) was positive for CEA and CA125, for CYFRA21-1, and for the joint inspection of CEA, CA125 and CA19-9. Conclusion: Serum CEA, CA125 and CYFRA21-1 are reliable markers of chemotherapy efficacy in patients with advanced NSCLC. Monitoring changes in serum tumor marker levels would benefit assessment of chemotherapy efficacy and is simple, economical and useful in clinical settings.
Terpos, E; Migkou, M; Christoulas, D; Gavriatopoulou, M; Eleutherakis-Papaiakovou, E; Kanellias, N; Iakovaki, M; Panagiotidis, I; Ziogas, D C; Fotiou, D; Kastritis, E; Dimopoulos, M A
2016-01-01
Circulating vascular cell adhesion molecule-1 (VCAM-1), intercellular adhesion molecule-1 (ICAM-1) and selectins were prospectively measured in 145 newly-diagnosed patients with symptomatic myeloma (NDMM), 61 patients with asymptomatic/smoldering myeloma (SMM), 47 with monoclonal gammopathy of undetermined significance (MGUS) and 87 multiple myeloma (MM) patients at first relapse who received lenalidomide- or bortezomib-based treatment (RD, n=47; or VD, n=40). Patients with NDMM had increased VCAM-1 and ICAM-1 compared with MGUS and SMM patients. Elevated VCAM-1 correlated with ISS-3 and was independently associated with inferior overall survival (OS) (45 months for patients with VCAM-1 >median vs 75 months, P=0.001). MM patients at first relapse had increased levels of ICAM-1 and L-selectin, even compared with NDMM patients, and had increased levels of VCAM-1 compared with MGUS and SMM. Both VD and RD dramatically reduced serum VCAM-1 after four cycles of therapy, but only VD reduced serum ICAM-1, irrespective of response to therapy. The reduction of VCAM-1 was more pronounced after RD than after VD. Our study provides evidence for the prognostic value of VCAM-1 in myeloma patients, suggesting that VCAM-1 could be a suitable target for the development of anti-myeloma therapies. Furthermore, the reduction of VCAM-1 and ICAM-1 by RD and VD supports the inhibitory effect of these drugs on the adhesion of MM cells to stromal cells. PMID:27232930
Variance-based fingerprint distance adjustment algorithm for indoor localization
Institute of Scientific and Technical Information of China (English)
Xiaolong Xu; Yu Tang; Xinheng Wang; Yun Zhang
2015-01-01
The multipath effect and the movements of people in indoor environments lead to inaccurate localization. Through tests, calculation and analysis of the received signal strength indication (RSSI) and the variance of RSSI, we propose a novel variance-based fingerprint distance adjustment algorithm (VFDA). Based on the rule that the variance decreases as the mean RSSI increases, VFDA calculates the RSSI variance from the mean value of the received RSSIs and derives a correction weight, which is used to adjust the fingerprint distances. In addition, a threshold value is applied to VFDA to further improve its performance. VFDA and VFDA with the threshold value were evaluated in two typical real indoor environments deployed with several Wi-Fi access points: a square lab room, and a long, narrow corridor of a building. Experimental results and performance analysis show that in indoor environments, both VFDA and VFDA with the threshold achieve better positioning accuracy and environmental adaptability than typical positioning methods based on the k-nearest neighbor and weighted k-nearest neighbor algorithms, at similar computational cost.
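The core idea of weighting fingerprint distances by RSSI variance can be sketched as follows. This is an illustrative weighting only, not the paper's exact VFDA correction; the RSSI values and variances are hypothetical:

```python
import numpy as np

def variance_weighted_distance(observed, fingerprint, rssi_var):
    """Fingerprint distance with per-AP weights: dimensions with high RSSI
    variance (weak, noisy signals) contribute less to the distance.
    Illustrative scheme -- not the exact VFDA adjustment."""
    w = 1.0 / (rssi_var + 1e-9)          # down-weight noisy access points
    w /= w.sum()
    return np.sqrt(np.sum(w * (observed - fingerprint) ** 2))

# Two candidate fingerprints; AP 3 is weak and noisy, so mismatch there
# should matter less than mismatch on the reliable APs.
obs  = np.array([-50.0, -60.0, -80.0])   # dBm readings at the unknown position
fp_a = np.array([-51.0, -61.0, -90.0])   # differs mainly on the noisy AP
fp_b = np.array([-58.0, -68.0, -80.0])   # differs on the reliable APs
var  = np.array([2.0, 2.0, 25.0])        # assumed: variance rises as RSSI falls
d_a = variance_weighted_distance(obs, fp_a, var)
d_b = variance_weighted_distance(obs, fp_b, var)
# fp_a ends up closer despite its large raw gap on the noisy AP.
```

In a (weighted) k-nearest-neighbor localizer, these adjusted distances replace the plain Euclidean distances when ranking fingerprints.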
Detecting Pulsars with Interstellar Scintillation in Variance Images
Dai, S.; Johnston, S.; Bell, M. E.; Coles, W. A.; Hobbs, G.; Ekers, R. D.; Lenc, E.
2016-08-01
Pulsars are the only cosmic radio sources known to be sufficiently compact to show diffractive interstellar scintillations. Images of the variance of radio signals in both time and frequency can be used to detect pulsars in large-scale continuum surveys using the next generation of synthesis radio telescopes. This technique allows a search over the full field of view while avoiding the need for expensive pixel-by-pixel high time resolution searches. We investigate the sensitivity of detecting pulsars in variance images. We show that variance images are most sensitive to pulsars whose scintillation time-scales and bandwidths are close to the subintegration time and channel bandwidth. Therefore, in order to maximise the detection of pulsars for a given radio continuum survey, it is essential to retain a high time and frequency resolution, allowing us to make variance images sensitive to pulsars with different scintillation properties. We demonstrate the technique with Murchison Widefield Array data and show that variance images can indeed lead to the detection of pulsars by distinguishing them from other radio sources.
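The variance-image statistic can be illustrated on a toy dynamic spectrum: a scintillating source is modulated across subintegrations and channels, while a steady continuum source varies only through receiver noise. The sinusoidal modulation below is a crude stand-in for diffractive scintles, not a physical scintillation model:

```python
import numpy as np

rng = np.random.default_rng(42)
n_time, n_freq = 64, 128   # subintegrations x channels (hypothetical)

# Steady continuum source: constant flux plus receiver noise.
steady = 1.0 + 0.05 * rng.normal(size=(n_time, n_freq))

# "Scintillating" source: flux modulated in time and frequency.
modulation = 1.0 + 0.8 * (np.sin(np.linspace(0, 6 * np.pi, n_time))[:, None]
                          * np.cos(np.linspace(0, 10 * np.pi, n_freq))[None, :])
pulsar = modulation + 0.05 * rng.normal(size=(n_time, n_freq))

# The variance-image statistic for each pixel: variance over the
# time-frequency plane. The modulated source stands out strongly.
v_steady = steady.var()
v_pulsar = pulsar.var()
```

As the abstract notes, the contrast degrades once the subintegration time or channel width greatly exceeds the scintillation time-scale or bandwidth, because the modulation averages out within each cell.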
Variance and covariance calculations for nuclear materials accounting using 'PROFF'
International Nuclear Information System (INIS)
To determine the detection sensitivity of a materials accounting system to the loss of Special Nuclear Material (SNM) requires: (1) obtaining a relation for the variance of the materials balance by propagation of the instrument errors for those measured quantities that appear in the materials balance equation and (2) substituting measured values and their error standard deviations into this relation and calculating the variance of the materials balance. We have developed an interactive, menu-driven computer program, called PROFF (for PROcessing and Fuel Facilities), that considerably reduces the effort required to make the variance and covariance calculations needed to determine the detection sensitivity of a materials accounting system. PROFF asks questions of the user to establish the form of each term in the materials balance equation, possible correlations between them, and whether the measured quantities are characterized by an additive or multiplicative error model. Then for each term of the materials balance equation, it presents the user with a menu that is to be completed with values of the SNM concentration, mass (or volume), measurement error standard deviations, and the number of measurements made during the accounting period. On completion of all the data menus, PROFF presents the variance of the materials balance and the square root of this variance, so that the sensitivity of the accounting system can be determined. PROFF is programmed in TURBO-PASCAL for micro-computers using MS-DOS 2.1 (IBM and compatibles)
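The variance propagation that PROFF automates can be sketched for a toy materials balance. This is not PROFF itself; the streams, concentrations, and error standard deviations below are hypothetical, and the terms are assumed uncorrelated with multiplicative error models:

```python
import numpy as np

def term_variance(concentration, mass, rel_sd_conc, rel_sd_mass):
    """Variance of one materials-balance term T = concentration * mass under
    a multiplicative error model: relative variances add (to first order)."""
    t = concentration * mass
    return t**2 * (rel_sd_conc**2 + rel_sd_mass**2)

# Hypothetical balance: input stream, output stream, inventory difference.
terms = [
    # (SNM concentration [kg/kg], bulk mass [kg], rel. sd conc, rel. sd mass)
    (0.045, 1000.0, 0.01, 0.002),   # input
    (0.044, 1005.0, 0.01, 0.002),   # output
    (0.001,  100.0, 0.05, 0.010),   # inventory change
]

# Uncorrelated terms: variances of the balance equation simply add.
var_mb = sum(term_variance(*t) for t in terms)
sigma_mb = np.sqrt(var_mb)   # sets the detection-sensitivity scale (kg SNM)
```

Correlated measurements (e.g., the same instrument calibrating input and output) would add covariance terms to `var_mb`, which is exactly the bookkeeping PROFF's menus handle.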
CMB-S4 and the Hemispherical Variance Anomaly
O'Dwyer, Marcio; Knox, Lloyd; Starkman, Glenn D
2016-01-01
Cosmic Microwave Background (CMB) full-sky temperature data show a hemispherical asymmetry in power nearly aligned with the Ecliptic. In real space, this anomaly can be quantified by the temperature variance in the northern and southern Ecliptic hemispheres. In this context, the northern hemisphere displays an anomalously low variance while the southern hemisphere appears unremarkable (consistent with expectations from the best-fitting theory, $\\Lambda$CDM). While this is a well established result in temperature, the low signal-to-noise ratio in current polarization data prevents a similar comparison. This will change with a proposed ground-based CMB experiment, CMB-S4. With that in mind, we generate realizations of polarization maps constrained by the temperature data and predict the distribution of the hemispherical variance in polarization considering two different sky coverage scenarios possible in CMB-S4: full Ecliptic north coverage and just the portion of the North that can be observed from a ground ba...
Expectation Values and Variance Based on Lp-Norms
Directory of Open Access Journals (Sweden)
George Livadiotis
2012-11-01
This analysis introduces a generalization of the basic statistical concepts of expectation values and variance for non-Euclidean metrics induced by Lp-norms. The non-Euclidean Lp means are defined by exploiting the fundamental property of minimizing the Lp deviations that compose the Lp variance. These Lp expectation values embody a generic formal scheme of means characterization. Having the p-norm as a free parameter, both the Lp-normed expectation values and their variance are flexible to analyze new phenomena that cannot be described under the notions of classical statistics based on Euclidean norms. The new statistical approach provides insights into regression theory and Statistical Physics. Several illuminating examples are examined.
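The defining minimization can be made concrete numerically: the Lp expectation value is the location minimizing the summed Lp deviations, and the Lp variance is the minimized mean deviation. A minimal sketch (the sample data are arbitrary):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def lp_mean(x, p):
    """Lp expectation value: the location m minimizing sum |x_i - m|^p.
    p = 2 recovers the arithmetic mean; p = 1 recovers the median."""
    res = minimize_scalar(lambda m: np.sum(np.abs(x - m) ** p),
                          bounds=(x.min(), x.max()), method="bounded")
    return res.x

def lp_variance(x, p):
    """Lp variance: the minimized mean Lp deviation about the Lp mean."""
    m = lp_mean(x, p)
    return np.mean(np.abs(x - m) ** p)

x = np.array([1.0, 2.0, 2.0, 3.0, 10.0])
m2 = lp_mean(x, 2.0)       # the arithmetic mean, 3.6 (pulled by the outlier)
m1 = lp_mean(x, 1.0)       # the median, 2.0 (robust to the outlier)
v1 = lp_variance(x, 1.0)   # mean absolute deviation about the median
```

Varying p between these extremes interpolates between mean-like and median-like behaviour, which is what makes the free p-norm parameter useful for non-Gaussian data.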
Impact of Damping Uncertainty on SEA Model Response Variance
Schiller, Noah; Cabell, Randolph; Grosveld, Ferdinand
2010-01-01
Statistical Energy Analysis (SEA) is commonly used to predict high-frequency vibroacoustic levels. This statistical approach provides the mean response over an ensemble of random subsystems that share the same gross system properties such as density, size, and damping. Recently, techniques have been developed to predict the ensemble variance as well as the mean response. However these techniques do not account for uncertainties in the system properties. In the present paper uncertainty in the damping loss factor is propagated through SEA to obtain more realistic prediction bounds that account for both ensemble and damping variance. The analysis is performed on a floor-equipped cylindrical test article that resembles an aircraft fuselage. Realistic bounds on the damping loss factor are determined from measurements acquired on the sidewall of the test article. The analysis demonstrates that uncertainties in damping have the potential to significantly impact the mean and variance of the predicted response.
Extragalactic number counts at 100 um, free from cosmic variance
Sibthorpe, B; Massey, R J; Roseboom, I G; van der Werf, P; Matthews, B C; Greaves, J S
2012-01-01
We use data from the Disc Emission via a Bias-free Reconnaissance in the Infrared/Submillimetre (DEBRIS) survey, taken at 100 um with the Photoconductor Array Camera and Spectrometer instrument on board the Herschel Space Observatory, to make a cosmic variance independent measurement of the extragalactic number counts. These data consist of 323 small-area mapping observations performed uniformly across the sky, and thus represent a sparse sampling of the astronomical sky with an effective coverage of ~2.5 deg^2. We find our cosmic variance independent analysis to be consistent with previous count measurements made using relatively small area surveys. Furthermore, we find no statistically significant cosmic variance on any scale within the errors of our data. Finally, we interpret these results to estimate the probability of galaxy source confusion in the study of debris discs.
Sensitivity to Estimation Errors in Mean-variance Models
Institute of Scientific and Technical Information of China (English)
Zhi-ping Chen; Cai-e Zhao
2003-01-01
In order to give a complete and accurate description of the sensitivity of efficient portfolios to changes in assets' expected returns, variances and covariances, the joint effect of estimation errors in means, variances and covariances on the efficient portfolio's weights is investigated in this paper. It is proved that the efficient portfolio's composition is a Lipschitz continuous, differentiable mapping of these parameters under suitable conditions. The change rate of the efficient portfolio's weights with respect to variations in the risk-return estimates is derived by estimating the Lipschitz constant. Our general quantitative results show that the efficient portfolio's weights are normally not very sensitive to estimation errors in the means and variances. Moreover, we point out the extreme cases that might cause stability problems and show how to avoid them in practice. Preliminary numerical results are also provided as an illustration of our theoretical results.
The positioning algorithm based on feature variance of billet character
Yi, Jiansong; Hong, Hanyu; Shi, Yu; Chen, Hongyang
2015-12-01
In the process of steel billet recognition on the production line, the key problem is how to determine the position of the billet in complex scenes. To solve this problem, this paper presents a positioning algorithm based on the feature variance of the billet characters. Using the largest intra-cluster variance recursive method based on multilevel filtering, the billet characters are segmented completely from the complex scene. There are three rows of characters on each steel billet, so we can determine whether the connected regions that satisfy the feature-variance condition lie on a straight line, and thereby accurately locate the steel billet. The experimental results demonstrate that the proposed method is competitive with other methods in positioning the characters and also reduces the running time. The algorithm can provide a better basis for character recognition.
Entropy, Fisher Information and Variance with Frost-Musulin Potential
Idiodi, J. O. A.; Onate, C. A.
2016-09-01
This study presents the Shannon and Renyi information entropy for both position and momentum space and the Fisher information for the position-dependent mass Schrödinger equation with the Frost-Musulin potential. The analysis of the quantum mechanical probability has been obtained via the Fisher information. The variance information of this potential is also computed. This controls both the chemical and physical properties of some molecular systems. We observe the behaviour of the Shannon entropy, Renyi entropy, Fisher information and variance with the quantum number n, respectively.
Variance squeezing and entanglement of the XX central spin model
Energy Technology Data Exchange (ETDEWEB)
El-Orany, Faisal A A [Department of Mathematics and Computer Science, Faculty of Science, Suez Canal University, Ismailia (Egypt); Abdalla, M Sebawe, E-mail: m.sebaweh@physics.org [Mathematics Department, College of Science, King Saud University PO Box 2455, Riyadh 11451 (Saudi Arabia)
2011-01-21
In this paper, we study the quantum properties for a system that consists of a central atom interacting with surrounding spins through the Heisenberg XX couplings of equal strength. Employing the Heisenberg equations of motion we manage to derive an exact solution for the dynamical operators. We consider that the central atom and its surroundings are initially prepared in the excited state and in the coherent spin state, respectively. For this system, we investigate the evolution of variance squeezing and entanglement. The nonclassical effects have been remarked in the behavior of all components of the system. The atomic variance can exhibit revival-collapse phenomenon based on the value of the detuning parameter.
Variance in trace constituents following the final stratospheric warming
Hess, Peter
1990-01-01
Concentration variations with time in trace stratospheric constituents N2O, CF2Cl2, CFCl3, and CH4 were investigated using samples collected aboard balloons flown over southern France during the summer months of 1977-1979. Data are analyzed using a tracer transport model, and the mechanisms behind the modeled tracer variance are examined. An analysis of the N2O profiles for the month of June showed that a large fraction of the variance reported by Ehhalt et al. (1983) is on an interannual time scale.
Recursive identification for multidimensional ARMA processes with increasing variances
Institute of Scientific and Technical Information of China (English)
CHEN Hanfu
2005-01-01
In time series analysis, almost all existing results are derived for the case where the driving noise {w_n} in the MA part has bounded variance (or conditional variance). In contrast to this, the paper discusses how to identify coefficients in a multidimensional ARMA process with fixed orders when, in its MA part, the conditional moment E(‖w_n‖^β | F_{n-1}), β > 2, is allowed to grow at a rate of a power of log n. The well-known stochastic gradient (SG) algorithm is applied to estimating the matrix coefficients of the ARMA process, and reasonable conditions are given to guarantee that the estimate is strongly consistent.
The density variance -- Mach number relation in supersonic, isothermal turbulence
Price, Daniel J.; Federrath, Christoph; Brunt, Christopher M.
2010-01-01
We examine the relation between the density variance and the mean-square Mach number in supersonic, isothermal turbulence, assumed in several recent analytic models of the star formation process. From a series of calculations of supersonic, hydrodynamic turbulence driven using purely solenoidal Fourier modes, we find that the `standard' relationship between the variance in the log of density and the Mach number squared, i.e., sigma^2_(ln rho/rhobar)=ln (1+b^2 M^2), with b = 1/3 is a good fit ...
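The quoted relation is a one-liner to evaluate. The sketch below uses b = 1/3 for solenoidal driving as stated in the abstract; the compressive-driving value b ≈ 1 is quoted from the wider literature as a contrast, not from this paper:

```python
import numpy as np

def log_density_variance(mach, b=1.0 / 3.0):
    """Variance of ln(rho/rhobar) from the standard relation
    sigma^2 = ln(1 + b^2 M^2); b ≈ 1/3 for solenoidal (divergence-free)
    driving, per the abstract."""
    return np.log1p(b**2 * mach**2)

# At Mach 10 with solenoidal forcing:
sigma2_sol = log_density_variance(10.0)            # ln(1 + 100/9) ≈ 2.49
# For comparison, b ≈ 1 (compressive driving, literature value):
sigma2_comp = log_density_variance(10.0, b=1.0)    # ln(101) ≈ 4.62
```

The strong dependence on b is why star-formation models calibrated on this relation are sensitive to the driving mode of the turbulence.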
The Column Density Variance-Sonic Mach Number Relationship
Burkhart, Blakesley; Lazarian, A.
2012-01-01
Although there are a wealth of column density tracers for both the molecular and diffuse interstellar medium, there are very few observational studies investigating the relationship between the density variance ($\\sigma^2$) and the sonic Mach number (${\\cal M}_s$). This is in part due to the fact that the $\\sigma^2$-${\\cal M}_s$ relationship is derived, via MHD simulations, for the 3D density variance only, which is not a direct observable. We investigate the utility of a 2D column density $\\...
On Variance and Covariance for Bounded Linear Operators
Institute of Scientific and Technical Information of China (English)
Chia Shiang LIN
2001-01-01
In this paper we initiate a study of covariance and variance for two operators on a Hilbert space, proving that the c-v (covariance-variance) inequality holds, which is equivalent to the Cauchy-Schwarz inequality. As applications of the c-v inequality we prove, in a uniform way, the Bernstein-type inequalities and equalities, and show the generalized Heinz-Kato-Furuta-type inequalities and equalities, from which a generalization and sharpening of Reid's inequality is obtained. We show that every operator can be expressed as a p-hyponormal-type and a hyponormal-type operator. Finally, some new characterizations of the Furuta inequality are given.
Precise Asymptotics of Error Variance Estimator in Partially Linear Models
Institute of Scientific and Technical Information of China (English)
Shao-jun Guo; Min Chen; Feng Liu
2008-01-01
In this paper, we focus our attention on the precise asymptotics of the error variance estimator in partially linear regression models, y_i = x_i^T β + g(t_i) + ε_i, 1 ≤ i ≤ n, where {ε_i, i = 1, ..., n} are i.i.d. random errors with mean 0 and positive finite variance σ². Following the ideas of Allan Gut and Aurel Spataru [7,8] and Zhang [21] on precise asymptotics in the Baum-Katz and Davis laws of large numbers and the precise rate in laws of the iterated logarithm, respectively, and subject to some regularity conditions, we obtain the corresponding results in partially linear regression models.
Directory of Open Access Journals (Sweden)
Richard Kones
2010-08-01
Richard Kones, The Cardiometabolic Research Institute, Houston, Texas, USA. Abstract: The objectives in treating angina are relief of pain and prevention of disease progression through risk reduction. Mechanisms, indications, clinical forms, doses, and side effects of the traditional antianginal agents – nitrates, β-blockers, and calcium channel blockers – are reviewed. A number of patients have contraindications or remain unrelieved from anginal discomfort with these drugs. Among newer alternatives, ranolazine, recently approved in the United States, indirectly prevents the intracellular calcium overload involved in cardiac ischemia and is a welcome addition to available treatments. None, however, are disease-modifying agents. Two options for refractory angina, enhanced external counterpulsation and spinal cord stimulation (SCS), are presented in detail. They are both well-studied and are effective means of treating at least some patients with this perplexing form of angina. Traditional modifiable risk factors for coronary artery disease (CAD) – smoking, hypertension, dyslipidemia, diabetes, and obesity – account for most of the population-attributable risk. Individual therapy of high-risk patients differs from population-wide efforts to prevent risk factors from appearing or to reduce their severity, in order to lower the national burden of disease. Current American College of Cardiology/American Heart Association guidelines to lower risk in patients with chronic angina are reviewed. The Clinical Outcomes Utilizing Revascularization and Aggressive Drug Evaluation (COURAGE) trial showed that in patients with stable angina, optimal medical therapy alone and percutaneous coronary intervention (PCI) with medical therapy were equal in preventing myocardial infarction and death. The integration of COURAGE results into current practice is discussed. For patients who are unstable, with very high risk, with left main coronary artery lesions, in
Simultaneous optimal estimates of fixed effects and variance components in the mixed model
Institute of Scientific and Technical Information of China (English)
WU Mixia; WANG Songgui
2004-01-01
For a general linear mixed model with two variance components, a set of simple conditions is obtained under which (i) the least squares estimate of the fixed effects and the analysis of variance (ANOVA) estimates of variance components are proved to be uniformly minimum variance unbiased estimates simultaneously; (ii) the exact confidence intervals of the fixed effects and uniformly optimal unbiased tests on variance components are given; (iii) the exact probability expression of ANOVA estimates of variance components taking a negative value is obtained.
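As a toy sketch of the ANOVA (method-of-moments) estimators the abstract refers to — a balanced one-way random-effects layout rather than the paper's general two-variance-component model, with all parameter values invented for the demo:

```python
import numpy as np

rng = np.random.default_rng(0)

# Balanced one-way random-effects model: y_ij = mu + a_i + e_ij,
# with a_i ~ N(0, s2_a) and e_ij ~ N(0, s2_e).  Values invented for the demo.
mu, s2_a, s2_e = 10.0, 4.0, 1.0
groups, n = 50, 8
a = rng.normal(0.0, np.sqrt(s2_a), groups)
y = mu + a[:, None] + rng.normal(0.0, np.sqrt(s2_e), (groups, n))

# ANOVA (method-of-moments) estimators of the two variance components.
group_means = y.mean(axis=1)
msb = n * np.sum((group_means - y.mean()) ** 2) / (groups - 1)       # between-group MS
msw = np.sum((y - group_means[:, None]) ** 2) / (groups * (n - 1))   # within-group MS

s2_e_hat = msw
s2_a_hat = (msb - msw) / n  # can come out negative, as point (iii) above notes
```

The possibility `msb < msw` is exactly why the paper's point (iii) — the probability of a negative ANOVA estimate — is of interest.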
Maginnis, P. A.; West, M.; Dullerud, G. E.
2016-10-01
We propose an algorithm to accelerate Monte Carlo simulation for a broad class of stochastic processes. Specifically, we target the class of countable-state, discrete-time Markov chains driven by additive Poisson noise, i.e., lattice discrete-time Markov chains. In particular, this class includes simulation of reaction networks via the tau-leaping algorithm. To produce the speedup, we simulate pairs of fair-draw trajectories that are negatively correlated. Thus, when averaged, these paths produce an unbiased Monte Carlo estimator that has reduced variance and, therefore, reduced error. Numerical results for three example systems included in this work demonstrate two to four orders of magnitude reduction of mean-square error. The numerical examples were chosen to illustrate different application areas and levels of system complexity. The areas are: gene expression (affine state-dependent rates), aerosol particle coagulation with emission, and human immunodeficiency virus infection (both of the latter with nonlinear state-dependent rates). Our algorithm views the system dynamics as a "black box", i.e., we only require control of pseudorandom number generator inputs. As a result, typical codes can be retrofitted with our algorithm using only minor changes. We prove several analytical results. Among these, we characterize the relationship of covariances between paths in the general case of nonlinear state-dependent intensity rates, and we prove variance reduction of mean estimators in the special case of affine intensity rates.
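The underlying idea — negatively correlated trajectory pairs driven through the pseudorandom inputs — can be sketched with a simple antithetic coupling of inverse-CDF Poisson draws. This is an illustrative stand-in, not the authors' algorithm; `poisson_inv`, the toy chain, and all rates are invented for the demo:

```python
import numpy as np

def poisson_inv(u, lam):
    """Inverse-CDF Poisson draw from a uniform u (enables antithetic coupling)."""
    k, p, cdf = 0, np.exp(-lam), np.exp(-lam)
    while u > cdf and k < 1000:  # cap guards against floating-point tail issues
        k += 1
        p *= lam / k
        cdf += p
    return k

def simulate(u_stream, rate=5.0):
    """Toy lattice chain driven by additive (centered) Poisson increments."""
    x = 0.0
    for u in u_stream:
        x += poisson_inv(u, rate) - rate
    return x

rng = np.random.default_rng(1)
pairs = 2000
anti = []
for _ in range(pairs):
    u = rng.random(20)
    # Antithetic pair: the same uniforms and their reflections 1-u drive
    # negatively correlated trajectories; their average stays unbiased.
    anti.append(0.5 * (simulate(u) + simulate(1.0 - u)))

# Fair comparison: pairwise averages of 2*pairs independent trajectories.
indep = np.array([simulate(rng.random(20)) for _ in range(2 * pairs)])
indep_pairs = indep.reshape(pairs, 2).mean(axis=1)

print(np.var(anti), np.var(indep_pairs))  # antithetic variance is typically much smaller
```

Because only the uniforms feeding the generator are touched, a simulation code can in principle be retrofitted this way without changing its dynamics, which is the retrofit property the abstract emphasizes.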
Lei, Zhouyue; Xu, Shengjie; Wan, Jiaxun; Wu, Peiyi
2016-01-28
In this study, uniform nitrogen-doped carbon quantum dots (N-CDs) were synthesized through a one-step solvothermal process in cyclic, nitrogen-rich solvents, such as N-methyl-2-pyrrolidone (NMP) and dimethyl-imidazolidinone (DMEU), under mild conditions. The products exhibited strong light-blue fluorescence, good cell permeability and low cytotoxicity. Moreover, after a facile post-thermal treatment, the material developed a lotus-seedpod-like surface structure of seed-like N-CDs decorating carbon layers, with a high proportion of quaternary nitrogen moieties, that exhibited excellent electrocatalytic activity and long-term durability towards the oxygen reduction reaction (ORR). The peak potential was -160 mV, comparable to or even lower than that of commercial Pt/C catalysts. Therefore, this study provides an alternative facile approach to the synthesis of versatile carbon quantum dots (CDs) with widespread commercial application prospects, not only as bioimaging probes but also as promising electrocatalysts for the metal-free ORR. PMID:26739885
Bobrowska, Alicja; Domonik, Andrzej
2015-09-01
In construction, modern technical diagnostics of stone as a raw material requires predicting the effects of long-term environmental impact on its qualities and geomechanical properties. The paper presents geomechanical research that identifies the factors behind strength loss in stone and forecasts the long-term rate at which destructive phenomena develop in the stone structure. As research material, Turkish travertines were selected from the Denizli-Kaklık Basin (Pamukkale and Hierapolis quarries), which have been commonly used for centuries in global architecture. The rock material was tested for the impact of various environmental factors, following European standards as well as the author's research program. Its resistance to the crystallization of salts from aqueous solutions and to the effects of SO2, as well as the effects of frost and high temperatures, are presented. The studies established the following quantitative indicators: the ultrasonic wave index (IVp) and the strength reduction index (IRc). Assessment of the deterioration effects indicates that the most active factors decreasing travertine resistance in the aging process are frost and sulphur dioxide (SO2). Their negative influence is particularly intense when the stone material is already strongly weathered.
Variance-optimal hedging for processes with stationary independent increments
DEFF Research Database (Denmark)
Hubalek, Friedrich; Kallsen, J.; Krawczyk, L.
We determine the variance-optimal hedge when the logarithm of the underlying price follows a process with stationary independent increments in discrete or continuous time. Although the general solution to this problem is known as backward recursion or backward stochastic differential equation, we...
Infinite variance in fermion quantum Monte Carlo calculations
Shi, Hao; Zhang, Shiwei
2016-03-01
For important classes of many-fermion problems, quantum Monte Carlo (QMC) methods allow exact calculations of ground-state and finite-temperature properties without the sign problem. The list spans condensed matter, nuclear physics, and high-energy physics, including the half-filled repulsive Hubbard model, the spin-balanced atomic Fermi gas, and lattice quantum chromodynamics calculations at zero density with Wilson Fermions, and is growing rapidly as a number of problems have been discovered recently to be free of the sign problem. In these situations, QMC calculations are relied on to provide definitive answers. Their results are instrumental to our ability to understand and compute properties in fundamental models important to multiple subareas in quantum physics. It is shown, however, that the most commonly employed algorithms in such situations have an infinite variance problem. A diverging variance causes the estimated Monte Carlo statistical error bar to be incorrect, which can render the results of the calculation unreliable or meaningless. We discuss how to identify the infinite variance problem. An approach is then proposed to solve the problem. The solution does not require major modifications to standard algorithms, adding a "bridge link" to the imaginary-time path integral. The general idea is applicable to a variety of situations where the infinite variance problem may be present. Illustrative results are presented for the ground state of the Hubbard model at half-filling.
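The failure mode described here — a diverging variance making the reported error bar meaningless — can be illustrated outside QMC with any infinite-variance sampler. The sketch below uses a Pareto distribution as a stand-in (an assumption for the demo; the fermion algorithms themselves are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(2)

# A Pareto distribution with shape alpha = 1.5 has a finite mean of
# alpha/(alpha-1) = 3 but an infinite variance -- a toy stand-in for the
# infinite-variance estimators discussed above.
alpha = 1.5
x = 1.0 + rng.pareto(alpha, 200_000)  # classical Pareto on [1, inf)

for n in (1_000, 10_000, 100_000):
    sample = x[:n]
    err = sample.std(ddof=1) / np.sqrt(n)  # the usual Monte Carlo error bar
    print(n, round(sample.mean(), 3), round(err, 3))

# Because the variance is infinite, the sample standard deviation keeps being
# inflated by rare huge draws, so the reported error bar is unreliable: it
# does not shrink like 1/sqrt(n) and can badly misstate the true uncertainty.
```

Watching the error bar fail to decay with sample size is the kind of diagnostic that flags an infinite-variance estimator in practice.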
[ECoG classification based on wavelet variance].
Yan, Shiyu; Liu, Chong; Wang, Hong; Zhao, Haibin
2013-06-01
For a typical electrocorticogram (ECoG)-based brain-computer interface (BCI) system, in which the subject's task is to imagine movements of either the left small finger or the tongue, we propose a feature extraction algorithm using wavelet variance. First, the definition and significance of wavelet variance are introduced, building on a discussion of the wavelet transform, and wavelet variance is taken as the feature. Six channels with the most distinctive features were selected from 64 channels for analysis. The data were then decomposed using the db4 wavelet. The wavelet coefficient variances containing the Mu rhythm and Beta rhythm were extracted as features based on the ERD/ERS phenomenon. The features were classified linearly with a cross-validation algorithm. The results of off-line analysis showed that high classification accuracies of 90.24% and 93.77% were achieved on the training and test data sets; wavelet variance is simple and effective, making it suitable for feature extraction in BCI research. PMID:23865300
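A minimal sketch of wavelet-variance features in this spirit, using a hand-rolled Haar transform rather than the paper's db4 wavelet (to stay dependency-free) and an invented toy signal:

```python
import numpy as np

def haar_dwt(x):
    """One level of the orthonormal Haar DWT: (approximation, detail)."""
    x = np.asarray(x, dtype=float)
    return (x[0::2] + x[1::2]) / np.sqrt(2.0), (x[0::2] - x[1::2]) / np.sqrt(2.0)

def wavelet_variance_features(x, levels=4):
    """Variance of the detail coefficients at each decomposition level."""
    feats, a = [], x
    for _ in range(levels):
        a, d = haar_dwt(a)
        feats.append(np.var(d))
    return np.array(feats)

# Toy signal: a 10 Hz 'mu-like' rhythm buried in noise, 1 s at 256 Hz.
rng = np.random.default_rng(3)
t = np.arange(256) / 256.0
sig = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(256)

feats = wavelet_variance_features(sig)
# Level 4 covers roughly the 8-16 Hz band at this sampling rate, so its
# detail variance dominates for a 10 Hz rhythm.
```

The per-level variances form a compact feature vector that a linear classifier can consume, which is the role wavelet variance plays in the study above.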
Estimating High-Frequency Based (Co-) Variances: A Unified Approach
DEFF Research Database (Denmark)
Voev, Valeri; Nolte, Ingmar
We propose a unified framework for estimating integrated variances and covariances based on simple OLS regressions, allowing for a general market microstructure noise specification. We show that our estimators can outperform, in terms of the root mean squared error criterion, the most recent...
Least-squares variance component estimation: theory and GPS applications
Amiri-Simkooei, A.
2007-01-01
In this thesis we study the method of least-squares variance component estimation (LS-VCE) and elaborate on theoretical and practical aspects of the method. We show that LS-VCE is a simple, flexible, and attractive VCE-method. The LS-VCE method is simple because it is based on the well-known least-squares principle.
Partitioning the Variance in Scores on Classroom Environment Instruments
Dorman, Jeffrey P.
2009-01-01
This paper reports the partitioning of variance in scale scores from the use of three classroom environment instruments. Data sets from the administration of the What Is Happening In this Class (WIHIC) to 4,146 students, the Questionnaire on Teacher Interaction (QTI) to 2,167 students and the Catholic School Classroom Environment Questionnaire…
Intuitive Analysis of Variance-- A Formative Assessment Approach
Trumpower, David
2013-01-01
This article describes an assessment activity that can show students how much they intuitively understand about statistics, but also alert them to common misunderstandings. How the activity can be used formatively to help improve students' conceptual understanding of analysis of variance is discussed. (Contains 1 figure and 1 table.)
Multivariate Variance Targeting in the BEKK-GARCH Model
DEFF Research Database (Denmark)
Pedersen, Rasmus Søndergaard; Rahbek, Anders
This paper considers asymptotic inference in the multivariate BEKK model based on (co-)variance targeting (VT). By de…nition the VT estimator is a two-step estimator and the theory presented is based on expansions of the modi…ed likelihood function, or estimating function, corresponding to these...
Variance in parametric images: direct estimation from parametric projections
International Nuclear Information System (INIS)
Recent work has shown that it is possible to apply linear kinetic models to dynamic projection data in PET in order to calculate parameter projections. These can subsequently be back-projected to form parametric images - maps of parameters of physiological interest. Critical to the application of these maps, to test for significant changes between normal and pathophysiology, is an assessment of the statistical uncertainty. In this context, parametric images also include simple integral images from, e.g., [O-15]-water used to calculate statistical parametric maps (SPMs). This paper revisits the concept of parameter projections and presents a more general formulation of the parameter projection derivation as well as a method to estimate parameter variance in projection space, showing which analysis methods (models) can be used. Using simulated pharmacokinetic image data we show that a method based on an analysis in projection space inherently calculates the mathematically rigorous pixel variance. This results in an estimation which is as accurate as either estimating variance in image space during model fitting, or estimation by comparison across sets of parametric images - as might be done between individuals in a group pharmacokinetic PET study. The method based on projections has, however, a higher computational efficiency, and is also shown to be more precise, as reflected in smooth variance distribution images when compared to the other methods. (author)
Unbiased Estimates of Variance Components with Bootstrap Procedures
Brennan, Robert L.
2007-01-01
This article provides general procedures for obtaining unbiased estimates of variance components for any random-model balanced design under any bootstrap sampling plan, with the focus on designs of the type typically used in generalizability theory. The results reported here are particularly helpful when the bootstrap is used to estimate standard…
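A hedged illustration of why naive bootstrap variance estimates need correction — a plain i.i.d. toy with invented data, not a generalizability-theory design; the n/(n−1) factor shown is the textbook correction for this particular statistic:

```python
import numpy as np

rng = np.random.default_rng(4)
y = rng.normal(50.0, 10.0, size=200)  # invented toy scores
n = y.size
sample_var = y.var(ddof=1)

# Naive bootstrap: resample with replacement, recompute the statistic.
B = 2000
boots = np.array([rng.choice(y, size=n, replace=True).var(ddof=1)
                  for _ in range(B)])

# The bootstrap resamples from the empirical distribution, whose variance is
# (n-1)/n times the unbiased sample variance, so the naive bootstrap mean is
# biased low; multiplying by n/(n-1) undoes that bias for this statistic.
naive = boots.mean()
corrected = naive * n / (n - 1)
```

For the more elaborate random-model designs the article treats, the appropriate correction factors differ by design and sampling plan, which is exactly what its general procedures supply.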
A note on minimum-variance theory and beyond
Energy Technology Data Exchange (ETDEWEB)
Feng Jianfeng [Department of Informatics, Sussex University, Brighton, BN1 9QH (United Kingdom)]; Tartaglia, Giangaetano [Physics Department, Rome University 'La Sapienza', Rome 00185 (Italy)]; Tirozzi, Brunello [Physics Department, Rome University 'La Sapienza', Rome 00185 (Italy)]
2004-04-30
We revisit the minimum-variance theory proposed by Harris and Wolpert (1998 Nature 394 780-4), discuss the implications of the theory for modelling the firing patterns of single neurons, and analytically find the optimal control signals, trajectories and velocities. Under the rate coding assumption, input control signals employed in the minimum-variance theory should be Fitts processes rather than Poisson processes. Only if information is coded by interspike intervals are Poisson processes in agreement with the inputs employed in the minimum-variance theory. For the integrate-and-fire model with Fitts process inputs, interspike intervals of efferent spike trains are very irregular. We introduce diffusion approximations to approximate neural models with renewal process inputs and present theoretical results on calculating moments of interspike intervals of the integrate-and-fire model. Results in Feng et al (2002 J. Phys. A: Math. Gen. 35 7287-304) are generalized. In conclusion, we present a complete picture of the minimum-variance theory, ranging from input control signals to model outputs and its implications for modelling the firing patterns of single neurons.
Asymptotic variance of grey-scale surface area estimators
DEFF Research Database (Denmark)
Svane, Anne Marie
Grey-scale local algorithms have been suggested as a fast way of estimating surface area from grey-scale digital images. Their asymptotic mean has already been described. In this paper, the asymptotic behaviour of the variance is studied in isotropic and sufficiently smooth settings, resulting...
Explaining Common Variance Shared by Early Numeracy and Literacy
Davidse, N. J.; De Jong, M. T.; Bus, A. G.
2014-01-01
How can it be explained that early literacy and numeracy share variance? We specifically tested whether the correlation between four early literacy skills (rhyming, letter knowledge, emergent writing, and orthographic knowledge) and simple sums (non-symbolic and story condition) reduced after taking into account preschool attention control,…
Diffusion-Based Trajectory Observers with Variance Constraints
DEFF Research Database (Denmark)
Alcocer, Alex; Jouffroy, Jerome; Oliveira, Paulo
of smoothing and is determined by resorting to trial and error. This paper presents a methodology to choose the observer gain by taking into account a priori information on the variance of the position measurement errors. Experimental results with data from an acoustic positioning system are presented...
Multivariate Variance Targeting in the BEKK-GARCH Model
DEFF Research Database (Denmark)
Pedersen, Rasmus Søndergaard; Rahbek, Anders
2014-01-01
This paper considers asymptotic inference in the multivariate BEKK model based on (co-)variance targeting (VT). By definition the VT estimator is a two-step estimator and the theory presented is based on expansions of the modified likelihood function, or estimating function, corresponding...
Variance-based uncertainty relations for incompatible observables
Chen, Bin; Cao, Ning-Ping; Fei, Shao-Ming; Long, Gui-Lu
2016-06-01
We formulate uncertainty relations for arbitrary finite number of incompatible observables. Based on the sum of variances of the observables, both Heisenberg-type and Schrödinger-type uncertainty relations are provided. These new lower bounds are stronger in most of the cases than the ones derived from some existing inequalities. Detailed examples are presented.
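For a single qubit, the sum of the three Pauli variances obeys a simple state-independent bound of this kind. The sketch below checks the standard pure-state identity Var(σx)+Var(σy)+Var(σz) = 3 − |r|² = 2 (a textbook result, not the paper's specific bounds; the state's angles are arbitrary):

```python
import numpy as np

# Pauli observables and a generic pure qubit state (angles are arbitrary).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
psi = np.array([np.cos(0.3), np.exp(0.4j) * np.sin(0.3)])

def variance(op, state):
    """Var(A) = <A^2> - <A>^2 in the given pure state."""
    mean = np.vdot(state, op @ state).real
    return np.vdot(state, op @ (op @ state)).real - mean**2

total = sum(variance(op, psi) for op in (sx, sy, sz))
# For any pure qubit state, Var(sx) + Var(sy) + Var(sz) = 3 - |r|^2 = 2,
# saturating the state-independent lower bound of 2.
```

Sum-of-variance relations like this are the template that the paper strengthens for arbitrary finite collections of incompatible observables.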
Hedging with stock index futures: downside risk versus the variance
Brouwer, F.; Nat, van der M.
1995-01-01
In this paper we investigate hedging a stock portfolio with stock index futures. Instead of defining the hedge ratio as the minimum variance hedge ratio, we consider several measures of downside risk: the semivariance according to Markowitz [1959] and the various lower partial moments according to Fis
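The baseline this paper departs from — the minimum-variance hedge ratio h* = Cov(s, f)/Var(f) — and a below-target semivariance can be sketched as follows (all returns are simulated; the numbers are invented for the demo):

```python
import numpy as np

rng = np.random.default_rng(5)
f = rng.normal(0.0, 0.010, 1000)             # simulated futures returns
s = 0.8 * f + rng.normal(0.0, 0.004, 1000)   # simulated portfolio returns

# Classical minimum-variance hedge ratio: h* = Cov(s, f) / Var(f).
h = np.cov(s, f)[0, 1] / np.var(f, ddof=1)
hedged = s - h * f

def semivariance(r, target=0.0):
    """Below-target semivariance, one of the downside-risk measures above."""
    d = np.minimum(r - target, 0.0)
    return np.mean(d**2)

print(round(h, 3))  # close to the true exposure of 0.8
```

Minimizing the semivariance instead of the variance generally yields a different hedge ratio, which is the comparison the paper pursues.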
Analysis of Variance: What Is Your Statistical Software Actually Doing?
Li, Jian; Lomax, Richard G.
2011-01-01
Users assume statistical software packages produce accurate results. In this article, the authors systematically examined the Statistical Package for the Social Sciences (SPSS) and the Statistical Analysis System (SAS) for 3 analysis of variance (ANOVA) designs: mixed-effects ANOVA, fixed-effects analysis of covariance (ANCOVA), and nested ANOVA. For each…
A Visual Model for the Variance and Standard Deviation
Orris, J. B.
2011-01-01
This paper shows how the variance and standard deviation can be represented graphically by looking at each squared deviation as a graphical object--in particular, as a square. A series of displays show how the standard deviation is the size of the average square.
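The squared-deviation-as-square picture reduces to a two-line computation; the data below are invented for the demo:

```python
import numpy as np

data = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])  # invented scores
deviations = data - data.mean()  # mean is 5.0

# Picture each squared deviation as the area of a square with side |x - mean|.
# The (population) variance is then the area of the *average* square, and the
# standard deviation is the side length of that average square.
variance = np.mean(deviations**2)
std = np.sqrt(variance)
print(variance, std)  # -> 4.0 2.0
```

Reading the standard deviation as "the side of the average square" is exactly the graphical interpretation the paper develops.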
Gender variance in Asia: discursive contestations and legal implications
S.E. Wieringa
2010-01-01
A recent court case in Indonesia in which a person diagnosed with an intersex condition was classified as a transsexual gives rise to a reflection on three discourses in which gender variance is discussed: the biomedical, the cultural, and the human rights discourse. This article discusses the impli
Heterogeneity of variances for carcass traits by percentage Brahman inheritance.
Crews, D H; Franke, D E
1998-07-01
Heterogeneity of carcass trait variances due to level of Brahman inheritance was investigated using records from straightbred and crossbred steers produced from 1970 to 1988 (n = 1,530). Angus, Brahman, Charolais, and Hereford sires were mated to straightbred and crossbred cows to produce straightbred, F1, back-cross, three-breed cross, and two-, three-, and four-breed rotational crossbred steers in four non-overlapping generations. At weaning (mean age = 220 d), steers were randomly assigned within breed group directly to the feedlot for 200 d, or to a backgrounding and stocker phase before feeding. Stocker steers were fed from 70 to 100 d in generations 1 and 2 and from 60 to 120 d in generations 3 and 4. Carcass traits included hot carcass weight, subcutaneous fat thickness and longissimus muscle area at the 12-13th rib interface, carcass weight-adjusted longissimus muscle area, USDA yield grade, estimated total lean yield, marbling score, and Warner-Bratzler shear force. Steers were classified as either high Brahman (50 to 100% Brahman), moderate Brahman (25 to 49% Brahman), or low Brahman (0 to 24% Brahman) inheritance. Two types of animal models were fit with regard to level of Brahman inheritance. One model assumed similar variances between pairs of Brahman inheritance groups, and the second model assumed different variances between pairs of Brahman inheritance groups. Fixed sources of variation in both models included direct and maternal additive and nonadditive breed effects, year of birth, and slaughter age. Variances were estimated using derivative-free REML procedures. Likelihood ratio tests were used to compare models. The model accounting for heterogeneous variances had a greater likelihood for hot carcass weight, longissimus muscle area, weight-adjusted longissimus muscle area, total lean yield, and Warner-Bratzler shear force, indicating improved fit with percentage Brahman inheritance considered as a source of heterogeneity of variance. Genetic covariances estimated from the model accounting for heterogeneous variances resulted in genetic
Energy Technology Data Exchange (ETDEWEB)
Rompel, Oliver; Janka, Rolf; Lell, Michael M.; Uder, Michael; Hammon, Matthias [University Hospital Erlangen, Department of Radiology, Erlangen (Germany); Gloeckler, Martin; Dittrich, Sven [University Hospital Erlangen, Department of Pediatric Cardiology, Erlangen (Germany); Cesnjevar, Robert [University Hospital Erlangen, Department of Pediatric Cardiac Surgery, Erlangen (Germany)
2016-04-15
Many technical updates have been made in multi-detector CT. To evaluate image quality and radiation dose of high-pitch second- and third-generation dual-source chest CT angiography and to assess the effects of different levels of advanced modeled iterative reconstruction (ADMIRE) in newborns and children. Chest CT angiography (70 kVp) was performed in 42 children (age 158 ± 267 days, range 1-1,194 days). We evaluated subjective and objective image quality, and radiation dose with filtered back projection (FBP) and different strength levels of ADMIRE. For comparison, 42 matched controls were examined with a second-generation 128-slice dual-source CT scanner (80 kVp). ADMIRE demonstrated improved objective and subjective image quality (P <.01). Mean signal/noise, contrast/noise and subjective image quality were 11.9, 10.0 and 1.9, respectively, for the 80 kVp mode and 11.2, 10.0 and 1.9 for the 70 kVp mode. With ADMIRE, the corresponding values for the 70 kVp mode were 13.7, 12.1 and 1.4 at strength level 2 and 17.6, 15.6 and 1.2 at strength level 4. Mean CTDIvol, DLP and effective dose were significantly lower with the 70-kVp mode (0.31 mGy, 5.33 mGy*cm, 0.36 mSv) compared to the 80-kVp mode (0.46 mGy, 9.17 mGy*cm, 0.62 mSv; P <.01). The third-generation dual-source CT at 70 kVp provided good objective and subjective image quality at lower radiation exposure. ADMIRE improved objective and subjective image quality. (orig.)
Advances in research on mechanisms of Axin reduction in tumors
Institute of Scientific and Technical Information of China (English)
周明祎
2011-01-01
As a tumor inhibitor, Axin shows decreased protein expression in many malignant carcinomas. The mechanism of Axin reduction is still unclear. It may be associated with gene mutation, promoter methylation, protein degradation, and various small molecules. This review mainly summarizes the latest progress in research on the mechanism of Axin reduction.
Energy Technology Data Exchange (ETDEWEB)
Hayami, M.; Matsui, Y. [Hitachi, Ltd., Tokyo (Japan)
1998-07-01
Electric power companies in Japan are making efforts to reduce costs by improving the operation rate of existing facilities through advanced automation systems in the distribution sector. This paper introduces Hitachi's systems. A 22 kV-line automation system using a high-speed optical transmission line is adopted for the maintenance of widely extended distribution facilities. This system includes a 22 kV/240-415 V transformer and a 22 kV/105-210 V transformer. To supervise and control these transformers and switches, and to recover from accidents, the system consists of a computer system, a remote host station, and remote end terminals. Based on information from the distribution facilities of substations, the end terminals, and the host station, these facilities are monitored and controlled, and accident recovery is carried out, by computer. A system-planning support system is also introduced, which aims at improving the facility utilization factor, operation efficiency, and distribution operation efficiency. 5 figs.
Logistics Reduction and Repurposing Project
National Aeronautics and Space Administration — The Advanced Exploration Systems (AES) Logistics Reduction and Repurposing (LRR) project will enable a mission-independent cradle-to-grave-to-cradle...
Disaster risk reduction and mobility
Directory of Open Access Journals (Sweden)
Patrice Quesada
2014-02-01
Full Text Available An essential step for advancing risk reduction measures at the local level is to define mobility-based indicators of vulnerability and resilience that can contribute to measuring and reducing human and economic losses resulting from disasters.
Variance in the reproductive success of dominant male mountain gorillas.
Robbins, Andrew M; Gray, Maryke; Uwingeli, Prosper; Mburanumwe, Innocent; Kagoda, Edwin; Robbins, Martha M
2014-10-01
Using 30 years of demographic data from 15 groups, this study estimates how harem size, female fertility, and offspring survival may contribute to variance in the siring rates of dominant male mountain gorillas throughout the Virunga Volcano Region. As predicted for polygynous species, differences in harem size were the greatest source of variance in the siring rate, whereas differences in female fertility and offspring survival were relatively minor. Harem size was positively correlated with offspring survival, even after removing all known and suspected cases of infanticide, so the correlation does not seem to reflect differences in the ability of males to protect their offspring. Harem size was not significantly correlated with female fertility, which is consistent with the hypothesis that mountain gorillas have minimal feeding competition. Harem size, offspring survival, and siring rates were not significantly correlated with the proportion of dominant tenures that occurred in multimale groups versus one-male groups, even though infanticide is less likely when those tenures end in multimale groups than in one-male groups. In contrast with the relatively small contribution of offspring survival to variance in the siring rates of this study, offspring survival is a major source of variance in the male reproductive success of western gorillas, which face greater predation risks and significantly higher rates of infanticide. If differences in offspring protection are less important among male mountain gorillas than western gorillas, then the relative importance of other factors may be greater for mountain gorillas. Thus, our study illustrates how variance in male reproductive success and its components can differ between closely related species. PMID:24818867
Gravity Wave Variances and Propagation Derived from AIRS Radiances
Gong, Jie; Wu, Dong L.; Eckermann, S. D.
2012-01-01
As the first gravity wave (GW) climatology study using nadir-viewing infrared sounders, 50 Atmospheric Infrared Sounder (AIRS) radiance channels are selected to estimate GW variances at pressure levels between 2-100 hPa. The GW variance for each scan in the cross-track direction is derived from radiance perturbations in the scan, independently of adjacent scans along the orbit. Since the scanning swaths are perpendicular to the satellite orbits, which are inclined meridionally at most latitudes, the zonal component of GW propagation can be inferred by differencing the variances derived between the westmost and the eastmost viewing angles. Consistent with previous GW studies using various satellite instruments, monthly mean AIRS variance shows large enhancements over meridionally oriented mountain ranges as well as some islands at winter-hemisphere high latitudes. Enhanced wave activities are also found above tropical deep convective regions. GWs prefer to propagate westward above mountain ranges and eastward above deep convection. The 90 AIRS fields of view (FOVs), ranging from +48 deg to -48 deg off nadir, can detect large-amplitude GWs with a phase velocity propagating preferentially at steep angles (e.g., those from orographic and convective sources). The annual cycle dominates the GW variances and the preferred propagation directions for all latitudes. An indication of a weak two-year variation in the tropics is found, which is presumably related to the Quasi-biennial Oscillation (QBO). AIRS geometry makes its out-tracks capable of detecting GWs with vertical wavelengths substantially shorter than the thickness of the instrument weighting functions. This newly discovered capability of AIRS to observe shallow inertia GWs will expand the potential of satellite GW remote sensing and provide further constraints on GW drag parameterization schemes in general circulation models (GCMs).
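The per-scan variance estimate described above can be sketched as follows. This is a minimal illustration with synthetic data; the low-order polynomial detrend and the numbers used are assumptions for illustration, not the authors' exact processing:

```python
import numpy as np

def scan_variance(radiance, detrend_deg=3):
    """Variance of radiance perturbations in one cross-track scan.

    The large-scale background is removed with a low-order polynomial
    fit across the scan (a stand-in for whatever detrending the actual
    retrieval uses); the GW signal is what remains.
    """
    x = np.arange(radiance.size, dtype=float)
    background = np.polyval(np.polyfit(x, radiance, detrend_deg), x)
    perturbation = radiance - background
    return perturbation.var()

# Synthetic scan over the 90 AIRS FOVs: a slowly varying brightness
# temperature plus a wave-like perturbation of amplitude 0.4 K.
x = np.linspace(0.0, 1.0, 90)
background = 250.0 + 5.0 * x
wave = 0.4 * np.sin(2.0 * np.pi * 10.0 * x)

var_with_wave = scan_variance(background + wave)
var_background = scan_variance(background)
# The detrend removes the background, so the wave dominates the variance.
```

Differencing such variances between the westmost and eastmost viewing angles, as the abstract describes, would then indicate the zonal propagation direction.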
Ortiz, Isabel
2007-01-01
The paper reviews poverty trends and measurements, poverty reduction in historical perspective, the poverty-inequality-growth debate, national poverty reduction strategies, criticisms of the agenda and the need for redistribution, international policies for poverty reduction, and ultimately understanding poverty at a global scale. It belongs to a series of backgrounders developed at Joseph Stiglitz's Initiative for Policy Dialogue.
Institute of Scientific and Technical Information of China (English)
苏清发; 刘亚敏; 陈杰; 潘华; 施耀
2009-01-01
The emission of nitrogen oxides (NOx) from stationary sources, primarily power stations, industrial heaters, and cogeneration plants, represents a major environmental problem. This paper gives a general review of advances in non-thermal plasma assisted selective catalytic reduction (SCR) of NOx with lower hydrocarbon compounds. In the last decade, non-thermal plasma induced SCR of nitrogen oxides with low hydrocarbon compounds has received much attention. The different hydrocarbons (≤C3) used in the research are discussed. Methane is more difficult to activate than non-methane hydrocarbons such as ethylene and propylene. The reduction mechanism is also discussed. In addition, directions for future research are outlined in view of the remaining difficulties.
Babić, Nevio; Večenaj, Željko; De Wekker, Stephan F. J.
2016-04-01
Various criteria have been developed to remove non-stationarity in turbulence time series, though it remains unclear how the choice of the stationarity criterion affects similarity functions in the framework of the Monin-Obukhov similarity theory. To investigate this, we use stationary datasets that result from applying five common criteria to remove non-stationarity in turbulence time series from the Terrain-Induced Rotor EXperiment conducted in Owens Valley, California. We determine the form of the flux-variance similarity functions and the scatter around these similarity functions for all five stationary datasets. Data were collected at two valley locations and one slope location using 34-m flux towers with six levels of turbulence measurements. Our results show (i) systematic differences from previously found near-neutral values of the parameters in the flux-variance similarity functions over flat terrain, indicating a larger anisotropy of the flow over complex than over flat terrain, (ii) a reduction of this anisotropy when stationary data are used, with the amount of reduction depending on the stationarity criterion, (iii) a general reduction in scatter around the similarity functions when using stationary data but more so for stable than for unstable stratification, and for valley locations than for the slope location, and (iv) a weak variation with height of near-neutral values of parameters in the flux-variance similarity functions.
Energy Technology Data Exchange (ETDEWEB)
Noam Lior; Stuart W. Churchill
2003-10-01
the Gordon Conference on Modern Development in Thermodynamics. The results obtained are very encouraging for the development of the RCSC as a commercial burner for significant reduction of NO{sub x} emissions, and strongly warrant further study and development.
Directory of Open Access Journals (Sweden)
Zinn Manfred
2011-04-01
Full Text Available Abstract Background The substitution of plastics based on fossil raw material by biodegradable plastics produced from renewable resources is of crucial importance in a context of oil scarcity and overflowing plastic landfills. One of the most promising organisms for the manufacturing of medium-chain-length polyhydroxyalkanoates (mcl-PHA) is Pseudomonas putida KT2440, which can accumulate large amounts of polymer from cheap substrates such as glucose. Current research focuses on enhancing the strain production capacity and synthesizing polymers with novel material properties. Many of the corresponding protocols for strain engineering rely on the rifampicin-resistant variant, P. putida KT2442. However, it remains unclear whether these two strains can be treated as equivalent in terms of mcl-PHA production, as the underlying antibiotic resistance mechanism involves a modification in the RNA polymerase and thus has ample potential for interfering with global transcription. Results To assess PHA production in P. putida KT2440 and KT2442, we characterized the growth and PHA accumulation on three categories of substrate: PHA-related (octanoate), PHA-unrelated (gluconate), and a poor PHA substrate (citrate). The strains showed clear differences in growth rate on gluconate and citrate (reduction for KT2442 > 3-fold and > 1.5-fold, respectively) but not on octanoate. In addition, P. putida KT2442 PHA-free biomass significantly decreased after nitrogen depletion on gluconate. In an attempt to narrow down the range of possible reasons for this different behavior, the uptake of gluconate and the extracellular release of the oxidized product 2-ketogluconate were measured. The results suggested that the reason has to be an inefficient transport or metabolization of 2-ketogluconate, while an alteration of gluconate uptake and conversion to 2-ketogluconate could be excluded. Conclusions The study illustrates that the recruitment of a pleiotropic mutation, whose effects might
Institute of Scientific and Technical Information of China (English)
赵吝加; 曾维华; 许乃中; 温宗国
2012-01-01
Currently, domestic environmental technology evaluation methods are mainly based on experts' qualitative judgment and lack comprehensive evaluation methods. By setting up an indicator system for energy conservation and emissions reduction, determining the indicator weights, and constructing the evaluation factor set and its membership functions based on AHP and fuzzy comprehensive evaluation, an evaluation method of advanced and available technologies for energy conservation and emissions reduction in the deinking process was established. Using this evaluation method, flotation was selected as an advanced and appropriate technology for energy conservation and emissions reduction in the deinking process from three candidate technologies: washing, flotation-washing, and flotation; of the remaining two, washing outperforms flotation-washing. This evaluation method provides a basic approach for decision making in deinking technology evaluation and selection.
40 CFR 142.302 - Who can issue a small system variance?
2010-07-01
... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Who can issue a small system variance... PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances for Small System General Provisions § 142.302 Who can issue a small system variance? A small system variance under...
29 CFR 4204.11 - Variance of the bond/escrow and sale-contract requirements.
2010-07-01
... 29 Labor 9 2010-07-01 2010-07-01 false Variance of the bond/escrow and sale-contract requirements... CORPORATION WITHDRAWAL LIABILITY FOR MULTIEMPLOYER PLANS VARIANCES FOR SALE OF ASSETS Variance of the Statutory Requirements § 4204.11 Variance of the bond/escrow and sale-contract requirements. (a)...
40 CFR 142.21 - State consideration of a variance or exemption request.
2010-07-01
... 40 Protection of Environment 22 2010-07-01 2010-07-01 false State consideration of a variance or... State-Issued Variances and Exemptions § 142.21 State consideration of a variance or exemption request. A State with primary enforcement responsibility shall act on any variance or exemption request...
Convergence of Recursive Identification for ARMAX Process with Increasing Variances
Institute of Scientific and Technical Information of China (English)
JIN Ya; LUO Guiming
2007-01-01
The autoregressive moving average exogenous (ARMAX) model is commonly adopted for describing linear stochastic systems driven by colored noise. The model is a finite mixture of the ARMA component and external inputs. In this paper we focus on parameter estimation for the ARMAX model. Classical modeling methods are usually based on the assumption that the driving noise in the moving average (MA) part has bounded variances, while in the model considered here the variances of the noise may increase as a power of log n. The plant parameters are identified by a recursive stochastic gradient algorithm. The diminishing excitation technique and some results from martingale difference theory are adopted to prove the convergence of the identification. Finally, some simulations are given to illustrate the theoretical results.
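Recursive identification of this kind can be sketched on a simplified ARX(1,1) plant with bounded noise variance. Note the assumptions: the paper treats the harder ARMAX case with growing noise variances and uses a stochastic gradient update, whereas the sketch below uses the closely related recursive least-squares update, which converges faster on this toy problem:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simplified ARX(1,1) plant: y_k = a*y_{k-1} + b*u_{k-1} + e_k,
# with white bounded-variance noise e_k (illustrative stand-in).
a_true, b_true = 0.6, 1.2
n = 5000
u = rng.normal(size=n)
e = rng.normal(scale=0.1, size=n)
y = np.zeros(n)
for k in range(1, n):
    y[k] = a_true * y[k - 1] + b_true * u[k - 1] + e[k]

# Recursive least-squares identification of theta = [a, b].
theta = np.zeros(2)
P = 1e4 * np.eye(2)            # inverse information matrix, large prior
for k in range(1, n):
    phi = np.array([y[k - 1], u[k - 1]])        # regressor
    gain = P @ phi / (1.0 + phi @ P @ phi)
    theta = theta + gain * (y[k] - phi @ theta) # prediction-error update
    P = P - np.outer(gain, phi @ P)
```

With a persistently exciting input (white noise here), the estimate converges to the true parameters; the paper's diminishing-excitation technique addresses the case where such excitation must be injected artificially.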
Optimization of radio astronomical observations using Allan variance measurements
Schieder, R
2001-01-01
Stability tests based on the Allan variance method have become a standard procedure for the evaluation of the quality of radio-astronomical instrumentation. They are very simple and simulate the situation when detecting weak signals buried in large noise fluctuations. For the special conditions during observations, an outline of the basic properties of the Allan variance is given, and some guidelines on how to interpret the results of the measurements are presented. Based on a rather simple mathematical treatment, clear rules for observations in ``Position-Switch'', ``Beam-'' or ``Frequency-Switch'', ``On-The-Fly-'' and ``Raster-Mapping'' mode are derived. Also, a simple ``rule of thumb'' for an estimate of the optimum timing for the observations is found. The analysis leads to a conclusive strategy for planning radio-astronomical observations. Particularly for air- and space-borne observatories it is very important to determine how the extremely precious observing time can be used with maximum efficiency. The...
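The two-sample (Allan) variance underlying these stability tests can be computed in a few lines: consecutive blocks of m samples are averaged, and the Allan variance is half the mean squared difference of adjacent block averages. For pure white (radiometric) noise it falls as 1/m, and the upturn caused by drift marks the optimum integration time. A minimal sketch on synthetic data:

```python
import numpy as np

def allan_variance(y, m):
    """Two-sample (Allan) variance of series y for averaging factor m."""
    n_blocks = y.size // m
    # Average consecutive non-overlapping blocks of m samples.
    means = y[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
    # Half the mean squared difference of adjacent block averages.
    return 0.5 * np.mean(np.diff(means) ** 2)

rng = np.random.default_rng(1)
white = rng.normal(size=2**16)          # idealized radiometric noise

# For white noise the Allan variance scales as 1/m.
v1 = allan_variance(white, 1)
v64 = allan_variance(white, 64)
ratio = v1 / (64.0 * v64)               # should be close to 1
```

In a real stability test one plots the Allan variance against integration time; the minimum of that curve gives the rule-of-thumb optimum timing the abstract refers to.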
Climate variance influence on the non-stationary plankton dynamics.
Molinero, Juan Carlos; Reygondeau, Gabriel; Bonnet, Delphine
2013-08-01
We examined plankton responses to climate variance by using high temporal resolution data from 1988 to 2007 in the Western English Channel. Climate variability modified both the magnitude and length of the seasonal signal of sea surface temperature, as well as the timing and depth of the thermocline. These changes permeated the pelagic system yielding conspicuous modifications in the phenology of autotroph communities and zooplankton. The climate variance envelope, thus far little considered in climate-plankton studies, is closely coupled with the non-stationary dynamics of plankton, and sheds light on impending ecological shifts and plankton structural changes. Our study calls for the integration of the non-stationary relationship between climate and plankton in prognostic models on the productivity of marine ecosystems.
No evidence for anomalously low variance circles on the sky
Moss, Adam; Zibin, James P
2010-01-01
In a recent paper, Gurzadyan & Penrose claim to have found directions on the sky centred on which are circles of anomalously low variance in the cosmic microwave background (CMB). These features are presented as evidence for a particular picture of the very early Universe. We attempted to repeat the analysis of these authors, and we can indeed confirm that such variations do exist in the temperature variance for annuli around points in the data. However, we find that this variation is entirely expected in a sky which contains the usual CMB anisotropies. In other words, properly simulated Gaussian CMB data contain just the sorts of variations claimed. Gurzadyan & Penrose have not found evidence for pre-Big Bang phenomena, but have simply re-discovered that the CMB contains structure.
Extended Active Contour Algorithm Based on Color Variance
Institute of Scientific and Technical Information of China (English)
Seung-tae LEE; Young-jun HAN; Hern-soo HAHN
2010-01-01
The general active contour algorithm, which uses the intensity of the image, has been used to actively segment objects. When objects have similar intensities but different colors, it is difficult to segment any one object from the others. Moreover, the algorithm can only be used in simple environments since it is very sensitive to noise. In order to solve these problems, this paper proposes an extended active contour algorithm based on color variance. For complex images, the color variance energy is introduced into the general active contour algorithm as the image energy. Experimental results show that the proposed active contour algorithm is very effective in various environments.
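The color variance energy can be illustrated with a toy example: two regions of equal intensity but different color are indistinguishable to an intensity-based energy, yet the correct boundary minimizes the summed per-channel color variance. The sketch below is only an illustration of that energy term, not the paper's contour-evolution implementation:

```python
import numpy as np

def color_variance_energy(image, mask):
    """Sum of per-channel color variances inside and outside a contour.

    image: H x W x 3 array; mask: boolean H x W array (inside = True).
    A contour separating two homogeneous colors minimizes this energy
    even when the two regions have identical intensity.
    """
    inside, outside = image[mask], image[~mask]
    return inside.var(axis=0).sum() + outside.var(axis=0).sum()

# Two halves with equal intensity (channel sum) but different colors.
img = np.zeros((20, 20, 3))
img[:, :10] = [1.0, 0.0, 0.0]    # red half
img[:, 10:] = [0.0, 1.0, 0.0]    # green half

good = np.zeros((20, 20), dtype=bool); good[:, :10] = True   # true boundary
bad = np.zeros((20, 20), dtype=bool);  bad[:10, :] = True    # wrong boundary

e_good = color_variance_energy(img, good)   # homogeneous regions -> ~0
e_bad = color_variance_energy(img, bad)     # mixed regions -> larger
```

An intensity-based energy would score both partitions identically here, which is exactly the failure mode the color-variance extension addresses.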
Fidelity between Gaussian mixed states with quantum state quadrature variances
Hai-Long, Zhang; Chun, Zhou; Jian-Hong, Shi; Wan-Su, Bao
2016-04-01
In this paper, from the original definition of fidelity in a pure state, we first give a well-defined expansion fidelity between two Gaussian mixed states. It is related to the variances of output and input states in quantum information processing. It is convenient to quantify the quantum teleportation (quantum clone) experiment since the variances of the input (output) state are measurable. Furthermore, we also give a conclusion that the fidelity of a pure input state is smaller than the fidelity of a mixed input state in the same quantum information processing. Project supported by the National Basic Research Program of China (Grant No. 2013CB338002) and the Foundation of Science and Technology on Information Assurance Laboratory (Grant No. KJ-14-001).
Identifiability, stratification and minimum variance estimation of causal effects.
Tong, Xingwei; Zheng, Zhongguo; Geng, Zhi
2005-10-15
The weakest sufficient condition for the identifiability of causal effects is the weakly ignorable treatment assignment, which implies that potential responses are independent of treatment assignment in each fine subpopulation stratified by a covariate. In this paper, we expand the independence that holds in fine subpopulations to the case that the independence may also hold in several coarse subpopulations, each of which consists of several fine subpopulations and may have overlaps with other coarse subpopulations. We first show that the identifiability of causal effects occurs if and only if the coarse subpopulations partition the whole population. We then propose a principle, called the minimum variance principle, which says that the estimator possessing the minimum variance is preferred, in dealing with the stratification and the estimation of the causal effects. The simulation results, together with the detailed programming, and a practical example demonstrate that it is a feasible and reasonable way to achieve our goals. PMID:16149123
Validation technique using mean and variance of kriging model
Energy Technology Data Exchange (ETDEWEB)
Kim, Ho Sung; Jung, Jae Jun; Lee, Tae Hee [Hanyang Univ., Seoul (Korea, Republic of)
2007-07-01
To validate the accuracy of a metamodel rigorously is an important research area in metamodeling techniques. A leave-k-out cross-validation technique not only requires considerable computational cost but also cannot measure the fidelity of the metamodel quantitatively. Recently, the average validation technique has been proposed. However, the average validation criterion may stop a sampling process prematurely even if the kriging model is not yet accurate. In this research, we propose a new validation technique using the average and variance of the response during a sequential sampling method, such as maximum entropy sampling. The proposed validation technique is more efficient and accurate than the cross-validation technique because it explicitly integrates the kriging model to achieve an accurate average and variance, rather than relying on numerical integration. The proposed validation technique shows a similar trend to the root mean squared error, so that it can be used as a stop criterion for sequential sampling.
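The kriging mean and variance that drive such a validation criterion can be sketched with a zero-mean Gaussian-process (simple kriging) predictor. The squared-exponential covariance, the length-scale, and the test function below are illustrative assumptions, not the authors' setup:

```python
import numpy as np

def kriging_predict(X, y, Xnew, length=0.5, noise=1e-8):
    """Simple-kriging (zero-mean GP) predictive mean and variance,
    with a unit-variance squared-exponential covariance."""
    def k(A, B):
        d = A[:, None] - B[None, :]
        return np.exp(-0.5 * (d / length) ** 2)

    K = k(X, X) + noise * np.eye(X.size)    # jitter for conditioning
    Ks = k(Xnew, X)
    w = np.linalg.solve(K, Ks.T).T          # kriging weights
    mean = w @ y
    var = 1.0 - np.einsum('ij,ij->i', w, Ks)  # prior var minus reduction
    return mean, var

X = np.array([0.0, 0.3, 0.7, 1.0])          # sample sites
y = np.sin(2 * np.pi * X)                   # observed responses
mean, var = kriging_predict(X, y, np.array([0.3, 0.5]))
# mean interpolates the data at x = 0.3; the predictive variance is
# larger at x = 0.5, away from the samples.
```

A sequential sampling loop would monitor such predictive variances (or an average of them) and stop once they fall below a tolerance, which is the role of the stop criterion discussed in the abstract.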
Sample variance and Lyman-alpha forest transmission statistics
Rollinde, Emmanuel; Schaye, Joop; Pâris, Isabelle; Petitjean, Patrick
2012-01-01
We compare the observed probability distribution function of the transmission in the H I Lyman-alpha forest, measured from the UVES 'Large Programme' sample at redshifts z = [2, 2.5, 3], to results from the GIMIC cosmological simulations. Our measured values for the mean transmission and its PDF are in good agreement with published results. Errors on statistics measured from high-resolution data are typically estimated using bootstrap or jack-knife resampling techniques after splitting the spectra into chunks. We demonstrate that these methods tend to underestimate the sample variance unless the chunk size is much larger than is commonly the case. We therefore estimate the sample variance from the simulations. We conclude that observed and simulated transmission statistics are in good agreement; in particular, we do not require the temperature-density relation to be 'inverted'.
Response variance in functional maps: neural darwinism revisited.
Takahashi, Hirokazu; Yokota, Ryo; Kanzaki, Ryohei
2013-01-01
The mechanisms by which functional maps and map plasticity contribute to cortical computation remain controversial. Recent studies have revisited the theory of neural Darwinism to interpret the learning-induced map plasticity and neuronal heterogeneity observed in the cortex. Here, we hypothesize that the Darwinian principle provides a substrate to explain the relationship between neuron heterogeneity and cortical functional maps. We demonstrate in the rat auditory cortex that the degree of response variance is closely correlated with the size of its representational area. Further, we show that the response variance within a given population is altered through training. These results suggest that larger representational areas may help to accommodate heterogeneous populations of neurons. Thus, functional maps and map plasticity are likely to play essential roles in Darwinian computation, serving as effective, but not absolutely necessary, structures to generate diverse response properties within a neural population.
Analysis of variance in spectroscopic imaging data from human tissues.
Kwak, Jin Tae; Reddy, Rohith; Sinha, Saurabh; Bhargava, Rohit
2012-01-17
The analysis of cell types and disease using Fourier transform infrared (FT-IR) spectroscopic imaging is promising. However, the approach lacks an appreciation of the limits of performance for the technology, which hampers both researcher efforts to improve the approach and acceptance by practitioners. One factor limiting performance is the variance in data arising from biological diversity, measurement noise, or other sources. Here we identify the sources of variation by first employing a high-throughput sampling platform of tissue microarrays (TMAs) to record a sufficiently large and diverse data set. Next, a comprehensive set of analysis of variance (ANOVA) models is employed to analyze the data. Estimating the portions of explained variation, we quantify the primary sources of variation, find the most discriminating spectral metrics, and recognize the aspects of the technology to improve. The study provides a framework for the development of protocols for clinical translation and provides guidelines for designing statistically valid studies in the spectroscopic analysis of tissue.
The return of the variance: intraspecific variability in community ecology.
Violle, Cyrille; Enquist, Brian J; McGill, Brian J; Jiang, Lin; Albert, Cécile H; Hulshof, Catherine; Jung, Vincent; Messier, Julie
2012-04-01
Despite being recognized as a promoter of diversity and a condition for local coexistence decades ago, the importance of intraspecific variance has been neglected over time in community ecology. Recently, there has been a new emphasis on intraspecific variability. Indeed, recent developments in trait-based community ecology have underlined the need to integrate variation at both the intraspecific as well as interspecific level. We introduce new T-statistics ('T' for trait), based on the comparison of intraspecific and interspecific variances of functional traits across organizational levels, to operationally incorporate intraspecific variability into community ecology theory. We show that a focus on the distribution of traits at local and regional scales combined with original analytical tools can provide unique insights into the primary forces structuring communities.
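The comparison of intraspecific and interspecific variances can be illustrated with made-up trait data. The ratio below is only a simplified stand-in for the authors' T-statistics, which are defined across organizational levels; the species names and trait values are invented:

```python
import numpy as np

# Trait values for a hypothetical 3-species community; each entry holds
# one trait measured on several individuals of that species.
traits = {
    "sp_a": np.array([1.0, 1.2, 0.9, 1.1]),
    "sp_b": np.array([2.0, 2.3, 1.8, 2.1]),
    "sp_c": np.array([3.1, 2.9, 3.0, 3.2]),
}

values = np.concatenate(list(traits.values()))
total_var = values.var(ddof=1)                     # community-wide variance

# Pooled intraspecific variance vs. variance of species means.
within = np.mean([v.var(ddof=1) for v in traits.values()])
between = np.var([v.mean() for v in traits.values()], ddof=1)

# Share of trait variation found within species: small when species are
# well separated, large when individuals vary as much as species do.
t_ratio = within / total_var
```

Classical trait-based approaches that use only species means implicitly assume `t_ratio` is negligible; the T-statistics framework makes that assumption testable.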
Variance in multiplex suspension array assays: microsphere size variation impact
Directory of Open Access Journals (Sweden)
Cheng R Holland
2007-08-01
Full Text Available Abstract Background Luminex suspension microarray assays are in widespread use. There are issues of variability of assay readings using this technology. Methods and results Size variation is demonstrated by transmission electron microscopy. Size variations of microspheres are shown to occur in stepwise increments. A strong correspondence between microsphere size distribution and distribution of fluorescent events from assays is shown. An estimate is made of contribution of microsphere size variation to assay variance. Conclusion A probable significant cause of variance in suspended microsphere assay results is variation in microsphere diameter. This can potentially be addressed by changes in the manufacturing process. Provision to users of mean size, median size, skew, the number of standard deviations that half the size range represents (sigma multiple, and standard deviation is recommended. Establishing a higher sigma multiple for microsphere production is likely to deliver a significant improvement in precision of raw instrument readings. Further research is recommended on the molecular architecture of microsphere coatings.
A surface layer variance heat budget for ENSO
Boucharel, Julien; Timmermann, Axel; Santoso, Agus; England, Matthew H.; Jin, Fei-Fei; Balmaseda, Magdalena A.
2015-05-01
Characteristics of the El Niño-Southern Oscillation (ENSO), such as frequency, propagation, spatial extent, and amplitude, strongly depend on the climatological background state of the tropical Pacific. Multidecadal changes in the ocean mean state are hence likely to modulate ENSO properties. To better link background state variations with low-frequency amplitude changes of ENSO, we develop a diagnostic framework that determines locally the contributions of different physical feedback terms on the ocean surface temperature variance. Our analysis shows that multidecadal changes of ENSO variance originate from the delicate balance between the background-state-dependent positive thermocline feedback and the atmospheric damping of sea surface temperature anomalies. The role of higher-order processes and atmospheric and oceanic nonlinearities is also discussed. The diagnostic tool developed here can be easily applied to other tropical ocean areas and climate phenomena.
Explaining the Prevalence, Scaling and Variance of Urban Phenomena
Gomez-Lievano, Andres; Hausmann, Ricardo
2016-01-01
The prevalence of many urban phenomena changes systematically with population size. We propose a theory that unifies models of economic complexity and cultural evolution to derive urban scaling. The theory accounts for the difference in scaling exponents and average prevalence across phenomena, as well as the difference in the variance within phenomena across cities of similar size. The central ideas are that a number of necessary complementary factors must be simultaneously present for a phenomenon to occur, and that the diversity of factors is logarithmically related to population size. The model reveals that phenomena that require more factors will be less prevalent, scale more superlinearly and show larger variance across cities of similar size. The theory applies to data on education, employment, innovation, disease and crime, and it entails the ability to predict the prevalence of a phenomenon across cities, given information about the prevalence in a single city.
Epistasis and Its Contribution to Genetic Variance Components
Cheverud, J M; Routman, E J
1995-01-01
We present a new parameterization of physiological epistasis that allows the measurement of epistasis separate from its effects on the interaction (epistatic) genetic variance component. Epistasis is the deviation of two-locus genotypic values from the sum of the contributing single-locus genotypic values. This parameterization leads to statistical tests for epistasis given estimates of two-locus genotypic values such as can be obtained from quantitative trait locus studies. The contributions...
Empirical Performance of the Constant Elasticity Variance Option Pricing Model
Ren-Raw Chen; Cheng-Few Lee; Han-Hsing Lee
2009-01-01
In this essay, we empirically test the Constant Elasticity of Variance (CEV) option pricing model of Cox (1975, 1996) and Cox and Ross (1976), and compare the performance of the CEV and alternative option pricing models, mainly the stochastic volatility model, in terms of European option pricing and cost-accuracy based analysis of their numerical procedures. In European-style option pricing, we have tested the empirical pricing performance of the CEV model and compared the results with those ...
Stream sampling for variance-optimal estimation of subset sums
Cohen, Edith; Duffield, Nick; Kaplan, Haim; Lund, Carsten; Thorup, Mikkel
2008-01-01
From a high volume stream of weighted items, we want to maintain a generic sample of a certain limited size k that we can later use to estimate the total weight of arbitrary subsets. This is the classic context of on-line reservoir sampling, thinking of the generic sample as a reservoir. We present an efficient reservoir sampling scheme, VarOpt_k, that dominates all previous schemes in terms of estimation quality. VarOpt_k provides variance-optimal unbiased estimation of subset sum...
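VarOpt_k itself is intricate; the closely related priority sampling scheme (also due to Duffield, Lund, and Thorup) conveys the core idea of a fixed-size, weight-sensitive reservoir with unbiased subset-sum estimates. A sketch with synthetic weights, not the paper's VarOpt_k algorithm:

```python
import numpy as np

def priority_sample(weights, k, rng):
    """Priority sampling: a fixed-size sample for subset-sum estimation.

    Each item i draws priority w_i / u_i with u_i ~ U(0,1); the k highest
    priorities are kept and the (k+1)-th priority is the threshold tau.
    Each kept item gets adjusted weight max(w_i, tau), which yields
    unbiased estimates of the weight of any subset.
    """
    u = rng.random(weights.size)
    prio = weights / u
    order = np.argsort(prio)[::-1]
    keep = order[:k]
    tau = prio[order[k]]
    return keep, np.maximum(weights[keep], tau)

rng = np.random.default_rng(7)
w = rng.exponential(size=20)
total = w.sum()

# Averaged over many draws, the estimated full-set sum matches the truth.
est = np.mean([priority_sample(w, 8, rng)[1].sum() for _ in range(4000)])
```

To estimate an arbitrary subset later, one simply sums the adjusted weights of the kept items that fall in that subset; VarOpt_k improves on this by achieving variance-optimal estimates.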
Variance computations for functional of absolute risk estimates
Pfeiffer, R. M.; E. Petracci
2011-01-01
We present a simple influence function based approach to compute the variances of estimates of absolute risk and functions of absolute risk. We apply this approach to criteria that assess the impact of changes in the risk factor distribution on absolute risk for an individual and at the population level. As an illustration we use an absolute risk prediction model for breast cancer that includes modifiable risk factors in addition to standard breast cancer risk factors. Influence function base...
What Do We Know About Variance in Accounting Profitability?
Anita M McGahan; Porter, Michael E.
2002-01-01
In this paper, we analyze the variance of accounting profitability among a broad cross-section of firms in the American economy from 1981 to 1994. The purpose of the analysis is to identify the importance of year, industry, corporate-parent, and business-specific effects on accounting profitability among operating businesses across sectors. The findings indicate that industry and corporate-parent effects are important and related to one another. As expected, business-specific effects, which a...
Constraining the local variance of H0 from directional analyses
Bengaly, C. A. P., Jr.
2016-04-01
We evaluate the local variance of the Hubble constant H0 with low-z Type Ia Supernovae (SNe). Our analyses are performed using a hemispherical comparison method in order to test whether taking the bulk flow motion into account can reconcile the measurement of the Hubble constant H0 from standard candles (H0 = 73.8 ± 2.4 km s-1 Mpc-1) with that of the Planck Cosmic Microwave Background data (H0 = 67.8 ± 0.9 km s-1 Mpc-1). We find that H0 ranges from 68.9 ± 0.5 km s-1 Mpc-1 to 71.2 ± 0.7 km s-1 Mpc-1 across the celestial sphere (1σ uncertainty), implying a maximal Hubble constant variance of δH0 = (2.30 ± 0.86) km s-1 Mpc-1 towards the (l,b) = (315°,27°) direction. Interestingly, this result agrees with the bulk flow direction estimates found in the literature, as well as with previous evaluations of the H0 variance due to the presence of nearby inhomogeneities. We assess the statistical significance of this result with different prescriptions of Monte Carlo simulations, obtaining moderate statistical significance, i.e., a 68.7% confidence level (CL) for such variance. Furthermore, we test the hypothesis of a higher H0 value in the presence of a bulk flow velocity dipole, finding some evidence for this result which, however, cannot be claimed to be significant due to the current large uncertainty in the SNe distance modulus. We conclude that the tension between different H0 determinations can plausibly be caused by the bulk flow motion of the local Universe, even though the current incompleteness of the SNe data set, both in terms of celestial coverage and distance uncertainties, does not allow a high statistical significance for these results or a definitive conclusion about this issue.
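The hemispherical comparison method can be illustrated on synthetic data: scatter supernova-like H0 measurements over the sky with a hidden dipole, scan trial axes, and report the direction that maximizes the difference between opposite-hemisphere means. All numbers below are made up for illustration, not the paper's data or pipeline:

```python
import numpy as np

rng = np.random.default_rng(11)

# Synthetic low-z SNe: isotropic sky directions (unit vectors) and H0
# values carrying a 1.5 km/s/Mpc dipole along a hidden bulk-flow axis.
n = 500
v = rng.normal(size=(n, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)
hidden_axis = np.array([0.0, 0.0, 1.0])
h0 = 70.0 + 1.5 * (v @ hidden_axis) + rng.normal(scale=2.0, size=n)

def hemispherical_variation(directions, values, n_axes=300):
    """Scan random trial axes; for each, compare the mean H0 in opposite
    hemispheres and return the axis with the largest difference."""
    axes = rng.normal(size=(n_axes, 3))
    axes /= np.linalg.norm(axes, axis=1, keepdims=True)
    deltas = [abs(values[directions @ a > 0].mean()
                  - values[directions @ a <= 0].mean()) for a in axes]
    i = int(np.argmax(deltas))
    return axes[i], deltas[i]

best_axis, delta = hemispherical_variation(v, h0)
# delta recovers a variation of order the injected dipole, and best_axis
# roughly aligns with the hidden bulk-flow direction.
```

Assessing the significance of such a `delta`, as the paper does, requires comparing it against isotropic Monte Carlo realizations, since the maximum over trial axes is biased high.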
From the Editors: Common method variance in international business research
Sea-Jin Chang; Arjen van Witteloostuijn; Lorraine Eden
2010-01-01
JIBS receives many manuscripts that report findings from analyzing survey data based on same-respondent replies. This can be problematic since same-respondent studies can suffer from common method variance (CMV). Currently, authors who submit manuscripts to JIBS that appear to suffer from CMV are asked to perform validity checks and resubmit their manuscripts. This letter from the Editors is designed to outline the current state of best practice for handling CMV in international business rese...
Imaging structural co-variance between human brain regions
Alexander-Bloch, Aaron; Giedd, Jay N.; Bullmore, Ed
2013-01-01
Brain structure varies between people in a markedly organized fashion. Communities of brain regions co-vary in their morphological properties. For example, cortical thickness in one region influences the thickness of structurally and functionally connected regions. Such networks of structural co-variance partially recapitulate the functional networks of healthy individuals and the foci of grey matter loss in neurodegenerative disease. This architecture is genetically heritable, is associated ...
Analysis of Variance in the Modern Design of Experiments
Deloach, Richard
2010-01-01
This paper is a tutorial introduction to the analysis of variance (ANOVA), intended as a reference for aerospace researchers who are being introduced to the analytical methods of the Modern Design of Experiments (MDOE), or who may have other opportunities to apply this method. One-way and two-way fixed-effects ANOVA, as well as random effects ANOVA, are illustrated in practical terms that will be familiar to most practicing aerospace researchers.
Recombining binomial tree for constant elasticity of variance process
Hi Jun Choe; Jeong Ho Chu; So Jeong Shin
2014-01-01
The theme in this paper is the recombining binomial tree to price American put options when the underlying stock follows a constant elasticity of variance (CEV) process. Recombining nodes of the binomial tree are determined from a finite difference scheme to emulate the CEV process, and the tree has linear complexity. The asymptotic envelope of the tree boundary is also derived from the differential equation. Conducting numerical experiments, we confirm the convergence and accuracy of the pricing by ou...
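For reference, the backward-induction structure such a tree pricing relies on can be sketched with the standard constant-volatility Cox-Ross-Rubinstein recombining tree (not the paper's CEV-adapted construction; all names are illustrative):

```python
import math

def american_put_crr(s0, k, r, sigma, t, n):
    """Price an American put on a recombining CRR binomial tree."""
    dt = t / n
    u = math.exp(sigma * math.sqrt(dt))   # up factor
    d = 1 / u                             # down factor (recombining: u*d = 1)
    p = (math.exp(r * dt) - d) / (u - d)  # risk-neutral up probability
    disc = math.exp(-r * dt)
    # terminal payoffs at step n, node j = number of up-moves
    values = [max(k - s0 * u**j * d**(n - j), 0.0) for j in range(n + 1)]
    # backward induction with an early-exercise check at every node
    for i in range(n - 1, -1, -1):
        values = [
            max(disc * (p * values[j + 1] + (1 - p) * values[j]),
                k - s0 * u**j * d**(i - j))
            for j in range(i + 1)
        ]
    return values[0]
```

A CEV variant would replace the constant u, d factors with node-dependent moves derived from the finite difference scheme; the early-exercise comparison in the backward pass is unchanged.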
Variance optimal sampling based estimation of subset sums
Cohen, Edith; Kaplan, Haim; Lund, Carsten; Thorup, Mikkel
2008-01-01
From a high volume stream of weighted items, we want to maintain a generic sample of a certain limited size $k$ that we can later use to estimate the total weight of arbitrary subsets. This is the classic context of on-line reservoir sampling, thinking of the generic sample as a reservoir. We present a reservoir sampling scheme providing variance optimal estimation of subset sums. More precisely, if we have seen $n$ items of the stream, then for any subset size $m$, our scheme based on $k$ samples minimizes the average variance over all subsets of size $m$. In fact, the optimality is against any off-line sampling scheme tailored for the concrete set of items seen: no off-line scheme based on $k$ samples can perform better than our on-line scheme when it comes to average variance over any subset size. Our scheme has no positive covariances between any pair of item estimates. Also, our scheme can handle each new item of the stream in $O(\log k)$ time, which is optimal even on the word RAM.
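To illustrate weight-sensitive sampling for subset-sum estimation, the sketch below implements priority sampling, a related earlier scheme that the paper's variance-optimal method improves upon (this is not the authors' own algorithm; names are illustrative):

```python
import random

def priority_sample(items, k, rng):
    """Priority sampling sketch. items is a list of (key, weight) pairs.
    Each item gets priority w / u with u ~ Uniform(0, 1); keep the k highest.
    Sampled items get the unbiased weight estimate max(w, tau), where tau is
    the (k+1)-th largest priority; unsampled items are implicitly estimated 0."""
    prios = sorted(((w / rng.random(), key, w) for key, w in items), reverse=True)
    tau = prios[k][0] if len(prios) > k else 0.0
    return {key: max(w, tau) for _, key, w in prios[:k]}

def estimate_subset_sum(sample, keys):
    """Estimate the total weight of an arbitrary subset from the sample."""
    return sum(est for key, est in sample.items() if key in keys)
```

Averaged over many draws these estimates are unbiased for any subset; the paper's scheme additionally minimizes the average variance over all subsets of a given size and avoids positive covariances between item estimates.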
Genetic variance of tolerance and the toxicant threshold model.
Tanaka, Yoshinari; Mano, Hiroyuki; Tatsuta, Haruki
2012-04-01
A statistical genetics method is presented for estimating the genetic variance (heritability) of tolerance to pollutants on the basis of a standard acute toxicity test conducted on several isofemale lines of cladoceran species. To analyze the genetic variance of tolerance in the case when the response is measured as a few discrete states (quantal endpoints), the authors attempted to apply the threshold character model in quantitative genetics to the threshold model separately developed in ecotoxicology. The integrated threshold model (toxicant threshold model) assumes that the response of a particular individual occurs at a threshold toxicant concentration and that the individual tolerance characterized by the individual's threshold value is determined by genetic and environmental factors. As a case study, the heritability of tolerance to p-nonylphenol in the cladoceran species Daphnia galeata was estimated by using the maximum likelihood method and nested analysis of variance (ANOVA). Broad-sense heritability was estimated to be 0.199 ± 0.112 by the maximum likelihood method and 0.184 ± 0.089 by ANOVA; both results implied that the species examined had the potential to acquire tolerance to this substance by evolutionary change.
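To make the ANOVA route to heritability concrete, here is a minimal, hedged sketch of a broad-sense heritability estimate from a balanced one-way design with isofemale lines as the grouping factor (the paper's analysis is nested and handles quantal endpoints, which this toy version does not):

```python
def broad_sense_heritability(lines):
    """lines: list of equal-sized lists of phenotype values, one per isofemale
    line. Balanced one-way ANOVA variance components (illustrative sketch):
    H^2 = between-line variance / (between-line + within-line variance)."""
    k = len(lines)        # number of lines
    n = len(lines[0])     # replicates per line
    grand = sum(sum(l) for l in lines) / (k * n)
    means = [sum(l) / n for l in lines]
    ss_between = n * sum((m - grand) ** 2 for m in means)
    ss_within = sum((x - m) ** 2 for l, m in zip(lines, means) for x in l)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (k * (n - 1))
    var_between = max((ms_between - ms_within) / n, 0.0)  # truncate at zero
    return var_between / (var_between + ms_within)
```

The between-line variance component is estimated by the usual moment equation E[MS_between] = sigma2_within + n * sigma2_between; negative estimates are truncated at zero.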
The Column Density Variance-{\cal M}_s Relationship
Burkhart, Blakesley; Lazarian, A.
2012-08-01
Although there is a wealth of column density tracers for both the molecular and diffuse interstellar medium, there are few observational studies investigating the relationship between the density variance (σ²) and the sonic Mach number (M_s). This is in part due to the fact that the σ²-M_s relationship is derived, via MHD simulations, for the three-dimensional (3D) density variance only, which is not a direct observable. We investigate the utility of a 2D column density σ²_{Σ/Σ₀}-M_s relationship using solenoidally driven isothermal MHD simulations and find that the best fit follows closely the form of the 3D density σ²_{ρ/ρ₀}-M_s trend but includes a scaling parameter A such that σ²_{ln(Σ/Σ₀)} = A × ln(1 + b²M_s²), where A = 0.11 and b = 1/3. This relation is consistent with the observational data reported for the Taurus and IC 5146 molecular clouds with b = 0.5 and A = 0.16, and b = 0.5 and A = 0.12, respectively. These results open up the possibility of using the 2D column density values of σ² for investigations of the relation between the sonic Mach number and the probability distribution function (PDF) variance in addition to existing PDF sonic Mach number relations.
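The quoted fit is straightforward to apply directly; a small sketch of the forward relation and its inversion, with parameter defaults taken from the simulation fit above:

```python
import math

def column_density_variance(mach_s, a=0.11, b=1.0 / 3.0):
    """sigma^2_{ln(Sigma/Sigma_0)} = A * ln(1 + b^2 * M_s^2)."""
    return a * math.log(1.0 + (b * mach_s) ** 2)

def sonic_mach_from_variance(sigma2, a=0.11, b=1.0 / 3.0):
    """Invert the relation to estimate M_s from an observed 2D log-column
    density variance (expm1 keeps the inversion accurate for small sigma2)."""
    return math.sqrt(math.expm1(sigma2 / a)) / b
```

For the Taurus and IC 5146 observations one would pass a=0.16, b=0.5 and a=0.12, b=0.5, respectively.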
VAPOR: variance-aware per-pixel optimal resource allocation.
Eisenberg, Yiftach; Zhai, Fan; Pappas, Thrasyvoulos N; Berry, Randall; Katsaggelos, Aggelos K
2006-02-01
Characterizing the video quality seen by an end-user is a critical component of any video transmission system. In packet-based communication systems, such as wireless channels or the Internet, packet delivery is not guaranteed. Therefore, from the point of view of the transmitter, the distortion at the receiver is a random variable. Traditional approaches have primarily focused on minimizing the expected value of the end-to-end distortion. This paper explores the benefits of accounting for not only the mean, but also the variance of the end-to-end distortion when allocating limited source and channel resources. By accounting for the variance of the distortion, the proposed approach increases the reliability of the system by making it more likely that what the end-user sees closely resembles the mean end-to-end distortion calculated at the transmitter. Experimental results demonstrate that variance-aware resource allocation can help limit error propagation and is more robust to channel mismatch than approaches whose goal is to strictly minimize the expected distortion. PMID:16479799
Variance Analysis and Adaptive Sampling for Indirect Light Path Reuse
Institute of Scientific and Technical Information of China (English)
Hao Qin; Xin Sun; Jun Yan; Qi-Ming Hou; Zhong Ren; Kun Zhou
2016-01-01
In this paper, we study the estimation variance of a set of global illumination algorithms based on indirect light path reuse. These algorithms usually contain two passes — in the first pass, a small number of indirect light samples are generated and evaluated, and they are then reused by a large number of reconstruction samples in the second pass. Our analysis shows that the covariance of the reconstruction samples dominates the estimation variance under high reconstruction rates and increasing the reconstruction rate cannot effectively reduce the covariance. We also find that the covariance represents to what degree the indirect light samples are reused during reconstruction. This analysis motivates us to design a heuristic approximating the covariance as well as an adaptive sampling scheme based on this heuristic to reduce the rendering variance. We validate our analysis and adaptive sampling scheme in the indirect light field reconstruction algorithm and the axis-aligned filtering algorithm for indirect lighting. Experiments are in accordance with our analysis and show that rendering artifacts can be greatly reduced at a similar computational cost.
Litzow, M.A.; Piatt, J.F.
2003-01-01
We use data on pigeon guillemots Cepphus columba to test the hypothesis that discretionary time in breeding seabirds is correlated with variance in prey abundance. We measured the amount of time that guillemots spent at the colony before delivering fish to chicks ("resting time") in relation to fish abundance as measured by beach seines and bottom trawls. Radio telemetry showed that resting time was inversely correlated with time spent diving for fish during foraging trips (r = -0.95). Pigeon guillemots fed their chicks either Pacific sand lance Ammodytes hexapterus, a schooling midwater fish, which exhibited high interannual variance in abundance (CV = 181%), or a variety of non-schooling demersal fishes, which were less variable in abundance (average CV = 111%). Average resting times were 46% higher at colonies where schooling prey dominated the diet. Individuals at these colonies reduced resting times 32% during years of low food abundance, but did not reduce meal delivery rates. In contrast, individuals feeding on non-schooling fishes did not reduce resting times during low food years, but did reduce meal delivery rates by 27%. Interannual variance in resting times was greater for the schooling group than for the non-schooling group. We conclude from these differences that time allocation in pigeon guillemots is more flexible when variable schooling prey dominate diets. Resting times were also 27% lower for individuals feeding two-chick rather than one-chick broods. The combined effects of diet and brood size on adult time budgets may help to explain higher rates of brood reduction for pigeon guillemot chicks fed non-schooling fishes.
A proxy for variance in dense matching over homogeneous terrain
Altena, Bas; Cockx, Liesbet; Goedemé, Toon
2014-05-01
Automation in photogrammetry and avionics have brought highly autonomous UAV mapping solutions on the market. These systems have great potential for geophysical research, due to their mobility and simplicity of work. Flight planning can be done on site and orientation parameters are estimated automatically. However, one major drawback is still present: if contrast is lacking, stereoscopy fails. Consequently, topographic information cannot be obtained precisely through photogrammetry for areas with low contrast. Even though more robustness is added in the estimation through multi-view geometry, a precise product is still lacking. For the greater part, interpolation is applied over these regions, where the estimation is constrained by uniqueness, its epipolar line and smoothness. Consequently, digital surface models are generated with an estimate of the topography, without holes but also without an indication of its variance. Every dense matching algorithm is based on a similarity measure. Our methodology uses this property to support the idea that if only noise is present, no correspondence can be detected. Therefore, the noise level is estimated with respect to the intensity signal of the topography (SNR) and this ratio serves as a quality indicator for the automatically generated product. To demonstrate this variance indicator, two different case studies were elaborated. The first study is situated at an open sand mine near the village of Kiezegem, Belgium. Two different UAV systems flew over the site. One system had automatic intensity regulation, which resulted in low contrast over the sandy interior of the mine. That dataset was used to identify the weak estimations of the topography and was compared with the data from the other UAV flight. In the second study a flight campaign with the X100 system was conducted along the coast near Wenduine, Belgium. The obtained images were processed through structure-from-motion software. Although the beach had a very low
International Nuclear Information System (INIS)
Commercial wind turbines manufactured today reliably generate electrical energy at approximately $0.07 - $0.09 per kWh, depending on the wind speeds at the site and the nature of the terrain. This paper reports that, to be competitive with other electricity generation technologies, these costs must be reduced by 30 - 50% if current electricity pricing practices continue. Reductions of this magnitude can be achieved through reductions in wind turbine capital costs, increases in efficiency, and changes in the financial market's perception of wind energy technology. Advanced technology can make a significant contribution in each of these areas
How a hurricane disturbance influences extreme CO2 fluxes and variance in a tropical forest
International Nuclear Information System (INIS)
A current challenge is to understand the legacies left by disturbances on ecosystems in order to predict response patterns and trajectories. This work focuses on the ecological implications of a major hurricane and analyzes its influence on forest gross primary productivity (GPP; derived from the moderate-resolution imaging spectroradiometer, MODIS) and soil CO2 efflux. Following the hurricane, there was a reduction of nearly 0.5 kgC m−2 yr−1, equivalent to ∼15% of the long-term mean GPP (∼3.0 ± 0.2 kgC m−2 yr−1; years 2003–8). Annual soil CO2 emissions for the year following the hurricane were > 3.9 ± 0.5 kgC m−2 yr−1, whereas for the second year emissions were 1.7 ± 0.4 kgC m−2 yr−1. Higher annual emissions were associated with higher probabilities of days with extreme soil CO2 efflux rates ( > 9.7 μmol CO2 m−2 s−1). The variance of GPP was highly variable across years and was substantially increased following the hurricane. Extreme soil CO2 efflux after the hurricane was associated with deposition of nitrogen-rich fresh organic matter, higher basal soil CO2 efflux rates and changes in variance of the soil temperature. These results show that CO2 dynamics are highly variable following hurricanes, but also demonstrate the strong resilience of tropical forests following these events. (letter)
Institute of Scientific and Technical Information of China (English)
唐伟椿; 成守礼
1983-01-01
From November 1975 to July 1982, 80 cases (51 males and 29 females) of intussusception were operated on. Among them were 31 cases in which rectal inflation reduction had failed. 49 cases were advanced intussusception, including some small intestinal intussusceptions. 66 cases were primary; 62 of these children were aged under one. Most of them had either an enlarged regional mesenteric lymph node or a mobile cecum. 14 had secondary intussusceptions, 13 of whom were aged over one. There were 5 cases of Meckel's diverticulum, 4 polyps, 4 ileal duplications and one allergic purpura complicated with hematoma in the anterior wall of the cecum. Manual reductions were accomplished in 58 patients, together with simultaneous appendectomy. No plication of the cecum was attempted, nor was relapse noted. Intestinal resection followed by anastomosis was performed in 22 cases for intestinal gangrene. Rectal inflation was unsuccessful in two patients with intestinal perforation, and surgical repair was performed immediately. Only one death, due to preoperative pneumonia and chickenpox, was recorded; the mortality rate was thus 1.25%. Intussusception is a common acute abdominal condition in infants. Since air-enema reduction came into clinical use, treatment of early intussusception has achieved definite efficacy and markedly lowered the operation rate. For complicated and late-stage intussusception, however, air enema is not only unlikely to succeed but often dangerous, and surgical treatment is still required.
DEFF Research Database (Denmark)
Ashraf, Bilal; Fé, Dario; Jensen, Just;
2014-01-01
Advancement in next generation sequencing (NGS) technologies has significantly decreased the cost of DNA sequencing, enabling increased use of genotyping by sequencing (GBS) in several plant species. In contrast to array-based genotyping, GBS also allows for easy estimation of allele frequencies at each SNP in family pools or polyploids. There are, however, several statistical challenges associated with this method, including low sequencing depth and missing values. Low sequencing depth results in inaccuracies in estimates of allele frequencies for each SNP. In this work we have focused on optimizing methods and models utilizing F2 family phenotype records and NGS information from F2 family pools in perennial ryegrass. Genomic variance was estimated using genomic relationship matrices based on different coverage depths to verify effects of coverage depth. Example traits were seed yield, rust...
Variance as a Leading Indicator of Regime Shift in Ecosystem Services
Directory of Open Access Journals (Sweden)
Stephen R. Carpenter
2006-12-01
Many environmental conflicts involve pollutants such as greenhouse gas emissions that are dispersed through space and cause losses of ecosystem services. As pollutant emissions rise in one place, a spatial cascade of declining ecosystem services can spread across a larger landscape because of the dispersion of the pollutant. This paper considers the problem of anticipating such spatial regime shifts by monitoring time series of the pollutant or associated ecosystem services. Using such data, it is possible to construct indicators that rise sharply in advance of regime shifts. Specifically, the maximum eigenvalue of the variance-covariance matrix of the multivariate time series of pollutants and ecosystem services rises prior to the regime shift. No specific knowledge of the mechanisms underlying the regime shift is needed to construct the indicator. Such leading indicators of regime shifts could provide useful signals to management agencies or to investors in ecosystem service markets.
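A hedged two-variable sketch of such an indicator: the largest eigenvalue of the sample variance-covariance matrix over a moving window (the 2x2 case has a closed form; for more series one would use a full eigensolver):

```python
def max_cov_eigenvalue(x, y):
    """Largest eigenvalue of the 2x2 sample covariance matrix of series x, y."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x) / (n - 1)
    syy = sum((b - my) ** 2 for b in y) / (n - 1)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    tr, det = sxx + syy, sxx * syy - sxy ** 2
    return tr / 2 + ((tr / 2) ** 2 - det) ** 0.5  # larger root of the quadratic

def rolling_indicator(x, y, window):
    """Leading-indicator time series: max eigenvalue over a sliding window."""
    return [max_cov_eigenvalue(x[i:i + window], y[i:i + window])
            for i in range(len(x) - window + 1)]
```

A sustained rise in this series ahead of a shift is the signal described above; the choice of window length is a tuning decision not specified here.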
An Empirical Temperature Variance Source Model in Heated Jets
Khavaran, Abbas; Bridges, James
2012-01-01
An acoustic analogy approach is implemented that models the sources of jet noise in heated jets. The equivalent sources of turbulent mixing noise are recognized as the differences between the fluctuating and Favre-averaged Reynolds stresses and enthalpy fluxes. While in a conventional acoustic analogy only Reynolds stress components are scrutinized for their noise generation properties, it is now accepted that a comprehensive source model should include the additional entropy source term. Following Goldstein's generalized acoustic analogy, the set of Euler equations is divided into two sets of equations that govern a non-radiating base flow plus its residual components. When the base flow is considered as a locally parallel mean flow, the residual equations may be rearranged to form an inhomogeneous third-order wave equation. A general solution is written subsequently using a Green's function method while all non-linear terms are treated as the equivalent sources of aerodynamic sound and are modeled accordingly. In a previous study, a specialized Reynolds-averaged Navier-Stokes (RANS) solver was implemented to compute the variance of thermal fluctuations that determine the enthalpy flux source strength. The main objective here is to present an empirical model capable of providing a reasonable estimate of the stagnation temperature variance in a jet. Such a model is parameterized as a function of the mean stagnation temperature gradient in the jet, and is evaluated using commonly available RANS solvers. The ensuing thermal source distribution is compared with measurements as well as computational results from a dedicated RANS solver that employs an enthalpy variance and dissipation rate model. Turbulent mixing noise predictions are presented for a wide range of jet temperature ratios from 1.0 to 3.20.
Regression between earthquake magnitudes having errors with known variances
Pujol, Jose
2016-07-01
Recent publications on the regression between earthquake magnitudes assume that both magnitudes are affected by error and that only the ratio of error variances is known. If X and Y represent observed magnitudes, and x and y represent the corresponding theoretical values, the problem is to find the coefficients a and b of the best-fit line y = ax + b. This problem has a closed solution only for homoscedastic errors (their variances are all equal for each of the two variables). The published solution was derived using a method that cannot provide a sum of squares of residuals. Therefore, it is not possible to compare the goodness of fit for different pairs of magnitudes. Furthermore, the method does not provide expressions for x and y. The least-squares method introduced here does not have these drawbacks. The two methods of solution result in the same equations for a and b. General properties of a that have been discussed in the literature but not proved, or proved only for particular cases, are derived here. A comparison of different expressions for the variances of a and b is provided. The paper also considers the statistical aspects of the ongoing debate regarding the prediction of y given X. Analysis of actual data from the literature shows that a new approach produces an average improvement of less than 0.1 magnitude units over the standard approach when applied to Mw vs. mb and Mw vs. MS regressions. This improvement is minor, within the typical error of Mw. Moreover, a test subset of 100 predicted magnitudes shows that the new approach results in magnitudes closer to the theoretically true magnitudes for only 65 % of them. For the remaining 35 %, the standard approach produces closer values. Therefore, the new approach does not always give the most accurate magnitude estimates.
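For the homoscedastic errors-in-both-variables setting with a known error-variance ratio, the classical closed-form solution is Deming regression; a minimal sketch (lam is the known ratio of error variances; lam = 1 gives orthogonal regression; names are illustrative, not the paper's notation):

```python
import math

def deming_fit(x, y, lam=1.0):
    """Deming regression for y = a*x + b when both variables carry error.
    lam = var(error in y) / var(error in x); requires a nonzero sample
    covariance between x and y."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((u - mx) ** 2 for u in x) / (n - 1)
    syy = sum((v - my) ** 2 for v in y) / (n - 1)
    sxy = sum((u - mx) * (v - my) for u, v in zip(x, y)) / (n - 1)
    # larger root of the slope quadratic; reduces to OLS as lam -> infinity
    a = (syy - lam * sxx
         + math.sqrt((syy - lam * sxx) ** 2 + 4 * lam * sxy ** 2)) / (2 * sxy)
    return a, my - a * mx
```

Unlike ordinary least squares, this slope is symmetric in the sense that regressing X on Y with ratio 1/lam yields the reciprocal line.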
FMRI group analysis combining effect estimates and their variances.
Chen, Gang; Saad, Ziad S; Nath, Audrey R; Beauchamp, Michael S; Cox, Robert W
2012-03-01
Conventional functional magnetic resonance imaging (FMRI) group analysis makes two key assumptions that are not always justified. First, the data from each subject is condensed into a single number per voxel, under the assumption that within-subject variance for the effect of interest is the same across all subjects or is negligible relative to the cross-subject variance. Second, it is assumed that all data values are drawn from the same Gaussian distribution with no outliers. We propose an approach that does not make such strong assumptions, and present a computationally efficient frequentist approach to FMRI group analysis, which we term mixed-effects multilevel analysis (MEMA), that incorporates both the variability across subjects and the precision estimate of each effect of interest from individual subject analyses. On average, the more accurate tests result in higher statistical power, especially when conventional variance assumptions do not hold, or in the presence of outliers. In addition, various heterogeneity measures are available with MEMA that may assist the investigator in further improving the modeling. Our method allows group effect t-tests and comparisons among conditions and among groups. In addition, it has the capability to incorporate subject-specific covariates such as age, IQ, or behavioral data. Simulations were performed to illustrate power comparisons and the capability of controlling type I errors among various significance testing methods, and the results indicated that the testing statistic we adopted struck a good balance between power gain and type I error control. Our approach is instantiated in an open-source, freely distributed program that may be used on any dataset stored in the Neuroimaging Informatics Technology Initiative (NIfTI) format. To date, the main impediment for more accurate testing that incorporates both within- and cross-subject variability has been the high computational cost. Our efficient implementation makes this approach
A guide to SPSS for analysis of variance
Levine, Gustav
2013-01-01
This book offers examples of programs designed for analysis of variance and related statistical tests of significance that can be run with SPSS. The reader may copy these programs directly, changing only the names or numbers of levels of factors according to individual needs. Ways of altering command specifications to fit situations with larger numbers of factors are discussed and illustrated, as are ways of combining program statements to request a variety of analyses in the same program. The first two chapters provide an introduction to the use of SPSS, Versions 3 and 4. General rules conce
A generalization of Talagrand's variance bound in terms of influences
Kiss, Demeter
2010-01-01
Consider a random variable of the form f(X_1,...,X_n), where f is a deterministic function, and where X_1,...,X_n are i.i.d. random variables. For the case where X_1 has a Bernoulli distribution, Talagrand (1994) gave an upper bound for the variance of f in terms of the individual influences of the variables X_i for i=1,...,n. We generalize this result to the case where X_1 takes finitely many values.
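A small exact check of the simpler Poincaré inequality Var(f) <= (1/4) * sum of influences, which Talagrand's bound sharpens, in the Bernoulli(1/2) case (assumption: f is {0,1}-valued and the influence I_i is the probability that flipping coordinate i changes f):

```python
from itertools import product

def variance_and_influences(f, n):
    """Exact variance and influences of f: {0,1}^n -> {0,1} under the
    uniform measure, computed by enumerating all 2^n points."""
    points = list(product((0, 1), repeat=n))
    vals = {x: f(x) for x in points}
    mean = sum(vals.values()) / len(points)
    var = sum((v - mean) ** 2 for v in vals.values()) / len(points)
    infl = [
        sum(vals[x] != vals[x[:i] + (1 - x[i],) + x[i + 1:]] for x in points)
        / len(points)
        for i in range(n)
    ]
    return var, infl

def majority(x):
    """Majority function on an odd number of bits."""
    return int(sum(x) > len(x) / 2)
```

For 5-bit majority this gives Var = 1/4 and each I_i = 6/16, so the bound (1/4)(5)(6/16) = 15/32 comfortably dominates the variance.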
Stable limits for sums of dependent infinite variance random variables
DEFF Research Database (Denmark)
Bartkiewicz, Katarzyna; Jakubowski, Adam; Mikosch, Thomas;
2011-01-01
The aim of this paper is to provide conditions which ensure that the affinely transformed partial sums of a strictly stationary process converge in distribution to an infinite variance stable distribution. Conditions for this convergence to hold are known in the literature. However, most of these results are qualitative in the sense that the parameters of the limit distribution are expressed in terms of some limiting point process. In this paper we will be able to determine the parameters of the limiting stable distribution in terms of some tail characteristics of the underlying stationary...
Two-dimensional finite-element temperature variance analysis
Heuser, J. S.
1972-01-01
The finite element method is extended to thermal analysis by forming a variance analysis of temperature results so that the sensitivity of predicted temperatures to uncertainties in input variables is determined. The temperature fields within a finite number of elements are described in terms of the temperatures of vertices and the variational principle is used to minimize the integral equation describing thermal potential energy. A computer calculation yields the desired solution matrix of predicted temperatures and provides information about initial thermal parameters and their associated errors. Sample calculations show that all predicted temperatures are most affected by temperature values along fixed boundaries; more accurate specifications of these temperatures reduce errors in thermal calculations.
Variance and bias computation for enhanced system identification
Bergmann, Martin; Longman, Richard W.; Juang, Jer-Nan
1989-01-01
A study is made of the use of a series of variance and bias confidence criteria recently developed for the eigensystem realization algorithm (ERA) identification technique. The criteria are shown to be very effective, not only for indicating the accuracy of the identification results (especially in terms of confidence intervals), but also for helping the ERA user to obtain better results. They help determine the best sample interval, the true system order, how much data to use and whether to introduce gaps in the data used, what dimension Hankel matrix to use, and how to limit the bias or correct for bias in the estimates.
Variance Risk Premium Differentials and Foreign Exchange Returns
Arash, Aloosh
2011-01-01
The uncovered interest rate parity does not hold in the foreign exchange market (UIP puzzle). I use the cross-country variance risk premium differential to measure the excess foreign exchange return. Consequently, similar to Bansal and Shaliastovich (2010), I provide a risk-based explanation for the violation of UIP. The empirical results, based on the monthly data of ten currency pairs among US Dollar, UK Pound, Japanese Yen, Euro, and Swiss Franc, support the model both in-sample and out-of...
Minimum Variance Beamforming for High Frame-Rate Ultrasound Imaging
DEFF Research Database (Denmark)
Holfort, Iben Kraglund; Gran, Fredrik; Jensen, Jørgen Arendt
2007-01-01
This paper investigates the application of adaptive beamforming in medical ultrasound imaging. A minimum variance (MV) approach for near-field beamforming of broadband data is proposed. The approach is implemented in the frequency domain, and it provides a set of adapted, complex apodization weights for each frequency sub-band. As opposed to the conventional, Delay and Sum (DS) beamformer, this approach is dependent on the specific data. The performance of the proposed MV beamformer is tested on simulated synthetic aperture (SA) ultrasound data, obtained using Field II. For the simulations...
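A hedged narrowband sketch of the minimum-variance (Capon) weights for one frequency sub-band, w = R^-1 a / (a^H R^-1 a), with diagonal loading for robustness (illustrative only; the paper applies this per sub-band to broadband near-field ultrasound data, which this toy far-field example does not model):

```python
import numpy as np

def mv_weights(r, a, loading=1e-3):
    """Minimum-variance weights for covariance matrix r and steering vector a.
    Diagonal loading (scaled by the average diagonal of r) regularizes the
    inversion; the distortionless constraint a^H w = 1 holds exactly."""
    m = len(a)
    r_loaded = r + loading * (np.trace(r).real / m) * np.eye(m)
    ri_a = np.linalg.solve(r_loaded, a)       # R^-1 a without forming R^-1
    return ri_a / np.vdot(a, ri_a)            # normalize so a^H w = 1

def ds_weights(m):
    """Conventional delay-and-sum apodization, for comparison."""
    return np.ones(m) / m
```

Unlike the fixed DS apodization, these weights adapt to the data covariance, placing nulls toward off-axis energy while passing the steered direction undistorted.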
Analysis of variance tables based on experimental structure.
Brien, C J
1983-03-01
A stepwise procedure for obtaining the experimental structure for a particular experiment is presented together with rules for deriving the analysis-of-variance table from that structure. The procedure involves the division of the factors into groups and is essentially a generalization of the method of Nelder (1965, Proceedings of the Royal Society, Series A 283, 147-162; 1965, Proceedings of the Royal Society, Series A 283, 163-178), to what are termed 'multi-tiered' experiments. The proposed method is illustrated for a wine-tasting experiment. PMID:6871362
A Mean-Variance Portfolio Optimal Under Utility Pricing
Directory of Open Access Journals (Sweden)
Hürlimann Werner
2006-01-01
An expected utility model of asset choice, which takes into account asset pricing, is considered. The obtained portfolio selection problem under utility pricing is solved under several assumptions, including quadratic utility, exponential utility and multivariate symmetric elliptical returns. The obtained unique solution, called the optimal utility portfolio, is shown to be mean-variance efficient in the classical sense. Various questions, including conditions for complete diversification and the behavior of the optimal portfolio under univariate and multivariate ordering of risks as well as risk-adjusted performance measurement, are discussed.
Infinite Variance in Fermion Quantum Monte Carlo Calculations
Shi, Hao
2015-01-01
For important classes of many-fermion problems, quantum Monte Carlo (QMC) methods allow exact calculations of ground-state and finite-temperature properties, without the sign problem. The list spans condensed matter, nuclear physics, and high-energy physics, including the half-filled repulsive Hubbard model, the spin-balanced atomic Fermi gas, lattice QCD calculations at zero density with Wilson Fermions, and is growing rapidly as a number of problems have been discovered recently to be free of the sign problem. In these situations, QMC calculations are relied upon to provide definitive answers. Their results are instrumental to our ability to understand and compute properties in fundamental models important to multiple sub-areas in quantum physics. It is shown, however, that the most commonly employed algorithms in such situations turn out to have an infinite variance problem. A diverging variance causes the estimated Monte Carlo statistical error bar to be incorrect, which can render the results of the calc...
Deterministic mean-variance-optimal consumption and investment
DEFF Research Database (Denmark)
Christiansen, Marcus; Steffensen, Mogens
2013-01-01
In dynamic optimal consumption–investment problems one typically aims to find an optimal control from the set of adapted processes. This is also the natural starting point in case of a mean-variance objective. In contrast, we solve the optimization problem with the special feature that the consumption rate and the investment proportion are constrained to be deterministic processes. As a result we get rid of a series of unwanted features of the stochastic solution including diffusive consumption, satisfaction points and consistency problems. Deterministic strategies typically appear in unit-linked life insurance contracts, where the life-cycle investment strategy is age dependent but wealth independent. We explain how optimal deterministic strategies can be found numerically and present an example from life insurance where we compare the optimal solution with suboptimal deterministic strategies.
Influence of genetic variance on sodium sensitivity of blood pressure.
Luft, F C; Miller, J Z; Weinberger, M H; Grim, C E; Daugherty, S A; Christian, J C
1987-02-01
To examine the effect of genetic variance on blood pressure, sodium homeostasis, and its regulatory determinants, we studied 37 pairs of monozygotic twins and 18 pairs of dizygotic twins under conditions of volume expansion and contraction. We found that, in addition to blood pressure and body size, sodium excretion in response to provocative maneuvers, glomerular filtration rate, the renin-angiotensin system, and the sympathetic nervous system are influenced by genetic variance. To elucidate the interaction of genetic factors and an environmental influence, namely, salt intake, we restricted dietary sodium in 44 families of twin children. In addition to a modest decrease in blood pressure, we found heterogeneous responses in blood pressure indicative of sodium sensitivity and resistance which were normally distributed. Strong parent-offspring resemblances were found in baseline blood pressures which persisted when adjustments were made for age and weight. Further, mother-offspring resemblances were observed in the change in blood pressure with sodium restriction. We conclude that the control of sodium homeostasis is heritable and that the change in blood pressure with sodium restriction is familial as well. These data speak to the interaction between the genetic susceptibility to hypertension and environmental influences which may result in its expression. PMID:3553721
Hydraulic geometry of river cross sections; theory of minimum variance
Williams, Garnett P.
1978-01-01
This study deals with the rates at which mean velocity, mean depth, and water-surface width increase with water discharge at a cross section on an alluvial stream. Such relations often follow power laws, the exponents in which are called hydraulic exponents. The Langbein (1964) minimum-variance theory is examined in regard to its validity and its ability to predict observed hydraulic exponents. The variables used with the theory were velocity, depth, width, bed shear stress, friction factor, slope (energy gradient), and stream power. Slope is often constant, in which case only velocity, depth, width, shear and friction factor need be considered. The theory was tested against a wide range of field data from various geographic areas of the United States. The original theory was intended to produce only the average hydraulic exponents for a group of cross sections in a similar type of geologic or hydraulic environment. The theory does predict these average exponents with a reasonable degree of accuracy. An attempt to forecast the exponents at any selected cross section was moderately successful. Empirical equations are more accurate than the minimum variance, Gauckler-Manning, or Chezy methods. Predictions of the exponent of width are most reliable, the exponent of depth fair, and the exponent of mean velocity poor. (Woodard-USGS)
Cosmic variance in the nanohertz gravitational wave background
Roebber, Elinore; Holz, Daniel; Warren, Michael
2015-01-01
We use large N-body simulations and empirical scaling relations between dark matter halos, galaxies, and supermassive black holes to estimate the formation rates of supermassive black hole binaries and the resulting low-frequency stochastic gravitational wave background (GWB). We find this GWB to be relatively insensitive ($\\lesssim10\\%$) to cosmological parameters, with only slight variation between WMAP5 and Planck cosmologies. We find that uncertainty in the astrophysical scaling relations changes the amplitude of the GWB by a factor of $\\sim 2$. Current observational limits are already constraining this predicted range of models. We investigate the Poisson variance in the amplitude of the GWB for randomly-generated populations of supermassive black holes, finding a scatter of order unity per frequency bin below 10 nHz, and increasing to a factor of $\\sim 10$ near 100 nHz. This variance is a result of the rarity of the most massive binaries, which dominate the signal, and acts as a fundamental uncertainty ...
Argentine Population Genetic Structure: Large Variance in Amerindian Contribution
Seldin, Michael F.; Tian, Chao; Shigeta, Russell; Scherbarth, Hugo R.; Silva, Gabriel; Belmont, John W.; Kittles, Rick; Gamron, Susana; Allevi, Alberto; Palatnik, Simon A.; Alvarellos, Alejandro; Paira, Sergio; Caprarulo, Cesar; Guillerón, Carolina; Catoggio, Luis J.; Prigione, Cristina; Berbotto, Guillermo A.; García, Mercedes A.; Perandones, Carlos E.; Pons-Estel, Bernardo A.; Alarcon-Riquelme, Marta E.
2011-01-01
Argentine population genetic structure was examined using a set of 78 ancestry informative markers (AIMs) to assess the contributions of European, Amerindian, and African ancestry in 94 members of this population. Using the Bayesian clustering algorithm STRUCTURE, the mean European contribution was 78%, the Amerindian contribution was 19.4%, and the African contribution was 2.5%. Similar results were found using a weighted least mean square method: European, 80.2%; Amerindian, 18.1%; and African, 1.7%. Consistent with previous studies, the current results showed very few individuals (four of 94) with greater than 10% African admixture. Notably, when individual admixture was examined, the Amerindian and European admixture showed a very large variance, and individual Amerindian contribution ranged from 1.5 to 84.5% in the 94 individual Argentine subjects. These results indicate that admixture must be considered when clinical epidemiology or case-control genetic analyses are studied in this population. Moreover, the current study provides a set of informative SNPs that can be used to ascertain or control for this potentially hidden stratification. In addition, the large variance in admixture proportions in individual Argentine subjects shown by this study suggests that this population is appropriate for future admixture mapping studies. PMID:17177183
A variance-decomposition approach to investigating multiscale habitat associations
Lawler, J.J.; Edwards, T.C., Jr.
2006-01-01
The recognition of the importance of spatial scale in ecology has led many researchers to take multiscale approaches to studying habitat associations. However, few of the studies that investigate habitat associations at multiple spatial scales have considered the potential effects of cross-scale correlations in measured habitat variables. When cross-scale correlations in such studies are strong, conclusions drawn about the relative strength of habitat associations at different spatial scales may be inaccurate. Here we adapt and demonstrate an analytical technique based on variance decomposition for quantifying the influence of cross-scale correlations on multiscale habitat associations. We used the technique to quantify the variation in nest-site locations of Red-naped Sapsuckers (Sphyrapicus nuchalis) and Northern Flickers (Colaptes auratus) associated with habitat descriptors at three spatial scales. We demonstrate how the method can be used to identify components of variation that are associated only with factors at a single spatial scale as well as shared components of variation that represent cross-scale correlations. Despite the fact that no explanatory variables in our models were highly correlated (r …) … the variance-decomposition technique demonstrated here, for analyzing habitat associations at multiple spatial scales. © The Cooper Ornithological Society 2006.
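The partitioning described above can be sketched with ordinary least squares: fit models with each scale's predictors alone and together, then difference the R² values to obtain pure and shared (cross-scale) components. A minimal sketch, assuming linear habitat models; the variable names and data below are illustrative, not from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical habitat descriptors at two spatial scales (illustrative names).
local = rng.normal(size=(n, 2))                            # e.g. nest-tree scale
landscape = 0.5 * local[:, :1] + rng.normal(size=(n, 2))   # cross-scale correlated
y = local @ [1.0, 0.5] + landscape @ [0.8, 0.0] + rng.normal(size=n)

def r2(X, y):
    """R^2 of an OLS fit with intercept."""
    X = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_local = r2(local, y)
r2_land = r2(landscape, y)
r2_both = r2(np.column_stack([local, landscape]), y)

pure_local = r2_both - r2_land          # explained only at the local scale
pure_land = r2_both - r2_local          # explained only at the landscape scale
shared = r2_local + r2_land - r2_both   # cross-scale (shared) component
```

The shared component is exactly what is inflated by cross-scale correlation; the three components sum to the full-model R² by construction.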
Variance of the Quantum Dwell Time for a Nonrelativistic Particle
Hahne, Gerhard
2012-01-01
Munoz, Seidel, and Muga [Phys. Rev. A 79, 012108 (2009)], following an earlier proposal by Pollak and Miller [Phys. Rev. Lett. 53, 115 (1984)] in the context of a theory of a collinear chemical reaction, showed that suitable moments of a two-flux correlation function could be manipulated to yield expressions for the mean quantum dwell time and mean square quantum dwell time for a structureless particle scattering from a time-independent potential energy field between two parallel lines in a two-dimensional spacetime. The present work proposes a generalization to a charged, nonrelativistic particle scattering from a transient, spatially confined electromagnetic vector potential in four-dimensional spacetime. The geometry of the spacetime domain is that of the slab between a pair of parallel planes, in particular those defined by constant values of the third (z) spatial coordinate. The mean Nth power, N = 1, 2, 3, …, of the quantum dwell time in the slab is given by an expression involving an N-flux-correlation function. All these means are shown to be nonnegative. The N = 1 formula reduces to an S-matrix result published previously [G. E. Hahne, J. Phys. A 36, 7149 (2003)]; an explicit formula for N = 2, and hence for the variance of the dwell time in terms of the S-matrix, is worked out. A formula representing an incommensurability principle between variances of the output-minus-input flux of a pair of dynamical variables (such as the particle's time flux and others) is derived.
Constraining the local variance of $H_0$ from directional analyses
Bengaly, C A P
2016-01-01
We evaluate the local variance of the Hubble Constant $H_0$ with low-z Type Ia Supernovae (SNe). Our analyses are performed using a hemispherical comparison procedure to test whether the bulk flow motion can reconcile the measurement of the Hubble Constant $H_0$ from standard candles ($H_0 = 73.8 \\pm 2.4 \\; \\mathrm{km \\; s}^{-1}\\; \\mathrm{Mpc}^{-1}$) with that of the Planck's Cosmic Microwave Background data ($67.8 \\pm 0.9 \\; \\mathrm{km \\; s}^{-1} \\mathrm{Mpc}^{-1}$). We obtain that $H_0$ ranges from $68.9 \\pm 0.5 \\; \\mathrm{km \\; s}^{-1} \\mathrm{Mpc}^{-1}$ to $71.2 \\pm 0.7 \\; \\mathrm{km \\; s}^{-1} \\mathrm{Mpc}^{-1}$ through the celestial sphere, with maximal dipolar anisotropy towards the $(l,b) = (315^{\\circ},27^{\\circ})$ direction. Interestingly, this result is in good agreement with both $H_0$ estimations, as well as the bulk flow direction reported in the literature. In addition, we assess the statistical significance of this variance with different prescriptions of Monte Carlo simulations, finding a goo...
Cosmic variance of the spectral index from mode coupling
Bramante, Joseph; Kumar, Jason; Nelson, Elliot; Shandera, Sarah
2013-11-01
We demonstrate that local, scale-dependent non-Gaussianity can generate cosmic variance uncertainty in the observed spectral index of primordial curvature perturbations. In a universe much larger than our current Hubble volume, locally unobservable long wavelength modes can induce a scale-dependence in the power spectrum of typical subvolumes, so that the observed spectral index varies at a cosmologically significant level (|Δn_s| ∼ O(0.04)). Similarly, we show that the observed bispectrum can have an induced scale dependence that varies about the global shape. If tensor modes are coupled to long wavelength modes of a second field, the locally observed tensor power and spectral index can also vary. All of these effects, which can be introduced in models where the observed non-Gaussianity is consistent with bounds from the Planck satellite, loosen the constraints that observations place on the parameters of theories of inflation with mode coupling. We suggest observational constraints that future measurements could aim for to close this window of cosmic variance uncertainty.
Hidden temporal order unveiled in stock market volatility variance
Directory of Open Access Journals (Sweden)
Y. Shapira
2011-06-01
When analyzed by standard statistical methods, the time series of the daily returns of financial indices appear to behave as Markov random series with no apparent temporal order or memory. This empirical result seems counterintuitive, since investors are influenced by both short- and long-term past market behavior. Consequently, much effort has been devoted to unveiling hidden temporal order in market dynamics. Here we show that temporal order is hidden in the series of the variance of the stocks' volatility. First we show that the correlation between the variances of the daily returns and the means of segments of these time series is very large, and thus cannot be the output of a random series unless it has some temporal order in it. Next we show that the temporal order does not show up in the series of daily returns itself, but rather in the variation of the corresponding volatility series. More specifically, we found that the behavior of the shuffled time series is equivalent to that of a random time series, while the original time series has large deviations from the expected random behavior, which is the result of temporal structure. We found the same generic behavior in 10 different stock markets from 7 different countries. We also present an analysis of specially constructed sequences in order to better understand the origin of the observed temporal order in the market sequences. Each sequence was constructed from segments with an equal number of elements taken from algebraic distributions with three different slopes.
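The shuffling comparison described above can be illustrated on synthetic data: a strongly autocorrelated series (an AR(1) process, standing in for the volatility series; an assumption, not the paper's data) shows a much larger variance of segment means than its shuffled counterpart:

```python
import numpy as np

rng = np.random.default_rng(42)

# Surrogate series with temporal order: an AR(1) process with phi close to 1.
n, phi = 20000, 0.95
eps = rng.normal(size=n)
x = np.empty(n)
x[0] = eps[0]
for t in range(1, n):
    x[t] = phi * x[t - 1] + eps[t]

def segment_mean_variance(series, seg_len=100):
    """Variance of the means of consecutive, non-overlapping segments."""
    segs = series[: len(series) // seg_len * seg_len].reshape(-1, seg_len)
    return segs.mean(axis=1).var()

v_orig = segment_mean_variance(x)
v_shuf = segment_mean_variance(rng.permutation(x))
# Temporal order inflates the variance of segment means;
# shuffling destroys the order and collapses it toward sigma^2 / seg_len.
```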
Facial Feature Extraction Method Based on Coefficients of Variances
Institute of Scientific and Technical Information of China (English)
Feng-Xi Song; David Zhang; Cai-Kou Chen; Jing-Yu Yang
2007-01-01
Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) are two popular feature extraction techniques in the statistical pattern recognition field. Due to the small sample size problem, LDA cannot be directly applied to appearance-based face recognition tasks. As a consequence, many LDA-based facial feature extraction techniques have been proposed to deal with this problem. The Nullspace Method is one of the most effective among them. It tries to find a set of discriminant vectors that maximize the between-class scatter in the null space of the within-class scatter matrix. The calculation of its discriminant vectors involves performing singular value decomposition on a high-dimensional matrix, which is generally memory- and time-consuming. Borrowing the key idea of the Nullspace Method and the concept of the coefficient of variance in statistical analysis, we present a novel facial feature extraction method, Discriminant based on Coefficient of Variance (DCV). Experimental results on the FERET and AR face image databases demonstrate that DCV is a promising technique in comparison with Eigenfaces, the Nullspace Method, and other state-of-the-art facial feature extraction methods.
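As a rough illustration of the coefficient-of-variance idea (not the authors' DCV algorithm, which combines it with null-space discriminant analysis), one can score each extracted feature by its coefficient of variation and rank features accordingly; the feature matrix below is synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "feature matrix": rows are samples, columns are extracted features,
# deliberately given different spreads around a common mean of 10.
X = rng.normal(loc=[10.0, 10.0, 10.0], scale=[0.1, 1.0, 3.0], size=(200, 3))

# Coefficient of variation per feature: std / |mean|.  A larger value means
# the feature varies more relative to its magnitude (a scale-free spread measure).
cv = X.std(axis=0) / np.abs(X.mean(axis=0))
ranking = np.argsort(cv)[::-1]   # most variable feature first
```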
Risk Management - Variance Minimization or Lower Tail Outcome Elimination
DEFF Research Database (Denmark)
Aabo, Tom
2002-01-01
This paper illustrates the profound difference between a risk management strategy of variance minimization and a risk management strategy of lower tail outcome elimination. Risk managers concerned about the variability of cash flows will tend to center their hedge decisions on their best guess on future cash flows (the budget), while risk managers concerned about costly lower tail outcomes will hedge (considerably) less depending on the level of uncertainty. A risk management strategy of lower tail outcome elimination is in line with theoretical recommendations in a corporate value-adding perspective. A cross-case study of blue-chip industrial companies partly supports the empirical use of a risk management strategy of lower tail outcome elimination but does not exclude other factors from (co-)driving the observations.
Variance Estimation In Domain Decomposed Monte Carlo Eigenvalue Calculations
International Nuclear Information System (INIS)
The number of tallies performed in a given Monte Carlo calculation is limited in most modern Monte Carlo codes by the amount of memory that can be allocated on a single processor. By using domain decomposition, the calculation is now limited by the total amount of memory available on all processors, allowing for significantly more tallies to be performed. However, decomposing the problem geometry introduces significant issues with the way tally statistics are conventionally calculated. In order to deal with the issue of calculating tally variances in domain decomposed environments for the Shift hybrid Monte Carlo code, this paper presents an alternative approach for reactor scenarios in which an assumption is made that once a particle leaves a domain, it does not reenter the domain. Particles that reenter the domain are instead treated as separate independent histories. This assumption introduces a bias that inevitably leads to under-prediction of the calculated variances for tallies within a few mean free paths of the domain boundaries. However, through the use of different decomposition strategies, primarily overlapping domains, the negative effects of such an assumption can be significantly reduced to within reasonable levels.
Mean-Variance-Validation Technique for Sequential Kriging Metamodels
Energy Technology Data Exchange (ETDEWEB)
Lee, Tae Hee; Kim, Ho Sung [Hanyang University, Seoul (Korea, Republic of)
2010-05-15
The rigorous validation of the accuracy of metamodels is an important topic in research on metamodel techniques. Although a leave-k-out cross-validation technique involves a considerably high computational cost, it cannot be used to measure the fidelity of metamodels. Recently, the mean_0 validation technique has been proposed to quantitatively determine the accuracy of metamodels. However, the use of the mean_0 validation criterion may lead to premature termination of a sampling process even if the kriging model is inaccurate. In this study, we propose a new validation technique based on the mean and variance of the response evaluated when a sequential sampling method, such as maximum entropy sampling, is used. The proposed validation technique is more efficient and accurate than the leave-k-out cross-validation technique because, instead of performing numerical integration, the kriging model is explicitly integrated to accurately evaluate the mean and variance of the response. The error in the proposed validation technique resembles a root mean squared error, so it can be used to determine a stop criterion for sequential sampling of metamodels.
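A minimal kriging sketch shows where the predicted mean and variance come from: this is a generic zero-mean Gaussian-process predictor with an assumed squared-exponential kernel, not the authors' implementation or validation criterion:

```python
import numpy as np

def rbf(a, b, ell=0.3):
    """Squared-exponential (RBF) kernel between two 1-D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def krige(x_train, y_train, x_new, noise=1e-8):
    """Posterior mean and variance of a zero-mean GP (simple kriging)."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    k = rbf(x_train, x_new)
    alpha = np.linalg.solve(K, y_train)
    mean = k.T @ alpha
    # Prior variance minus the variance explained by the training points.
    var = rbf(x_new, x_new).diagonal() - np.einsum("ij,ij->j", k, np.linalg.solve(K, k))
    return mean, var

x = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
y = np.sin(2 * np.pi * x)
mean, var = krige(x, y, np.array([0.1, 0.5]))
# High predicted variance flags where a sequential sampler should add points;
# low variance everywhere can serve as a stop criterion.
```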
Determining the frame of minimum Hubble expansion variance
McKay, James H
2015-01-01
We characterize a cosmic rest frame in which the variation of the spherically averaged Hubble expansion is most uniform, under local Lorentz boosts of the central observer. Using the COMPOSITE sample of 4534 galaxies, we identify a degenerate set of candidate minimum variance frames, which includes the rest frame of the Local Group (LG) of galaxies, but excludes the standard Cosmic Microwave Background (CMB) frame. Candidate rest frames defined by a boost from the LG frame close to the plane of the galaxy have a statistical likelihood similar to the LG frame. This may result from a lack of constraining data in the Zone of Avoidance in the COMPOSITE sample. We extend our analysis to the Cosmicflows-2 (CF2) sample of 8,162 galaxies. While the signature of a systematic boost offset between the CMB and LG frame averages is still detected, the spherically averaged expansion variance in all rest frames is significantly larger in the CF2 sample than would be reasonably expected. We trace this to an omission of any ...
Waste Isolation Pilot Plant no-migration variance petition
International Nuclear Information System (INIS)
Section 3004 of RCRA allows EPA to grant a variance from the land disposal restrictions when a demonstration can be made that, to a reasonable degree of certainty, there will be no migration of hazardous constituents from the disposal unit for as long as the waste remains hazardous. Specific requirements for making this demonstration are found in 40 CFR 268.6, and EPA has published a draft guidance document to assist petitioners in preparing a variance request. Throughout the course of preparing this petition, technical staff from DOE, EPA, and their contractors have met frequently to discuss and attempt to resolve issues specific to radioactive mixed waste and the WIPP facility. The DOE believes it meets or exceeds all requirements set forth for making a successful "no-migration" demonstration. The petition presents information under five general headings: (1) waste information; (2) site characterization; (3) facility information; (4) assessment of environmental impacts, including the results of waste mobility modeling; and (5) analysis of uncertainties. Additional background and supporting documentation is contained in the 15 appendices to the petition, as well as in an extensive addendum published in October 1989
Cosmic variance of the spectral index from mode coupling
Energy Technology Data Exchange (ETDEWEB)
Bramante, Joseph; Kumar, Jason [Department of Physics and Astronomy, University of Hawaii, 2505 Correa Rd., Honolulu HI (United States); Nelson, Elliot; Shandera, Sarah, E-mail: bramante@hawaii.edu, E-mail: jkumar@hawaii.edu, E-mail: eln121@psu.edu, E-mail: shandera@gravity.psu.edu [Institute for Gravitation and the Cosmos, The Pennsylvania State University, University Park, PA 16802 (United States)
2013-11-01
We demonstrate that local, scale-dependent non-Gaussianity can generate cosmic variance uncertainty in the observed spectral index of primordial curvature perturbations. In a universe much larger than our current Hubble volume, locally unobservable long wavelength modes can induce a scale-dependence in the power spectrum of typical subvolumes, so that the observed spectral index varies at a cosmologically significant level (|Δn_s| ∼ O(0.04)). Similarly, we show that the observed bispectrum can have an induced scale dependence that varies about the global shape. If tensor modes are coupled to long wavelength modes of a second field, the locally observed tensor power and spectral index can also vary. All of these effects, which can be introduced in models where the observed non-Gaussianity is consistent with bounds from the Planck satellite, loosen the constraints that observations place on the parameters of theories of inflation with mode coupling. We suggest observational constraints that future measurements could aim for to close this window of cosmic variance uncertainty.
PET image reconstruction: mean, variance, and optimal minimax criterion
Liu, Huafeng; Gao, Fei; Guo, Min; Xue, Liying; Nie, Jing; Shi, Pengcheng
2015-04-01
Given the noisy nature of positron emission tomography (PET) measurements, it is critical to know the image quality and reliability as well as the expected radioactivity map (mean image) for both qualitative interpretation and quantitative analysis. While existing efforts have often been devoted to providing only the reconstructed mean image, we present a unified framework for joint estimation of the mean and corresponding variance of the radioactivity map based on an efficient optimal min-max criterion. The proposed framework formulates the PET image reconstruction problem as a transformation from system uncertainties to estimation errors, where the minimax criterion is adopted to minimize the estimation errors with possibly maximized system uncertainties. The estimation errors, in the form of a covariance matrix, express the measurement uncertainties in a complete way. The framework is then optimized by ∞-norm optimization and solved with the corresponding H∞ filter. Unlike conventional statistical reconstruction algorithms that rely on statistical modeling of the measurement data or noise, the proposed joint estimation stands from the point of view of signal energies and can handle anything from imperfect statistical assumptions to no a priori statistical assumptions at all. The performance and accuracy of the reconstructed mean and variance images are validated using Monte Carlo simulations. Experiments on phantom scans with a small animal PET scanner and real patient scans are also conducted for assessment of clinical potential.
Dominance Genetic Variance for Traits Under Directional Selection in Drosophila serrata
Sztepanacz, Jacqueline L.; Blows, Mark W.
2015-01-01
In contrast to our growing understanding of patterns of additive genetic variance in single- and multi-trait combinations, the relative contribution of nonadditive genetic variance, particularly dominance variance, to multivariate phenotypes is largely unknown. While mechanisms for the evolution of dominance genetic variance have been, and to some degree remain, subject to debate, the pervasiveness of dominance is widely recognized and may play a key role in several evolutionary processes. Th...
Local orbitals by minimizing powers of the orbital variance
DEFF Research Database (Denmark)
Jansik, Branislav; Høst, Stinne; Kristensen, Kasper;
2011-01-01
It is demonstrated that a set of local orthonormal Hartree–Fock (HF) molecular orbitals can be obtained for both the occupied and virtual orbital spaces by minimizing powers of the orbital variance using the trust-region algorithm. For a power exponent equal to one, the Boys localization function is obtained. For increasing power exponents, the penalty for delocalized orbitals is increased and smaller maximum orbital spreads are encountered. Calculations on superbenzene, C60, and a fragment of the titin protein show that for a power exponent equal to one, delocalized outlier orbitals may be encountered. These disappear when the exponent is larger than one. For a small penalty, the occupied orbitals are more local than the virtual ones. When the penalty is increased, the locality of the occupied and virtual orbitals becomes similar. In fact, when increasing the cardinal number for Dunning...
INTERPRETING MAGNETIC VARIANCE ANISOTROPY MEASUREMENTS IN THE SOLAR WIND
Energy Technology Data Exchange (ETDEWEB)
TenBarge, J. M.; Klein, K. G.; Howes, G. G. [Department of Physics and Astronomy, University of Iowa, Iowa City, IA (United States); Podesta, J. J., E-mail: jason-tenbarge@uiowa.edu [Space Science Institute, Boulder, CO (United States)
2012-07-10
The magnetic variance anisotropy (A_m) of the solar wind has been used widely as a method to identify the nature of solar wind turbulent fluctuations; however, a thorough discussion of the meaning and interpretation of the A_m has not appeared in the literature. This paper explores the implications and limitations of using the A_m as a method for constraining the solar wind fluctuation mode composition and presents a more informative method for interpreting spacecraft data. The paper also compares predictions of the A_m from linear theory to nonlinear turbulence simulations and solar wind measurements. In both cases, linear theory compares well and suggests that the solar wind for the interval studied is dominantly Alfvénic in the inertial and dissipation ranges to scales of kρ_i ≈ 5.
Estimating discharge measurement uncertainty using the interpolated variance estimator
Cohn, T.; Kiang, J.; Mason, R.
2012-01-01
Methods for quantifying the uncertainty in discharge measurements typically identify various sources of uncertainty and then estimate the uncertainty from each of these sources by applying the results of empirical or laboratory studies. If actual measurement conditions are not consistent with those encountered in the empirical or laboratory studies, these methods may give poor estimates of discharge uncertainty. This paper presents an alternative method for estimating discharge measurement uncertainty that uses statistical techniques and at-site observations. This Interpolated Variance Estimator (IVE) estimates uncertainty based on the data collected during the streamflow measurement and therefore reflects the conditions encountered at the site. The IVE has the additional advantage of capturing all sources of random uncertainty in the velocity and depth measurements. It can be applied to velocity-area discharge measurements that use a velocity meter to measure point velocities at multiple vertical sections in a channel cross section.
Simulated central apnea detection using the pressure variance.
Townsend, Daphne I; Holtzman, Megan; Goubran, Rafik; Frize, Monique; Knoefel, Frank
2009-01-01
This paper presents the use of an unobtrusive pressure sensor array for simulated central apnea detection. Data were collected from seven volunteers who performed a series of regular breathing and breath-holding exercises to simulate central apneas. Results of the feature extraction from the breathing signals show that breathing events may be differentiated with epoch-based variance calculations. Two approaches were considered: the single-sensor approach and the multisensor vote approach. The multisensor vote approach can decrease false positives and increase the value of the Matthews correlation coefficient. The effect of lying position on correct classification was investigated by modifying the multisensor vote approach to reduce false-positive segments caused by the ballistocardiogram signal and thereby increase sensitivity while maintaining a low false-positive rate. Intersubject classification results had low variability in both approaches. PMID:19964320
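The epoch-based variance idea can be sketched on a simulated pressure signal: variance per fixed-length epoch is high during breathing and collapses during a breath hold. The sampling rate, epoch length, and threshold below are assumptions for illustration, not the paper's parameters:

```python
import numpy as np

fs = 20                          # sampling rate in Hz (assumed)
t = np.arange(0, 60, 1 / fs)

# Simulated pressure signal: regular breathing (~15 breaths/min),
# then a 20 s breath hold standing in for a central apnea.
breathing = np.sin(2 * np.pi * 0.25 * t)
breathing[(t >= 20) & (t < 40)] = 0.0
signal = breathing + 0.05 * np.random.default_rng(7).normal(size=t.size)

epoch = 5 * fs                   # 5 s epochs (assumed)
var_per_epoch = signal[: t.size // epoch * epoch].reshape(-1, epoch).var(axis=1)
apnea_flags = var_per_epoch < 0.1    # hypothetical threshold
```

With a sensor array, the same per-epoch decision would be taken on each sensor and combined by majority vote, which is the spirit of the multisensor approach described above.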
MARKOV-MODULATED MEAN-VARIANCE PROBLEM FOR AN INSURER
Institute of Scientific and Technical Information of China (English)
Wang Wei; Bi Junna
2011-01-01
In this paper, we consider an insurance company which has the option of investing in a risky asset and a risk-free asset, whose price parameters are driven by a finite state Markov chain. The risk process of the insurance company is modeled as a diffusion process whose diffusion and drift parameters switch over time according to the same Markov chain. We study the Markov-modulated mean-variance problem for the insurer and derive explicitly the closed form of the efficient strategy and efficient frontier. In the case of no regime switching, we can see that the efficient frontier in our paper coincides with that of [10] when there is no pure jump.
Variance estimation for the Federal Waterfowl Harvest Surveys
Geissler, P.H.
1988-01-01
The Federal Waterfowl Harvest Surveys provide estimates of waterfowl harvest by species for flyways and states, harvests of most other migratory game bird species (by waterfowl hunters), crippling losses for ducks, geese, and coots, days hunted, and bag per hunter. The Waterfowl Hunter Questionnaire Survey separately estimates the harvest of ducks and geese using cluster samples of hunters who buy duck stamps at sample post offices. The Waterfowl Parts Collection estimates species, age, and sex ratios from parts solicited from successful hunters who responded to the Waterfowl Hunter Questionnaire Survey in previous years. These ratios are used to partition the duck and goose harvest into species, age, and sex specific harvest estimates. Annual estimates are correlated because successful hunters who respond to the Questionnaire Survey in one year may be asked to contribute to the Parts Collection for the next three years. Bootstrap variance estimates are used because covariances among years are difficult to estimate.
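The bootstrap variance estimation mentioned above can be sketched for a simple bag-per-hunter estimator; the data are simulated, not survey data, and the resampling is over hunters rather than the survey's actual cluster structure:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical sample: ducks bagged by each of 300 responding hunters.
bags = rng.poisson(lam=4, size=300)

def bag_per_hunter(sample):
    """The point estimator: mean bag per hunter."""
    return sample.mean()

# Bootstrap: resample hunters with replacement, re-estimate, repeat.
B = 2000
boot = np.array([bag_per_hunter(rng.choice(bags, size=bags.size, replace=True))
                 for _ in range(B)])
boot_var = boot.var(ddof=1)      # bootstrap variance of the estimator
se = np.sqrt(boot_var)           # bootstrap standard error
```

Resampling whole sampling units is what lets the bootstrap absorb covariances that are hard to write down analytically, which is the motivation given in the abstract.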
Errors in radial velocity variance from Doppler wind lidar
Wang, H.; Barthelmie, R. J.; Doubrawa, P.; Pryor, S. C.
2016-08-01
A high-fidelity lidar turbulence measurement technique relies on accurate estimates of radial velocity variance that are subject to both systematic and random errors determined by the autocorrelation function of radial velocity, the sampling rate, and the sampling duration. Using both statistically simulated and observed data, this paper quantifies the effect of the volumetric averaging in lidar radial velocity measurements on the autocorrelation function and the dependence of the systematic and random errors on the sampling duration. For current-generation scanning lidars and sampling durations of about 30 min and longer, during which the stationarity assumption is valid for atmospheric flows, the systematic error is negligible but the random error exceeds about 10 %.
Interdependence of NAFTA capital markets: A minimum variance portfolio approach
Directory of Open Access Journals (Sweden)
López-Herrera Francisco
2014-01-01
Full Text Available We estimate the long-run relationships among NAFTA capital market returns and then calculate the weights of a “time-varying minimum variance portfolio” that includes the Canadian, Mexican, and USA capital markets between March 2007 and March 2009, a period of intense turbulence in international markets. Our results suggest that the behavior of NAFTA market investors is not consistent with that of a theoretical “risk-averse” agent during periods of high uncertainty and may be either considered as irrational or attributed to a possible “home country bias”. This finding represents valuable information for portfolio managers and contributes to a better understanding of the nature of the markets in which they invest. It also has practical implications in the design of international portfolio investment policies.
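The minimum-variance portfolio underlying the abstract has a standard closed form, w = C⁻¹1 / (1ᵀC⁻¹1), given a return covariance matrix C. A sketch with a hypothetical covariance matrix for the three NAFTA markets (the numbers are illustrative, not estimates from the paper):

```python
import numpy as np

def min_variance_weights(cov):
    """Global minimum-variance weights: w = C^{-1} 1 / (1' C^{-1} 1)."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)   # solve C w = 1 rather than inverting C
    return w / w.sum()

# Hypothetical covariance of Canadian, Mexican, and US market returns.
cov = np.array([[0.040, 0.018, 0.012],
                [0.018, 0.090, 0.015],
                [0.012, 0.015, 0.025]])
w = min_variance_weights(cov)
print(w, float(w @ cov @ w))  # weights sum to 1; variance below any single market
```

A "time-varying" version as in the paper would re-estimate C on a rolling window and recompute the weights at each date.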
Estimation of population variance in contributon Monte Carlo
International Nuclear Information System (INIS)
Based on the theory of contributons, a new Monte Carlo method known as the contributon Monte Carlo method has recently been developed. The method has found applications in several practical shielding problems. The authors analyze theoretically the variance and efficiency of the new method by taking moments around the score. In order to compare the contributon game with a game of simple geometrical splitting, and also to find the optimal placement of the contributon volume, the moment equations were solved numerically for a one-dimensional, one-group problem using a 10-mfp-thick homogeneous slab. It is found that the optimal placement of the contributon volume is adjacent to the detector; even at its optimum, the contributon Monte Carlo method is less efficient than geometrical splitting
A comparison between temporal and subband minimum variance adaptive beamforming
DEFF Research Database (Denmark)
Diamantis, Konstantinos; Voxen, Iben Holfort; Greenaway, Alan H.;
2014-01-01
This paper compares the performance between temporal and subband Minimum Variance (MV) beamformers for medical ultrasound imaging. Both adaptive methods provide an optimized set of apodization weights but are implemented in the time and frequency domains respectively. Their performance is evaluated with simulated synthetic aperture data obtained from Field II and is quantified by the Full-Width-Half-Maximum (FWHM), the Peak-Side-Lobe level (PSL) and the contrast level. From a point phantom, a full sequence of 128 emissions with one transducer element transmitting and all 128 elements receiving ... From a cyst phantom and for 128 emissions, the contrast level is calculated at -54 dB and -63 dB respectively at the same depth, with the initial shape of the cyst being preserved in contrast to conventional beamforming. The difference between the two adaptive beamformers is ...
Batch variation between branchial cell cultures: An analysis of variance
DEFF Research Database (Denmark)
Hansen, Heinz Johs. Max; Grosell, M.; Kristensen, L.
2003-01-01
We present in detail how a statistical analysis of variance (ANOVA) is used to sort out the effect of an unexpected batch-to-batch variation between cell cultures. Two separate cultures of rainbow trout branchial cells were grown on permeable filter supports ("inserts"). They were supposed ... and introducing the observed difference between batches as one of the factors in an expanded three-dimensional ANOVA, we were able to overcome an otherwise crucial lack of sufficiently reproducible duplicate values. We could thereby show that the effect of changing the apical medium was much more marked when ... the radioactive lipid precursors were added on the apical, rather than on the basolateral, side. The insert cell cultures were obviously polarized. We argue that it is not reasonable to reject troublesome experimental results, when we do not know a priori that something went wrong. The ANOVA is a very useful ...
Correct use of repeated measures analysis of variance.
Park, Eunsik; Cho, Meehye; Ki, Chang-Seok
2009-02-01
In biomedical research, researchers frequently use statistical procedures such as the t-test, standard analysis of variance (ANOVA), or the repeated measures ANOVA to compare means between the groups of interest. There are frequently some misuses in applying these procedures since the conditions of the experiments or statistical assumptions necessary to apply these procedures are not fully taken into consideration. In this paper, we demonstrate the correct use of repeated measures ANOVA to prevent or minimize ethical or scientific problems due to its misuse. We also describe the appropriate use of multiple comparison tests for follow-up analysis in repeated measures ANOVA. Finally, we demonstrate the use of repeated measures ANOVA by using real data and the statistical software package SPSS (SPSS Inc., USA).
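The one-way repeated measures ANOVA discussed above can be computed by partitioning the total sum of squares into condition, subject, and residual components, so that between-subject variability is removed from the error term. A minimal sketch on hypothetical data (the paper itself uses SPSS; this is only the underlying arithmetic):

```python
import numpy as np

def repeated_measures_anova(data):
    """One-way repeated measures ANOVA.
    data: (n_subjects, k_conditions) array. Returns (F, df1, df2)."""
    n, k = data.shape
    grand = data.mean()
    ss_cond = n * ((data.mean(axis=0) - grand) ** 2).sum()   # between conditions
    ss_subj = k * ((data.mean(axis=1) - grand) ** 2).sum()   # between subjects (removed)
    ss_total = ((data - grand) ** 2).sum()
    ss_error = ss_total - ss_cond - ss_subj                  # residual
    df1, df2 = k - 1, (n - 1) * (k - 1)
    F = (ss_cond / df1) / (ss_error / df2)
    return F, df1, df2

# Hypothetical measurements: 6 subjects each measured under 3 conditions.
rng = np.random.default_rng(2)
subject_effect = rng.normal(0.0, 1.0, size=(6, 1))
condition_effect = np.array([0.0, 0.5, 1.0])
data = subject_effect + condition_effect + rng.normal(0.0, 0.3, size=(6, 3))
F, df1, df2 = repeated_measures_anova(data)
print(F, df1, df2)
```

Because the large subject-to-subject differences go into ss_subj rather than ss_error, the design detects the condition effect that an ordinary between-groups ANOVA on the same data might miss.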
From Means and Variances to Persons and Patterns
Directory of Open Access Journals (Sweden)
James W Grice
2015-07-01
Full Text Available A novel approach for conceptualizing and analyzing data from psychological studies is presented and discussed. This approach is centered on model building in an effort to explicate the structures and processes believed to generate a set of observations. These models therefore go beyond the variable-based path models in use today, which limit the types of inferences psychologists can draw from their research. In terms of analysis, the newer approach replaces traditional aggregate statistics such as means, variances, and covariances with methods of pattern detection and analysis. While these methods are person-centered and do not require parametric assumptions, they are both demanding and rigorous. They also provide psychologists with the information needed to draw the primary inference they often wish to make from their research; namely, the inference to the best explanation.
Analysis of variance of an underdetermined geodetic displacement problem
Energy Technology Data Exchange (ETDEWEB)
Darby, D.
1982-06-01
It has been suggested recently that point displacements in a free geodetic network traversing a strike-slip fault may be estimated from repeated surveys by minimizing only those displacement components normal to the strike. It is desirable to justify this procedure. We construct, from estimable quantities, a deformation parameter which is an F-statistic of the type occurring in the analysis of variance of linear models not of full rank. A test of its significance provides the criterion to justify the displacement solution. It is also interesting to study its behaviour as one varies the supposed strike of the fault. Justification of a displacement solution using data from a strike-slip fault is found, but not for data from a rift valley. The technique can be generalized to more complex patterns of deformation such as those expected near the end-zone of a fault in a dislocation model.
Discretely sampled variance and volatility swaps versus their continuous approximations
Jarrow, Robert; Larsson, Martin; Protter, Philip
2011-01-01
Discretely sampled variance and volatility swaps trade actively in OTC markets. To price these swaps, the continuously sampled approximation is often used to simplify the computations. The purpose of this paper is to study the conditions under which this approximation is valid. Our first set of theorems characterize the conditions under which the discretely sampled swap values are finite, given that the values of the continuous approximations exist. Surprisingly, for some otherwise reasonable price processes, the discretely sampled swap prices do not exist, thereby invalidating the approximation. Examples are provided. Assuming further that both swap values exist, we study sufficient conditions under which the discretely sampled values converge to their continuous counterparts. Because of its popularity in the literature, we apply our theorems to the 3/2 stochastic volatility model. Although we can show finiteness of all swap values, we can prove convergence of the approximation only for some parameter values.
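The object being approximated is the discretely sampled realized variance, the annualized sum of squared log returns over the swap's life. A sketch comparing it to the integrated variance it approximates, under the benign case of a constant-volatility geometric Brownian motion (all parameters hypothetical):

```python
import numpy as np

def realized_variance(prices, dt):
    """Annualized discretely sampled realized variance from a price path."""
    log_ret = np.diff(np.log(prices))
    return (log_ret ** 2).sum() / (len(log_ret) * dt)

# GBM with constant volatility: the discrete estimate should approach sigma^2.
rng = np.random.default_rng(3)
sigma, mu, dt, n = 0.2, 0.05, 1 / 252, 252 * 4          # daily sampling, 4 years
increments = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
prices = 100.0 * np.exp(np.cumsum(np.concatenate([[0.0], increments])))
rv = realized_variance(prices, dt)
print(rv, sigma**2)  # discrete and continuous values are close here
```

The paper's point is that this closeness can fail: for some price processes (e.g. certain parameter regions of the 3/2 model) the discretely sampled value may not even be finite, so the continuous approximation cannot be taken for granted.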
Variance of indoor radon concentration: Major influencing factors.
Yarmoshenko, I; Vasilyev, A; Malinovsky, G; Bossew, P; Žunić, Z S; Onischenko, A; Zhukovsky, M
2016-01-15
Variance of radon concentration in the dwelling atmosphere is analysed with regard to geogenic and anthropogenic influencing factors. The analysis includes a review of 81 national and regional indoor radon surveys with varying sampling pattern, sample size and duration of measurements, and a detailed consideration of two regional surveys (Sverdlovsk oblast, Russia and Niška Banja, Serbia). The analysis of the geometric standard deviation (GSD) revealed that the main factors influencing the dispersion of indoor radon concentration over a territory are as follows: area of the territory, sample size, characteristics of the measurement technique, the radon geogenic potential, building construction characteristics and living habits. As shown for Sverdlovsk oblast and the town of Niška Banja, the dispersion as quantified by the GSD is reduced by restricting to certain levels of control factors. Application of the developed approach to characterization of the radon exposure of the world population is discussed.
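The dispersion measure used above, the geometric standard deviation, is the exponential of the standard deviation of the log-transformed concentrations, which is natural for the approximately lognormal indoor radon distribution. A sketch on hypothetical lognormal data (the GM and GSD values are illustrative, not survey results):

```python
import numpy as np

def gsd(values):
    """Geometric standard deviation of a positive-valued sample."""
    logs = np.log(values)
    return np.exp(logs.std(ddof=1))

# Hypothetical lognormal radon concentrations (Bq/m^3): GM = 40, GSD = 2.5.
rng = np.random.default_rng(4)
radon = np.exp(rng.normal(np.log(40.0), np.log(2.5), size=5000))
print(gsd(radon))  # recovers a value close to 2.5
```

Restricting the sample to one level of a control factor (e.g. one building type) shrinks the spread of the logs and hence the GSD, which is the effect the survey comparison quantifies.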
IGARCH and variance change in the U.S. long-run interest rate
Ph.H.B.F. Franses (Philip Hans)
1995-01-01
Shows that a one-time variance change in the long-run interest rate spuriously suggests that it can be described with an IGARCH process. Detection of variance change using a statistical test; correlation of variance with a change in monetary policy; characteristics of ARCH-type processes.
A Bound on the Variance of the Waiting Time in a Queueing System
Eschenfeldt, Patrick; Pippenger, Nicholas
2011-01-01
Kingman has shown, under very weak conditions on the interarrival- and service-time distributions, that First-Come-First-Served minimizes the variance of the waiting time among possible service disciplines. We show, under the same conditions, that Last-Come-First-Served maximizes the variance of the waiting time, thereby giving an upper bound on the variance among all disciplines.
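The FCFS/LCFS ordering of waiting-time variance can be seen in simulation. A sketch of a single-server queue where the only difference between the two runs is which waiting customer is served next (the arrival and service rates are hypothetical; both disciplines share the same mean wait, only the variance differs):

```python
import numpy as np

def waiting_times(arrivals, services, discipline):
    """Waiting times in a single-server queue under 'fcfs' or non-preemptive 'lcfs'."""
    queue, waits = [], []
    t_free = 0.0                       # time at which the server next becomes free
    i, n = 0, len(arrivals)
    while len(waits) < n:
        # admit everyone who arrives before the server frees up
        while i < n and arrivals[i] <= t_free:
            queue.append(i)
            i += 1
        if not queue:                  # server idle: next arrival starts at once
            queue.append(i)
            t_free = arrivals[i]
            i += 1
        j = queue.pop(0) if discipline == "fcfs" else queue.pop()
        waits.append(t_free - arrivals[j])
        t_free += services[j]
    return np.array(waits)

rng = np.random.default_rng(5)
n = 100_000
arrivals = np.cumsum(rng.exponential(1 / 0.7, n))   # Poisson arrivals, lambda = 0.7
services = rng.exponential(1.0, n)                  # exponential service, mu = 1.0
v_fcfs = waiting_times(arrivals, services, "fcfs").var()
v_lcfs = waiting_times(arrivals, services, "lcfs").var()
print(v_fcfs, v_lcfs)
```

With the same arrival and service samples, the LCFS run shows a markedly larger waiting-time variance, consistent with the bounds in the abstract.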
42 CFR 456.524 - Notification of Administrator's action and duration of variance.
2010-10-01
... of variance. 456.524 Section 456.524 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES... Utilization Review Plans: FFP, Waivers, and Variances for Hospitals and Mental Hospitals Ur Plan: Remote Facility Variances from Time Requirements § 456.524 Notification of Administrator's action and duration...
75 FR 6364 - Process for Requesting a Variance From Vegetation Standards for Levees and Floodwalls
2010-02-09
... Department of the Army, Corps of Engineers Process for Requesting a Variance From Vegetation Standards for... requesting a variance from vegetation standards for levees and floodwalls to reflect organizational changes... Construction, Directorate of Civil Works. Policy Guidance Letter--Variance From Vegetation Standards for...
9 CFR 3.100 - Special considerations regarding compliance and/or variance.
2010-01-01
... compliance and/or variance. 3.100 Section 3.100 Animals and Animal Products ANIMAL AND PLANT HEALTH... Special considerations regarding compliance and/or variance. (a) All persons subject to the Animal Welfare... this subpart, except that they may apply for and be granted a variance, 6 by the Deputy...
2012-08-30
... Federal Energy Regulatory Commission Appalachian Power; Notice of Temporary Variance of License and...: Temporary Variance of License. b. Project No: 739-033. c. Date Filed: August 7, 2012. d. Applicant... filed. k. Description of Application: The licensee requests a temporary variance to allow for a...
40 CFR 142.301 - What is a small system variance?
2010-07-01
... 40 Protection of Environment 22 2010-07-01 2010-07-01 false What is a small system variance? 142... PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances for Small System General Provisions § 142.301 What is a small system variance? Section 1415(e) of the Act authorizes...
40 CFR 142.303 - Which size public water systems can receive a small system variance?
2010-07-01
... receive a small system variance? 142.303 Section 142.303 Protection of Environment ENVIRONMENTAL... IMPLEMENTATION Variances for Small System General Provisions § 142.303 Which size public water systems can receive a small system variance? (a) A State exercising primary enforcement responsibility for...
Estimating the Variance of the K-Step Ahead Predictor for Time-Series
Tjärnström, Fredrik
1999-01-01
This paper considers the problem of estimating the variance of a linear k-step ahead predictor for time series. (The extension to systems including deterministic inputs is straightforward.) We compare the theoretical results with empirically calculated variances on real data, and discuss the quality of the achieved variance estimate.
31 CFR 15.737-16 - Proof; variance; amendment of pleadings.
2010-07-01
... 31 Money and Finance: Treasury 1 2010-07-01 2010-07-01 false Proof; variance; amendment of... POST EMPLOYMENT CONFLICT OF INTEREST Administrative Enforcement Proceedings § 15.737-16 Proof; variance; amendment of pleadings. In the case of a variance between the allegations in a pleading and the...
Asymptotic accuracy of the jackknife variance estimator for certain smooth statistics
Gottlieb, Alex D
2001-01-01
We show that the jackknife variance estimator $v_{jack}$ and the infinitesimal jackknife variance estimator are asymptotically equivalent if the functional of interest is a smooth function of the mean or a smooth trimmed L-statistic. We calculate the asymptotic variance of $v_{jack}$ for these functionals.
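The jackknife variance estimator referred to above recomputes the statistic with each observation left out in turn. A sketch for a smooth function of the mean, compared against the delta-method benchmark it should track for such functionals (the data and the choice of function are hypothetical):

```python
import numpy as np

def jackknife_variance(sample, statistic):
    """Jackknife variance estimate: (n-1)/n * sum of squared leave-one-out deviations."""
    n = len(sample)
    loo = np.array([statistic(np.delete(sample, i)) for i in range(n)])
    return (n - 1) / n * ((loo - loo.mean()) ** 2).sum()

# A smooth function of the mean: theta = exp(sample mean).
rng = np.random.default_rng(6)
x = rng.normal(1.0, 0.5, size=400)
v_jack = jackknife_variance(x, lambda s: np.exp(s.mean()))

# Delta-method benchmark: (d theta / d mean)^2 * var(mean) = exp(mean)^2 * s^2 / n.
v_delta = np.exp(x.mean()) ** 2 * x.var(ddof=1) / len(x)
print(v_jack, v_delta)  # the two agree closely for this smooth functional
```

For non-smooth statistics (e.g. the median) the plain jackknife can fail, which is why results like the one in this abstract are stated for smooth functions of the mean and trimmed L-statistics.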
2013-01-15
... Variance of License Article 403 and Soliciting Comments, Motions to Intervene and Protests Take notice that... inspection: a. Application Type: Extension of temporary variance of license article 403. b. Project No: 12514... Commission to grant an extension of time to a temporary variance of license Article 403 that was granted...
29 CFR 1905.6 - Public notice of a granted variance, limitation, variation, tolerance, or exemption.
2010-07-01
... 29 Labor 5 2010-07-01 2010-07-01 false Public notice of a granted variance, limitation, variation... SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR RULES OF PRACTICE FOR VARIANCES, LIMITATIONS... General § 1905.6 Public notice of a granted variance, limitation, variation, tolerance, or...
MEAN SQUARED ERRORS OF BOOTSTRAP VARIANCE ESTIMATORS FOR U-STATISTICS
Mizuno, Masayuki; Maesono, Yoshihiko
2011-01-01
In this paper, we obtain an asymptotic representation of the bootstrap variance estimator for a class of U-statistics. Using this representation, we obtain the mean squared error of the variance estimator up to the order n^. We also compare the bootstrap and jackknife variance estimators theoretically.
36 CFR 28.13 - Variance, commercial and industrial application procedures.
2010-07-01
... 36 Parks, Forests, and Public Property 1 2010-07-01 2010-07-01 false Variance, commercial and... Approval of Local Ordinances § 28.13 Variance, commercial and industrial application procedures. (a) The zoning authority shall send the Superintendent a copy of all applications for variances,...
40 CFR 142.305 - When can a small system variance be granted by a State?
2010-07-01
... 40 Protection of Environment 22 2010-07-01 2010-07-01 false When can a small system variance be... (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances for Small System General Provisions § 142.305 When can a small system variance be granted by a...
The pricing of long and short run variance and correlation risk in stock returns
M. Cosemans
2011-01-01
This paper studies the pricing of long and short run variance and correlation risk. The predictive power of the market variance risk premium for returns is driven by the correlation risk premium and the systematic part of individual variance premia. Furthermore, I find that aggregate volatility risk
Estimation of genetic variation in residual variance in female and male broiler chickens
Mulder, H.A.; Hill, W.G.; Vereijken, A.; Veerkamp, R.F.
2009-01-01
In breeding programs, robustness of animals and uniformity of end product can be improved by exploiting genetic variation in residual variance. Residual variance can be defined as environmental variance after accounting for all identifiable effects. The aims of this study were to estimate genetic va
Modeling Heterogeneous Variance-Covariance Components in Two-Level Models
Leckie, George; French, Robert; Charlton, Chris; Browne, William
2014-01-01
Applications of multilevel models to continuous outcomes nearly always assume constant residual variance and constant random effects variances and covariances. However, modeling heterogeneity of variance can prove a useful indicator of model misspecification, and in some educational and behavioral studies, it may even be of direct substantive…
Hodological resonance, hodological variance, psychosis and schizophrenia: A hypothetical model
Directory of Open Access Journals (Sweden)
Paul Brian eLawrie Birkett
2011-07-01
Full Text Available Schizophrenia is a disorder with a large number of clinical, neurobiological, and cognitive manifestations, none of which is invariably present. However it appears to be a single nosological entity. This article considers the likely characteristics of a pathology capable of such diverse consequences. It is argued that both deficit and psychotic symptoms can be manifestations of a single pathology. A general model of psychosis is proposed in which the informational sensitivity or responsivity of a network ("hodological resonance") becomes so high that it activates spontaneously, to produce a hallucination if it is in sensory cortex, or another psychotic symptom if it is elsewhere. It is argued that this can come about because of high levels of modulation, such as those assumed present in affective psychosis, or because of high levels of baseline resonance, such as those expected in deafferentation syndromes associated with hallucinations, for example Charles Bonnet syndrome. It is further proposed that schizophrenia results from a process (probably neurodevelopmental) causing widespread increases of variance in baseline resonance; consequently some networks possess high baseline resonance and become susceptible to spontaneous activation. Deficit symptoms might result from the presence of networks with increased activation thresholds. This hodological variance model is explored in terms of schizo-affective disorder, transient psychotic symptoms, diathesis-stress models, mechanisms of antipsychotic pharmacotherapy and persistence of genes predisposing to schizophrenia. Predictions and implications of the model are discussed. In particular it suggests a need for more research into psychotic states and for more single-case-based studies in schizophrenia.
2010-07-01
... Administrator to object to a proposed small system variance or overturn a granted small system variance for a... REGULATIONS IMPLEMENTATION Variances for Small System Epa Review and Approval of Small System Variances § 142.311 What procedures allow the Administrator to object to a proposed small system variance or...
Fermentation and Hydrogen Metabolism Affect Uranium Reduction by Clostridia
Weimin Gao; Francis, Arokiasamy J.
2013-01-01
Previously, it has been shown that not only is uranium reduction under fermentation condition common among clostridia species, but also the strains differed in the extent of their capability and the pH of the culture significantly affected uranium(VI) reduction. In this study, using HPLC and GC techniques, metabolic properties of those clostridial strains active in uranium reduction under fermentation conditions have been characterized and their effects on capability variance of uranium reduc...
A Mean-Variance Criterion for Economic Model Predictive Control of Stochastic Linear Systems
DEFF Research Database (Denmark)
Sokoler, Leo Emil; Dammann, Bernd; Madsen, Henrik;
2014-01-01
Stochastic linear systems arise in a large number of control applications. This paper presents a mean-variance criterion for economic model predictive control (EMPC) of such systems. The system operating cost and its variance is approximated based on a Monte-Carlo approach. Using convex relaxation ... -variance strategies, but it does not account for the variance of the uncertain parameters. Open-loop simulations suggest that a single-stage mean-variance approach yields a significantly lower operating cost than the certainty equivalence strategy. In closed-loop, the single-stage formulation is overly conservative, which results in a high operating cost. For this case, a two-stage extension of the mean-variance approach provides the best trade-off between the expected cost and its variance. It is demonstrated that by using a constraint back-off technique in the specific case study, certainty equivalence EMPC can ...
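The Monte-Carlo mean-variance criterion can be illustrated in miniature: estimate E[cost] + λ·Var[cost] over sampled scenarios and compare the risk-neutral (λ = 0) choice with a risk-averse one. This toy two-generator dispatch is entirely hypothetical and is not the paper's EMPC formulation, only the shape of the criterion:

```python
import numpy as np

# Hypothetical price scenarios: generator 0 is expensive but near-certain,
# generator 1 is cheap on average but volatile.
rng = np.random.default_rng(8)
prices = np.column_stack([rng.normal(40.0, 1.0, 5000),
                          rng.normal(30.0, 15.0, 5000)])

def mean_variance_cost(u, prices, lam):
    """Monte-Carlo approximation of E[cost] + lam * Var[cost] for dispatch u."""
    costs = prices @ u
    return costs.mean() + lam * costs.var()

# Split a total load of 10 units between the two generators.
splits = [np.array([a, 10.0 - a]) for a in np.linspace(0.0, 10.0, 11)]
best_rn = min(splits, key=lambda u: mean_variance_cost(u, prices, 0.0))   # risk-neutral
best_mv = min(splits, key=lambda u: mean_variance_cost(u, prices, 0.01))  # mean-variance
print(best_rn, best_mv)
```

The risk-neutral choice loads the cheap volatile generator; penalizing variance shifts load toward the certain one, the same expected-cost-versus-variance trade-off the abstract describes for single- and two-stage EMPC.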
Variance Swaps in BM&F: Pricing and Viability of Hedge
Directory of Open Access Journals (Sweden)
Richard John Brostowicz Junior
2010-07-01
Full Text Available A variance swap can theoretically be priced with an infinite set of vanilla call and put options, assuming that the realized variance follows a purely diffusive process with continuous monitoring. In this article we analyze the possible differences in pricing under discrete monitoring of realized variance. We analyze the pricing of variance swaps with payoff in dollars, since there is an OTC market that works this way and that can potentially serve as a hedge for the variance swaps traded on the BM&F. Additionally, we test the feasibility of hedging variance swaps when there is liquidity in only a few exercise prices, as is the case for FX options traded on the BM&F. To this end, portfolios containing variance swaps and their replicating portfolios were assembled using the available exercise prices, as proposed in (DEMETERFI et al., 1999). With these portfolios, the effectiveness of the hedge was not robust in most of the tests conducted in this work.
Estimation models of variance components for farrowing interval in swine
Directory of Open Access Journals (Sweden)
Aderbal Cavalcante Neto
2009-02-01
Full Text Available The main objective of this study was to evaluate the importance of including maternal genetic, common litter environmental and permanent environmental effects in estimation models of variance components for the farrowing interval trait in swine. Data consisting of 1,013 farrowing intervals of Dalland (C-40) sows recorded in two herds were analyzed. Variance components were obtained by the derivative-free restricted maximum likelihood method. Eight models were tested which contained the fixed effects (contemporary group and covariables) and the direct genetic additive and residual effects, and varied regarding the inclusion of the maternal genetic, common litter environmental, and/or permanent environmental random effects. The likelihood-ratio test indicated that the inclusion of these effects in the model was unnecessary, but the inclusion of the permanent environmental effect caused changes in the estimates of heritability, which varied from 0.00 to 0.03. In conclusion, the heritability values obtained indicated that this trait appears to present no genetic gain as response to selection. The common litter environmental and the maternal genetic effects did not present any influence on this trait. The permanent environmental effect, however, should be considered in the genetic models for this trait in swine, because its presence caused changes in the additive genetic variance estimates.
Asanuma, Jun
Variances of the velocity components and scalars are important as indicators of turbulence intensity. They can also be utilized to estimate surface fluxes in several types of "variance methods", and the estimated fluxes can be regional values if the variances from which they are calculated are regionally representative measurements. Motivated by these considerations, variances measured by an aircraft in the unstable atmospheric boundary layer (ABL) over a flat pine forest during HAPEX-Mobilhy were analyzed within the context of similarity scaling arguments. The variances of temperature and vertical velocity within the atmospheric surface layer were found to follow closely the Monin-Obukhov similarity theory, and to yield reasonable estimates of the surface sensible heat fluxes when used in variance methods. This validates variance methods with aircraft measurements. On the other hand, the specific humidity variances were influenced by the surface heterogeneity and clearly fail to obey Monin-Obukhov similarity. A simple analysis based on the similarity law for free convection produced a comprehensible and quantitative picture of the effect of surface flux heterogeneity on the statistical moments, and revealed that variances of the active and passive scalars become dissimilar because of their different roles in turbulence. The analysis also indicated that the mean quantities are affected by the heterogeneity as well, but to a lesser extent than the variances. The temperature variances in the mixed layer (ML) were examined by using a generalized top-down bottom-up diffusion model with some combinations of velocity scales and inversion flux models. The results showed that the surface shear stress exerts considerable influence on the lower ML. ML variance methods were also tested with the temperature and vertical velocity variances, and their feasibility was investigated. Finally, the variances in the ML were analyzed in terms of the local similarity concept; the results confirmed the original
Designing electricity generation portfolios using the mean-variance approach
Directory of Open Access Journals (Sweden)
Jorge Cunha
2014-06-01
Full Text Available The use of the mean-variance approach (MVA) is well demonstrated in the financial literature for the optimal design of financial asset portfolios. Electricity sector portfolios are guided by similar objectives, namely maximizing return and minimizing risk. As such, this paper proposes two possible MVA formulations for the design of optimal renewable electricity production portfolios. The first approach is directed to portfolio output maximization and the second to portfolio cost optimization. The models were implemented using data obtained for each quarter of an hour over a period of close to four years for the Portuguese electricity system. A set of renewable energy source (RES) portfolios was obtained, mixing three RES technologies, namely hydro power, wind power and photovoltaic. This made it possible to recognize the seasonality of the resources, demonstrating that hydro power output is positively correlated with wind and that photovoltaic is negatively correlated with both hydro and wind. The results showed that for both models the less risky solutions are characterised by a mix of RES technologies, taking advantage of the diversification benefits. The highest-return solutions were, as expected, those with higher risk, but the portfolio composition largely depends on the assumed costs of each technology.
Time Variability of Quasars: the Structure Function Variance
MacLeod, C.; Ivezić, Ž.; de Vries, W.; Sesar, B.; Becker, A.
2008-12-01
Significant progress in the description of quasar variability has been recently made by employing SDSS and POSS data. Common to most studies is a fundamental assumption that photometric observations at two epochs for a large number of quasars will reveal the same statistical properties as well-sampled light curves for individual objects. We critically test this assumption using light curves for a sample of ~2,600 spectroscopically confirmed quasars observed about 50 times on average over 8 years by the SDSS stripe 82 survey. We find that the dependence of the mean structure function computed for individual quasars on luminosity, rest-frame wavelength and time is qualitatively and quantitatively similar to the behavior of the structure function derived from two-epoch observations of a much larger sample. We also reproduce the result that the variability properties of radio and X-ray selected subsamples are different. However, the scatter of the variability structure function for fixed values of luminosity, rest-frame wavelength and time is similar to the scatter induced by the variance of these quantities in the analyzed sample. Hence, our results suggest that, although the statistical properties of quasar variability inferred using two-epoch data capture some underlying physics, there is significant additional information that can be extracted from well-sampled light curves for individual objects.
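The structure function at the center of this analysis is typically the first-order form, SF(τ) = sqrt(⟨(m(t+τ) − m(t))²⟩), computed over magnitude pairs binned by time lag. A sketch on a hypothetical irregularly sampled light curve (a plain random walk stands in for quasar variability; real quasar structure functions flatten at long lags):

```python
import numpy as np

def structure_function(times, mags, bins):
    """First-order structure function: sqrt(mean squared magnitude difference) per lag bin."""
    i, j = np.triu_indices(len(times), k=1)   # all epoch pairs
    lags = np.abs(times[j] - times[i])
    sq = (mags[j] - mags[i]) ** 2
    sf = np.empty(len(bins) - 1)
    for b in range(len(bins) - 1):
        sel = (lags >= bins[b]) & (lags < bins[b + 1])
        sf[b] = np.sqrt(sq[sel].mean()) if sel.any() else np.nan
    return sf

# Hypothetical light curve: ~50 epochs over 8 years, as in the Stripe 82 cadence.
rng = np.random.default_rng(7)
times = np.sort(rng.uniform(0.0, 8 * 365.0, 50))    # observation times in days
mags = 19.0 + np.cumsum(rng.normal(0.0, 0.05, 50))  # stochastic magnitude variability
bins = np.array([0.0, 30.0, 100.0, 300.0, 1000.0, 3000.0])
print(structure_function(times, mags, bins))
```

The paper's test amounts to comparing this per-object statistic, averaged over well-sampled light curves, against the same quantity built from only two epochs per object across a much larger ensemble.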
Waste Isolation Pilot Plant No-Migration Variance Petition
International Nuclear Information System (INIS)
The purpose of the WIPP No-Migration Variance Petition is to demonstrate, according to the requirements of RCRA section 3004(d) and 40 CFR section 268.6, that to a reasonable degree of certainty, there will be no migration of hazardous constituents from the facility for as long as the wastes remain hazardous. The DOE submitted the petition to the EPA in March 1989. Upon completion of its initial review, the EPA provided to DOE a Notice of Deficiencies (NOD). DOE responded to the EPA's NOD and met with the EPA's reviewers of the petition several times during 1989. In August 1989, EPA requested that DOE submit significant additional information addressing a variety of topics including: waste characterization, ground water hydrology, geology and dissolution features, monitoring programs, the gas generation test program, and other aspects of the project. This additional information was provided to EPA in January 1990 when DOE submitted Revision 1 of the Addendum to the petition. For clarity and ease of review, this document includes all of these submittals, and the information has been updated where appropriate. This document is divided into the following sections: Introduction, 1.0: Facility Description, 2.0: Waste Description, 3.0; Site Characterization, 4.0; Environmental Impact Analysis, 5.0; Prediction and Assessment of Infrequent Events, 6.0; and References, 7.0
Cosmic variance in [O/Fe] in the Galactic disk
de Lis, S Bertran; Majewski, S R; Schiavon, R P; Holtzman, J A; Shetrone, M; Carrera, R; Pérez, A E García; Mészáros, Sz; Frinchaboy, P M; Hearty, F R; Nidever, D L; Zasowski, G; Ge, J
2016-01-01
We examine the distribution of the [O/Fe] abundance ratio in stars across the Galactic disk using H-band spectra from the Apache Point Galactic Evolution Experiment (APOGEE). We minimized systematic errors by considering groups of stars with similar atmospheric parameters. The APOGEE measurements in the Sloan Digital Sky Survey Data Release 12 reveal that the square root of the star-to-star cosmic variance in oxygen at a given metallicity is about 0.03-0.04 dex in both the thin and thick disk. This is about twice as high as the spread found for solar twins in the immediate solar neighborhood and is probably caused by the wider range of galactocentric distances spanned by APOGEE stars. We quantified measurement uncertainties by examining the spread among stars with the same parameters in clusters; these errors are a function of effective temperature and metallicity, ranging from 0.005 dex at 4000 K and solar metallicity to about 0.03 dex at 4500 K and [Fe/H] = -0.6. We argue that measuring the spread in [O/...
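Separating the cosmic variance from the measurement error estimated in clusters amounts to a subtraction in quadrature. A minimal sketch (the function name is illustrative):

```python
import math

def intrinsic_scatter(sigma_total, sigma_meas):
    """Subtract the measurement error (estimated, e.g., from cluster
    stars sharing the same atmospheric parameters) in quadrature from
    the observed star-to-star spread to isolate the cosmic variance:
    sigma_cosmic = sqrt(sigma_total^2 - sigma_meas^2)."""
    var = sigma_total ** 2 - sigma_meas ** 2
    if var < 0:
        raise ValueError("measurement error exceeds observed spread")
    return math.sqrt(var)
```

For example, an observed spread of 0.05 dex with a 0.03 dex measurement error implies an intrinsic scatter of 0.04 dex, in the range the abstract quotes.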
Lung vasculature imaging using speckle variance optical coherence tomography
Cua, Michelle; Lee, Anthony M. D.; Lane, Pierre M.; McWilliams, Annette; Shaipanich, Tawimas; MacAulay, Calum E.; Yang, Victor X. D.; Lam, Stephen
2012-02-01
Architectural changes in and remodeling of the bronchial and pulmonary vasculature are important pathways in diseases such as asthma, chronic obstructive pulmonary disease (COPD), and lung cancer. However, there is a lack of methods that can find and examine small bronchial vasculature in vivo. Structural lung airway imaging using optical coherence tomography (OCT) has previously been shown to be of great utility in examining bronchial lesions during lung cancer screening under the guidance of autofluorescence bronchoscopy. Using a fiber optic endoscopic OCT probe, we acquire OCT images in vivo from human subjects. The side-looking, circumferentially-scanning probe is inserted down the instrument channel of a standard bronchoscope and manually guided to the imaging location. Multiple images are collected with the probe spinning proximally at 100 Hz. Due to friction, the distal end of the probe does not spin perfectly synchronously with the proximal end, resulting in non-uniform rotational distortion (NURD) of the images. First, we apply a correction algorithm to remove NURD. We then use a speckle variance algorithm to identify vasculature. The initial data show a vasculature density in small human airways similar to what would be expected.
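The core of a speckle variance calculation is simple: for each pixel, take the variance of intensity across repeated frames of the same location; flowing blood decorrelates the speckle pattern, so vessels light up as high-variance pixels. A minimal pure-Python sketch on nested lists (the authors' implementation details are not given in the abstract):

```python
def speckle_variance(frames):
    """Per-pixel inter-frame variance across N repeated B-scans of the
    same location. Static tissue keeps a stable speckle pattern (low
    variance); moving scatterers such as blood decorrelate it (high
    variance), highlighting vasculature."""
    n = len(frames)
    rows, cols = len(frames[0]), len(frames[0][0])
    sv = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            vals = [f[r][c] for f in frames]
            mean = sum(vals) / n
            sv[r][c] = sum((v - mean) ** 2 for v in vals) / n
    return sv
```

In practice this is applied after the NURD correction, since rotational distortion would otherwise masquerade as decorrelation.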
Computational method for reducing variance with Affymetrix microarrays
Directory of Open Access Journals (Sweden)
Brooks Andrew I
2002-08-01
Full Text Available Abstract Background: Affymetrix microarrays are used by many laboratories to generate gene expression profiles. Generally, only large differences (> 1.7-fold) between conditions have been reported. Computational methods to reduce inter-array variability might be of value when attempting to detect smaller differences. We examined whether inter-array variability could be reduced by using data based on the Affymetrix algorithm for pairwise comparisons between arrays (ratio method) rather than data based on the algorithm for analysis of individual arrays (signal method). Six HG-U95A arrays that probed mRNA from young (21–31 yr old) human muscle were compared with six arrays that probed mRNA from older (62–77 yr old) muscle. Results: Differences in mean expression levels of young and old subjects were small, rarely > 1.5-fold. The mean within-group coefficient of variation for 4629 mRNAs expressed in muscle was 20% according to the ratio method and 25% according to the signal method. The ratio method yielded more differences according to t-tests (124 vs. 98 differences at P). Conclusion: The ratio method reduces inter-array variance and thereby enhances statistical power.
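The within-group coefficient of variation that the study compares between the two methods is the sample standard deviation expressed as a percentage of the mean, computed per mRNA within each age group. A minimal stdlib sketch (the function name is illustrative):

```python
import statistics

def coefficient_of_variation(values):
    """Within-group CV (%) = 100 * sample standard deviation / mean.
    A lower CV for the same probe set under the ratio method is what
    translates into greater statistical power."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)
```

For instance, expression values of 8, 10, and 12 (mean 10, sample SD 2) give a CV of 20%, matching the ratio-method figure quoted above.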
Beyond the GUM: variance-based sensitivity analysis in metrology
Lira, I.
2016-07-01
Variance-based sensitivity analysis is a well established tool for evaluating the contribution of the uncertainties in the inputs to the uncertainty in the output of a general mathematical model. While the literature on this subject is quite extensive, it has not found widespread use in metrological applications. In this article we present a succinct review of the fundamentals of sensitivity analysis, in a form that should be useful to most people familiar with the Guide to the Expression of Uncertainty in Measurement (GUM). Through two examples, it is shown that in linear measurement models, no new knowledge is gained by using sensitivity analysis that is not already available after the terms in the so-called ‘law of propagation of uncertainties’ have been computed. However, if the model behaves non-linearly in the neighbourhood of the best estimates of the input quantities—and if these quantities are assumed to be statistically independent—sensitivity analysis is definitely advantageous for gaining insight into how they can be ranked according to their importance in establishing the uncertainty of the measurand.
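The linear-model claim above can be made concrete. For Y = sum_i a_i X_i with statistically independent inputs, the first-order (Sobol) sensitivity index of X_i is a_i^2 var(X_i) / var(Y), i.e. exactly the normalized squared terms of the law of propagation of uncertainties, so the ranking carries no new information. A minimal sketch (function name illustrative):

```python
def first_order_sobol_linear(coeffs, variances):
    """First-order Sobol indices for a linear model Y = sum a_i X_i
    with independent inputs: S_i = a_i^2 var(X_i) / var(Y). These are
    the GUM uncertainty-budget terms, normalized to sum to 1."""
    contrib = [a * a * v for a, v in zip(coeffs, variances)]
    total = sum(contrib)
    return [c / total for c in contrib]
```

For non-linear models this closed form no longer holds, and the indices must be estimated (e.g. by Monte Carlo), which is where sensitivity analysis adds value beyond the GUM budget.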
Analysis of variance (ANOVA) models in lower extremity wounds.
Reed, James F
2003-06-01
Consider a study in which 2 new treatments are being compared with a control group. One way to compare outcomes would simply be to compare the 2 treatments with the control and the 2 treatments against each other using 3 Student t tests (t test). If we were to compare 4 treatment groups, then we would need to use 6 t tests. The difficulty with using multiple t tests is that as the number of groups increases, so will the likelihood of finding a difference between any pair of groups simply by chance when no real difference exists: by definition, a Type I error. If we were to perform 3 separate t tests each at alpha = .05, the experiment-wise error rate increases to .14. As the number of multiple t tests increases, the experiment-wise error rate increases rather rapidly. The solution to the experiment-wise error rate problem is to use analysis of variance (ANOVA) methods. Three basic ANOVA designs are reviewed that give hypothetical examples drawn from the literature to illustrate single-factor ANOVA, repeated measures ANOVA, and randomized block ANOVA. "No frills" SPSS or SAS code for each of these designs and examples used are available from the author on request.
Analysis of variance in neuroreceptor ligand imaging studies.
Ko, Ji Hyun; Reilhac, Anthonin; Ray, Nicola; Rusjan, Pablo; Bloomfield, Peter; Pellecchia, Giovanna; Houle, Sylvain; Strafella, Antonio P
2011-01-01
Radioligand positron emission tomography (PET) with dual scan paradigms can provide valuable insight into changes in synaptic neurotransmitter concentration due to experimental manipulation. The residual t-test has been utilized to improve the sensitivity of the t-test in PET studies. However, no further development of statistical tests using residuals has been proposed so far to be applied in cases where there are more than two conditions. Here, we propose the residual f-test, a one-way analysis of variance (ANOVA), and examine its feasibility using simulated [(11)C]raclopride PET data. We also revisit data from our previously published [(11)C]raclopride PET study, in which 10 individuals underwent three PET scans under different conditions. We found that the residual f-test is more sensitive than the conventional f-test while still controlling for Type I error. The test will therefore allow us to reliably test hypotheses in the smaller sample sizes often used in exploratory PET studies.
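For reference, the conventional one-way ANOVA f statistic against which the residual f-test is compared is the between-condition mean square over the within-condition mean square. A minimal stdlib sketch (the residual variant itself is not reproduced here, as the abstract does not give its construction):

```python
def one_way_anova_f(groups):
    """Conventional one-way ANOVA F statistic:
    F = (SS_between / (k - 1)) / (SS_within / (n - k)),
    where k is the number of groups and n the total sample size."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

The study's three-scan design corresponds to k = 3 conditions with 10 subjects, where small-sample sensitivity gains matter most.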
Cosmological N-body simulations with suppressed variance
Angulo, Raul E.; Pontzen, Andrew
2016-10-01
We present and test a method that dramatically reduces variance arising from the sparse sampling of wavemodes in cosmological simulations. The method uses two simulations which are fixed (the initial Fourier mode amplitudes are fixed to the ensemble average power spectrum) and paired (with initial modes exactly out of phase). We measure the power spectrum, monopole and quadrupole redshift-space correlation functions, halo mass function and reduced bispectrum at z = 1. By these measures, predictions from a fixed pair can be as precise on non-linear scales as an average over 50 traditional simulations. The fixing procedure introduces a non-Gaussian correction to the initial conditions; we give an analytic argument showing why the simulations are still able to predict the mean properties of the Gaussian ensemble. We anticipate that the method will drive down the computational time requirements for accurate large-scale explorations of galaxy bias and clustering statistics, and facilitate the use of numerical simulations in cosmological data interpretation.
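The "fixed and paired" construction can be sketched in one dimension: every Fourier amplitude is pinned to sqrt(P(k)) instead of being Rayleigh-drawn, phases are random, and the paired realization shifts every phase by pi. This is a hypothetical illustration, not the authors' code:

```python
import cmath
import math
import random

def fixed_paired_modes(power, n_modes, seed=0):
    """1D sketch of 'fixed and paired' initial conditions: amplitudes
    are fixed to sqrt(P(k)) (no Rayleigh scatter), phases are uniform,
    and the paired realization is exactly out of phase (phase + pi),
    so the two fields are sign-flipped copies mode by mode."""
    rng = random.Random(seed)
    sim_a, sim_b = [], []
    for k in range(n_modes):
        amp = math.sqrt(power[k])                 # fixed, not drawn
        phase = rng.uniform(0.0, 2.0 * math.pi)   # random phase
        sim_a.append(amp * cmath.exp(1j * phase))
        sim_b.append(amp * cmath.exp(1j * (phase + math.pi)))
    return sim_a, sim_b
```

Averaging observables over the pair cancels the leading odd-order sampling fluctuations, which is the source of the quoted precision gain over traditional ensembles.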
Institute of Scientific and Technical Information of China (English)
左宁; 吉芳英; 黄力彦; 宗述安
2009-01-01
To address the poor nitrogen and phosphorus removal of sludge-reduction technologies, an HA-A/A-MCO (hydrolysis acidification-anaerobic/anoxic-multistep continuous oxic) process was developed that combines excess-sludge reduction with enhanced nitrogen and phosphorus removal. It removes phosphorus through hydrolytic acidification of returned P-release sludge, which stimulates anaerobic phosphorus release, supplemented by chemical fixation of the discharged phosphorus-rich sidestream. The study found that when the amount of anaerobic P-release sludge entering the hydrolysis acidification tank was 2% of the influent flow, the VFAs produced by hydrolysis drove the released phosphorus to 57 mg/L, and the growth of phosphate-accumulating organisms (PAOs) was promoted while glycogen-accumulating organisms (GAOs) were inhibited. When the sidestream for phosphorus removal was controlled at 13% of the influent flow and the effluent phosphorus of the chemical phosphorus-removal tank at 5 mg/L, the system effluent TP was below 0.5 mg/L. Raising the anaerobic P-release concentration while holding the chemical tank effluent at 5 mg/L improved the utilization of the chemical reagents, reduced their dosage, and increased the phosphorus content of the chemical sludge; the chemical sludge produced by the HA-A/A-MCO system contained up to 18% phosphorus, close to that of pure phosphorus compounds, and can be used directly as a raw material for phosphate fertilizer production.