WorldWideScience

Sample records for density variance mach

  1. Determining integral density distribution in the Mach reflection of shock waves

    Science.gov (United States)

    Shevchenko, A. M.; Golubev, M. P.; Pavlov, A. A.; Pavlov, Al. A.; Khotyanovsky, D. V.; Shmakov, A. S.

    2017-05-01

    We present a method for, and results of, determining the integral density field in the flow structure corresponding to the Mach interaction of shock waves at Mach number M = 3. Optical flow diagnostics were performed using an interference technique based on self-adjusting Zernike filters (the SA-AVT method). Numerical simulations were carried out using the CFS3D program package to solve the Euler and Navier-Stokes equations. Quantitative data on the distribution of integral density along the path of the probing radiation, for one direction of 3D flow transillumination in the region of Mach interaction of shock waves, were obtained for the first time.

  2. Rayleigh Scattering Density Measurements, Cluster Theory, and Nucleation Calculations at Mach 10

    Science.gov (United States)

    Balla, R. Jeffrey; Everhart, Joel L.

    2012-01-01

    In an exploratory investigation, quantitative unclustered laser Rayleigh scattering measurements of density were performed in the air in the NASA Langley Research Center's 31 in. Mach 10 wind tunnel. A review of 20 previous years of data in supersonic and Mach 6 hypersonic flows is presented where clustered signals typically overwhelmed molecular signals. A review of nucleation theory and accompanying nucleation calculations are also provided to interpret the current observed lack of clustering. Data were acquired at a fixed stagnation temperature near 990 K at five stagnation pressures spanning 2.41 to 10.0 MPa (350 to 1454 psi) using a pulsed argon fluoride excimer laser and double-intensified charge-coupled device camera. Data averaged over 371 images and 210 pixels along a 36.7 mm line measured freestream densities that agree with computed isentropic-expansion densities to less than 2% and less than 6% at the highest and lowest densities, respectively. Cluster-free Mach 10 results are compared with previous clustered Mach 6 and condensation-free Mach 14 results. Evidence is presented indicating vibrationally excited oxygen and nitrogen molecules are absorbed as the clusters form, release their excess energy, and inhibit or possibly reverse the clustering process. Implications for delaying clustering and condensation onset in hypersonic and hypervelocity facilities are discussed.

  3. Laser produced plasma density measurement by Mach-Zehnder interferometry

    International Nuclear Information System (INIS)

    Vaziri, A.; Kohanzadeh, Y.; Mosavi, R.K.

    1976-06-01

    This report describes an optical interferometric method of measuring the refractive index of a laser-produced plasma, giving estimates of its electron density. The plasma is produced by the interaction of a high-power pulsed CO2 laser beam with a solid target in vacuum. The time-varying plasma has a transient electron density, which gives rise to a changing plasma refractive index. A Mach-Zehnder ruby laser interferometer is used to measure this refractive index change.

  4. Opacity broadening of ¹³CO linewidths and its effect on the variance-sonic Mach number relation

    Energy Technology Data Exchange (ETDEWEB)

    Correia, C.; De Medeiros, J. R. [Departamento de Física Teórica e Experimental, Universidade Federal do Rio Grande do Norte, 59072-970 (Brazil); Burkhart, B.; Lazarian, A. [Astronomy Department, University of Wisconsin, Madison, 475 North Charter Street, WI 53711 (United States); Ossenkopf, V.; Stutzki, J. [Physikalisches Institut der Universität zu Köln, Zülpicher Strasse 77, D-50937 Köln (Germany); Kainulainen, J. [Max-Planck-Institute for Astronomy, Königstuhl 17, D-69117 Heidelberg (Germany); Kowal, G., E-mail: caioftc@dfte.ufrn.br [Instituto de Astronomia, Geofísica e Ciências Atmosféricas, Universidade de São Paulo, 05508-090 (Brazil)

    2014-04-10

    We study how the estimation of the sonic Mach number (M_s) from ¹³CO linewidths relates to the actual three-dimensional sonic Mach number. For this purpose we analyze MHD simulations that include post-processing to take radiative transfer effects into account. As expected, we find very good agreement between the linewidth-estimated sonic Mach number and the actual sonic Mach number of the simulations for optically thin tracers. However, we find that opacity broadening causes M_s to be overestimated by a factor of ≈1.16-1.3 when calculated from optically thick ¹³CO lines. We also find that there is a dependence on the magnetic field: super-Alfvénic turbulence shows increased line broadening compared with sub-Alfvénic turbulence for all values of optical depth for supersonic turbulence. Our results have implications for the observationally derived sonic Mach number-density standard deviation (σ_{ρ/⟨ρ⟩}) relationship, σ²_{ρ/⟨ρ⟩} = b²M_s², and the related column density standard deviation (σ_{N/⟨N⟩})-sonic Mach number relationship. In particular, we find that the parameter b, as an indicator of solenoidal versus compressive driving, will be underestimated as a result of opacity broadening. We compare the σ_{N/⟨N⟩}-M_s relation derived from synthetic dust extinction maps and ¹³CO linewidths with recent observational studies and find that solenoidally driven MHD turbulence simulations have values of σ_{N/⟨N⟩} which are lower than those of real molecular clouds. This may be due to the influence of self-gravity, which should be included in simulations of molecular cloud dynamics.
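
    As a rough illustration of how the opacity-broadening factor reported above propagates into the driving parameter b through σ²_{ρ/⟨ρ⟩} = b²M_s², the Python sketch below divides a ¹³CO linewidth-derived Mach number by the 1.16-1.3 correction; the function name and the example input values are hypothetical, not taken from the paper.

```python
def b_from_variance(sigma_rho_norm, mach_s):
    """Driving parameter b from sigma^2_{rho/<rho>} = b^2 * M_s^2."""
    return sigma_rho_norm / mach_s

# Hypothetical inputs: a normalized density standard deviation and a sonic
# Mach number estimated from optically thick 13CO linewidths.
sigma_rho_norm = 1.8
mach_from_13co = 6.0

b_raw = b_from_variance(sigma_rho_norm, mach_from_13co)
for factor in (1.16, 1.3):
    # Opacity broadening inflates the linewidth-derived M_s by ~factor,
    # so correcting for it raises the inferred b.
    b_corr = b_from_variance(sigma_rho_norm, mach_from_13co / factor)
    print(f"correction x{factor}: b_raw = {b_raw:.2f} -> b_corrected = {b_corr:.2f}")
```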

  5. Air Density Measurements in a Mach 10 Wake Using Iodine Cordes Bands

    Science.gov (United States)

    Balla, Robert J.; Everhart, Joel L.

    2012-01-01

    An exploratory study designed to examine the viability of making air density measurements in a Mach 10 flow using laser-induced fluorescence of the iodine Cordes bands is presented. Experiments are performed in the NASA Langley Research Center 31 in. Mach 10 air wind tunnel in the hypersonic near wake of a multipurpose crew vehicle model. To introduce iodine into the wake, a 0.5% iodine/nitrogen mixture is seeded using a pressure tap at the rear of the model. Air density was measured at 56 points along a 7 mm line and three stagnation pressures of 6.21, 8.62, and 10.0 MPa (900, 1250, and 1450 psi). Average results over time and space show ρ_wake/ρ_freestream of 0.145 ± 0.010, independent of freestream air density. Average off-body results over time and space agree to better than 7.5% with computed densities from on-body pressure measurements. Densities measured during a single 60 s run at 10.0 MPa are time-dependent and steadily decrease by 15%. This decrease is attributed to model forebody heating by the flow.

  6. Relationship between turbulence energy and density variance in the solar neighbourhood molecular clouds

    Science.gov (United States)

    Kainulainen, J.; Federrath, C.

    2017-11-01

    The relationship between turbulence energy and gas density variance is a fundamental prediction for turbulence-dominated media and is commonly used in analytic models of star formation. We determine this relationship for 15 molecular clouds in the solar neighbourhood. We use the line widths of the CO molecule as the probe of the turbulence energy (sonic Mach number, ℳs) and three-dimensional models to reconstruct the density probability distribution function (ρ-PDF) of the clouds, derived using near-infrared extinction and Herschel dust emission data, as the probe of the density variance (σs). We find no significant correlation between ℳs and σs among the studied clouds, but we cannot rule out a weak correlation either. In the context of turbulence-dominated gas, the range of the ℳs and σs values corresponds to the model predictions. The data cannot constrain whether the turbulence-driving parameter, b, and/or thermal-to-magnetic pressure ratio, β, vary among the sample clouds. Most clouds are not in agreement with field strengths stronger than given by β ≲ 0.05. A model with b2β/ (β + 1) = 0.30 ± 0.06 provides an adequate fit to the cloud sample as a whole. Based on the average behaviour of the sample, we can rule out three regimes: (i) strong compression combined with a weak magnetic field (b ≳ 0.7 and β ≳ 3); (ii) weak compression (b ≲ 0.35); and (iii) a strong magnetic field (β ≲ 0.1). When we include independent magnetic field strength estimates in the analysis, the data rule out solenoidal driving (b < 0.4) for the majority of the solar neighbourhood clouds. However, most clouds have b parameters larger than unity, which indicates a discrepancy with the turbulence-dominated picture; we discuss the possible reasons for this.
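
    For reference, the quantity b²β/(β + 1) that is fitted above enters the magnetized form of the density variance-Mach number relation, commonly written as in the sketch below (a standard expression for turbulence-dominated gas; the precise form adopted by the authors may differ in detail):

```latex
% s = ln(rho/<rho>); b is the turbulence-driving parameter and
% beta the thermal-to-magnetic pressure ratio. In the limit
% beta -> infinity this reduces to sigma_s^2 = ln(1 + b^2 M_s^2).
\sigma_s^{2} \;=\; \ln\!\left(1 + b^{2}\,\mathcal{M}_s^{2}\,\frac{\beta}{\beta+1}\right)
```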

  7. Density Measurement of Compact Toroid with Mach-Zehnder Interferometer

    Science.gov (United States)

    Laufman-Wollitzer, Lauren; Endrizzi, Doug; Brookhart, Matt; Flanagan, Ken; Forest, Cary

    2016-10-01

    Utilizing a magnetized coaxial plasma gun (MCPG) built by Tri Alpha Energy, a dense compact toroid (CT) is created and injected at high speed into the Wisconsin Plasma Astrophysics Laboratory (WiPAL) vessel. A modified Mach-Zehnder interferometer from the Line-Tied Reconnection Experiment (LTRX) provides an absolute measurement of electron density. The interferometer is located such that the beam intersects the plasma across the diameter of the MCPG drift region before the CT enters the vessel. This placement ensures that the measurement is taken before the CT expands. Results presented will be used to further analyze characteristics of the CT. Funding provided by DoE, NSF, and WISE Summer Research.

  8. Photodensitometric tracing of Mach bands and its significance

    International Nuclear Information System (INIS)

    Yoo, Shi Joon; Cho, Kyung Sik; Kang, Heung Sik; Cho, Byung Jae

    1984-01-01

    Mach bands, a visual phenomenon resulting from lateral inhibitory impulses in the retina, are recognized as lucent or dense lines at the borders of different radiographic densities. A number of clinical situations have been described in which Mach bands may cause difficulty in radiographic diagnosis. Photodensitometric measurement of the film can differentiate the true change in film density from the Mach band, which is an optical illusion. The authors present several examples of photodensitometric tracings of Mach bands, with a brief review of the mechanism of their production.

  9. Thermospheric mass density model error variance as a function of time scale

    Science.gov (United States)

    Emmert, J. T.; Sutton, E. K.

    2017-12-01

    In the increasingly crowded low-Earth orbit environment, accurate estimation of orbit prediction uncertainties is essential for collision avoidance. Poor characterization of such uncertainty can result in unnecessary and costly avoidance maneuvers (false positives) or disregard of a collision risk (false negatives). Atmospheric drag is a major source of orbit prediction uncertainty, and is particularly challenging to account for because it exerts a cumulative influence on orbital trajectories and is therefore not amenable to representation by a single uncertainty parameter. To address this challenge, we examine the variance of measured accelerometer-derived and orbit-derived mass densities with respect to predictions by thermospheric empirical models, using the data-minus-model variance as a proxy for model uncertainty. Our analysis focuses mainly on the power spectrum of the residuals, and we construct an empirical model of the variance as a function of time scale (from 1 hour to 10 years), altitude, and solar activity. We find that the power spectral density approximately follows a power-law process but with an enhancement near the 27-day solar rotation period. The residual variance increases monotonically with altitude between 250 and 550 km. There are two components to the variance dependence on solar activity: one component is 180 degrees out of phase (largest variance at solar minimum), and the other component lags 2 years behind solar maximum (largest variance in the descending phase of the solar cycle).
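
    The spectral behaviour described above (an approximately power-law process with an enhancement near the 27-day solar-rotation period, and variance growing with time scale) can be mimicked with a toy model such as the Python sketch below; the spectral slope, amplitudes, bump width, and cutoff frequency are hypothetical placeholders, not the fitted values of the study.

```python
import numpy as np

def toy_residual_psd(f, amp=1.0, slope=-2.0, bump_amp=0.3, bump_width=0.15):
    """Toy power spectral density of density-model residuals (f in cycles/day).

    Power-law background plus a Gaussian-in-log-frequency enhancement centred
    on the 27-day solar rotation period. All parameters are illustrative only.
    """
    f27 = 1.0 / 27.0
    background = amp * f ** slope
    bump = bump_amp * background * np.exp(
        -0.5 * ((np.log(f) - np.log(f27)) / bump_width) ** 2
    )
    return background + bump

def variance_vs_timescale(timescale_days, f_max=48.0, n=4000):
    """Residual variance accumulated between 1/timescale and a fixed cutoff."""
    f = np.logspace(np.log10(1.0 / timescale_days), np.log10(f_max), n)
    psd = toy_residual_psd(f)
    return np.sum(0.5 * (psd[1:] + psd[:-1]) * np.diff(f))  # trapezoidal rule

for tau in (1 / 24.0, 1.0, 27.0, 365.0, 3650.0):
    print(f"time scale {tau:8.2f} d -> toy residual variance {variance_vs_timescale(tau):10.3g}")
```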

  10. THE DENSITY DISTRIBUTION IN TURBULENT BISTABLE FLOWS

    International Nuclear Information System (INIS)

    Gazol, Adriana; Kim, Jongsoo

    2013-01-01

    We numerically study the volume density probability distribution function (n-PDF) and the column density probability distribution function (Σ-PDF) resulting from thermally bistable turbulent flows. We analyze three-dimensional hydrodynamic models in periodic boxes of 100 pc on a side, where turbulence is driven in Fourier space at a wavenumber corresponding to 50 pc. At low densities (n ≲ … cm⁻³), the n-PDF is well described by a lognormal distribution for an average local Mach number ranging from ∼0.2 to ∼5.5. As a consequence of the nonlinear development of thermal instability (TI), the logarithmic variance of the distribution of the diffuse gas increases with M faster than in the well-known isothermal case. The average local Mach number for the dense gas (n ≳ 7.1 cm⁻³) goes from ∼1.1 to ∼16.9, and the shape of the high-density zone of the n-PDF changes from a power law at low Mach numbers to a lognormal at high M values. In the latter case, the width of the distribution is smaller than in the isothermal case and grows more slowly with M. At high column densities, the Σ-PDF is well described by a lognormal for all of the Mach numbers we consider and, due to the presence of TI, the width of the distribution is systematically larger than in the isothermal case but follows a qualitatively similar behavior as M increases. Although a relationship between the width of the distribution and M can be found for each one of the cases mentioned above, these relations are different from those of the isothermal case.

  11. Allometric scaling of population variance with mean body size is predicted from Taylor's law and density-mass allometry.

    Science.gov (United States)

    Cohen, Joel E; Xu, Meng; Schuster, William S F

    2012-09-25

    Two widely tested empirical patterns in ecology are combined here to predict how the variation of population density relates to the average body size of organisms. Taylor's law (TL) asserts that the variance of the population density of a set of populations is a power-law function of the mean population density. Density-mass allometry (DMA) asserts that the mean population density of a set of populations is a power-law function of the mean individual body mass. Combined, DMA and TL predict that the variance of the population density is a power-law function of mean individual body mass. We call this relationship "variance-mass allometry" (VMA). We confirmed the theoretically predicted power-law form and the theoretically predicted parameters of VMA, using detailed data on individual oak trees (Quercus spp.) of Black Rock Forest, Cornwall, New York. These results connect the variability of population density to the mean body mass of individuals.
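
    The algebra implied by the combination above can be written out explicitly; the constants below are generic placeholders for the fitted TL and DMA parameters:

```latex
% Taylor's law (TL): variance of population density D as a power law of its mean
\operatorname{Var}(D) = a\,\bar{D}^{\,b}
% Density--mass allometry (DMA): mean density as a power law of mean body mass M
\bar{D} = c\,M^{\,d}
% Substituting DMA into TL gives variance--mass allometry (VMA):
\operatorname{Var}(D) = a\,(c\,M^{\,d})^{b} = a\,c^{\,b}\,M^{\,bd}
```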

  12. On the expected value and variance for an estimator of the spatio-temporal product density function

    DEFF Research Database (Denmark)

    Rodríguez-Corté, Francisco J.; Ghorbani, Mohammad; Mateu, Jorge

    Second-order characteristics are used to analyse the spatio-temporal structure of the underlying point process, and thus these methods provide a natural starting point for the analysis of spatio-temporal point process data. We restrict our attention to the spatio-temporal product density function, and develop a non-parametric edge-corrected kernel estimate of the product density under the second-order intensity-reweighted stationary hypothesis. The expectation and variance of the estimator are obtained, and closed-form expressions derived under the Poisson case. A detailed simulation study is presented to compare our closed-form expression for the variance with estimated ones for Poisson cases. The simulation experiments show that the theoretical form for the variance gives acceptable values, which can be used in practice. Finally, we apply the resulting estimator to data on the spatio-temporal distribution...

  13. Applicability of higher-order TVD method to low mach number compressible flows

    International Nuclear Information System (INIS)

    Akamatsu, Mikio

    1995-01-01

    Steep gradients of fluid density are an influential factor in spurious oscillations in numerical solutions of low Mach number (M<<1) compressible flows. The total variation diminishing (TVD) scheme is a promising remedy for overcoming this problem and obtaining accurate solutions. TVD schemes for high-speed flows are, however, not compatible with the methods commonly used for low Mach number flows with a pressure-based formulation. In the present study a higher-order TVD scheme is constructed on a modified form of each individual scalar equation of the primitive variables. It is thus clarified that the concept of TVD is applicable to low Mach number flows within the framework of the existing numerical method. Results for test problems of a moving interface between two-component gases with a density ratio ≥ 4 demonstrate the accuracy and robustness (wiggle-free profiles) of the scheme. (author)
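
    As a generic, self-contained illustration of the TVD idea invoked here (not the authors' pressure-based scheme), the Python sketch below transports a sharp density interface with a minmod flux-limited scheme for 1-D linear advection; the grid, Courant number, and initial profile are arbitrary.

```python
import numpy as np

def tvd_advect(rho, c, nsteps):
    """1D linear advection by a flux-limited (minmod) Lax-Wendroff scheme.

    c is the Courant number u*dt/dx (0 < c <= 1); periodic boundaries.
    The minmod limiter keeps the update total-variation diminishing, so the
    steep density interface is transported without spurious oscillations.
    """
    for _ in range(nsteps):
        d_up = rho - np.roll(rho, 1)          # upwind difference  (i) - (i-1)
        d_dn = np.roll(rho, -1) - rho         # local difference   (i+1) - (i)
        # minmod limiter applied to the antidiffusive part of the flux
        limited = np.where(d_up * d_dn > 0.0,
                           np.sign(d_dn) * np.minimum(np.abs(d_up), np.abs(d_dn)),
                           0.0)
        flux = rho + 0.5 * (1.0 - c) * limited   # numerical flux / u at face i+1/2
        rho = rho - c * (flux - np.roll(flux, 1))
    return rho

# Moving interface between two "gases" with density ratio 4 (arbitrary setup).
n = 200
x = np.linspace(0.0, 1.0, n, endpoint=False)
rho0 = np.where((x > 0.25) & (x < 0.5), 4.0, 1.0)
rho1 = tvd_advect(rho0.copy(), c=0.5, nsteps=200)
print("min/max after transport:", rho1.min(), rho1.max())   # stays within [1, 4]
```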

  14. Turbulent mixing of a slightly supercritical van der Waals fluid at low-Mach number

    International Nuclear Information System (INIS)

    Battista, F.; Casciola, C. M.; Picano, F.

    2014-01-01

    Supercritical fluids near the critical point are characterized by liquid-like densities and gas-like transport properties. These features are purposely exploited in different contexts ranging from natural-product extraction/fractionation to aerospace propulsion. A large part of the literature concerns this last context, focusing on the dynamics of supercritical fluids at high Mach number, where compressibility and thermodynamics interact strongly. Despite their widespread use also at low Mach number, the turbulent mixing properties of slightly supercritical fluids have still not been investigated in detail in this regime. This topic is addressed here by means of Direct Numerical Simulations of a coaxial jet of a slightly supercritical van der Waals fluid. Since acoustic effects are irrelevant in the low Mach number conditions found in many industrial applications, the numerical model is based on a suitable low-Mach number expansion of the governing equations. In accordance with experimental observations, the weakly supercritical regime is characterized by the formation of finger-like structures – the so-called ligaments – in the shear layers separating the two streams. The mechanism of ligament formation at vanishing Mach number is extracted from the simulations and a detailed statistical characterization is provided. Ligaments always form whenever a high density contrast occurs, independently of real- or perfect-gas behavior. The difference between real- and perfect-gas conditions is found in the ligament small-scale structure: more intense density gradients and thinner interfaces characterize the near-critical fluid in comparison with the smoother behavior of the perfect gas. A phenomenological interpretation is provided on the basis of the real-gas thermodynamic properties.

  15. Turbulent mixing of a slightly supercritical van der Waals fluid at low-Mach number

    Energy Technology Data Exchange (ETDEWEB)

    Battista, F.; Casciola, C. M. [Department of Mechanical and Aerospace Engineering, Sapienza University, via Eudossiana 18, 00184 Rome (Italy); Picano, F. [Department of Industrial Engineering, University of Padova, via Venezia 1, 35131 Padova (Italy)

    2014-05-15

    Supercritical fluids near the critical point are characterized by liquid-like densities and gas-like transport properties. These features are purposely exploited in different contexts ranging from natural-product extraction/fractionation to aerospace propulsion. A large part of the literature concerns this last context, focusing on the dynamics of supercritical fluids at high Mach number, where compressibility and thermodynamics interact strongly. Despite their widespread use also at low Mach number, the turbulent mixing properties of slightly supercritical fluids have still not been investigated in detail in this regime. This topic is addressed here by means of Direct Numerical Simulations of a coaxial jet of a slightly supercritical van der Waals fluid. Since acoustic effects are irrelevant in the low Mach number conditions found in many industrial applications, the numerical model is based on a suitable low-Mach number expansion of the governing equations. In accordance with experimental observations, the weakly supercritical regime is characterized by the formation of finger-like structures – the so-called ligaments – in the shear layers separating the two streams. The mechanism of ligament formation at vanishing Mach number is extracted from the simulations and a detailed statistical characterization is provided. Ligaments always form whenever a high density contrast occurs, independently of real- or perfect-gas behavior. The difference between real- and perfect-gas conditions is found in the ligament small-scale structure: more intense density gradients and thinner interfaces characterize the near-critical fluid in comparison with the smoother behavior of the perfect gas. A phenomenological interpretation is provided on the basis of the real-gas thermodynamic properties.

  16. Diffusive wave in the low Mach limit for non-viscous and heat-conductive gas

    Science.gov (United States)

    Liu, Yechi

    2018-06-01

    The low Mach number limit for the one-dimensional non-isentropic compressible Navier-Stokes system without viscosity is investigated, where the density and temperature have different asymptotic states at far fields. It is proved that the solution of the system converges to a nonlinear diffusion wave globally in time as the Mach number goes to zero. It is remarked that the velocity of the diffusion wave is proportional to the variation of temperature. Furthermore, it is shown that the solution of the compressible Navier-Stokes system also exhibits the same phenomenon when the Mach number is suitably small.

  17. Role of Turbulent Prandtl Number on Heat Flux at Hypersonic Mach Number

    Science.gov (United States)

    Xiao, X.; Edwards, J. R.; Hassan, H. A.

    2004-01-01

    Present simulations of turbulent flows involving shock wave/boundary layer interaction invariably overestimate heat flux by almost a factor of two. One possible reason for this performance is that the turbulence models employed make use of Morkovin's hypothesis. This hypothesis is valid for non-hypersonic Mach numbers and moderate rates of heat transfer. At hypersonic Mach numbers, high rates of heat transfer exist in regions where shock wave/boundary layer interactions are important. As a result, one should not expect traditional turbulence models to yield accurate results. The goal of this investigation is to explore the role of a variable Prandtl number formulation in predicting heat flux in flows dominated by strong shock wave/boundary layer interactions. The intended applications involve external flows in the absence of combustion, such as those encountered in supersonic inlets. This can be achieved by adding equations for the temperature variance and its dissipation rate. Such equations can be derived from the exact Navier-Stokes equations. Traditionally, modeled equations are based on the low-speed energy equation, where the pressure gradient term and the term responsible for energy dissipation are ignored. It is clear that such assumptions are not valid for hypersonic flows. The approach used here is based on the procedure used in deriving the k-zeta model, in which the exact equations that govern k, the variance of velocity, and zeta, the variance of vorticity, were derived and modeled. For the variable turbulent Prandtl number, the exact equations that govern the temperature variance and its dissipation rate are derived and modeled term by term. The resulting set of equations is free of damping and wall functions and is coordinate-system independent. Moreover, the modeled correlations are tensorially consistent and invariant under Galilean transformation. The final set of equations will be given in the paper.

  18. Role of Turbulent Prandtl Number on Heat Flux at Hypersonic Mach Numbers

    Science.gov (United States)

    Xiao, X.; Edwards, J. R.; Hassan, H. A.; Gaffney, R. L., Jr.

    2007-01-01

    A new turbulence model suited for calculating the turbulent Prandtl number as part of the solution is presented. The model is based on a set of two equations: one governing the variance of the enthalpy and the other governing its dissipation rate. These equations were derived from the exact energy equation and thus take into consideration compressibility and dissipation terms. The model is used to study two cases involving shock wave/boundary layer interaction at Mach 9.22 and Mach 5.0. In general, heat transfer prediction showed great improvement over traditional turbulence models where the turbulent Prandtl number is assumed constant. It is concluded that using a model that calculates the turbulent Prandtl number as part of the solution is the key to bridging the gap between theory and experiment for flows dominated by shock wave/boundary layer interactions.

  19. A COSMIC VARIANCE COOKBOOK

    International Nuclear Information System (INIS)

    Moster, Benjamin P.; Rix, Hans-Walter; Somerville, Rachel S.; Newman, Jeffrey A.

    2011-01-01

    Deep pencil beam surveys (<1 deg²) are of fundamental importance for studying the high-redshift universe. However, inferences about galaxy population properties (e.g., the abundance of objects) are in practice limited by 'cosmic variance'. This is the uncertainty in observational estimates of the number density of galaxies arising from the underlying large-scale density fluctuations. This source of uncertainty can be significant, especially for surveys which cover only small areas and for massive high-redshift galaxies. Cosmic variance for a given galaxy population can be determined using predictions from cold dark matter theory and the galaxy bias. In this paper, we provide tools for experiment design and interpretation. For a given survey geometry, we present the cosmic variance of dark matter as a function of mean redshift z̄ and redshift bin size Δz. Using a halo occupation model to predict galaxy clustering, we derive the galaxy bias as a function of mean redshift for galaxy samples of a given stellar mass range. In the linear regime, the cosmic variance of these galaxy samples is the product of the galaxy bias and the dark matter cosmic variance. We present a simple recipe using a fitting function to compute cosmic variance as a function of the angular dimensions of the field, z̄, Δz, and stellar mass m*. We also provide tabulated values and a software tool. The accuracy of the resulting cosmic variance estimates (δσ_v/σ_v) is shown to be better than 20%. We find that for GOODS at z̄ = 2 and with Δz = 0.5, the relative cosmic variance of galaxies with m* > 10¹¹ M_⊙ is ∼38%, while it is ∼27% for GEMS and ∼12% for COSMOS. For galaxies of m* ∼ 10¹⁰ M_⊙, the relative cosmic variance is ∼19% for GOODS, ∼13% for GEMS, and ∼6% for COSMOS. This implies that cosmic variance is a significant source of uncertainty at z̄ = 2 for small fields and massive galaxies, while for larger fields and intermediate mass galaxies, cosmic
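
    The linear-regime statement above — galaxy cosmic variance equals galaxy bias times dark matter cosmic variance — is a one-line product; the Python sketch below encodes it with made-up inputs (the actual recipe uses the paper's fitting function and tabulated values for the dark matter variance and the bias).

```python
def galaxy_cosmic_variance(sigma_dm, bias):
    """Relative cosmic variance of a galaxy sample in the linear regime:
    sigma_gal = b_gal * sigma_dm (both relative, i.e. delta N / N)."""
    return bias * sigma_dm

# Hypothetical inputs, chosen only to give a number of the same order as the
# GOODS example quoted above; they are not values from the paper.
sigma_dm_example = 0.12   # dark matter cosmic variance for the field geometry
bias_massive = 3.2        # bias of a massive, high-redshift galaxy sample

print(f"sigma_gal ~ {galaxy_cosmic_variance(sigma_dm_example, bias_massive):.0%}")
```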

  20. Mathematical and numerical aspects of low mach number flows

    Energy Technology Data Exchange (ETDEWEB)

    Schochet, St.; Bresch, D.; Grenier, E.; Alazard, T.; Gordner, A.; Sankaran, V.; Massot, M.; Sery, R.; Pebay, P.; Lunch, O.; Mazhorova, O.; Turkel, O.E.; Faille, I.; Danchin, R.; Allain, O.; Birken, P.; Lafitte, O.; Kloczko, T.; Frick, W.; Bui, T.; Dellacherie, S.; Klein, R.; Roe, Ph.; Accary, G.; Braack, M.; Picano, F.; Cadiou, A.; Dinescu, C.; Lesage, A.C.; Wesseling, P.; Heuveline, V.; Jobelin, M.; Weisman, C.; Merkle, C.

    2004-07-01

    diphasic system (S. DELLACHERIE); a preconditioning technique for biphasic flows with interfaces (C. DINESCU, B. LEONARD, C. HIRSCH); two models for the simulation of multiphase flows in oil and gas pipelines (I. FAILLE); physics and insects require compressible low Mach number flow (W. FRICK); multigrid for low mach number flows including acoustic modes (A. GORDNER); adaptive finite element method for low mach number flows (V. HEUVELINE); using multiple scales asymptotics in the construction of low Mach number numerics (R. KLEIN); a matrix-free implicit method for flows at all speeds (T. KLOCZKO, A. BECCANTINI, C. CORRE); linear growth rate for the quasi-isobaric ablation front model of Kull-Anisimov (O. LAFITTE); augmented projection methods for incompressible and dilatable flows (J. CLATCHE, M. JOBELIN, C. LAPUERTA, P. ANGOT, B. PIAR); a numerical accuracy study for level set formulations (A.C. LESAGE, O. ALLAIN, A. DERVIEUX) 3D computer simulation of convective instability in the multicomponent solution (O. MAZHOROVA, V. KOLMYCHKOV, Y. POPOV, P. BONTOUX, M. El GANAOUI); multicomponent reactive flows: symmetrization and the low Mach number limit (M. MASSOT, V. GIOVANGIGLI); computation of low mach number flows with a generalized Gibbs relation (C.L. MERKLE, V. SANKARAN, D. LI); a Mach-uniform pressure correction algorithm (K. NERINCK, J. VIERENDEELS, E. DICK); application of Turkel preconditioning method in external free convection and incompressible flows (T.H. NGUYEN-BUI, B. DUBROCA, P.H. MAIRE); a half-explicit, non-split projection method for low mach number flows (P.P. PEBAY, H. N. NAJIM, J. POUSIN); combustion in low Mach number isotropic turbulence (F. PICANO, P. GUALTIERI, B. FAVINI); calculation of low Mach number acoustics: a comparison of MPV, EIF and linearized Euler equations (S. ROLLER, T. SCHWARTKOFF, M. DUMBSER, C.D. MUNZ) comparison of pressure-based and density-based methods for low Mach number CFD computations (V. SANKARAN, C. MERKLE); the

  1. Mach's principle and rotating universes

    International Nuclear Information System (INIS)

    King, D.H.

    1990-01-01

    It is shown that the Bianchi 9 model universe satisfies the Mach principle. These closed rotating universes were previously thought to be counter-examples to the principle. The Mach principle is satisfied because the angular momentum of the rotating matter is compensated by the effective angular momentum of gravitational waves. A new formulation of the Mach principle is given that is based on the field theory interpretation of general relativity. Every closed universe with 3-sphere topology is shown to satisfy this formulation of the Mach principle. It is shown that the total angular momentum of the matter and gravitational waves in a closed 3-sphere topology universe is zero

  2. A Parametric Study of a Constant-Mach-Number MHD Generator with Nuclear Ionization

    International Nuclear Information System (INIS)

    Braun, J.

    1965-03-01

    The influence of electrical and gas dynamical parameters on the length of a linear constant-Mach-number MHD duct has been investigated. The gas has been assumed to be ionized by neutron irradiation in the expansion nozzle preceding the MHD duct. Inside the duct the electron recombination is assumed to be governed by volume recombination. It is found that there exists a distinct domain from which the parameters must be chosen, pressure and Mach number being the most critical ones. If power densities of the order of 100 MW/m³ are desired, high magnetic fields and Mach numbers in the supersonic range are needed. The influence of the variation of critical parameters on the channel length is given as a product of simple functions, each containing one parameter.

  3. A Parametric Study of a Constant-Mach-Number MHD Generator with Nuclear Ionization

    Energy Technology Data Exchange (ETDEWEB)

    Braun, J

    1965-03-15

    The influence of electrical and gas dynamical parameters on the length of a linear constant-Mach-number MHD duct has been investigated. The gas has been assumed to be ionized by neutron irradiation in the expansion nozzle preceding the MHD duct. Inside the duct the electron recombination is assumed to be governed by volume recombination. It is found that there exists a distinct domain from which the parameters must be chosen, pressure and Mach number being the most critical ones. If power densities of the order of 100 MW/m³ are desired, high magnetic fields and Mach numbers in the supersonic range are needed. The influence of the variation of critical parameters on the channel length is given as a product of simple functions, each containing one parameter.

  4. Mach cones in space and laboratory dusty magnetoplasmas

    International Nuclear Information System (INIS)

    Mamun, A.A.; Shukla, P.K

    2004-07-01

    We present a rigorous theoretical investigation of the possibility of the formation of Mach cones in both space and laboratory dusty magnetoplasmas. We find the parametric regimes for which different types of Mach cones, such as dust acoustic Mach cones, dust magneto-acoustic Mach cones, oscillonic Mach cones, etc., are formed in space and laboratory dusty magnetoplasmas. We also identify the basic features of such different classes of Mach cones (viz. dust-acoustic, dust magneto-acoustic, oscillonic Mach cones, etc.), and clearly explain how they are relevant to space and laboratory dusty magnetoplasmas. (author)

  5. Mach's holographic principle

    International Nuclear Information System (INIS)

    Khoury, Justin; Parikh, Maulik

    2009-01-01

    Mach's principle is the proposition that inertial frames are determined by matter. We put forth and implement a precise correspondence between matter and geometry that realizes Mach's principle. Einstein's equations are not modified and no selection principle is applied to their solutions; Mach's principle is realized wholly within Einstein's general theory of relativity. The key insight is the observation that, in addition to bulk matter, one can also add boundary matter. Given a space-time, and thus the inertial frames, we can read off both boundary and bulk stress tensors, thereby relating matter and geometry. We consider some global conditions that are necessary for the space-time to be reconstructible, in principle, from bulk and boundary matter. Our framework is similar to that of the black hole membrane paradigm and, in asymptotically anti-de Sitter space-times, is consistent with holographic duality.

  6. A zero-variance-based scheme for variance reduction in Monte Carlo criticality

    Energy Technology Data Exchange (ETDEWEB)

    Christoforou, S.; Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands)

    2006-07-01

    A zero-variance scheme is derived and proven theoretically for criticality cases, and a simplified transport model is used for numerical demonstration. It is shown in practice that by appropriate biasing of the transition and collision kernels, a significant reduction in variance can be achieved. This is done using the adjoint forms of the emission and collision densities, obtained from a deterministic calculation, according to the zero-variance scheme. By using an appropriate algorithm, the figure of merit of the simulation increases by up to a factor of 50, with the possibility of an even larger improvement. In addition, it is shown that the biasing speeds up the convergence of the initial source distribution. (authors)

  7. A zero-variance-based scheme for variance reduction in Monte Carlo criticality

    International Nuclear Information System (INIS)

    Christoforou, S.; Hoogenboom, J. E.

    2006-01-01

    A zero-variance scheme is derived and proven theoretically for criticality cases, and a simplified transport model is used for numerical demonstration. It is shown in practice that by appropriate biasing of the transition and collision kernels, a significant reduction in variance can be achieved. This is done using the adjoint forms of the emission and collision densities, obtained from a deterministic calculation, according to the zero-variance scheme. By using an appropriate algorithm, the figure of merit of the simulation increases by up to a factor of 50, with the possibility of an even larger improvement. In addition, it is shown that the biasing speeds up the convergence of the initial source distribution. (authors)

  8. Does the chromatic Mach bands effect exist?

    Science.gov (United States)

    Tsofe, Avital; Spitzer, Hedva; Einav, Shmuel

    2009-06-30

    The achromatic Mach bands effect is a well-known visual illusion, discovered over a hundred years ago. This effect has been investigated thoroughly, mainly for its brightness aspect. The existence of Chromatic Mach bands, however, has been disputed. In recent years it has been reported that Chromatic Mach bands are not perceived under controlled iso-luminance conditions. However, here we show that a variety of Chromatic Mach bands, consisting of chromatic and achromatic regions, separated by a saturation ramp, can be clearly perceived under iso-luminance and iso-brightness conditions. In this study, observers' eye movements were recorded under iso-brightness conditions. Several observers were tested for their ability to perceive the Chromatic Mach bands effect and its magnitude, across different cardinal and non-cardinal Chromatic Mach bands stimuli. A computational model of color adaptation, which predicted color induction and color constancy, successfully predicts this variation of Chromatic Mach bands. This has been tested by measuring the distance of the data points from the "achromatic point" and by calculating the shift of the data points from predicted complementary lines. The results suggest that the Chromatic Mach bands effect is a specific chromatic induction effect.

  9. Low Mach-number collisionless electrostatic shocks and associated ion acceleration

    Science.gov (United States)

    Pusztai, I.; TenBarge, J. M.; Csapó, A. N.; Juno, J.; Hakim, A.; Yi, L.; Fülöp, T.

    2018-03-01

    The existence and properties of low Mach-number (M≳ 1) electrostatic collisionless shocks are investigated with a semi-analytical solution for the shock structure. We show that the properties of the shock obtained in the semi-analytical model can be well reproduced in fully kinetic Eulerian Vlasov-Poisson simulations, where the shock is generated by the decay of an initial density discontinuity. Using this semi-analytical model, we study the effect of the electron-to-ion temperature ratio and the presence of impurities on both the maximum shock potential and the Mach number. We find that even a small amount of impurities can influence the shock properties significantly, including the reflected light ion fraction, which can change several orders of magnitude. Electrostatic shocks in heavy ion plasmas reflect most of the hydrogen impurity ions.

  10. Measurements of flows in the DIII-D divertor by Mach probes

    International Nuclear Information System (INIS)

    Boedo, J.A.; Lehmer, R.; Moyer, R.A.; Watkins, J.G.; Porter, G.D.; Evans, T.E.; Leonard, A.W.; Schaffer, M.J.

    1998-06-01

    First measurements of Mach number of background plasma in the DIII-D divertor are presented in conjunction with temperature T_e and density n_e using a fast scanning probe array. To validate the probe measurements, the authors compared the T_e, n_e and J_sat data to Thomson scattering data and find good overall agreement in attached discharges and some discrepancy for T_e and n_e in detached discharges. The discrepancy is mostly due to the effect of large fluctuations present during detached plasmas on the probe characteristic; the particle flux is accurately measured in every case. A composite 2-D map of measured flows is presented for an ELMing H-mode discharge and they focus on some of the details. They have also documented the temperature, density and Mach number in the private flux region of the divertor and the vicinity of the X-point, which are important transition regions that have been little studied or modeled. Background parallel plasma flows and electric fields in the divertor region show a complex structure.

  11. Measurements of low density, high velocity flow by electron beam fluorescence technique

    International Nuclear Information System (INIS)

    Soga, Takeo; Takanishi, Masaya; Yasuhara, Michiru

    1981-01-01

    A low density chamber with an electron gun system was made for the measurements of low density, high velocity (high Mach number) flow. This apparatus is a continuous running facility. The number density and the rotational temperature in the underexpanding free jet of nitrogen were measured along the axis of the jet by the electron beam fluorescence technique. The measurements were carried out from the vicinity of the exit of the jet to far downstream of the first Mach disk. Rotational nonequilibrium phenomena were observed in the hypersonic flow field as well as in the shock wave (Mach disk). (author)

  12. Means and Variances without Calculus

    Science.gov (United States)

    Kinney, John J.

    2005-01-01

    This article gives a method of finding discrete approximations to continuous probability density functions and shows examples of its use, allowing students without calculus access to the calculation of means and variances.
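
    A minimal sketch of the general idea (though not necessarily the article's specific construction): replace the continuous density with weights on a fine grid, then compute the mean and variance as ordinary weighted sums.

```python
import numpy as np

def discrete_mean_var(pdf, lo, hi, n=10_000):
    """Mean and variance of a continuous density via a discrete approximation.

    The density is sampled on a fine grid and the (renormalized) values are
    treated as a discrete distribution, so no integration is required.
    """
    x = np.linspace(lo, hi, n)
    w = pdf(x)
    w = w / w.sum()                 # renormalize so the weights sum to 1
    mean = np.sum(w * x)
    var = np.sum(w * (x - mean) ** 2)
    return mean, var

# Example: a triangular density on [0, 2] peaking at x = 1
# (exact mean 1, exact variance 1/6).
tri = lambda x: np.where(x < 1.0, x, 2.0 - x)
print(discrete_mean_var(tri, 0.0, 2.0))   # ~ (1.0, 0.1667)
```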

  13. Axisymmetric vortex method for low-Mach number, diffusion-controlled combustion

    CERN Document Server

    Lakkis, I

    2003-01-01

    A grid-free, Lagrangian method for the accurate simulation of low-Mach number, variable-density, diffusion-controlled reacting flow is presented. A fast-chemistry model in which the conversion rate of reactants to products is limited by the local mixing rate is assumed in order to reduce the combustion problem to the solution of a convection-diffusion-generation equation with volumetric expansion and vorticity generation at the reaction fronts. The solutions of the continuity and vorticity equations, and the equations governing the transport of species and energy, are obtained using a formulation in which particles transport conserved quantities by convection and diffusion. The dynamic impact of exothermic combustion is captured through accurate integration of source terms in the vorticity transport equations at the location of the particles, and the extra velocity field associated with volumetric expansion at low Mach number is computed to enforce mass conservation. The formulation is obtained for an axisymmet...

  14. Ernst Mach a deeper look : documents and new perspectives

    CERN Document Server

    1992-01-01

    Ernst Mach -- A Deeper Look has been written to reveal to English-speaking readers the recent revival of interest in Ernst Mach in Europe and Japan. The book is a storehouse of new information on Mach as a philosopher, historian, scientist and person, containing a number of biographical and philosophical manuscripts published for the first time, along with correspondence and other matters published for the first time in English. The book also provides English translations of Mach's controversies with leading physicists and psychologists, such as Max Planck and Carl Stumpf, and offers basic evidence for resolving Mach's position on atomism and Einstein's theory of relativity. Mach's scientific, philosophical and personal influence in a number of countries -- Austria, Germany, Bohemia and Yugoslavia among them -- has been carefully explored and many aspects detailed for the first time. All of the articles are eminently readable, especially those written by Mach's sister. They are deeply researched, new interpre...

  15. Gyro precession and Mach's principle

    International Nuclear Information System (INIS)

    Eby, P.

    1979-01-01

    The precession of a gyroscope is calculated in a nonrelativistic theory due to Barbour which satisfies Mach's principle. It is shown that the theory predicts both the geodetic and motional precession of general relativity to within factors of order 1. The significance of the gyro experiment is discussed from the point of view of metric theories of gravity and this is contrasted with its significance from the point of view of Mach's principle. (author)

  16. Mach's principle and space-time structure

    International Nuclear Information System (INIS)

    Raine, D.J.

    1981-01-01

    Mach's principle, that inertial forces should be generated by the motion of a body relative to the bulk of matter in the universe, is shown to be related to the structure imposed on space-time by dynamical theories. General relativity theory and Mach's principle are both shown to be well supported by observations. Since Mach's principle is not contained in general relativity this leads to a discussion of attempts to derive Machian theories. The most promising of these appears to be a selection rule for solutions of the general relativistic field equations, in which the space-time metric structure is generated by the matter content of the universe only in a well-defined way. (author)

  17. The intellectual quadrangle: Mach-Boltzmann-Planck-Einstein

    International Nuclear Information System (INIS)

    Broda, E.

    1981-01-01

    These four men were influential in the transition from classical to modern physics. They interacted as scientists, often antagonistically. Thus Boltzmann was the greatest champion of the atom, while Mach remained unconvinced all his life. As a physicist, Einstein was greatly influenced by both Mach and Boltzmann, although Mach in the end rejected relativity as well. Because of his work on statistical mechanics, fluctuations, and quantum theory, Einstein has been called the natural successor to Boltzmann. Planck also was influenced by Mach at first. Hence he and Boltzmann were adversaries until Planck converted to atomistics in 1900 and used the statistical interpretation of entropy to establish his radiation law. Planck accepted relativity early, but in quantum theory he was for a long time partly opposed to Einstein, and vice versa - Einstein considered Planck's derivation of his radiation law as unsound, while Planck could not accept the light quantum. In the case of all four physicists, science was interwoven with philosophy. Boltzmann consistently fought Mach's positivism, while Planck and Einstein moved from positivism to realism. All were also, though in very different ways, actively interested in public affairs. (orig.)

  18. A fast spatial scanning combination emissive and Mach probe for edge plasma diagnosis

    International Nuclear Information System (INIS)

    Lehmer, R.D.; LaBombard, B.; Conn, R.W.

    1989-04-01

    A fast spatially scanning emissive and Mach probe has been developed for the measurement of plasma profiles in the PISCES facility at UCLA. A pneumatic cylinder is used to drive a multiple-tip probe along a 15 cm stroke in less than 400 ms, giving single-shot profiles while limiting power deposition to the probe. A differentially pumped sliding O-ring seal allows the probe to be moved between shots to infer two- and three-dimensional profiles. The probe system has been used to investigate the plasma potential, density, and parallel Mach number profiles of the presheath induced by a wall surface and scrape-off-layer profile modifications in biased limiter simulation experiments. Details of the hardware, data acquisition electronics, and tests of probe reliability are discussed. 30 refs., 24 figs

  19. Background-oriented schlieren imaging of flow around a circular cylinder at low Mach numbers

    Science.gov (United States)

    Stadler, Hannes; Bauknecht, André; Siegrist, Silvan; Flesch, Robert; Wolf, C. Christian; van Hinsberg, Nils; Jacobs, Markus

    2017-09-01

    The background-oriented schlieren (BOS) imaging method has, for the first time, been applied to the investigation of the flow around a circular cylinder at low Mach numbers. The density distribution around the cylinder was obtained from successive imaging at incremental angular positions and has been found to agree well with the pressure measurements and with potential theory where appropriate.

  20. Laser-driven Mach waves for gigabar-range shock experiments

    Science.gov (United States)

    Swift, Damian; Lazicki, Amy; Coppari, Federica; Saunders, Alison; Nilsen, Joseph

    2017-10-01

    Mach reflection offers possibilities for generating planar, supported shocks at higher pressures than are practical even with laser ablation. We have studied the formation of Mach waves by algebraic solution and hydrocode simulation for drive pressures much higher than reported previously, and for realistic equations of state. We predict that Mach reflection continues to occur as the drive pressure increases, and the pressure enhancement increases monotonically with drive pressure even though the ``enhancement spike'' characteristic of low-pressure Mach waves disappears. The growth angle also increases monotonically with pressure, so a higher drive pressure seems always to be an advantage. However, there are conditions where the Mach wave is perturbed by reflections. We have performed trial experiments at the Omega facility, using a laser-heated halfraum to induce a Mach wave in a polystyrene cone. Pulse length and energy limitations meant that the drive was not maintained long enough to fully support the shock, but the results indicated a Mach wave of 25-30 TPa from a drive pressure of 5-6 TPa, consistent with simulations. A similar configuration should be tested at the NIF, and a Z-pinch driven configuration may be possible. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  1. Mach's predictions and relativistic cosmology

    International Nuclear Information System (INIS)

    Heller, M.

    1989-01-01

    Deep methodological insight by Ernst Mach into the structure of Newtonian mechanics allowed him to ask questions whose importance can be appreciated only today. Three such Mach 'predictions' are briefly presented, namely: the possibility of the existence of an all-pervading medium which could serve as a universal frame of reference and which has actually been discovered in the form of the microwave background radiation; a certain 'smoothness' of the Universe, which is now recognized as the Robertson-Walker symmetries; and the possibility of the experimental verification of the mass anisotropy. 11 refs. (author)

  2. Variance of a potential of mean force obtained using the weighted histogram analysis method.

    Science.gov (United States)

    Cukier, Robert I

    2013-11-27

    A potential of mean force (PMF) that provides the free energy of a thermally driven system along some chosen reaction coordinate (RC) is a useful descriptor of systems characterized by complex, high dimensional potential energy surfaces. Umbrella sampling window simulations use potential energy restraints to provide more uniform sampling along a RC so that potential energy barriers that would otherwise make equilibrium sampling computationally difficult can be overcome. Combining the results from the different biased window trajectories can be accomplished using the Weighted Histogram Analysis Method (WHAM). Here, we provide an analysis of the variance of a PMF along the reaction coordinate. We assume that the potential restraints used for each window lead to Gaussian distributions for the window reaction coordinate densities and that the data sampling in each window is from an equilibrium ensemble sampled so that successive points are statistically independent. Also, we assume that neighbor window densities overlap, as required in WHAM, and that further-than-neighbor window density overlap is negligible. Then, an analytic expression for the variance of the PMF along the reaction coordinate at a desired level of spatial resolution can be generated. The variance separates into a sum over all windows with two kinds of contributions: One from the variance of the biased window density normalized by the total biased window density and the other from the variance of the local (for each window's coordinate range) PMF. Based on the desired spatial resolution of the PMF, the former variance can be minimized relative to that from the latter. The method is applied to a model system that has features of a complex energy landscape evocative of a protein with two conformational states separated by a free energy barrier along a collective reaction coordinate. The variance can be constructed from data that is already available from the WHAM PMF construction.

  3. Capturing Option Anomalies with a Variance-Dependent Pricing Kernel

    DEFF Research Database (Denmark)

    Christoffersen, Peter; Heston, Steven; Jacobs, Kris

    2013-01-01

    We develop a GARCH option model with a new pricing kernel allowing for a variance premium. While the pricing kernel is monotonic in the stock return and in variance, its projection onto the stock return is nonmonotonic. A negative variance premium makes it U shaped. We present new semiparametric evidence to confirm this U-shaped relationship between the risk-neutral and physical probability densities. The new pricing kernel substantially improves our ability to reconcile the time-series properties of stock returns with the cross-section of option prices. It provides a unified explanation for the implied volatility puzzle, the overreaction of long-term options to changes in short-term variance, and the fat tails of the risk-neutral return distribution relative to the physical distribution.

  4. Application of a transitional boundary-layer theory in the low hypersonic Mach number regime

    Science.gov (United States)

    Shamroth, S. J.; Mcdonald, H.

    1975-01-01

    An investigation is made to assess the capability of a finite-difference boundary-layer procedure to predict the mean profile development across a transition from laminar to turbulent flow in the low hypersonic Mach-number regime. The boundary-layer procedure uses an integral form of the turbulence kinetic-energy equation to govern the development of the Reynolds apparent shear stress. The present investigation shows the ability of this procedure to predict Stanton number, velocity profiles, and density profiles through the transition region and, in addition, to predict the effect of wall cooling and Mach number on transition Reynolds number. The contribution of the pressure-dilatation term to the energy balance is examined and it is suggested that transition can be initiated by the direct absorption of acoustic energy even if only a small amount (1 per cent) of the incident acoustic energy is absorbed.

  5. The cosmological constant and Pioneer anomaly from Weyl spacetimes and Mach's principle

    International Nuclear Information System (INIS)

    Castro, Carlos

    2009-01-01

    It is shown how Weyl's geometry and Mach's holographic principle furnish both the magnitude and sign (towards the sun) of the Pioneer anomalous acceleration a_P ∼ -c²/R_Hubble first observed by Anderson et al. Weyl's geometry can account for both the origin and the value of the observed vacuum energy density (dark energy). The source of dark energy is just the dilaton-like Jordan-Brans-Dicke scalar field that is required to implement Weyl invariance of the simplest of all possible actions. A non-vanishing value of the vacuum energy density of the order of 10⁻¹²³ M_Planck⁴ is found, consistent with observations. Weyl's geometry also accounts for the phantom scalar field in modern cosmology in a very natural fashion.

  6. Modeling the subfilter scalar variance for large eddy simulation in forced isotropic turbulence

    Science.gov (United States)

    Cheminet, Adam; Blanquart, Guillaume

    2011-11-01

    Static and dynamic models for the subfilter scalar variance in homogeneous isotropic turbulence are investigated using direct numerical simulations (DNS) of a linearly forced passive scalar field. First, we introduce a new scalar forcing technique, conditioned only on the scalar field, which allows the fluctuating scalar field to reach a statistically stationary state. Statistical properties, including second and third statistical moments, spectra, and probability density functions of the scalar field, have been analyzed. Using this technique, we performed constant-density and variable-density DNS of scalar mixing in isotropic turbulence. The results are used in an a priori study of scalar variance models. Emphasis is placed on further studying the dynamic model introduced by G. Balarac, H. Pitsch and V. Raman [Phys. Fluids 20, (2008)]. Scalar variance models based on Bedford and Yeo's expansion are accurate for small filter widths, but errors arise in the inertial subrange. Results suggest that a constant coefficient computed from an assumed Kolmogorov spectrum is often sufficient to predict the subfilter scalar variance.
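
    For context, static closures of the kind examined above typically tie the subfilter scalar variance to the resolved scalar gradient and the filter width; a commonly used form is sketched below, with the coefficient either held constant (static model) or evaluated on the fly (dynamic model, as in the cited work). The exact expression used in the study may differ.

```latex
% Subfilter scalar variance closure: filter width Delta, resolved scalar \tilde{Z},
% model coefficient C_Z (fixed in the static model, computed dynamically otherwise).
\sigma^{2}_{\mathrm{sfs}} \;\approx\; C_{Z}\,\Delta^{2}\,\left|\nabla \tilde{Z}\right|^{2}
```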

  7. Gravitational Lagrangians, Mach's Principle, and the Equivalence Principle in an Expanding Universe

    Science.gov (United States)

    Essén, Hanno

    2014-08-01

    Gravitational Lagrangians as derived by Fock for the Einstein-Infeld-Hoffmann approach, and by Kennedy assuming only a fourth rank tensor interaction, contain long range interactions. Here we investigate how these affect the local dynamics when integrated over an expanding universe out to the Hubble radius. Taking the cosmic expansion velocity into account in a heuristic manner it is found that these long range interactions imply Mach's principle, provided the universe has the critical density, and that mass is renormalized. Suitable higher order additions to the Lagrangians make the formalism consistent with the equivalence principle.

  8. Non-destructive X-ray Computed Tomography (XCT) Analysis of Sediment Variance in Marine Cores

    Science.gov (United States)

    Oti, E.; Polyak, L. V.; Dipre, G.; Sawyer, D.; Cook, A.

    2015-12-01

    Benthic activity within marine sediments can alter the physical properties of the sediment as well as indicate nutrient flux and ocean temperatures. We examine burrowing features in sediment cores from the western Arctic Ocean collected during the 2005 Healy-Oden TransArctic Expedition (HOTRAX) and from the Gulf of Mexico Integrated Ocean Drilling Program (IODP) Expedition 308. While traditional methods for studying bioturbation require physical dissection of the cores, we assess burrowing using an X-ray computed tomography (XCT) scanner. XCT noninvasively images the sediment cores in three dimensions and produces density sensitive images suitable for quantitative analysis. XCT units are recorded as Hounsfield Units (HU), where -999 is air, 0 is water, and 4000-5000 would be a higher density mineral, such as pyrite. We rely on the fundamental assumption that sediments are deposited horizontally, and we analyze the variance over each flat-lying slice. The variance describes the spread of pixel values over a slice. When sediments are reworked, drawing higher and lower density matrix into a layer, the variance increases. Examples of this can be seen in two slices in core 19H-3A from Site U1324 of IODP Expedition 308. The first slice, located 165.6 meters below sea floor consists of relatively undisturbed sediment. Because of this, the majority of the sediment values fall between 1406 and 1497 HU, thus giving the slice a comparatively small variance of 819.7. The second slice, located 166.1 meters below sea floor, features a lower density sediment matrix disturbed by burrow tubes and the inclusion of a high density mineral. As a result, the Hounsfield Units have a larger variance of 1,197.5, which is a result of sediment matrix values that range from 1220 to 1260 HU, the high-density mineral value of 1920 HU and the burrow tubes that range from 1300 to 1410 HU. Analyzing this variance allows us to observe changes in the sediment matrix and more specifically capture
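
    The per-slice variance measure described above is straightforward to compute once the scan is available as an array of Hounsfield Units. The sketch below is a hedged illustration assuming a NumPy volume with the downcore direction along axis 0; the synthetic values and the flagging threshold are illustrative, not data from the cited cores.

```python
# Sketch of the per-slice variance measure for a CT core volume in Hounsfield Units,
# assuming a (depth, y, x) NumPy array (variable names and threshold are illustrative).
import numpy as np

def slice_variances(volume_hu):
    """Variance of pixel values in each flat-lying slice of a (depth, y, x) volume."""
    return np.array([np.var(volume_hu[k]) for k in range(volume_hu.shape[0])])

def flag_reworked_slices(volume_hu, factor=1.3):
    """Flag slices whose variance exceeds `factor` times the median slice variance."""
    v = slice_variances(volume_hu)
    return v, v > factor * np.median(v)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    core = rng.normal(1450, 25, size=(200, 64, 64))  # undisturbed matrix, ~1406-1497 HU
    core[100:105, 20:30, 20:30] = 1920               # high-density mineral inclusion
    variances, flagged = flag_reworked_slices(core)
    print("slices flagged as reworked:", np.flatnonzero(flagged))
```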

  9. Working with Instruments: Ernst Mach as Material Epistemologist, a Short Introduction.

    Science.gov (United States)

    Hoffmann, Christoph; Métraux, Alexandre

    2016-12-01

    With the death of Ernst Mach on February 19, 1916, one day after his seventy-eighth birthday, a question finally became explicit that had been looming for some time. It was as simple as it was fundamental: who, in the end, was this man, a scientist or a philosopher? The importance of this question for contemporaries can easily be gleaned from the obituaries that appeared in the weeks following Mach's death: one in the Physikalische Zeitschrift, written by Albert Einstein, and another in the Archiv für die Geschichte der Philosophie, written by Mach's former student Heinrich Gomperz. They both addressed this critical issue in plain words. Einstein stressed that Mach "was not a philosopher who chose the natural sciences as the object of his speculation, but a many-sided, interested, diligent scientist who also took visible pleasure in detailed questions outside the burning issues of general interest" (Einstein 1916, 104; translation cited in Blackmore 1992, 158). Gomperz in turn first emphasized the great loss science had experienced with Mach's death, asking subsequently whether "the suffering science is physics or philosophy?" (Gomperz 1916, 321). His answer broadly followed Einstein's conclusion; relying on Mach's own words, he reminded his readers that Mach never claimed to be a philosopher, but merely was looking for a viewpoint that transcended the disciplinary constraints of particular scientific activities.

  10. [Investigation of Empiricism. On Ernst Mach's Conception of the Thought Experiment].

    Science.gov (United States)

    Krauthausen, Karin

    2015-03-01

    Investigation of Empiricism. On Ernst Mach's Conception of the Thought Experiment. The paper argues that Ernst Mach's conception of the thought experiment from 1897/1905 holds a singular position in the lively discussions and repeated theorizations that have continued up to the present in relation to this procedure. Mach derives the thought experiment from scientific practice, and does not oppose it to the physical experiment, but, on the contrary, endows it with a robust relation to the facts. For Mach, the thought experiment is a reliable means of determining empiricism, and at the same time a real, because open and unbiased, experimenting. To shed light on this approach, the paper carries out a close reading of the relevant texts in Mach's body of writings (in their different stages of revision) and proceeds in three steps: first, Mach's processual understanding of science will be presented, which also characterizes his research and publication practice (I. 'Aperçu' and 'Sketch'. Science as Process and Projection); then in a second step the physiological and biological justification and valorization of memory and association will be examined with which Mach limits the relevance of categories such as consciousness and will (II. The Biology of Consciousness. Or The Polyp Colony); against this background, thirdly, the specific empiricism can be revealed that Mach inscribes into the thought experiment by on the one hand founding it in the memory and association, and on the other by tracing it back to geometry, which he deploys as an experimenting oriented to experience (III. Thinking and Experience. The Thought Experiment). © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Robert Musil versus Ernst Mach

    Directory of Open Access Journals (Sweden)

    Jalón, Mauricio

    2010-06-01

    Full Text Available On Mach's Theories (TD) by R. Musil disputes the claim that scientific representation tends to build a clear and complete inventory of facts. Mach finds himself obliged to presuppose constant relationships in nature; but this regularity of phenomena implies that the law is something more than a «table», that its mere dependencies are pushed into the background, and that a theoretical relationship in Physics is much more than an order relationship. His conception of scientific economy as a «natural adaptation» implies a biological monism opposed to the characteristic dualities of an empiricist.

    On Mach's Theories (TD) by R. Musil refutes the claim that scientific representation tends to construct a clear and complete inventory of facts. For Mach finds himself obliged to presuppose constant relationships in nature; but this regularity of phenomena implies that the law is something more than a certain «table», that the mere dependencies it defends remain in the background, and that a theoretical relationship in physics is much more than an order relationship. His conception of scientific economy as «natural adaptation» amounts to a biological monism opposed to the dualities proper to an empiricist.

  12. Variation with Mach Number of Static and Total Pressures Through Various Screens

    Science.gov (United States)

    Adler, Alfred A

    1946-01-01

    Tests were conducted in the Langley 24-inch highspeed tunnel to ascertain the static-pressure and total-pressure losses through screens ranging in mesh from 3 to 12 wires per inch and in wire diameter from 0.023 to 0.041 inch. Data were obtained from a Mach number of approximately 0.20 up to the maximum (choking) Mach number obtainable for each screen. The results of this investigation indicate that the pressure losses increase with increasing Mach number until the choking Mach number, which can be computed, is reached. Since choking imposes a restriction on the mass rate of flow and maximum losses are incurred at this condition, great care must be taken in selecting the screen mesh and wire diameter for an installation so that the choking Mach number is
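
    The abstract notes that the choking Mach number can be computed. One idealized estimate, shown below as a hedged sketch, treats the screen as a one-dimensional contraction with open-area ratio sigma, so that choking occurs when the isentropic area ratio A/A* through the open area reaches 1/sigma. Viscous losses are neglected; this is an assumption for illustration, not the correlation used in the original tests.

```python
# Idealized estimate of a screen's choking Mach number, assuming the screen acts as a
# 1-D contraction with open-area ratio sigma (the report's actual correlation may differ).
def area_ratio(mach, gamma=1.4):
    """Isentropic A/A* as a function of Mach number."""
    term = (2.0 / (gamma + 1.0)) * (1.0 + 0.5 * (gamma - 1.0) * mach ** 2)
    return term ** ((gamma + 1.0) / (2.0 * (gamma - 1.0))) / mach

def choking_mach(open_area_ratio, gamma=1.4):
    """Subsonic Mach number upstream of the screen at which the open area chokes."""
    target = 1.0 / open_area_ratio      # required A/A*
    lo, hi = 1e-6, 1.0                  # subsonic branch: A/A* decreases toward M = 1
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if area_ratio(mid, gamma) > target:
            lo = mid                    # Mach too low, area ratio still too large
        else:
            hi = mid
    return 0.5 * (lo + hi)

if __name__ == "__main__":
    for sigma in (0.5, 0.6, 0.7, 0.8):
        print(f"open-area ratio {sigma:.1f} -> approximate choking Mach {choking_mach(sigma):.3f}")
```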

  13. Germanium on silicon mid-infrared waveguides and Mach-Zehnder interferometers

    NARCIS (Netherlands)

    Malik, A.; Muneeb, M.; Shimura, Y.; Campenhout, van J.; Loo, van de R.; Roelkens, G.C.

    2013-01-01

    In this paper we describe Ge-on-Si waveguides and Mach-Zehnder interferometers operating in the 5.2 - 5.4 µm wavelength range. 3dB/cm waveguide losses and Mach-Zehnder interferometers with 20dB extinction ratio are presented.

  14. Derivation of the low Mach number diphasic system. Numerical simulation in mono-dimensional geometry; Derivation du systeme diphasique bas Mach. Simulation numerique en geometrie monodimensionnelle

    Energy Technology Data Exchange (ETDEWEB)

    Dellacherie, St

    2004-07-01

    This work deals with the derivation of a diphasic low Mach number model obtained through a Mach number asymptotic expansion applied to the compressible diphasic Navier-Stokes system, an expansion which filters out the acoustic waves. This approach is inspired by the work of Andrew Majda giving the equations of low Mach number combustion for thin flames and for perfect gases. When the equations of state verify some thermodynamic hypotheses, we show that the low Mach number diphasic system accurately predicts the dilatation or compression of a bubble and has equilibrium convergence properties. Then, we propose an entropic and convergent Lagrangian scheme in mono-dimensional geometry when the fluids are perfect gases, and we propose a first approach in Eulerian variables where the interface between the two fluids is captured with a level set technique. (author)

  15. Ernst Mach: pedagog a technik

    Czech Academy of Sciences Publication Activity Database

    Těšínská, Emilie; Landa, Ivan; Drahoš, Jiří

    2016-01-01

    Roč. 66, č. 3 (2016), s. 167-174 ISSN 0009-0700 Institutional support: RVO:67985955 ; RVO:68378114 ; RVO:67985858 Keywords : Ernst Mach * pedagogy * experiments * general education * ballistics * Doppler principle Subject RIV: AB - History; CF - Physical ; Theoretical Chemistry (UCHP-M)

  16. Effects of Mach number on pitot-probe displacement in a turbulent boundary layer

    Science.gov (United States)

    Allen, J. M.

    1974-01-01

    Experimental pitot-probe-displacement data have been obtained in a turbulent boundary layer at a local free-stream Mach number of 4.63 and a unit Reynolds number of 6.46 million per meter. The results of this study were compared with lower Mach number results of previous studies. It was found that small probes showed displacement only, whereas the larger probes showed not only displacement but also distortion of the shape of the boundary-layer profile. The distortion pattern occurred lower in the boundary layer at the higher Mach number than at the lower Mach number. The maximum distortion occurred when the center of the probe was about one probe diameter off the test surface. For probes in the wall contact position, the indicated Mach numbers were, for all probes tested, close to the true profile values. Pitot-probe displacement was found to increase significantly with increasing Mach number.

  17. [Thought Experiments of Economic Surplus: Science and Economy in Ernst Mach's Epistemology].

    Science.gov (United States)

    Wulz, Monika

    2015-03-01

    Thought Experiments of Economic Surplus: Science and Economy in Ernst Mach's Epistemology. Thought experiments are an important element in Ernst Mach's epistemology: They facilitate amplifying our knowledge by experimenting with thoughts; they thus exceed the empirical experience and suspend the quest for immediate utility. In an economical perspective, Mach suggested that thought experiments depended on the production of an economic surplus based on the division of labor relieving the struggle for survival of the individual. Thus, as frequently emphasized, in Mach's epistemology, not only the 'economy of thought' is an important feature; instead, also the socioeconomic conditions of science play a decisive role. The paper discusses the mental and social economic aspects of experimental thinking in Mach's epistemology and examines those within the contemporary evolutionary, physiological, and economic contexts. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Mach's principle and the rest mass of the graviton

    International Nuclear Information System (INIS)

    Woodward, J.F.; Crowley, R.J.; Yourgrau, W.

    1975-01-01

    The question of the graviton rest mass is briefly discussed and then it is shown that the Sciama-Dicke formulation of Mach's principle admits, in the linear approximation, the calculation of the graviton rest mass. One finds that the value of the graviton rest mass depends on the cosmological model adopted, the mean matter density in the universe, the speed of light, and the constant of gravitation. The value obtained for an infinite, stationary universe is 7.6 × 10^-67 g. The value for evolutionary cosmological models is found to depend critically on the mass and "radius" of the universe, both null and non-null values occurring only for certain values of these parameters. Problems that arise as a consequence of the linear approximation are pointed out

  19. High Mach flow associated with plasma detachment in JT-60U

    International Nuclear Information System (INIS)

    Hatayama, A.; Hoshino, K.; Miyamoto, K.

    2003-01-01

    Recent new results of the high Mach flows associated with plasma detachment are presented on the basis of numerical simulations by a 2-D edge simulation code (the B2-Eirene code) and their comparisons with experiments in JT-60U W-shaped divertor plasma. High Mach flows appear near the ionization front away from the target plate. The plasma static pressure rapidly drops, while the total pressure is kept almost constant near the ionization front, because the ionization front near the X-point is clearly separated from the momentum loss region near the target plate. Redistribution from static to dynamic pressure without a large momentum loss is confirmed to be a possible mechanism of the high Mach flows. It has also been shown that the radial structure of the high Mach flow near the X-point away from the target plate has a strong correlation with the DOD (Degree of Detachment) at the target plate. Also, we have made systematic analyses of the high Mach flows for both the 'Open' geometry and the 'W-shaped' geometry of JT-60U in order to clarify the geometric effects on the flows. (author)

  20. Reflected rarefactions, double regular reflection, and mach waves in aluminum and beryllium

    International Nuclear Information System (INIS)

    Neal, T.

    1975-01-01

    A number of shock techniques which can be used to obtain high-pressure equation-of-state information between the principal Hugoniot and the principal adiabat are illustrated. A rarefaction wave in aluminum shocked to 27.7 GPa [277 kbar] is examined with radiographic techniques and the bulk sound speed is determined. The two-stage compression which occurs in a double shock may be attained by colliding two shocks and observing regular reflection. A radiographic method which uses this phenomenon to measure a three-stage compression of aluminum to a density of 4.7 Mg/m^3 and beryllium to a density of 3.1 Mg/m^3 is presented. The results of a Mach reflection experiment in aluminum are found to disagree substantially with the simple three-shock model. A modified model, consistent with observations, is discussed. In all cases the Gruneisen parameter is determined. (U.S.)

  1. 3-D Wizardry: Design in Papier-Mache, Plaster, and Foam.

    Science.gov (United States)

    Wolfe, George

    Papier-mache, plaster, and foam are inexpensive and versatile media for 3-dimensional classroom and studio art experiences. They can be used equally well by elementary, high school, or college students. Each medium has its own characteristic. Papier-mache is pliable but dries into a hard, firm surface that can be waterproofed. Plaster can be…

  2. Identification of melanoma cells: a method based in mean variance of signatures via spectral densities.

    Science.gov (United States)

    Guerra-Rosas, Esperanza; Álvarez-Borrego, Josué; Angulo-Molina, Aracely

    2017-04-01

    In this paper a new methodology to detect and differentiate melanoma cells from normal cells through 1D-signature averaged variances calculated with a binary mask is presented. The sample images were obtained from histological sections of mouse melanoma tumors of 4 [Formula: see text] in thickness and contrasted with normal cells. The results show that melanoma cells present a well-defined range of averaged variance values obtained from the signatures in the four conditions used.

  3. How the mach phenomenon and shape affect the radiographic appearance of skeletal structures

    International Nuclear Information System (INIS)

    Papageorges, M.

    1991-01-01

    The shape of skeletal structures and their position relative to the x-ray beam have a considerable effect on their radiographic appearance. Depending on the thickness of the cortical or subchondral bone, skeletal structures display the characteristics of either homogeneous or compound lamellar structures. Convex homogeneous structures are associated with a negative Mach line, and concave homogeneous structures are associated with a positive Mach line. Convex compound lamellar structures are associated with a negative Mach band and visualization of the lamina (subchondral or cortical bone) is reduced. Concave compound lamellar structures are associated with a positive Mach band and visualization of the lamina is enhanced. The combined effect of Mach phenomenon, shape, and thickness enhances visualization of some skeletal surfaces and make others imperceptible. These principles are very useful to correctly identify complex skeletal structures and avoid misinterpretations

  4. Revisiting Einstein's Happiest Thought: On Ernst Mach and the Early History of Relativity

    Science.gov (United States)

    Staley, Richard

    2016-03-01

    This paper argues we should distinguish three phases in the formation of relativity. The first involved relational approaches to perception, and physiological and geometrical space and time in the 1860s and 70s. The second concerned electrodynamics and mechanics (special relativity). The third concerned mechanics, gravitation, and physical and geometrical space and time. Mach's early work on the Doppler effect, together with studies of visual and motor perception, linked physiology, physics and psychology, and offered new approaches to physiological space and time. These informed the critical conceptual attacks on Newtonian absolutes that Mach famously outlined in The Science of Mechanics. Subsequently Mach identified a growing group of "relativists," and his critiques helped form a foundation for later work in electrodynamics (in which he did not participate). Revisiting Mach's early work will suggest he was still more important to the development of new approaches to inertia and gravitation than has been commonly appreciated. In addition to what Einstein later called "Mach's principle," I will argue that a thought experiment on falling bodies in Mach's Science of Mechanics also provided a point of inspiration for the happy thought that led Einstein to the equivalence principle.

  5. Low-Mach number simulations of transcritical flows

    KAUST Repository

    Lapenna, Pasquale E.

    2018-01-08

    A numerical framework for the direct simulation, in the low-Mach number limit, of reacting and non-reacting transcritical flows is presented. The key features are an efficient and detailed representation of the real fluid properties and a high-order spatial discretization. The latter is of fundamental importance to correctly resolve the largely non-linear behavior of the fluid in the proximity of the pseudo-boiling. The validity of the low-Mach number assumptions is assessed for a previously developed non-reacting DNS database of transcritical and supercritical mixing. Fully resolved DNS data employing high-fidelity thermodynamic models are also used to investigate the spectral characteristics as well as the differences between transcritical and supercritical jets.

  6. On the instabilities of supersonic mixing layers - A high-Mach-number asymptotic theory

    Science.gov (United States)

    Balsa, Thomas F.; Goldstein, M. E.

    1990-01-01

    The stability of a family of tanh mixing layers is studied at large Mach numbers using perturbation methods. It is found that the eigenfunction develops a multilayered structure, and the eigenvalue is obtained by solving a simplified version of the Rayleigh equation (with homogeneous boundary conditions) in one of these layers which lies in either of the external streams. This analysis leads to a simple hypersonic similarity law which explains how spatial and temporal phase speeds and growth rates scale with Mach number and temperature ratio. Comparisons are made with numerical results, and it is found that this similarity law provides a good qualitative guide for the behavior of the instability at high Mach numbers. In addition to this asymptotic theory, some fully numerical results are also presented (with no limitation on the Mach number) in order to explain the origin of the hypersonic modes (through mode splitting) and to discuss the role of oblique modes over a very wide range of Mach number and temperature ratio.

  7. Hadron Azimuthal Correlations and Mach-like Structures in a Partonic/Hadronic Transport Model

    International Nuclear Information System (INIS)

    Ma, G.L.; Zhang, S.; Ma, Y.G.; Cai, X.Z.; Chen, J.H.; He, Z.J.; Huang, H.Z.; Long, J.L.; Shen, W.Q.; Shi, X.H.; Zhong, C.; Zuo, J.X.

    2007-01-01

    With a multi-phase transport model (AMPT) with both partonic and hadronic interactions, two- and three-particle azimuthal correlations in Au + Au collisions at √s_NN = 200 GeV have been studied by the mixing-event technique. A Mach-like structure has been observed in two- and three-particle correlations in central collisions. It has been found that both partonic and hadronic dynamical mechanisms contribute to the Mach-like structure. However, hadronic rescattering alone is unable to reproduce the experimental amplitude of the Mach-like structure, and the parton cascade process is indispensable. The results of the three-particle correlation indicate a partonic Mach-like shock wave can be produced by strong parton cascade in central Au+Au collisions

  8. Impact of Damping Uncertainty on SEA Model Response Variance

    Science.gov (United States)

    Schiller, Noah; Cabell, Randolph; Grosveld, Ferdinand

    2010-01-01

    Statistical Energy Analysis (SEA) is commonly used to predict high-frequency vibroacoustic levels. This statistical approach provides the mean response over an ensemble of random subsystems that share the same gross system properties such as density, size, and damping. Recently, techniques have been developed to predict the ensemble variance as well as the mean response. However these techniques do not account for uncertainties in the system properties. In the present paper uncertainty in the damping loss factor is propagated through SEA to obtain more realistic prediction bounds that account for both ensemble and damping variance. The analysis is performed on a floor-equipped cylindrical test article that resembles an aircraft fuselage. Realistic bounds on the damping loss factor are determined from measurements acquired on the sidewall of the test article. The analysis demonstrates that uncertainties in damping have the potential to significantly impact the mean and variance of the predicted response.

  9. Computation of Mach reflection from rigid and yielding surfaces

    International Nuclear Information System (INIS)

    Buckingham, A.C.; Wilson, S.S.

    1976-01-01

    The present discussion centers on a theoretical description of one aspect of the irregular or Mach reflection from solid surfaces. The discussion is restricted to analytical considerations and some preliminary results using model approximations to the surface interaction phenomena. Currently, full numerical simulations of the irregular reflection surface interaction dynamics have not been obtained since the method is still under development. Discussion of the numerical method is, therefore, restricted to some special procedures for the gas-solid surface boundary dynamics. The discussion is divided into an introductory section briefly describing a particular Mach reflection process. Subsequently, some of the considerations on boundary conditions are submitted for numerical treatment of the gas-solid interface. Analysis and discussion of a yielding solid surface subjected to impulsive loading from an intense gas shock wave follows. This is used as a guide for the development of the numerical procedure. Mach reflection processes are then briefly reviewed with special attention for similitude and singular perturbation features

  10. On discrete stochastic processes with long-lasting time dependence in the variance

    Science.gov (United States)

    Queirós, S. M. D.

    2008-11-01

    In this manuscript, we analytically and numerically study statistical properties of a heteroskedastic process based on the celebrated ARCH generator of random variables whose variance is defined by a memory of q_m-exponential form (e_{q_m=1}(x) = e^x). Specifically, we inspect the self-correlation function of squared random variables as well as the kurtosis. In addition, by numerical procedures, we infer the stationary probability density function of both the heteroskedastic random variables and the variance, the multiscaling properties, the first-passage time distribution, and the dependence degree. Finally, we introduce an asymmetric variance version of the model that enables us to reproduce the so-called leverage effect in financial markets.
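
    A minimal numerical sketch of the kind of process discussed above is given below, assuming a plain one-lag ARCH recursion rather than the paper's q_m-exponential memory kernel. It illustrates two of the quantities mentioned: the self-correlation of the squared variables and the kurtosis.

```python
# Minimal ARCH(1) sketch (a simplifying assumption; the paper uses a q-exponential
# memory kernel). Illustrates the autocorrelation of squared variables and the kurtosis.
import numpy as np

def simulate_arch1(n, a0=0.2, a1=0.5, seed=0):
    """z_t = sigma_t * eps_t with sigma_t^2 = a0 + a1 * z_{t-1}^2."""
    rng = np.random.default_rng(seed)
    z = np.zeros(n)
    for t in range(1, n):
        sigma2 = a0 + a1 * z[t - 1] ** 2
        z[t] = np.sqrt(sigma2) * rng.standard_normal()
    return z

def autocorr_of_squares(z, lag):
    """Self-correlation function of the squared variables at a given lag."""
    s = z ** 2
    s = s - s.mean()
    return np.dot(s[:-lag], s[lag:]) / np.dot(s, s)

def kurtosis(z):
    return np.mean((z - z.mean()) ** 4) / np.var(z) ** 2

if __name__ == "__main__":
    z = simulate_arch1(200_000)
    print("kurtosis:", round(kurtosis(z), 2), "(3 for a Gaussian)")
    print("ACF of z^2 at lags 1..3:", [round(autocorr_of_squares(z, k), 3) for k in (1, 2, 3)])
```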

  11. Local flow measurements at the inlet spike tip of a Mach 3 supersonic cruise airplane

    Science.gov (United States)

    Johnson, H. J.; Montoya, E. J.

    1973-01-01

    The flow field at the left inlet spike tip of a YF-12A airplane was examined using a 26 deg included-angle conical flow sensor to obtain measurements at free-stream Mach numbers from 1.6 to 3.0. Local flow angularity, Mach number, impact pressure, and mass flow were determined and compared with free-stream values. Local flow changes occurred at the same time as free-stream changes. The local flow usually approached the spike centerline from the upper outboard side because of spike cant and toe-in. Free-stream Mach number influenced the local flow angularity; as Mach number increased above 2.2, local angle of attack increased and local sideslip angle decreased. Local Mach number was generally 3 percent less than free-stream Mach number. Impact-pressure ratio and mass flow ratio increased as free-stream Mach number increased above 2.2, indicating a beneficial forebody compression effect. No degradation of the spike tip instrumentation was observed after more than 40 flights in the high-speed thermal environment encountered by the airplane. The sensor is rugged, simple, and sensitive to small flow changes. It can provide accurate inputs necessary to control an inlet.

  12. The Impact of the Prior Density on a Minimum Relative Entropy Density: A Case Study with SPX Option Data

    Directory of Open Access Journals (Sweden)

    Cassio Neri

    2014-05-01

    Full Text Available We study the problem of finding probability densities that match given European call option prices. To allow prior information about such a density to be taken into account, we generalise the algorithm presented in Neri and Schneider (Appl. Math. Finance 2013) for finding the maximum entropy density of an asset price to the relative entropy case. This is applied to study the impact of the choice of prior density in two market scenarios. In the first scenario, call option prices are prescribed at only a small number of strikes, and we see that the choice of prior, or indeed its omission, yields notably different densities. The second scenario is given by CBOE option price data for S&P500 index options at a large number of strikes. Prior information is now considered to be given by calibrated Heston, Schöbel–Zhu or Variance Gamma models. We find that the resulting digital option prices are essentially the same as those given by the (non-relative) Buchen–Kelly density itself. In other words, in a sufficiently liquid market, the influence of the prior density seems to vanish almost completely. Finally, we study variance swaps and derive a simple formula relating the fair variance swap rate to entropy. Then we show, again, that the prior loses its influence on the fair variance swap rate as the number of strikes increases.

  13. Numerical resolution of the Navier-Stokes equations for a low Mach number by a spectral method

    International Nuclear Information System (INIS)

    Frohlich, Jochen

    1990-01-01

    The low Mach number approximation of the Navier-Stokes equations, also called isobar, is an approximation which is less restrictive than the one due to Boussinesq. It permits strong density variations while neglecting acoustic phenomena. We present a numerical method to solve these equations in the unsteady, two dimensional case with one direction of periodicity. The discretization uses a semi-implicit finite difference scheme in time and a Fourier-Chebycheff pseudo-spectral method in space. The solution of the equations of motion is based on an iterative algorithm of Uzawa type. In the Boussinesq limit we obtain a direct method. A first application is concerned with natural convection in the Rayleigh-Benard setting. We compare the results of the low Mach number equations with the ones in the Boussinesq case and consider the influence of variable fluid properties. A linear stability analysis based on a Chebychev-Tau method completes the study. The second application that we treat is a case of isobaric combustion in an open domain. We communicate results for the hydrodynamic Darrieus-Landau instability of a plane laminar flame front. [fr

  14. High-Mach number, laser-driven magnetized collisionless shocks

    International Nuclear Information System (INIS)

    Schaeffer, Derek B.; Fox, W.; Haberberger, D.; Fiksel, G.; Bhattacharjee, A.

    2017-01-01

    Collisionless shocks are ubiquitous in space and astrophysical systems, and the class of supercritical shocks is of particular importance due to their role in accelerating particles to high energies. While these shocks have been traditionally studied by spacecraft and remote sensing observations, laboratory experiments can provide reproducible and multi-dimensional datasets that provide complementary understanding of the underlying microphysics. We present experiments undertaken on the OMEGA and OMEGA EP laser facilities that show the formation and evolution of high-Mach number collisionless shocks created through the interaction of a laser-driven magnetic piston and magnetized ambient plasma. Through time-resolved, 2-D imaging we observe large density and magnetic compressions that propagate at super-Alfvenic speeds and that occur over ion kinetic length scales. Electron density and temperature of the initial ambient plasma are characterized using optical Thomson scattering. Measurements of the piston laser-plasma are modeled with 2-D radiation-hydrodynamic simulations, which are used to initialize 2-D particle-in-cell simulations of the interaction between the piston and ambient plasmas. The numerical results show the formation of collisionless shocks, including the separate dynamics of the carbon and hydrogen ions that constitute the ambient plasma and their effect on the shock structure. Furthermore, the simulations also show the shock separating from the piston, which we observe in the data at late experimental times.

  15. Effects of rocket jet on stability and control at high Mach numbers

    Science.gov (United States)

    Fetterman, David E , Jr

    1958-01-01

    Paper presents the results of an investigation to determine the jet-interference effects which may occur at high jet static-pressure ratios and high Mach numbers. Tests were made in the Langley 11-inch hypersonic tunnel at a Mach number of 6.86.

  16. Recovery Temperature, Transition, and Heat Transfer Measurements at Mach 5

    Science.gov (United States)

    Brinich, Paul F.

    1961-01-01

    Schlieren, recovery temperature, and heat-transfer measurements were made on a hollow cylinder and a cone with axes alined parallel to the stream. Both the cone and cylinder were equipped with various bluntnesses, and the tests covered a Reynolds number range up to 20 × 10^6 at a free-stream Mach number of 4.95 and wall to free-stream temperature ratios from 1.8 to 5.2 (adiabatic). A substantial transition delay due to bluntness was found for both the cylinder and the cone. For the present tests (Mach 4.95), transition was delayed by a factor of 3 on the cylinder and about 2 on the cone, these delays being somewhat larger than those observed in earlier tests at Mach 3.1. Heat-transfer tests on the cylinder showed only slight effects of wall temperature level on transition location; this is to be contrasted to the large transition delays observed on conical-type bodies at low surface temperatures at Mach 3.1. The schlieren and the peak-recovery-temperature methods of detecting transition were compared with the heat-transfer results. The comparison showed that the first two methods identified a transition point which occurred just beyond the end of the laminar run as seen in the heat-transfer data.

  17. MACH MIT: Deutsches Wochenende am Karlsfluss (MACH MIT: a German Week-End on the Charles River).

    Science.gov (United States)

    Reizes, Sonia; Kramsch, Claire J.

    1980-01-01

    Describes a joint high school/college pilot program planned by Massachusetts foreign language teachers and hosted by M.I.T. The success of the program dubbed "MACH MIT Total Immersion German Weekend" is attributed to the concept of active involvement, which was implemented through games, seminars, shows, cooking and other activities.…

  18. Numerical simulation of divergent rocket-based-combined-cycle performances under the flight condition of Mach 3

    Science.gov (United States)

    Cui, Peng; Xu, WanWu; Li, Qinglian

    2018-01-01

    Currently, the upper operating limit of the turbine engine is Mach 2+, and the lower limit of the dual-mode scramjet is Mach 4. Therefore, no single power system can operate across the range between Mach 2+ and Mach 4. By using ejector rockets, Rocket-based-combined-cycle can work well in the above scope. As the key component of Rocket-based-combined-cycle, the ejector rocket has significant influence on Rocket-based-combined-cycle performance. Research on the influence of rocket parameters on Rocket-based-combined-cycle in the speed range of Mach 2+ to Mach 4 is scarce. In the present study, influences of Mach number and total pressure of the ejector rocket on Rocket-based-combined-cycle were analyzed numerically. Due to the significant effects of the flight conditions and the Rocket-based-combined-cycle configuration on Rocket-based-combined-cycle performances, flight altitude, flight Mach number, and divergence ratio were also considered. The simulation results indicate that matching lower altitude with higher flight Mach numbers can increase Rocket-based-combined-cycle thrust. Moreover, with an increase of the divergent ratio, the effect of the divergent configuration will strengthen and there is a limit on the divergent ratio. When the divergent ratio is greater than the limit, the effect of divergent configuration will gradually exceed that of combustion on supersonic flows. Further increases in the divergent ratio will decrease Rocket-based-combined-cycle thrust.

  19. Reduction effect of neutral density on the excitation of turbulent drift waves in a linear magnetized plasma with flow

    International Nuclear Information System (INIS)

    Saitou, Y.; Yonesu, A.; Shinohara, S.; Ignatenko, M. V.; Kasuya, N.; Kawaguchi, M.; Terasaka, K.; Nishijima, T.; Nagashima, Y.; Kawai, Y.; Yagi, M.; Itoh, S.-I.; Azumi, M.; Itoh, K.

    2007-01-01

    The importance of reducing the neutral density to reach strong drift wave turbulence is clarified from the results of the extended magnetohydrodynamics and Monte Carlo simulations in a linear magnetized plasma. An upper bound of the neutral density relating to the ion-neutral collision frequency for the excitation of drift wave instability is shown, and the necessary flow velocity to excite this instability is also estimated from the neutral distributions. Measurements of the Mach number and the electron density distributions using a Mach probe in the large mirror device (LMD) of Kyushu University [S. Shinohara et al., Plasma Phys. Control. Fusion 37, 1015 (1995)] are reported as well. The obtained results show the controllability of the neutral density and provide the basis for neutral density reduction and a possibility to excite strong drift wave turbulence in the LMD

  20. Analytic MHD Theory for Earth's Bow Shock at Low Mach Numbers

    Science.gov (United States)

    Grabbe, Crockett L.; Cairns, Iver H.

    1995-01-01

    A previous MHD theory for the density jump at the Earth's bow shock, which assumed the Alfven M_A and sonic M_s Mach numbers are both much greater than 1, is reanalyzed and generalized. It is shown that the MHD jump equation can be analytically solved much more directly using perturbation theory, with the ordering determined by M_A and M_s, and that the first-order perturbation solution is identical to the solution found in the earlier theory. The second-order perturbation solution is calculated, whereas the earlier approach cannot be used to obtain it. The second-order terms generally are important over most of the range of M_A and M_s in the solar wind when the angle theta between the normal to the bow shock and magnetic field is not close to 0 deg or 180 deg (the solutions are symmetric about 90 deg). This new perturbation solution is generally accurate under most solar wind conditions at 1 AU, with the exception of low Mach numbers when theta is close to 90 deg. In this exceptional case the new solution does not improve on the first-order solutions obtained earlier, and the predicted density ratio can vary by 10-20% from the exact numerical MHD solutions. For theta approx. = 90 deg another perturbation solution is derived that predicts the density ratio much more accurately. This second solution is typically accurate for quasi-perpendicular conditions. Taken together, these two analytical solutions are generally accurate for the Earth's bow shock, except in the rare circumstance that M_A is less than or = 2. MHD and gasdynamic simulations have produced empirical models in which the shock's standoff distance a_s is linearly related to the density jump ratio X at the subsolar point. Using an empirical relationship between a_s and X obtained from MHD simulations, a_s values predicted using the MHD solutions for X are compared with the predictions of phenomenological models commonly used for modeling observational data, and with the predictions of a
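
    For orientation, the sketch below evaluates only the leading-order, purely gasdynamic density jump that MHD bow-shock solutions approach when both Mach numbers are large; the perturbation corrections derived in the paper are not reproduced, and the choice of gamma = 5/3 is an assumption.

```python
# Leading-order (purely gasdynamic) Rankine-Hugoniot compression ratio, the limit that
# MHD bow-shock jump solutions approach at large Alfven and sonic Mach numbers.
def density_jump_gasdynamic(mach_sonic, gamma=5.0 / 3.0):
    """Compression ratio X = rho2/rho1 for a hydrodynamic shock of Mach number M."""
    m2 = mach_sonic ** 2
    return (gamma + 1.0) * m2 / ((gamma - 1.0) * m2 + 2.0)

if __name__ == "__main__":
    for m in (2.0, 4.0, 8.0, 20.0):
        print(f"M = {m:4.1f}  ->  X = {density_jump_gasdynamic(m):.3f}")
    # X tends to (gamma + 1)/(gamma - 1) = 4 for gamma = 5/3 as M -> infinity
```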

  1. Mach-Like Structure in a Partonic-Hadronic Transport Model at RHIC Energies

    International Nuclear Information System (INIS)

    Ma, Y.G.; Ma, G.L.; Zhang, S.

    2008-01-01

    Recent RHIC experimental results indicated an exotic partonic matter may be created in central Au + Au collisions at √s_NN = 200 GeV. When a parton with high transverse momentum (jet) passes through the new matter, the jet will quench. The lost energy will be redistributed into the medium. Experimentally, the soft scattered particles which carry the lost energy have been reconstructed via di-hadron angular correlations of charged particles, and a hump structure on the away side in the di-hadron Δφ correlation has been observed in central Au + Au collisions [1,2]. Some interpretations, such as the Mach-cone shock wave and the gluon Cherenkov-like radiation mechanism, have been proposed to explain the splitting behavior of the away-side peaks. However, quantitative understanding of the experimental observation has yet to be established. In this work, we use a multi-phase transport (AMPT) model to make a detailed simulation of di-hadron and tri-hadron azimuthal correlations for central Au + Au collisions at √s_NN = 200 GeV. The hump structure on the away side (called the Mach-like structure below) in the di-hadron and tri-hadron azimuthal correlations has been observed [3,4,5]. Furthermore, the time evolution of the Mach-like structure is presented [6]. With increasing lifetime of the partonic matter, the Mach-like structure develops through the strong parton cascade process. Not only the splitting parameter but also the number of associated hadrons (N_h^assoc) increases with the lifetime of the partonic matter and the partonic interaction cross section. Both the increase of N_h^assoc following the formation of the Mach-like structure and the corresponding results of the three-particle correlation support that a partonic Mach-like behavior can be produced by a collective coupling of partons because of the strong parton cascade mechanism. Therefore, the studies of the Mach-like structure may give us some critical information

  2. Topology in Synthetic Column Density Maps for Interstellar Turbulence

    Science.gov (United States)

    Putko, Joseph; Burkhart, B. K.; Lazarian, A.

    2013-01-01

    We show how the topology tool known as the genus statistic can be utilized to characterize magnetohydrodynamic (MHD) turbulence in the ISM. The genus is measured with respect to a given density threshold and varying the threshold produces a genus curve, which can suggest an overall "meatball," neutral, or "Swiss cheese" topology through its integral. We use synthetic column density maps made from three-dimensional 512^3 compressible MHD isothermal simulations performed for different sonic and Alfvénic Mach numbers (Ms and MA respectively). We study eight different Ms values, each with one sub- and one super-Alfvénic counterpart. We consider sight-lines both parallel (x) and perpendicular (y and z) to the mean magnetic field. We find that the genus integral shows a dependence on both Mach numbers, and this is still the case even after adding beam smoothing and Gaussian noise to the maps to mimic observational data. The genus integral increases with higher Ms values (but saturates after about Ms = 4) for all lines of sight. This is consistent with greater values of Ms resulting in stronger shocks, which results in a clumpier topology. We observe a larger genus integral for the sub-Alfvénic cases along the perpendicular lines of sight due to increased compression from the field lines and enhanced anisotropy. Application of the genus integral to column density maps should allow astronomers to infer the Mach numbers and thus learn about the environments of interstellar turbulence. This work was supported by the National Science Foundation's REU program through NSF Award AST-1004881.
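
    A hedged sketch of one common way to evaluate a genus curve on a two-dimensional map is given below: at each threshold, count connected regions above the threshold minus connected regions below it, then integrate over a grid of standardized thresholds. The threshold grid, the standardization, and the synthetic test field are assumptions, not the exact prescription of the study.

```python
# Illustrative 2-D genus curve: genus(threshold) ~ (connected regions above the
# threshold) - (connected regions below it). Normalization and thresholds are assumptions.
import numpy as np
from scipy import ndimage

def genus(map2d, threshold):
    _, n_above = ndimage.label(map2d > threshold)
    _, n_below = ndimage.label(map2d < threshold)
    return n_above - n_below

def genus_curve(map2d, n_thresholds=41):
    """Genus evaluated on a grid of standardized thresholds, plus its integral."""
    z = (map2d - map2d.mean()) / map2d.std()
    nus = np.linspace(-3.0, 3.0, n_thresholds)
    g = np.array([genus(z, nu) for nu in nus])
    return nus, g, np.trapz(g, nus)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    # stand-in for a synthetic column density map: smoothed lognormal field
    field = ndimage.gaussian_filter(rng.standard_normal((256, 256)), 4)
    column_density = np.exp(field)
    nus, g, integral = genus_curve(column_density)
    print("genus integral:", round(integral, 2))
```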

  3. Downside Variance Risk Premium

    OpenAIRE

    Feunou, Bruno; Jahan-Parvar, Mohammad; Okou, Cedric

    2015-01-01

    We propose a new decomposition of the variance risk premium in terms of upside and downside variance risk premia. The difference between upside and downside variance risk premia is a measure of skewness risk premium. We establish that the downside variance risk premium is the main component of the variance risk premium, and that the skewness risk premium is a priced factor with significant prediction power for aggregate excess returns. Our empirical investigation highlights the positive and s...
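
    As a hedged illustration of the decomposition idea, the sketch below splits realized variance from a return series into upside and downside semivariances; the premia studied in the paper additionally require risk-neutral counterparts, which are not computed here.

```python
# Realized counterpart of the upside/downside split: semivariances from signed returns.
# (The risk premia compare these with risk-neutral expectations, not computed here.)
import numpy as np

def realized_semivariances(returns):
    r = np.asarray(returns)
    upside = np.sum(r[r > 0.0] ** 2)
    downside = np.sum(r[r < 0.0] ** 2)
    return upside, downside

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    rets = rng.normal(0.0, 0.01, size=252)       # one year of hypothetical daily returns
    up, down = realized_semivariances(rets)
    print("realized variance:", round(up + down, 6))
    print("upside - downside (realized skewness proxy):", round(up - down, 6))
```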

  4. Optimization of OT-MACH Filter Generation for Target Recognition

    Science.gov (United States)

    Johnson, Oliver C.; Edens, Weston; Lu, Thomas T.; Chao, Tien-Hsin

    2009-01-01

    An automatic Optimum Trade-off Maximum Average Correlation Height (OT-MACH) filter generator for use in a gray-scale optical correlator (GOC) has been developed for improved target detection at JPL. While the OT-MACH filter has been shown to be an optimal filter for target detection, actually solving for the optimum is too computationally intensive for multiple targets. Instead, an adaptive step gradient descent method was tested to iteratively optimize the three OT-MACH parameters, alpha, beta, and gamma. The feedback for the gradient descent method was a composite of the performance measures, correlation peak height and peak to side lobe ratio. The automated method generated and tested multiple filters in order to approach the optimal filter quicker and more reliably than the current manual method. Initial usage and testing has shown preliminary success at finding an approximation of the optimal filter, in terms of alpha, beta, gamma values. This corresponded to a substantial improvement in detection performance where the true positive rate increased for the same average false positives per image.
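
    The adaptive-step gradient-descent loop described above can be sketched generically as below, with the filter generation and correlation steps replaced by a placeholder score function; the parameter bounds, step-size rules, and surrogate objective are assumptions rather than the JPL implementation.

```python
# Generic adaptive-step gradient descent over (alpha, beta, gamma), with the OT-MACH
# filter build/evaluation stubbed out by a toy surrogate score (an assumption).
import numpy as np

def composite_score(params):
    """Placeholder for: build the filter with (alpha, beta, gamma), correlate against
    training imagery, and combine peak height with peak-to-sidelobe ratio."""
    alpha, beta, gamma = params
    return -((alpha - 0.3) ** 2 + (beta - 0.5) ** 2 + (gamma - 0.2) ** 2)  # toy surrogate

def optimize_parameters(p0, step=0.1, eps=1e-3, iters=200, shrink=0.5, grow=1.1):
    p = np.asarray(p0, dtype=float)
    best = composite_score(p)
    for _ in range(iters):
        grad = np.array([(composite_score(p + eps * e) - best) / eps
                         for e in np.eye(len(p))])   # forward-difference gradient
        trial = np.clip(p + step * grad, 0.0, 1.0)   # parameters kept in [0, 1]
        trial_score = composite_score(trial)
        if trial_score > best:                       # accept and grow the step
            p, best, step = trial, trial_score, step * grow
        else:                                        # reject and shrink the step
            step *= shrink
            if step < 1e-6:
                break
    return p, best

if __name__ == "__main__":
    params, score = optimize_parameters([0.5, 0.5, 0.5])
    print("alpha, beta, gamma ->", np.round(params, 3), "score:", round(score, 6))
```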

  5. High Energy Density Laboratory Astrophysics

    CERN Document Server

    Lebedev, Sergey V

    2007-01-01

    During the past decade, research teams around the world have developed astrophysics-relevant research utilizing high energy-density facilities such as intense lasers and z-pinches. Every two years, at the International conference on High Energy Density Laboratory Astrophysics, scientists interested in this emerging field discuss the progress in topics covering: - Stellar evolution, stellar envelopes, opacities, radiation transport - Planetary Interiors, high-pressure EOS, dense plasma atomic physics - Supernovae, gamma-ray bursts, exploding systems, strong shocks, turbulent mixing - Supernova remnants, shock processing, radiative shocks - Astrophysical jets, high-Mach-number flows, magnetized radiative jets, magnetic reconnection - Compact object accretion disks, x-ray photoionized plasmas - Ultrastrong fields, particle acceleration, collisionless shocks. These proceedings cover many of the invited and contributed papers presented at the 6th International Conference on High Energy Density Laboratory Astrophys...

  6. Elementary physical approach to Mach's principle and its observational basis

    International Nuclear Information System (INIS)

    Horak, Z.

    1979-01-01

    It is shown that Mach's principle and the general principle of relativity are logical consequences of a 'materialistic postulate' and that general relativity implies the validity of Mach's principle for a static (or quasistatic) homogeneous and isotropic universe, spatially self-enclosed. The finite velocity of propagation of gravitational field does not imply a retardation of inertial forces due to the distant masses and therefore does not exclude the validity of Mach's principle. Similarly, the experimentally verified isotropy of inertia is compatible with this principle. The recent observational evidence of very high isotropy of the actual universe proves that the 'anti-Machian' Gödel world model must be rejected as a nonphysical one. This suggests the possibility of a renaissance of Einstein's first cosmological model by considering-in the spirit of an older idea of Herbert Dingle-a superlarge-scale quasistatic universe consisting of an unknown number of statistically oscillating regions similar to our own, momentarily expanding, metagalaxy. (author)

  7. R package MVR for Joint Adaptive Mean-Variance Regularization and Variance Stabilization.

    Science.gov (United States)

    Dazard, Jean-Eudes; Xu, Hua; Rao, J Sunil

    2011-01-01

    We present an implementation in the R language for statistical computing of our recent non-parametric joint adaptive mean-variance regularization and variance stabilization procedure. The method is specifically suited for handling difficult problems posed by high-dimensional multivariate datasets (p ≫ n paradigm), such as in 'omics'-type data, among which are that the variance is often a function of the mean, variable-specific estimators of variances are not reliable, and test statistics have low power due to a lack of degrees of freedom. The implementation offers a complete set of features including: (i) normalization and/or variance stabilization function, (ii) computation of mean-variance-regularized t and F statistics, (iii) generation of diverse diagnostic plots, (iv) synthetic and real 'omics' test datasets, (v) computationally efficient implementation, using C interfacing, and an option for parallel computing, (vi) manual and documentation on how to set up a cluster. To make each feature as user-friendly as possible, only one subroutine per functionality is to be handled by the end-user. It is available as an R package, called MVR ('Mean-Variance Regularization'), downloadable from the CRAN.

  8. Combustion-Powered Actuation for Dynamic Stall Suppression - Simulations and Low-Mach Experiments

    Science.gov (United States)

    Matalanis, Claude G.; Min, Byung-Young; Bowles, Patrick O.; Jee, Solkeun; Wake, Brian E.; Crittenden, Tom; Woo, George; Glezer, Ari

    2014-01-01

    An investigation on dynamic-stall suppression capabilities of combustion-powered actuation (COMPACT) applied to a tabbed VR-12 airfoil is presented. In the first section, results from computational fluid dynamics (CFD) simulations carried out at Mach numbers from 0.3 to 0.5 are presented. Several geometric parameters are varied including the slot chordwise location and angle. Actuation pulse amplitude, frequency, and timing are also varied. The simulations suggest that cycle-averaged lift increases of approximately 4% and 8% with respect to the baseline airfoil are possible at Mach numbers of 0.4 and 0.3 for deep and near-deep dynamic-stall conditions. In the second section, static-stall results from low-speed wind-tunnel experiments are presented. Low-speed experiments and high-speed CFD suggest that slots oriented tangential to the airfoil surface produce stronger benefits than slots oriented normal to the chordline. Low-speed experiments confirm that chordwise slot locations suitable for Mach 0.3-0.4 stall suppression (based on CFD) will also be effective at lower Mach numbers.

  9. Mach's principle in spatially homogeneous spacetimes

    International Nuclear Information System (INIS)

    Tipler, F.J.

    1978-01-01

    On the basis of Mach's Principle it is concluded that the only singularity-free solution to the empty space Einstein equations is flat space. It is shown that the only singularity-free solution to the empty space Einstein equations which is spatially homogeneous and globally hyperbolic is in fact suitably identified Minkowski space. (Auth.)

  10. Ernst Mach, George Sarton and the Empiry of Teaching Science Part I

    Science.gov (United States)

    Siemsen, Hayo

    2012-01-01

    George Sarton had a strong influence on modern history of science. The method he pursued throughout his life was the method he had discovered in Ernst Mach's "Mechanics" when he was a student in Ghent. Sarton was in fact throughout his life implementing a research program inspired by the epistemology of Mach. Sarton in turn inspired many…

  11. Coherence of Mach fronts during heterogeneous supershear earthquake rupture propagation: Simulations and comparison with observations

    Science.gov (United States)

    Bizzarri, A.; Dunham, Eric M.; Spudich, P.

    2010-01-01

    We study how heterogeneous rupture propagation affects the coherence of shear and Rayleigh Mach wavefronts radiated by supershear earthquakes. We address this question using numerical simulations of ruptures on a planar, vertical strike-slip fault embedded in a three-dimensional, homogeneous, linear elastic half-space. Ruptures propagate spontaneously in accordance with a linear slip-weakening friction law through both homogeneous and heterogeneous initial shear stress fields. In the 3-D homogeneous case, rupture fronts are curved owing to interactions with the free surface and the finite fault width; however, this curvature does not greatly diminish the coherence of Mach fronts relative to cases in which the rupture front is constrained to be straight, as studied by Dunham and Bhat (2008a). Introducing heterogeneity in the initial shear stress distribution causes ruptures to propagate at speeds that locally fluctuate above and below the shear wave speed. Calculations of the Fourier amplitude spectra (FAS) of ground velocity time histories corroborate the kinematic results of Bizzarri and Spudich (2008a): (1) The ground motion of a supershear rupture is richer in high frequency with respect to a subshear one. (2) When a Mach pulse is present, its high frequency content overwhelms that arising from stress heterogeneity. Present numerical experiments indicate that a Mach pulse causes approximately an ω^-1.7 high-frequency falloff in the FAS of ground displacement. Moreover, within the context of the employed representation of heterogeneities and over the range of parameter space that is accessible with current computational resources, our simulations suggest that while heterogeneities reduce peak ground velocity and diminish the coherence of the Mach fronts, ground motion at stations experiencing Mach pulses should be richer in high frequencies compared to stations without Mach pulses. In contrast to the foregoing theoretical results, we find no average elevation
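
    A hedged sketch of how a high-frequency falloff exponent such as the ω^-1.7 quoted above can be estimated from a single ground-motion record is given below: compute the Fourier amplitude spectrum and fit a straight line in log-log space over a chosen frequency band. The synthetic record and the fitting band are assumptions.

```python
# Estimate a high-frequency falloff exponent from a record via its Fourier amplitude
# spectrum and a log-log slope fit (the synthetic record and band limits are assumptions).
import numpy as np

def fas_falloff_exponent(signal, dt, fmin, fmax):
    freqs = np.fft.rfftfreq(len(signal), dt)
    amp = np.abs(np.fft.rfft(signal)) * dt           # Fourier amplitude spectrum
    band = (freqs >= fmin) & (freqs <= fmax)
    slope, _ = np.polyfit(np.log(freqs[band]), np.log(amp[band]), 1)
    return slope                                      # FAS ~ f^slope within the band

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    dt, n = 0.01, 4096
    # synthetic displacement-like record: doubly integrated white noise (illustrative only)
    record = np.cumsum(np.cumsum(rng.standard_normal(n))) * dt * dt
    print("fitted falloff exponent:", round(fas_falloff_exponent(record, dt, 1.0, 20.0), 2))
```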

  12. Estimation of measurement variances

    International Nuclear Information System (INIS)

    Anon.

    1981-01-01

    In the previous two sessions, it was assumed that the measurement error variances were known quantities when the variances of the safeguards indices were calculated. These known quantities are actually estimates based on historical data and on data generated by the measurement program. Session 34 discusses how measurement error parameters are estimated for different situations. The various error types are considered. The purpose of the session is to enable participants to: (1) estimate systematic error variances from standard data; (2) estimate random error variances from data such as replicate measurement data; (3) perform a simple analysis of variance to characterize the measurement error structure when biases vary over time
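
    A small sketch of tasks (2) and (3) above, estimating a random error variance from replicate data together with a systematic (between-item) component via a one-way analysis of variance, is given below; the grouping structure and the numbers are illustrative only.

```python
# One-way analysis of variance used to separate random (replicate) error variance from
# a systematic between-item component (illustrative data and grouping).
import numpy as np

def variance_components(groups):
    """groups: list of 1-D arrays, each holding replicate measurements of one item."""
    k = len(groups)
    n = np.array([len(g) for g in groups], dtype=float)
    means = np.array([np.mean(g) for g in groups])
    grand = np.sum(n * means) / np.sum(n)
    ss_within = sum(np.sum((g - m) ** 2) for g, m in zip(groups, means))
    ss_between = np.sum(n * (means - grand) ** 2)
    ms_within = ss_within / (np.sum(n) - k)           # random (replicate) error variance
    ms_between = ss_between / (k - 1)
    n0 = (np.sum(n) - np.sum(n ** 2) / np.sum(n)) / (k - 1)
    var_systematic = max((ms_between - ms_within) / n0, 0.0)
    return ms_within, var_systematic

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    true_bias = rng.normal(0.0, 0.5, size=8)                    # per-item systematic shifts
    data = [true_bias[i] + rng.normal(0.0, 1.0, size=5) for i in range(8)]
    random_var, systematic_var = variance_components(data)
    print("random error variance ~", round(random_var, 2))
    print("systematic variance ~", round(systematic_var, 2))
```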

  13. Simulation study on heterogeneous variance adjustment for observations with different measurement error variance

    DEFF Research Database (Denmark)

    Pitkänen, Timo; Mäntysaari, Esa A; Nielsen, Ulrik Sander

    2013-01-01

    The Nordic Holstein yield evaluation model describes all available milk, protein and fat test-day yields from Denmark, Finland and Sweden. In its current form all variance components are estimated from observations recorded under conventional milking systems. Also the model for heterogeneity of variance correction is developed for the same observations. As automated milking systems are becoming more popular, the current evaluation model needs to be enhanced to account for the different measurement error variances of observations from automated milking systems. In this simulation study, different models and different approaches to account for heterogeneous variance when observations have different measurement error variances were investigated. Based on the results we propose to upgrade the currently applied models and to calibrate the heterogeneous variance adjustment method to yield the same genetic

  14. Improving Euler computations at low Mach numbers

    NARCIS (Netherlands)

    Koren, B.; Leer, van B.; Deconinck, H.; Koren, B.

    1997-01-01

    The paper consists of two parts, both dealing with conditioning techniques for low-Mach-number Euler-flow computations, in which a multigrid technique is applied. In the first part, for subsonic flows and upwind-discretized, linearized 1-D Euler equations, the smoothing behavior of

  15. Improving Euler computations at low Mach numbers

    NARCIS (Netherlands)

    Koren, B.

    1996-01-01

    This paper consists of two parts, both dealing with conditioning techniques for low-Mach-number Euler-flow computations, in which a multigrid technique is applied. In the first part, for subsonic flows and upwind-discretized linearized 1-D Euler equations, the smoothing behavior of

  16. Experimental study on thermal characteristics of positive leader discharges using Mach-Zehnder interferometry

    International Nuclear Information System (INIS)

    Zhou, X.; Zeng, R.; Zhuang, C.; Chen, S.

    2015-01-01

    Leader discharge is one of the main phases in long air gap breakdown, which is characterized by high temperature and high conductivity. It is of great importance to determine thermal characteristics of leader discharges. In this paper, a long-optical-path Mach-Zehnder interferometer was set up to measure the thermal parameters (thermal diameter, gas density, and gas temperature) of positive leader discharges in atmospheric air. IEC standard positive switching impulse voltages were applied to a near-one-meter point-plane air gap. Filamentary channels with high gas temperature and low density corresponding to leader discharges were observed as significant distortions in the interference fringe images. Typical diameters of the entire heated channel range from 1.5 mm to 3.5 mm with an average expansion velocity of 6.7 m/s. In contrast, typical diameters of the intensely heated region with a sharp gas density reduction range from 0.4 mm to 1.1 mm, about one third of the entire heated channel. The radial distribution of the gas density is calculated from the fringe displacements by performing an Abel inverse transform. The typical calculated gas density reduction in the center of a propagating leader channel is 80% to 90%, corresponding to a gas temperature of 1500 K to 3000 K based on the ideal gas law. Leaders tend to terminate if the central temperature is below 1500 K
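
    The Abel inversion step mentioned above (recovering a radial profile from line-integrated fringe displacements) can be sketched numerically as follows; this is a minimal, generic implementation with an assumed discretization, not the authors' processing code.

      import numpy as np

      # Minimal inverse Abel transform: f(r) = -(1/pi) * int_r^R F'(y) / sqrt(y^2 - r^2) dy,
      # where F(y) is the measured line-integrated (projected) profile.
      def abel_invert(F, y):
          dFdy = np.gradient(F, y)
          f = np.zeros_like(F)
          for i in range(len(y) - 1):
              r = y[i]
              yy = y[i + 1:]                    # skip the integrable singularity at y = r
              g = dFdy[i + 1:] / np.sqrt(yy**2 - r**2)
              f[i] = -np.sum(0.5 * (g[:-1] + g[1:]) * np.diff(yy)) / np.pi  # trapezoid rule
          return f

      # Self-check with a known pair: f(r) = 1 for r < R has projection F(y) = 2*sqrt(R^2 - y^2).
      R = 1.0
      y = np.linspace(0.0, R, 400)
      F = 2.0 * np.sqrt(np.clip(R**2 - y**2, 0.0, None))
      print(abel_invert(F, y)[:5])              # values should be roughly 1 near the axis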

  17. Experimental study on thermal characteristics of positive leader discharges using Mach-Zehnder interferometry

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, X., E-mail: zhouxuan12@mails.thu.edu.cn; Zeng, R.; Zhuang, C.; Chen, S. [Department of Electrical Engineering, Tsinghua University, Beijing 100084 (China)

    2015-06-15

    Leader discharge is one of the main phases in long air gap breakdown, which is characterized by high temperature and high conductivity. It is of great importance to determine thermal characteristics of leader discharges. In this paper, a long-optical-path Mach-Zehnder interferometer was set up to measure the thermal parameters (thermal diameter, gas density, and gas temperature) of positive leader discharges in atmospheric air. IEC standard positive switching impulse voltages were applied to a near-one-meter point-plane air gap. Filamentary channels with high gas temperature and low density corresponding to leader discharges were observed as significant distortions in the interference fringe images. Typical diameters of the entire heated channel range from 1.5 mm to 3.5 mm with an average expansion velocity of 6.7 m/s. In contrast, typical diameters of the intensely heated region with a sharp gas density reduction range from 0.4 mm to 1.1 mm, about one third of the entire heated channel. The radial distribution of the gas density is calculated from the fringe displacements by performing an Abel inverse transform. The typical calculated gas density reduction in the center of a propagating leader channel is 80% to 90%, corresponding to a gas temperature of 1500 K to 3000 K based on the ideal gas law. Leaders tend to terminate if the central temperature is below 1500 K.
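
    The density-to-temperature conversion quoted at the end of both records can be checked with the ideal gas law; the constant-pressure assumption and the 293 K ambient temperature used below are assumptions for illustration.

      # Back-of-the-envelope check (assumptions: constant pressure, ideal gas, 293 K ambient).
      T_ambient = 293.0                                # K
      for reduction in (0.80, 0.90):                   # quoted 80%-90% central density reduction
          rho_ratio = 1.0 - reduction                  # rho_channel / rho_ambient
          T_channel = T_ambient / rho_ratio            # p = rho*R*T with p held fixed
          print(f"{reduction:.0%} density drop -> about {T_channel:.0f} K")
      # Gives about 1465 K and 2930 K, consistent with the 1500-3000 K range quoted above.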

  18. Derivation of the low Mach number diphasic system. Numerical simulation in mono-dimensional geometry

    International Nuclear Information System (INIS)

    Dellacherie, St.

    2004-01-01

    This work deals with the derivation of a diphasic low Mach number model obtained through a Mach number asymptotic expansion applied to the compressible diphasic Navier-Stokes system, an expansion which filters out the acoustic waves. This approach is inspired by the work of Andrew Majda giving the equations of low Mach number combustion for thin flames and perfect gases. When the equations of state satisfy certain thermodynamic hypotheses, we show that the low Mach number diphasic system accurately predicts the dilatation or compression of a bubble and has equilibrium convergence properties. Then, we propose an entropic and convergent Lagrangian scheme in mono-dimensional geometry when the fluids are perfect gases, and we propose a first approach in Eulerian variables where the interface between the two fluids is captured with a level set technique. (author)

  19. Multilayer beam splitter used in a soft X-ray Mach-Zehnder interferometer at working wavelength of 13.9 nm

    International Nuclear Information System (INIS)

    Zhang Zhong; Wang Zhanshan; Wang Hongchang; Wang Fengli; Wu Wenjuan; Zhang Shumin; Qin Shuji; Chen Lingyan

    2006-01-01

    The soft X-ray Mach-Zehnder interferometer is an important tool for measuring the electron densities of laser-produced plasma near the critical surface. The design, fabrication and characterization of multilayer beam splitters at 13.9 nm for a soft X-ray Mach-Zehnder interferometer are presented in this paper. The beam splitter design is based on the criterion of maximizing the product of reflectivity and transmission at 13.9 nm. The beam splitters, which are Mo/Si multilayers deposited over a 10 mm x 10 mm area on 100 nm thick Si3N4 membranes, are fabricated using magnetron sputtering. A method based on an extended He-Ne laser beam is developed to analyze the figure error of the beam splitters. The data measured by an optical profiler confirm that this visible-light method is effective for analyzing the figure of the beam splitters. The rms figure error of a beam splitter reaches 1.757 nm over the central 3.82 mm x 3.46 mm area and satisfies the needs of the soft X-ray interference experiment. The product of reflectivity and transmission measured with synchrotron radiation is close to 4%. The 13.9 nm Mach-Zehnder interferometer based on these multilayer beam splitters was used in a 13.9 nm soft X-ray laser interference experiment, in which clear interferograms of a C8H8 laser-produced plasma were obtained. (authors)

  20. A comparison of shock-cloud and wind-cloud interactions: effect of increased cloud density contrast on cloud evolution

    Science.gov (United States)

    Goldsmith, K. J. A.; Pittard, J. M.

    2018-05-01

    The similarities, or otherwise, of a shock or wind interacting with a cloud of density contrast χ = 10 were explored in a previous paper. Here, we investigate such interactions with clouds of higher density contrast. We compare the adiabatic hydrodynamic interaction of a Mach 10 shock with a spherical cloud of χ = 10³ with that of a cloud embedded in a wind with identical parameters to the post-shock flow. We find that initially there are only minor morphological differences between the shock-cloud and wind-cloud interactions, compared to when χ = 10. However, once the transmitted shock exits the cloud, the development of a turbulent wake and fragmentation of the cloud differs between the two simulations. On increasing the wind Mach number, we note the development of a thin, smooth tail of cloud material, which is then disrupted by the fragmentation of the cloud core and subsequent `mass-loading' of the flow. We find that the normalized cloud mixing time (t_mix) is shorter at higher χ. However, a strong Mach number dependence on t_mix and the normalized cloud drag time, t'_drag, is not observed. Mach-number-dependent values of t_mix and t'_drag from comparable shock-cloud interactions converge towards the Mach-number-independent time-scales of the wind-cloud simulations. We find that high χ clouds can be accelerated up to 80-90 per cent of the wind velocity and travel large distances before being significantly mixed. However, complete mixing is not achieved in our simulations and at late times the flow remains perturbed.

  1. Hyper-X Mach 7 Scramjet Design, Ground Test and Flight Results

    Science.gov (United States)

    Ferlemann, Shelly M.; McClinton, Charles R.; Rock, Ken E.; Voland, Randy T.

    2005-01-01

    The successful Mach 7 flight test of the Hyper-X (X-43) research vehicle has provided the major, essential demonstration of the capability of the airframe integrated scramjet engine. This flight was a crucial first step toward realizing the potential for airbreathing hypersonic propulsion for application to space launch vehicles. However, it is not sufficient to have just achieved a successful flight. The more useful knowledge gained from the flight is how well the prediction methods matched the actual test results in order to have confidence that these methods can be applied to the design of other scramjet engines and powered vehicles. The propulsion predictions for the Mach 7 flight test were calculated using the computer code, SRGULL, with input from computational fluid dynamics (CFD) and wind tunnel tests. This paper will discuss the evolution of the Mach 7 Hyper-X engine, ground wind tunnel experiments, propulsion prediction methodology, flight results and validation of design methods.

  2. MCNP variance reduction overview

    International Nuclear Information System (INIS)

    Hendricks, J.S.; Booth, T.E.

    1985-01-01

    The MCNP code is rich in variance reduction features. Standard variance reduction methods found in most Monte Carlo codes are available as well as a number of methods unique to MCNP. We discuss the variance reduction features presently in MCNP as well as new ones under study for possible inclusion in future versions of the code

  3. Spectral Ambiguity of Allan Variance

    Science.gov (United States)

    Greenhall, C. A.

    1996-01-01

    We study the extent to which knowledge of Allan variance and other finite-difference variances determines the spectrum of a random process. The variance of first differences is known to determine the spectrum. We show that, in general, the Allan variance does not. A complete description of the ambiguity is given.
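
    For readers unfamiliar with the quantity under discussion, a minimal non-overlapping Allan variance estimator looks like the following Python sketch; the white-noise test signal and the averaging factors are assumptions for illustration only.

      import numpy as np

      # Non-overlapping Allan variance: at averaging time tau = m * tau0 it is half the
      # mean squared difference of successive m-sample averages of the frequency data.
      def allan_variance(y, m):
          n_blocks = len(y) // m
          block_means = y[:n_blocks * m].reshape(n_blocks, m).mean(axis=1)
          return 0.5 * np.mean(np.diff(block_means)**2)

      rng = np.random.default_rng(1)
      y = rng.standard_normal(100_000)          # assumed white frequency noise at interval tau0
      for m in (1, 10, 100, 1000):
          print(m, allan_variance(y, m))        # decreases roughly as 1/m for white FM noise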

  4. Entropy-based viscous regularization for the multi-dimensional Euler equations in low-Mach and transonic flows

    Energy Technology Data Exchange (ETDEWEB)

    Marc O Delchini; Jean E. Ragusa; Ray A. Berry

    2015-07-01

    We present a new version of the entropy viscosity method, a viscous regularization technique for hyperbolic conservation laws, that is well-suited for low-Mach flows. By means of a low-Mach asymptotic study, new expressions for the entropy viscosity coefficients are derived. These definitions are valid for a wide range of Mach numbers, from subsonic flows (with very low Mach numbers) to supersonic flows, and no longer depend on an analytical expression for the entropy function. In addition, the entropy viscosity method is extended to Euler equations with variable area for nozzle flow problems. The effectiveness of the method is demonstrated using various 1-D and 2-D benchmark tests: flow in a converging–diverging nozzle; Leblanc shock tube; slow moving shock; strong shock for liquid phase; low-Mach flows around a cylinder and over a circular hump; and supersonic flow in a compression corner. Convergence studies are performed for smooth solutions and solutions with shocks present.

  5. Rotating detectors and Mach's principle

    International Nuclear Information System (INIS)

    Paola, R.D.M. de; Svaiter, N.F.

    2000-08-01

    In this work we consider a quantum version of Newton's bucket experiment in a flat spacetime: we take an Unruh-DeWitt detector in interaction with a real massless scalar field. We calculate the detector's excitation rate when it is uniformly rotating around some fixed point and the field is prepared in the Minkowski vacuum and also when the detector is inertial and the field is in the Trocheries-Takeno vacuum state. These results are compared and the relations with Mach's principle are discussed. (author)

  6. Di-hadron azimuthal correlation and Mach-like cone structure in a parton/hadron transport model

    International Nuclear Information System (INIS)

    Ma, G.L.; Zhang, S.; Ma, Y.G.; Huang, H.Z.; Cai, X.Z.; Chen, J.H.; He, Z.J.; Long, J.L.; Shen, W.Q.; Shi, X.H.; Zuo, J.X.

    2006-01-01

    In the framework of a multi-phase transport model (AMPT) with both partonic and hadronic interactions, azimuthal correlations between trigger particles and associated scattering particles have been studied by the mixing-event technique. The trigger and associated particles are selected within specified transverse-momentum (p_T) windows in collisions at √s_NN = 200 GeV. A Mach-like structure has been observed in correlation functions for central collisions. By comparing scenarios with and without parton cascade and hadronic rescattering, we show that both partonic and hadronic dynamical mechanisms contribute to the Mach-like structure of the associated-particle azimuthal correlations. The contribution of the hadronic dynamical process cannot be ignored in the emergence of Mach-like correlations of the soft scattered associated hadrons. However, hadronic rescattering alone cannot reproduce the experimental amplitude of the Mach-like cone on the away side, and the parton cascade process is essential to describe it. In addition, both the associated multiplicity and the sum of p_T decrease, while the mean p_T increases, with increasing impact parameter in the AMPT model including partonic dynamics from the string melting scenario

  7. Aeroacoustic computation of low Mach number flow

    Energy Technology Data Exchange (ETDEWEB)

    Dahl, K.S.

    1996-12-01

    This thesis explores the possibilities of applying a recently developed numerical technique to predict aerodynamically generated sound from wind turbines. The technique is a perturbation technique that has the advantage that the underlying flow field and the sound field are computed separately. Solution of the incompressible, time dependent flow field yields a hydrodynamic density correction to the incompressible constant density. The sound field is calculated from a set of equations governing the inviscid perturbations about the corrected flow field. Here, the emphasis is placed on the computation of the sound field. The nonlinear partial differential equations governing the sound field are solved numerically using an explicit MacCormack scheme. Two types of non-reflecting boundary conditions are applied; one based on the asymptotic solution of the governing equations and the other based on a characteristic analysis of the governing equations. The former condition is easy to use and it performs slightly better than the characteristic based condition. The technique is applied to the problems of the sound generation of a pulsating sphere, which is a monopole; a co-rotating vortex pair, which is a quadrupole, and the viscous flow over a circular cylinder, which is a dipole. The governing equations are written and solved for spherical, Cartesian, and cylindrical coordinates, respectively, thus, representing three common orthogonal coordinate systems. Numerical results agree very well with the analytical solutions for the problems of the pulsating sphere and the co-rotating vortex pair. Numerical results for the viscous flow over a cylinder are presented and evaluated qualitatively. The technique has potential for applications to airfoil flows as they are on a wind turbine blade, as well as for other low Mach number flows. (au) 2 tabs., 33 ills., 48 refs.

  8. Supersonic and transonic Mach probe for calibration control in the Trisonic Wind Tunnel

    Directory of Open Access Journals (Sweden)

    Alexandru Marius PANAIT

    2017-12-01

    Full Text Available A supersonic and high-speed transonic Pitot-Prandtl probe is described as it can be implemented in the Trisonic Wind Tunnel for calibration and verification of Mach number precision. A new calculation method for arbitrary-precision Mach numbers is proposed and explained. The probe is specially designed for the Trisonic wind tunnel and would greatly simplify obtaining a precise Mach calibration in the critical high transonic and low supersonic regimes, where wind tunnels typically exhibit poor performance. The supersonic Pitot-Prandtl combined probe is well known in the aerospace industry; however, the proposed probe is a derivative of the standard configuration, combining a stout cone-cylinder probe with a supersonic Pitot static port, which allows this configuration to validate the Mach number by three methods: the conical flow method, using the pressure ports on a cone generatrix; the Schlieren-optical method of shock-wave angle photogrammetry; and the Rayleigh supersonic Pitot equation, while having an aerodynamic blockage similar to that of a scaled rocket model commonly used in testing. The proposed probe uses an existing cone-cylinder probe forebody and support, adding only an afterbody with a support for a static port.
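
    Of the three methods listed, the Rayleigh supersonic Pitot equation lends itself to a short numerical illustration. The sketch below inverts the standard Rayleigh Pitot relation for Mach number from a measured pitot-to-static pressure ratio; the pressure ratio value and the use of gamma = 1.4 are assumptions for illustration, not data from the probe or the facility.

      GAMMA = 1.4

      def rayleigh_pitot_ratio(M, g=GAMMA):
          """Pitot-to-static pressure ratio p02/p1 behind the probe's normal shock (M > 1)."""
          term1 = ((g + 1.0)**2 * M**2 / (4.0 * g * M**2 - 2.0 * (g - 1.0)))**(g / (g - 1.0))
          term2 = (1.0 - g + 2.0 * g * M**2) / (g + 1.0)
          return term1 * term2

      def mach_from_ratio(ratio, lo=1.001, hi=10.0, tol=1e-10):
          """Invert the (monotonic) Rayleigh Pitot relation by bisection."""
          for _ in range(200):
              mid = 0.5 * (lo + hi)
              if rayleigh_pitot_ratio(mid) < ratio:
                  lo = mid
              else:
                  hi = mid
              if hi - lo < tol:
                  break
          return 0.5 * (lo + hi)

      measured_ratio = 4.0                     # assumed example p02/p1, not facility data
      print(f"inferred Mach number: {mach_from_ratio(measured_ratio):.3f}")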

  9. Increased Mach Number Capability for the NASA Glenn 10x10 Supersonic Wind Tunnel

    Science.gov (United States)

    Slater, J. W.; Saunders, J. D.

    2015-01-01

    Computational simulations and wind tunnel testing were conducted to explore the operation of the Abe Silverstein Supersonic Wind Tunnel at the NASA Glenn Research Center at test section Mach numbers above the current limit of Mach 3.5. An increased Mach number would enhance the capability for testing of supersonic and hypersonic propulsion systems. The focus of the explorations was on understanding the flow within the second throat of the tunnel, which is downstream of the test section and is where the supersonic flow decelerates to subsonic flow. Methods of computational fluid dynamics (CFD) were applied to provide details of the shock boundary layer structure and to estimate losses in total pressure. The CFD simulations indicated that the tunnel could be operated up to Mach 4.0 if the minimum width of the second throat was made smaller than that used for previous operation of the tunnel. Wind tunnel testing was able to confirm such operation of the tunnel at Mach 3.6 and 3.7 before a hydraulic failure caused a stop to the testing. CFD simulations performed after the wind tunnel testing showed good agreement with test data consisting of static pressures along the ceiling of the second throat. The CFD analyses showed increased shockwave boundary layer interactions, which was also observed as increased unsteadiness of dynamic pressures collected in the wind tunnel testing.

  10. Unraveling the genetic architecture of environmental variance of somatic cell score using high-density single nucleotide polymorphism and cow data from experimental farms.

    Science.gov (United States)

    Mulder, H A; Crump, R E; Calus, M P L; Veerkamp, R F

    2013-01-01

    In recent years, it has been shown that not only is the phenotype under genetic control, but also the environmental variance. Very little, however, is known about the genetic architecture of environmental variance. The main objective of this study was to unravel the genetic architecture of the mean and environmental variance of somatic cell score (SCS) by identifying genome-wide associations for mean and environmental variance of SCS in dairy cows and by quantifying the accuracy of genome-wide breeding values. Somatic cell score was used because previous research has shown that the environmental variance of SCS is partly under genetic control and reduction of the variance of SCS by selection is desirable. In this study, we used 37,590 single nucleotide polymorphism (SNP) genotypes and 46,353 test-day records of 1,642 cows at experimental research farms in 4 countries in Europe. We used a genomic relationship matrix in a double hierarchical generalized linear model to estimate genome-wide breeding values and genetic parameters. The estimated mean and environmental variance per cow was used in a Bayesian multi-locus model to identify SNP associated with either the mean or the environmental variance of SCS. Based on the obtained accuracy of genome-wide breeding values, 985 and 541 independent chromosome segments affecting the mean and environmental variance of SCS, respectively, were identified. Using a genomic relationship matrix increased the accuracy of breeding values relative to using a pedigree relationship matrix. In total, 43 SNP were significantly associated with either the mean (22) or the environmental variance of SCS (21). The SNP with the highest Bayes factor was on chromosome 9 (Hapmap31053-BTA-111664) explaining approximately 3% of the genetic variance of the environmental variance of SCS. Other significant SNP explained less than 1% of the genetic variance. It can be concluded that fewer genomic regions affect the environmental variance of SCS than the

  11. Mach Stability Improvements Using an Existing Second Throat Capability at the National Transonic Facility

    Science.gov (United States)

    Chan, David T.; Balakrishna, Sundareswara; Walker, Eric L.; Goodliff, Scott L.

    2015-01-01

    Recent data quality improvements at the National Transonic Facility have an intended goal of reducing the Mach number variation in a data point to within plus or minus 0.0005, with the ultimate goal of reducing the data repeatability of the drag coefficient for full-span subsonic transport models at transonic speeds to within half a drag count. This paper will discuss the Mach stability improvements achieved through the use of an existing second throat capability at the NTF to create a minimum area at the end of the test section. These improvements were demonstrated using both the NASA Common Research Model and the NTF Pathfinder-I model in recent experiments. Sonic conditions at the throat were verified using sidewall static pressure data. The Mach variation levels from both experiments in the baseline tunnel configuration and the choked tunnel configuration will be presented and the correlation between Mach number and drag will also be examined. Finally, a brief discussion is given on the consequences of using the second throat in its location at the end of the test section.

  12. Prediction-error variance in Bayesian model updating: a comparative study

    Science.gov (United States)

    Asadollahi, Parisa; Li, Jian; Huang, Yong

    2017-04-01

    In Bayesian model updating, the likelihood function is commonly formulated by stochastic embedding, in which the maximum information entropy probability model of the prediction error variances plays an important role; it is a Gaussian distribution subject to the first two moments as constraints. The selection of prediction error variances can be formulated as a model class selection problem, which automatically involves a trade-off between the average data-fit of the model class and the information it extracts from the data. Therefore, it is critical for the robustness of the structural model updating, especially in the presence of modeling errors. To date, three ways of treating prediction error variances have been seen in the literature: 1) setting constant values empirically, 2) estimating them based on the goodness-of-fit of the measured data, and 3) updating them as uncertain parameters by applying Bayes' Theorem at the model class level. In this paper, the effect of different strategies to deal with the prediction error variances on the model updating performance is investigated explicitly. A six-story shear building model with six uncertain stiffness parameters is employed as an illustrative example. Transitional Markov Chain Monte Carlo is used to draw samples of the posterior probability density function of the structural model parameters as well as the uncertain prediction variances. The different levels of modeling uncertainty and complexity are modeled through three FE models, including a true model, a model with more complexity, and a model with modeling error. Bayesian updating is performed for the three FE models considering the three aforementioned treatments of the prediction error variances. The effect of the number of measurements on the model updating performance is also examined in the study. The results are compared based on model class assessment and indicate that updating the prediction error variances as uncertain parameters at the model
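
    The third treatment, updating the prediction-error variance as an uncertain parameter, can be illustrated with a deliberately simple sketch: a one-parameter linear model whose error variance is sampled jointly with the parameter by random-walk Metropolis. Everything below (the model y = theta*x, the synthetic data, the flat priors, and the proposal width) is an assumption for illustration; the paper itself uses Transitional MCMC on a six-story shear building model.

      import numpy as np

      rng = np.random.default_rng(0)
      x = np.linspace(0.0, 1.0, 50)
      y_obs = 2.0 * x + rng.normal(0.0, 0.1, x.size)      # synthetic measurements (assumed)

      def log_post(theta, log_var):
          var = np.exp(log_var)
          resid = y_obs - theta * x
          # Gaussian likelihood with prediction-error variance var; flat priors on
          # theta and log_var (an assumption) make this the log posterior up to a constant.
          return -0.5 * (resid**2).sum() / var - 0.5 * x.size * np.log(var)

      samples = []
      state = np.array([1.0, np.log(1.0)])                # initial (theta, log variance)
      lp = log_post(*state)
      for _ in range(20_000):
          prop = state + rng.normal(0.0, 0.05, 2)         # random-walk proposal
          lp_prop = log_post(*prop)
          if np.log(rng.uniform()) < lp_prop - lp:        # Metropolis accept/reject
              state, lp = prop, lp_prop
          samples.append(state.copy())

      samples = np.array(samples[5_000:])                 # drop burn-in
      # Should recover theta near 2 and sigma^2 near 0.01 for this synthetic data set.
      print("theta   mean:", samples[:, 0].mean())
      print("sigma^2 mean:", np.exp(samples[:, 1]).mean())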

  13. Interplay between Mach cone and radial expansion in jet events

    Energy Technology Data Exchange (ETDEWEB)

    Tachibana, Y., E-mail: tachibana@nt.phys.s.u-tokyo.ac.jp [Theoretical Research Division, Nishina Center, RIKEN, Wako 351-0198 (Japan); Department of Engineering, Nishinippon Institute of Technology, Fukuoka 800-0344 (Japan); Department of Physics, Sophia University, Tokyo 102-8554 (Japan); Hirano, T., E-mail: hirano@sophia.ac.jp [Department of Physics, Sophia University, Tokyo 102-8554 (Japan)

    2016-12-15

    We study the hydrodynamic response to jet propagation in the expanding QGP and investigate how the particle spectra after the hydrodynamic evolution of the QGP reflect it. We perform simulations of the space-time evolution of the QGP in gamma-jet events by solving (3+1)-dimensional ideal hydrodynamic equations with source terms. A Mach cone is induced by the jet energy deposition and pushes back the radial flow of the expanding background. Especially when the jet passes off-center, the number of particles emitted in the direction of the push-back decreases. This signal carries information about the formation of the Mach cone and the jet passage through the QGP fluid.

  14. Interplay between Mach cone and radial expansion in jet events

    International Nuclear Information System (INIS)

    Tachibana, Y.; Hirano, T.

    2016-01-01

    We study the hydrodynamic response to jet propagation in the expanding QGP and investigate how the particle spectra after the hydrodynamic evolution of the QGP reflect it. We perform simulations of the space-time evolution of the QGP in gamma-jet events by solving (3+1)-dimensional ideal hydrodynamic equations with source terms. A Mach cone is induced by the jet energy deposition and pushes back the radial flow of the expanding background. Especially when the jet passes off-center, the number of particles emitted in the direction of the push-back decreases. This signal carries information about the formation of the Mach cone and the jet passage through the QGP fluid.

  15. Human vision model in relation to characteristics of shapes for the Mach band effect.

    Science.gov (United States)

    Wu, Bo-Wen; Fang, Yi-Chin

    2015-10-01

    For human vision, recognizing the contours of objects means that, as the contrast variation at an object's edges increases, so does the Mach band effect. This paper investigates more deeply the relationship between changes in the contours of an object and the Mach band effect of human vision. Based on lateral inhibition and the Mach band effect, we studied subjects' eyes as they watched images of different shapes at a fixed luminance of 34 cd/m2, with changes of contrast and spatial frequency. Three types of display were used: a television, a computer monitor, and a projector. For each display used, we conducted a separate experiment for each shape. Although the maximum values for the contrast sensitivity function curves of the displays were different, their variations were minimal. As the spatial frequency changed, the diminishing effect of the different lines also was minimal. However, as the shapes at the contour intersections were modified by the Mach band effect, a greater degree of variation occurred. In addition, as the spatial frequency at a contour intersection increased, the Mach band effect became weaker, along with changes in the corresponding contrast sensitivity function curve. Our experimental results on the characteristics of human vision have led to what we believe is a new vision model based on tests with different shapes. This new model may be used for future development and implementation of an artificial vision system.
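
    The link between lateral inhibition and Mach bands that the study builds on can be illustrated with a one-dimensional toy model: convolving a luminance ramp with a center-surround (difference-of-Gaussians) filter, a common stand-in for lateral inhibition, produces the characteristic over- and undershoots at the ends of the ramp. The kernel widths, ramp shape, and luminance levels below are arbitrary assumptions, not the stimuli used in the experiments.

      import numpy as np

      def gaussian(x, sigma):
          g = np.exp(-0.5 * (x / sigma)**2)
          return g / g.sum()

      x_k = np.arange(-30, 31)
      # Excitatory center minus inhibitory surround; the kernel sums to zero.
      kernel = gaussian(x_k, 2.0) - gaussian(x_k, 8.0)

      luminance = np.concatenate([
          np.full(100, 0.2),                # dark plateau
          np.linspace(0.2, 0.8, 60),        # ramp
          np.full(100, 0.8),                # bright plateau
      ])
      response = np.convolve(luminance, kernel, mode="same")

      # The filtered response undershoots near the dark foot of the ramp and
      # overshoots near its bright end, even though the luminance is monotonic there.
      print("dark Mach band (undershoot): ", response[95:110].min())
      print("bright Mach band (overshoot):", response[150:170].max())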

  16. Branch xylem density variations across the Amazon Basin

    Science.gov (United States)

    Patiño, S.; Lloyd, J.; Paiva, R.; Baker, T. R.; Quesada, C. A.; Mercado, L. M.; Schmerler, J.; Schwarz, M.; Santos, A. J. B.; Aguilar, A.; Czimczik, C. I.; Gallo, J.; Horna, V.; Hoyos, E. J.; Jimenez, E. M.; Palomino, W.; Peacock, J.; Peña-Cruz, A.; Sarmiento, C.; Sota, A.; Turriago, J. D.; Villanueva, B.; Vitzthum, P.; Alvarez, E.; Arroyo, L.; Baraloto, C.; Bonal, D.; Chave, J.; Costa, A. C. L.; Herrera, R.; Higuchi, N.; Killeen, T.; Leal, E.; Luizão, F.; Meir, P.; Monteagudo, A.; Neil, D.; Núñez-Vargas, P.; Peñuela, M. C.; Pitman, N.; Priante Filho, N.; Prieto, A.; Panfil, S. N.; Rudas, A.; Salomão, R.; Silva, N.; Silveira, M.; Soares Dealmeida, S.; Torres-Lezama, A.; Vásquez-Martínez, R.; Vieira, I.; Malhi, Y.; Phillips, O. L.

    2009-04-01

    Xylem density is a physical property of wood that varies between individuals, species and environments. It reflects the physiological strategies of trees that lead to growth, survival and reproduction. Measurements of branch xylem density, ρx, were made for 1653 trees representing 598 species, sampled from 87 sites across the Amazon basin. Measured values ranged from 218 kg m-3 for a Cordia sagotii (Boraginaceae) from Mountagne de Tortue, French Guiana to 1130 kg m-3 for an Aiouea sp. (Lauraceae) from Caxiuana, Central Pará, Brazil. Analysis of variance showed significant differences in average ρx across regions and sampled plots as well as significant differences between families, genera and species. A partitioning of the total variance in the dataset showed that species identity (family, genera and species) accounted for 33% with environment (geographic location and plot) accounting for an additional 26%; the remaining "residual" variance accounted for 41% of the total variance. Variations in plot means, were, however, not only accountable by differences in species composition because xylem density of the most widely distributed species in our dataset varied systematically from plot to plot. Thus, as well as having a genetic component, branch xylem density is a plastic trait that, for any given species, varies according to where the tree is growing in a predictable manner. Within the analysed taxa, exceptions to this general rule seem to be pioneer species belonging for example to the Urticaceae whose branch xylem density is more constrained than most species sampled in this study. These patterns of variation of branch xylem density across Amazonia suggest a large functional diversity amongst Amazonian trees which is not well understood.
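
    The kind of variance partitioning reported here (species identity versus environment versus residual) can be illustrated, in a much-reduced form, by the law of total variance applied to simulated data; the sketch below partitions a toy trait into a between-species component and a within-species (residual) component only, with all numbers invented for illustration.

      import numpy as np

      rng = np.random.default_rng(42)
      n_species, n_trees = 50, 20
      species_mean = rng.normal(650.0, 80.0, n_species)          # assumed per-species means [kg m-3]
      density = species_mean[:, None] + rng.normal(0.0, 60.0, (n_species, n_trees))

      total_var = density.var()                                  # variance over all trees
      between = density.mean(axis=1).var()                       # variance of species means
      within = density.var(axis=1).mean()                        # mean within-species variance
      # With a balanced design these satisfy total_var = between + within exactly.
      print(f"species identity: {between / total_var:.0%} of total variance")
      print(f"residual:         {within / total_var:.0%} of total variance")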

  17. Interferometer for electron density measurement in exploding wire plasma

    International Nuclear Information System (INIS)

    Batra, Jigyasa; Jaiswar, Ashutosh; Kaushik, T.C.

    2016-12-01

    A Mach-Zehnder interferometer (MZI) has been developed for measuring electron density profiles in pulsed plasmas. The MZI is to be used for characterizing exploding wire plasmas and for correlating electron density dynamics with x-ray emission. Experiments have been carried out to probe the electron density in pulsed plasmas produced in our laboratory, such as spark gap and exploding wire plasmas; these are microsecond phenomena. Changes in electron density have been registered in interferograms with the help of a streak camera over a specific time window. Temporal electron density profiles have been calculated by analyzing temporal fringe shifts in the interferograms. This report deals with the details of the MZI developed in our laboratory along with its theory. Basic introductory details are also provided for the exploding wire plasmas to be probed. Some demonstrative results of electron density measurements in pulsed plasmas of spark gaps and single exploding wires are described. (author)

  18. Study and discretization of kinetic models and fluid models at low Mach number

    International Nuclear Information System (INIS)

    Dellacherie, Stephane

    2011-01-01

    This thesis summarizes our work between 1995 and 2010. It concerns the analysis and the discretization of Fokker-Planck or semi-classical Boltzmann kinetic models and of Euler or Navier-Stokes fluid models at low Mach number. The studied Fokker-Planck equation models the collisions between ions and electrons in a hot plasma, and is here applied to inertial confinement fusion. The studied semi-classical Boltzmann equations are of two types. The first one models the thermonuclear reaction between a deuterium ion and a tritium ion producing an α particle and a neutron, and is also used in our case to describe inertial confinement fusion. The second one (known as the Wang-Chang and Uhlenbeck equations) models the transitions between quantized electronic energy levels of uranium and iron atoms in the AVLIS isotopic separation process. The basic properties of these two Boltzmann equations are studied, and, for the Wang-Chang and Uhlenbeck equations, a kinetic-fluid coupling algorithm is proposed. This kinetic-fluid coupling algorithm led us to study the relaxation concept for gas mixtures and immiscible fluid mixtures, and to underline connections with classical kinetic theory. Then, a diphasic low Mach number model without acoustic waves is proposed to model the deformation of the interface between two immiscible fluids induced by high heat transfers at low Mach number. In order to increase the accuracy of the results without increasing the computational cost, an AMR algorithm is studied on a simplified interface deformation model. These low Mach number studies also led us to analyse, on Cartesian meshes, the inaccuracy of Godunov schemes at low Mach number. Finally, the LBM algorithm applied to the heat equation is justified

  19. Ernst Mach, George Sarton and the Empiry of Teaching Science Part I

    Science.gov (United States)

    Siemsen, Hayo

    2012-04-01

    George Sarton had a strong influence on modern history of science. The method he pursued throughout his life was the method he had discovered in Ernst Mach's Mechanics when he was a student in Ghent. Sarton was in fact throughout his life implementing a research program inspired by the epistemology of Mach. Sarton in turn inspired many others (James Conant, Thomas Kuhn, Gerald Holton, etc.). What were the origins of these ideas in Mach and what can this origin tell us about the history of science and science education nowadays? Which ideas proved to be successful and which ones need to be improved upon? The following article will elaborate the epistemological questions which Darwin's "Origin" raised concerning human knowledge and scientific knowledge, and which led Mach to adapt the concept of what is "empirical" in contrast to metaphysical a priori assumptions a second time after Galileo. On this basis Sarton proposed "genesis and development" as the major goal of Isis. Mach had elaborated this epistemology in La Connaissance et l'Erreur (Knowledge and Error), which Sarton read in 1913 (Hiebert 1905/1976; de Mey 1984). Accordingly, for Sarton, history becomes not only a subject of science, but a method of science education. Culture, and science as part of culture, is a result of a genetic process. History of science shapes and is shaped by science and science education in a reciprocal process. Its epistemology needs to be adapted to scientific facts and the philosophy of science. Sarton was well aware of the need to develop the history of science and the philosophy of science along the lines of this reciprocal process. It was a very fruitful basis, but one specific part of it Sarton did not elaborate further: the psychology of science education. This proved to be a crucial missing element for all of science education in Sarton's succession, especially in the US. Looking again at the origins of the central questions in the thinking of Mach, which provided

  20. CO2 laser interferometer for temporally and spatially resolved electron density measurements

    Science.gov (United States)

    Brannon, P. J.; Gerber, R. A.; Gerardo, J. B.

    1982-09-01

    A 10.6-μm Mach-Zehnder interferometer has been constructed to make temporally and spatially resolved measurements of electron densities in plasmas. The device uses a pyroelectric vidicon camera and video memory to record and display the two-dimensional fringe pattern and a Pockels cell to limit the pulse width of the 10.6-μm radiation. A temporal resolution of 14 ns has been demonstrated. The relative sensitivity of the device for electron density measurements is 2×10¹⁵ cm⁻² (the line integral of the electron density along the line of sight), which corresponds to a 0.1 fringe shift.

  1. CO2 laser interferometer for temporally and spatially resolved electron density measurements

    International Nuclear Information System (INIS)

    Brannon, P.J.; Gerber, R.A.; Gerardo, J.B.

    1982-01-01

    A 10.6-μm Mach-Zehnder interferometer has been constructed to make temporally and spatially resolved measurements of electron densities in plasmas. The device uses a pyroelectric vidicon camera and video memory to record and display the two-dimensional fringe pattern and a Pockels cell to limit the pulse width of the 10.6-μm radiation. A temporal resolution of 14 ns has been demonstrated. The relative sensitivity of the device for electron density measurements is 2 x 10¹⁵ cm⁻² (the line integral of the electron density along the line of sight), which corresponds to a 0.1 fringe shift
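
    The quoted sensitivity can be reproduced from standard plasma-interferometry relations. Assuming the usual underdense-plasma refractive index mu ≈ 1 - n_e/(2 n_c), the fringe shift equals the line-integrated electron density divided by 2 n_c λ; the sketch below evaluates this for the 10.6-μm wavelength (the relation and the physical constants are textbook values, not taken from the paper).

      import numpy as np

      e    = 1.602e-19        # C
      m_e  = 9.109e-31        # kg
      eps0 = 8.854e-12        # F/m
      c    = 2.998e8          # m/s
      lam  = 10.6e-6          # m (CO2 laser wavelength)

      omega = 2.0 * np.pi * c / lam
      n_crit = eps0 * m_e * omega**2 / e**2            # critical density [m^-3]

      fringe_shift = 0.1
      n_e_dl = fringe_shift * 2.0 * n_crit * lam       # line-integrated density [m^-2]
      print(f"critical density: {n_crit:.2e} m^-3")
      print(f"0.1 fringe  ->  {n_e_dl * 1e-4:.1e} cm^-2")   # about 2e15 cm^-2, as quoted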

  2. In-stream measurements of combustion during Mach 5 to 7 tests of the Hypersonic Research Engine (HRE)

    Science.gov (United States)

    Lezberg, Erwin A.; Metzler, Allen J.; Pack, William D.

    1993-01-01

    Results of in-stream combustion measurements taken during Mach 5 to 7 true simulation testing of the Hypersonic Research Engine/Aerothermodynamic Integration Model (HRE/AIM) are presented. These results, the instrumentation techniques, and configuration changes to the engine installation that were required to test this model are described. In test runs at facility Mach numbers of 5 to 7, an exhaust instrumentation ring which formed an extension of the engine exhaust nozzle shroud provided diagnostic measurements at 10 circumferential locations in the HRE combustor exit plane. The measurements included static and pitot pressures using conventional conical probes, combustion gas temperatures from cooled-gas pyrometer probes, and species concentration from analysis of combustion gas samples. Results showed considerable circumferential variation, indicating that efficiency losses were due to nonuniform fuel distribution or incomplete mixing. Results using the Mach 7 facility nozzle but with Mach 6 temperature simulation, 1590 to 1670 K, showed indications of incomplete combustion. Nitric oxide measurements at the combustor exit peaked at 2000 ppmv for stoichiometric combustion at Mach 6.

  3. Energy, Metaphysics, and Space: Ernst Mach's Interpretation of Energy Conservation as the Principle of Causality

    Science.gov (United States)

    Guzzardi, Luca

    2014-06-01

    This paper discusses Ernst Mach's interpretation of the principle of energy conservation (EC) in the context of the development of energy concepts and ideas about causality in nineteenth-century physics and theory of science. In doing this, it focuses on the close relationship between causality, energy conservation and space in Mach's antireductionist view of science. Mach expounds his thesis about EC in his first historical-epistemological essay, Die Geschichte und die Wurzel des Satzes von der Erhaltung der Arbeit (1872): far from being a new principle, it is used from the early beginnings of mechanics independently from other principles; in fact, EC is a pre-mechanical principle which is generally applied in investigating nature: it is, indeed, nothing but a form of the principle of causality. The paper focuses on the scientific-historical premises and philosophical underpinnings of Mach's thesis, beginning with the classic debate on the validity and limits of the notion of cause by Hume, Kant, and Helmholtz. Such reference also implies a discussion of the relationship between causality on the one hand and space and time on the other. This connection plays a major role for Mach, and in the final paragraphs its importance is argued in order to understand his antireductionist perspective, i.e. the rejection of any attempt to give an ultimate explanation of the world via reduction of nature to one fundamental set of phenomena.

  4. Reexamining financial and economic predictability with new estimators of realized variance and variance risk premium

    DEFF Research Database (Denmark)

    Casas, Isabel; Mao, Xiuping; Veiga, Helena

    This study explores the predictive power of new estimators of the equity variance risk premium and conditional variance for future excess stock market returns, economic activity, and financial instability, both during and after the last global financial crisis. These estimators are obtained from...... time-varying coefficient models are the ones showing considerably higher predictive power for stock market returns and financial instability during the financial crisis, suggesting that an extreme volatility period requires models that can adapt quickly to turmoil........ Moreover, a comparison of the overall results reveals that the conditional variance gains predictive power during the global financial crisis period. Furthermore, both the variance risk premium and conditional variance are determined to be predictors of future financial instability, whereas conditional...

  5. On integral formulation of the Mach principle in a conformally flat space

    International Nuclear Information System (INIS)

    Mal'tsev, V.K.

    1976-01-01

    The integral formulation of the Mach principle represents a rather complicated mathematical formalism in which many aspects of the physical content of the theory are not clear. Below, an attempt is made to consider the integral representation for the simplest case of conformally flat spaces. The fact that in this formalism there is only one scalar function makes it possible to analyse in more detail many physical peculiarities of this representation of the Mach principle: the absence of asymptotically flat spaces, problems of inertia and gravity, constraints on equations of state, etc

  6. Microwave background anisotropies and the primordial spectrum of cosmological density fluctuations

    International Nuclear Information System (INIS)

    Suto, Yasushi; Gouda, Naoteru; Sugiyama, Naoshi

    1990-01-01

    Microwave background anisotropies in various cosmological scenarios are studied. In particular, the extent to which nonscale-invariant spectra of the primordial density fluctuations are consistent with the observational upper limits is examined. The resultant constraints are summarized as contours in the (n, Omega)-plane, where n is the power-law index of the primordial spectrum of density fluctuations and Omega is the cosmological density parameter. They are also compared with the constraints from the cosmic Mach number test, recently proposed by Ostriker and Suto (1990). The parameter regions which pass both tests are not consistent with the theoretical prejudice inspired by the inflationary model. 44 refs

  7. Effect of Mach number on thermoelectric performance of SiC ceramics nose-tip for supersonic vehicles

    International Nuclear Information System (INIS)

    Han, Xiao-Yi; Wang, Jun

    2014-01-01

    This paper focuses on the effects of Mach number on thermoelectric energy conversion, motivated by the limits imposed by aero-heating and the feasibility of energy harvesting on supersonic vehicles. A model of a nose-tip structure constructed with SiC ceramics is developed to numerically study the thermoelectric performance in a supersonic flow field by employing computational fluid dynamics and thermal conduction theory. Results are given for different Mach numbers. Moreover, the thermoelectric performance in each case is predicted with and without Thomson heat. As the Mach number increases, both the temperature difference and the conductive heat flux between the hot side and the cold side of the nose tip increase. This results in growth of the generated thermoelectric power and of the energy conversion efficiency. When the Thomson effect is included, over 50% of the total power generated converts to Thomson heat, which greatly reduces the thermoelectric power and efficiency. However, whether the Thomson effect is considered or not, as the Mach number increases from 2.5 to 4.5 the thermoelectric performance improves markedly. -- Highlights: • Thermoelectric SiC nose-tip structure for aerodynamic heat harvesting of high-speed vehicles is studied. • Thermoelectric performance is predicted based on numerical methods and experimental thermoelectric parameters. • The effects of Mach number on thermoelectric performance are studied in the present paper. • Results with respect to the Thomson effect are also explored. • Output power and energy efficiency of the thermoelectric nose-tip are increased with the increase of Mach number

  8. Branch xylem density variations across the Amazon Basin

    Directory of Open Access Journals (Sweden)

    S. Patiño

    2009-04-01

    Full Text Available Xylem density is a physical property of wood that varies between individuals, species and environments. It reflects the physiological strategies of trees that lead to growth, survival and reproduction. Measurements of branch xylem density, ρx, were made for 1653 trees representing 598 species, sampled from 87 sites across the Amazon basin. Measured values ranged from 218 kg m−3 for a Cordia sagotii (Boraginaceae from Mountagne de Tortue, French Guiana to 1130 kg m−3 for an Aiouea sp. (Lauraceae from Caxiuana, Central Pará, Brazil. Analysis of variance showed significant differences in average ρx across regions and sampled plots as well as significant differences between families, genera and species. A partitioning of the total variance in the dataset showed that species identity (family, genera and species accounted for 33% with environment (geographic location and plot accounting for an additional 26%; the remaining "residual" variance accounted for 41% of the total variance. Variations in plot means, were, however, not only accountable by differences in species composition because xylem density of the most widely distributed species in our dataset varied systematically from plot to plot. Thus, as well as having a genetic component, branch xylem density is a plastic trait that, for any given species, varies according to where the tree is growing in a predictable manner. Within the analysed taxa, exceptions to this general rule seem to be pioneer species belonging for example to the Urticaceae whose branch xylem density is more constrained than most species sampled in this study. These patterns of variation of branch xylem density across Amazonia suggest a large functional diversity amongst Amazonian trees which is not well understood.

  9. Very high Mach number shocks - Theory. [in space plasmas

    Science.gov (United States)

    Quest, Kevin B.

    1986-01-01

    The theory and simulation of collisionless perpendicular supercritical shock structure is reviewed, with major emphasis on recent research results. The primary tool of investigation is the hybrid simulation method, in which the Newtonian orbits of a large number of ion macroparticles are followed numerically, and in which the electrons are treated as a charge neutralizing fluid. The principal results include the following: (1) electron resistivity is not required to explain the observed quasi-stationarity of the earth's bow shock, (2) the structure of the perpendicular shock at very high Mach numbers depends sensitively on the upstream value of beta (the ratio of the thermal to magnetic pressure) and electron resistivity, (3) two-dimensional turbulence will become increasingly important as the Mach number is increased, and (4) nonadiabatic bulk electron heating will result when a thermal electron cannot complete a gyrorbit while transiting the shock.

  10. Iodine Tagging Velocimetry in a Mach 10 Wake

    Science.gov (United States)

    Balla, Robert Jeffrey

    2013-01-01

    A variation on molecular tagging velocimetry (MTV) [1] designated iodine tagging velocimetry (ITV) is demonstrated. Molecular iodine is tagged by two-photon absorption using an Argon Fluoride (ArF) excimer laser. A single camera measures fluid displacement using atomic iodine emission at 206 nm. Two examples of MTV for cold-flow measurements are N2O MTV [2] and Femtosecond Laser Electronic Excitation Tagging [3]. These, like most MTV methods, are designed for atmospheric pressure applications. Neither can be implemented at the low pressures (0.1-1 Torr) in typical hypersonic wakes. Of all the single-laser/single-camera MTV approaches, only Nitric-Oxide Planar Laser Induced Fluorescence-based MTV [4] has been successfully demonstrated in a Mach 10 wake. Oxygen quenching limits transit times to 500 ns and accuracy to typically 30%. The present note describes the photophysics of the ITV method. Off-body velocimetry along a line is demonstrated in the aerothermodynamically important and experimentally challenging region of a hypersonic low-pressure near-wake in a Mach 10 air wind tunnel. Transit times up to 10 µs are demonstrated with conservative errors of 10%.

  11. MACHe3: A new generation detector for non-baryonic dark matter direct detection

    International Nuclear Information System (INIS)

    Santos, D.; Mayet, F.; Perrin, G.; Moulin, E.; Bunkov, Yu. M.; Godfrin, H.; Krusius, M.

    2002-01-01

    MACHe3 (MAtrix of Cells of superfluid ³He) is a project for a new detector for direct Dark Matter (DM) search, using superfluid ³He as the sensitive medium. An experiment on a prototype cell has been performed, and the first results reported here are encouraging for the development of a multicell prototype. In order to investigate the discovery potential of MACHe3, and its complementarity with other DM detectors, a phenomenological study performed with the DarkSUSY code is shown. (authors)

  12. ELECTRON ACCELERATIONS AT HIGH MACH NUMBER SHOCKS: TWO-DIMENSIONAL PARTICLE-IN-CELL SIMULATIONS IN VARIOUS PARAMETER REGIMES

    Energy Technology Data Exchange (ETDEWEB)

    Matsumoto, Yosuke [Department of Physics, Chiba University, Yayoi-cho 1-33, Inage-ku, Chiba 263-8522 (Japan); Amano, Takanobu; Hoshino, Masahiro, E-mail: ymatumot@astro.s.chiba-u.ac.jp [Department of Earth and Planetary Science, University of Tokyo, Hongo 1-33, Bunkyo-ku, Tokyo 113-0033 (Japan)

    2012-08-20

    Electron accelerations at high Mach number collisionless shocks are investigated by means of two-dimensional electromagnetic particle-in-cell simulations with various Alfven Mach numbers, ion-to-electron mass ratios, and values of the upstream electron β_e (the ratio of the thermal pressure to the magnetic pressure). We find electrons are effectively accelerated at a super-high Mach number shock (M_A ≈ 30) with a mass ratio of M/m = 100 and β_e = 0.5. The electron shock surfing acceleration is an effective mechanism for accelerating the particles toward the relativistic regime even in two dimensions with a large mass ratio. Buneman instability excited at the leading edge of the foot in the super-high Mach number shock results in a coherent electrostatic potential structure. While multi-dimensionality allows the electrons to escape from the trapping region, they can interact with the strong electrostatic field several times. Simulation runs in various parameter regimes indicate that the electron shock surfing acceleration is an effective mechanism for producing relativistic particles in extremely high Mach number shocks in supernova remnants, provided that the upstream electron temperature is reasonably low.

  13. A study of sonic boom overpressure trends with respect to weight, altitude, Mach number, and vehicle shaping

    Science.gov (United States)

    Needleman, Kathy E.; Mack, Robert J.

    1990-01-01

    This paper presents and discusses trends in nose shock overpressure generated by two conceptual Mach 2.0 configurations. One configuration was designed for high aerodynamic efficiency, while the other was designed to produce a low boom, shaped-overpressure signature. Aerodynamic lift, sonic boom minimization, and Mach-sliced/area-rule codes were used to analyze and compute the sonic boom characteristics of both configurations with respect to cruise Mach number, weight, and altitude. The influence of these parameters on the overpressure and the overpressure trends are discussed and conclusions are given.

  14. Pinch density measurements in compact plasma foci of 400J and 50J

    International Nuclear Information System (INIS)

    Tarifeño-Saldivia, Ariel; Pavez, Cristian; Soto, Leopoldo

    2014-01-01

    A Mach-Zehnder interferometer using a pulsed Nd-YAG laser (600 mJ, 532 nm, 8 ns) was implemented to measure the electron density and the dimensions of the pinch column in two sub-kJ compact plasma focus devices operating at hundreds of joules (PF-400J) and tens of joules (PF-50J).

  15. Cost-effective evolution of research prototypes into end-user tools: The MACH case study

    DEFF Research Database (Denmark)

    Störrle, Harald

    2017-01-01

    's claim by fellow scientists, and (3) demonstrate the utility and value of the research contribution to any interested parties. However, turning an exploratory prototype into a “proper” tool for end-users often entails great effort. Heavyweight mainstream frameworks such as Eclipse do not address...... this issue; their steep learning curves constitute substantial entry barriers to such ecosystems. In this paper, we present the Model Analyzer/Checker (MACH), a stand-alone tool with a command-line interpreter. MACH integrates a set of research prototypes for analyzing UML models. By choosing a simple...... command line interpreter rather than (costly) graphical user interface, we achieved the core goal of quickly deploying research results to a broader audience while keeping the required effort to an absolute minimum. We analyze MACH as a case study of how requirements and constraints in an academic...

  16. Destructive role of hot ions in the formation of electrostatic density humps and dips in dusty plasmas

    International Nuclear Information System (INIS)

    Mahmood, S.; Saleem, H.

    2003-01-01

    It is shown that the ion thermal energy is destructive for the ion acoustic solitons in the presence of dust, and it decreases the value of Mach number for the formation of solitary structures. The regions of ion density humps and dips are produced simultaneously, corresponding to positive and negative values of the electrostatic potential. The nonlinear electron density also behaves in a similar fashion as that of ions. However, the dust density increases in the regions where the ion and electron densities are depleted and vice versa

  17. Joint Adaptive Mean-Variance Regularization and Variance Stabilization of High Dimensional Data.

    Science.gov (United States)

    Dazard, Jean-Eudes; Rao, J Sunil

    2012-07-01

    The paper addresses a common problem in the analysis of high-dimensional high-throughput "omics" data, which is parameter estimation across multiple variables in a set of data where the number of variables is much larger than the sample size. Among the problems posed by this type of data are that variable-specific estimators of variances are not reliable and variable-wise test statistics have low power, both due to a lack of degrees of freedom. In addition, it has been observed in this type of data that the variance increases as a function of the mean. We introduce a non-parametric adaptive regularization procedure that is innovative in that: (i) it employs a novel "similarity statistic"-based clustering technique to generate local-pooled or regularized shrinkage estimators of population parameters, (ii) the regularization is done jointly on population moments, benefiting from C. Stein's result on inadmissibility, which implies that the usual sample variance estimator is improved by a shrinkage estimator using information contained in the sample mean. From these joint regularized shrinkage estimators, we derived regularized t-like statistics and show in simulation studies that they offer more statistical power in hypothesis testing than their standard sample counterparts, or regular common value-shrinkage estimators, or when the information contained in the sample mean is simply ignored. Finally, we show that these estimators feature interesting properties of variance stabilization and normalization that can be used for preprocessing high-dimensional multivariate data. The method is available as an R package, called 'MVR' ('Mean-Variance Regularization'), downloadable from the CRAN website.
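
    The idea that per-variable variance estimates can be improved by borrowing strength across variables when the number of variables greatly exceeds the sample size is easy to demonstrate with a deliberately crude form of shrinkage; note that this is not the MVR package's actual joint mean-variance procedure, and the fixed shrinkage weight below is an assumption.

      import numpy as np

      rng = np.random.default_rng(3)
      n_vars, n_samples = 5_000, 6                 # many variables, tiny sample size
      true_sd = rng.uniform(0.5, 2.0, n_vars)
      data = rng.normal(0.0, true_sd[:, None], (n_vars, n_samples))

      sample_var = data.var(axis=1, ddof=1)        # noisy: only 5 degrees of freedom each
      pooled_var = sample_var.mean()
      lam = 0.5                                    # assumed fixed shrinkage weight
      shrunk_var = lam * pooled_var + (1.0 - lam) * sample_var

      true_var = true_sd**2
      print("MSE raw   :", np.mean((sample_var - true_var)**2))
      print("MSE shrunk:", np.mean((shrunk_var - true_var)**2))   # typically smaller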

  18. Portfolio optimization using median-variance approach

    Science.gov (United States)

    Wan Mohd, Wan Rosanisah; Mohamad, Daud; Mohamed, Zulkifli

    2013-04-01

    Optimization models have been applied in many decision-making problems particularly in portfolio selection. Since the introduction of Markowitz's theory of portfolio selection, various approaches based on mathematical programming have been introduced such as mean-variance, mean-absolute deviation, mean-variance-skewness and conditional value-at-risk (CVaR) mainly to maximize return and minimize risk. However most of the approaches assume that the distribution of data is normal and this is not generally true. As an alternative, in this paper, we employ the median-variance approach to improve the portfolio optimization. This approach has successfully catered both types of normal and non-normal distribution of data. With this actual representation, we analyze and compare the rate of return and risk between the mean-variance and the median-variance based portfolio which consist of 30 stocks from Bursa Malaysia. The results in this study show that the median-variance approach is capable to produce a lower risk for each return earning as compared to the mean-variance approach.

  19. Efficient Cardinality/Mean-Variance Portfolios

    OpenAIRE

    Brito, R. Pedro; Vicente, Luís Nunes

    2014-01-01

    International audience; We propose a novel approach to handle cardinality in portfolio selection, by means of a biobjective cardinality/mean-variance problem, allowing the investor to analyze the efficient tradeoff between return-risk and number of active positions. Recent progress in multiobjective optimization without derivatives allow us to robustly compute (in-sample) the whole cardinality/mean-variance efficient frontier, for a variety of data sets and mean-variance models. Our results s...

  20. Low Mach number limits of compressible rotating fluids

    Czech Academy of Sciences Publication Activity Database

    Feireisl, Eduard

    2012-01-01

    Roč. 14, č. 1 (2012), s. 61-78 ISSN 1422-6928 R&D Projects: GA ČR GA201/08/0315 Institutional research plan: CEZ:AV0Z10190503 Keywords : low Mach number limit * rotating fluid * compressible fluid Subject RIV: BA - General Mathematics Impact factor: 1.415, year: 2012 http://www.springerlink.com/content/635r1116j40t6428/

  1. Acoustic Radiation From a Mach 14 Turbulent Boundary Layer

    Science.gov (United States)

    Zhang, Chao; Duan, Lian; Choudhari, Meelan M.

    2016-01-01

Direct numerical simulations (DNS) are used to examine the turbulence statistics and the radiation field generated by a high-speed turbulent boundary layer with a nominal freestream Mach number of 14 and wall temperature of 0.18 times the recovery temperature. The flow conditions fall within the range of nozzle exit conditions of the Arnold Engineering Development Center (AEDC) Hypervelocity Tunnel No. 9 facility. The streamwise domain size is approximately 200 times the boundary-layer thickness at the inlet, with a useful range of Reynolds number corresponding to Re approximately 450 to 650. Consistent with previous studies of turbulent boundary layers at high Mach numbers, the weak compressibility hypothesis for turbulent boundary layers remains applicable under this flow condition and the computational results confirm the validity of both the van Driest transformation and Morkovin's scaling. The Reynolds analogy is valid at the surface; the RMS of fluctuations in the surface pressure, wall shear stress, and heat flux is 24%, 53%, and 67% of the surface mean, respectively. The magnitude and dominant frequency of pressure fluctuations are found to vary dramatically within the inner layer (z/delta < or approx. 0.08 or z+ < or approx. 50). The peak of the pre-multiplied frequency spectrum of the pressure fluctuation is f(delta)/U(sub infinity) approx. 2.1 at the surface and shifts to a lower frequency of f(delta)/U(sub infinity) approx. 0.7 in the free stream where the pressure signal is predominantly acoustic. The dominant frequency of the pressure spectrum shows a significant dependence on the freestream Mach number both at the wall and in the free stream.

  2. Mach number scaling of helicopter rotor blade/vortex interaction noise

    Science.gov (United States)

    Leighton, Kenneth P.; Harris, Wesley L.

    1985-01-01

    A parametric study of model helicopter rotor blade slap due to blade vortex interaction (BVI) was conducted in a 5 by 7.5-foot anechoic wind tunnel using model helicopter rotors with two, three, and four blades. The results were compared with a previously developed Mach number scaling theory. Three- and four-bladed rotor configurations were found to show very good agreement with the Mach number to the sixth power law for all conditions tested. A reduction of conditions for which BVI blade slap is detected was observed for three-bladed rotors when compared to the two-bladed baseline. The advance ratio boundaries of the four-bladed rotor exhibited an angular dependence not present for the two-bladed configuration. The upper limits for the advance ratio boundaries of the four-bladed rotors increased with increasing rotational speed.

  3. An implicit turbulence model for low-Mach Roe scheme using truncated Navier-Stokes equations

    Science.gov (United States)

    Li, Chung-Gang; Tsubokura, Makoto

    2017-09-01

The original Roe scheme is well-known to be unsuitable in simulations of turbulence because the dissipation that develops is unsatisfactory. Simulations of turbulent channel flow for Reτ = 180 show that, with the 'low-Mach-fix for Roe' (LMRoe) proposed by Rieper [J. Comput. Phys. 230 (2011) 5263-5287], the Roe dissipation term potentially equates the simulation to an implicit large eddy simulation (ILES) at low Mach number. Thus inspired, a new implicit turbulence model for low Mach numbers is proposed that controls the Roe dissipation term appropriately. Referred to as the automatic dissipation adjustment (ADA) model, the method of solution follows procedures developed previously for the truncated Navier-Stokes (TNS) equations and, without tuning of parameters, uses the energy ratio as a criterion to automatically adjust the upwind dissipation. Simulations of turbulent channel flow at two different Reynolds numbers and of the Taylor-Green vortex were performed to validate the ADA model. In simulations of turbulent channel flow for Reτ = 180 at a Mach number of 0.05 using the ADA model, the mean velocity and turbulence intensities are in excellent agreement with DNS results. With Reτ = 950 at a Mach number of 0.1, the result is also consistent with DNS results, indicating that the ADA model is also reliable at higher Reynolds numbers. In simulations of the Taylor-Green vortex at Re = 3000, the kinetic energy is consistent with the power law of decaying turbulence with a -1.2 exponent for both LMRoe with and without the ADA model. However, with the ADA model, the dissipation rate can be significantly improved near the dissipation peak region and the peak duration can also be more accurately captured. With a firm basis in TNS theory, applicability at higher Reynolds number, and ease of implementation as no extra terms are needed, the ADA model offers to become a promising tool for turbulence modeling.

  4. Exact statistical results for binary mixing and reaction in variable density turbulence

    Science.gov (United States)

    Ristorcelli, J. R.

    2017-02-01

We report a number of rigorous statistical results on binary active scalar mixing in variable density turbulence. The study is motivated by mixing between pure fluids with very different densities and whose density intensity is of order unity. Our primary focus is the derivation of exact mathematical results for mixing in variable density turbulence and we do point out the potential fields of application of the results. A binary one-step reaction is invoked to derive a metric to assess the state of mixing. The mean reaction rate in variable density turbulent mixing can be expressed, in closed form, using the first-order Favre mean variables and the Reynolds-averaged density variance, ⟨ρ'²⟩. We show that the normalized density variance ⟨ρ'²⟩ reflects the reduction of the reaction due to mixing and is a mix metric. The result is mathematically rigorous. The result is the variable density analog of the normalized mass fraction variance ⟨c'²⟩ used in constant density turbulent mixing. As a consequence, we demonstrate that use of the analogous normalized Favre variance of the mass fraction as a mix metric is not theoretically justified in variable density turbulence. We additionally derive expressions relating various second-order moments of the mass fraction, specific volume, and density fields. The central role of the density-specific volume covariance ⟨ρ'v'⟩ is highlighted; it is a key quantity with considerable dynamical significance linking various second-order statistics. For laboratory experiments, we have developed exact relations between the Reynolds scalar variance ⟨c'²⟩, its Favre analog, and various second moments including ⟨ρ'v'⟩. For moment closure models that evolve ⟨ρ'v'⟩ and not ⟨ρ'²⟩, we provide a novel expression for ⟨ρ'²⟩ in terms of a rational function of ⟨ρ'v'⟩ that avoids recourse to Taylor series methods (which do not converge for large density differences). We have derived
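For readers unfamiliar with the two averaging operators contrasted in this record, the standard textbook definitions (not the paper's new results) are

```latex
\bar{f} \equiv \langle f \rangle, \quad f' \equiv f - \bar{f}, \qquad
\tilde{f} \equiv \frac{\overline{\rho f}}{\bar{\rho}}, \quad f'' \equiv f - \tilde{f}, \qquad
\widetilde{f''^{2}} \equiv \frac{\overline{\rho f''^{2}}}{\bar{\rho}},
```

so the Favre (density-weighted) variance of the mass fraction generally differs from the Reynolds variance ⟨c'²⟩ whenever density and mass-fraction fluctuations are correlated, which is why the two mix metrics discussed above are not interchangeable.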

  5. Exploring the MACH Model’s Potential as a Metacognitive Tool to Help Undergraduate Students Monitor Their Explanations of Biological Mechanisms

    Science.gov (United States)

    Trujillo, Caleb M.; Anderson, Trevor R.; Pelaez, Nancy J.

    2016-01-01

    When undergraduate biology students learn to explain biological mechanisms, they face many challenges and may overestimate their understanding of living systems. Previously, we developed the MACH model of four components used by expert biologists to explain mechanisms: Methods, Analogies, Context, and How. This study explores the implementation of the model in an undergraduate biology classroom as an educational tool to address some of the known challenges. To find out how well students’ written explanations represent components of the MACH model before and after they were taught about it and why students think the MACH model was useful, we conducted an exploratory multiple case study with four interview participants. We characterize how two students explained biological mechanisms before and after a teaching intervention that used the MACH components. Inductive analysis of written explanations and interviews showed that MACH acted as an effective metacognitive tool for all four students by helping them to monitor their understanding, communicate explanations, and identify explanatory gaps. Further research, though, is needed to more fully substantiate the general usefulness of MACH for promoting students’ metacognition about their understanding of biological mechanisms. PMID:27252295

  6. Approximation errors during variance propagation

    International Nuclear Information System (INIS)

    Dinsmore, Stephen

    1986-01-01

    Risk and reliability analyses are often performed by constructing and quantifying large fault trees. The inputs to these models are component failure events whose probability of occuring are best represented as random variables. This paper examines the errors inherent in two approximation techniques used to calculate the top event's variance from the inputs' variance. Two sample fault trees are evaluated and several three dimensional plots illustrating the magnitude of the error over a wide range of input means and variances are given
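The record does not reproduce the approximation techniques themselves, but the standard first-order (delta-method) propagation that such fault-tree calculations typically rely on is easy to state and to check against an exact result; a minimal sketch, with an illustrative two-input AND gate that is not taken from the paper:

```python
import numpy as np

# Top event of a two-input AND gate with independent inputs: P_top = p1 * p2
mu1, var1 = 1.0e-3, 2.5e-7
mu2, var2 = 5.0e-3, 1.0e-6

# First-order propagation: Var(f) ~ (df/dp1)^2 Var(p1) + (df/dp2)^2 Var(p2)
var_first_order = (mu2 ** 2) * var1 + (mu1 ** 2) * var2

# Exact variance of a product of independent random variables, for comparison
var_exact = (var1 + mu1 ** 2) * (var2 + mu2 ** 2) - (mu1 * mu2) ** 2

print(var_first_order, var_exact)   # the gap between the two is the approximation error
```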

  7. Characteristics of the mach disk in the underexpanded jet in which the back pressure continuously changes with time

    Science.gov (United States)

    Irie, T.; Yasunobu, T.; Kashimura, H.; Setoguchi, T.

    2003-05-01

When high-pressure gas is exhausted from a nozzle into a vacuum chamber, an underexpanded supersonic jet containing a Mach disk is generally formed. The eventual purpose of this study is to clarify the unsteady behaviour of the underexpanded free jet when the back pressure changes continuously with time. In this paper, the characteristics of the Mach disk, namely its diameter and position, are clarified by numerical analysis. A sonic jet with exit Mach number Me=1 is assumed, and the axisymmetric conservation equations are solved by the TVD method in the numerical calculation. The diameter and position of the Mach disk differ from those of a steady jet, and the influence of the continuously changing back pressure is evident from the comparison with the case of a steady supersonic jet.

  8. The phenotypic variance gradient - a novel concept.

    Science.gov (United States)

    Pertoldi, Cino; Bundgaard, Jørgen; Loeschcke, Volker; Barker, James Stuart Flinton

    2014-11-01

    Evolutionary ecologists commonly use reaction norms, which show the range of phenotypes produced by a set of genotypes exposed to different environments, to quantify the degree of phenotypic variance and the magnitude of plasticity of morphometric and life-history traits. Significant differences among the values of the slopes of the reaction norms are interpreted as significant differences in phenotypic plasticity, whereas significant differences among phenotypic variances (variance or coefficient of variation) are interpreted as differences in the degree of developmental instability or canalization. We highlight some potential problems with this approach to quantifying phenotypic variance and suggest a novel and more informative way to plot reaction norms: namely "a plot of log (variance) on the y-axis versus log (mean) on the x-axis, with a reference line added". This approach gives an immediate impression of how the degree of phenotypic variance varies across an environmental gradient, taking into account the consequences of the scaling effect of the variance with the mean. The evolutionary implications of the variation in the degree of phenotypic variance, which we call a "phenotypic variance gradient", are discussed together with its potential interactions with variation in the degree of phenotypic plasticity and canalization.

  9. Evolution of Genetic Variance during Adaptive Radiation.

    Science.gov (United States)

    Walter, Greg M; Aguirre, J David; Blows, Mark W; Ortiz-Barrientos, Daniel

    2018-04-01

    Genetic correlations between traits can concentrate genetic variance into fewer phenotypic dimensions that can bias evolutionary trajectories along the axis of greatest genetic variance and away from optimal phenotypes, constraining the rate of evolution. If genetic correlations limit adaptation, rapid adaptive divergence between multiple contrasting environments may be difficult. However, if natural selection increases the frequency of rare alleles after colonization of new environments, an increase in genetic variance in the direction of selection can accelerate adaptive divergence. Here, we explored adaptive divergence of an Australian native wildflower by examining the alignment between divergence in phenotype mean and divergence in genetic variance among four contrasting ecotypes. We found divergence in mean multivariate phenotype along two major axes represented by different combinations of plant architecture and leaf traits. Ecotypes also showed divergence in the level of genetic variance in individual traits and the multivariate distribution of genetic variance among traits. Divergence in multivariate phenotypic mean aligned with divergence in genetic variance, with much of the divergence in phenotype among ecotypes associated with changes in trait combinations containing substantial levels of genetic variance. Overall, our results suggest that natural selection can alter the distribution of genetic variance underlying phenotypic traits, increasing the amount of genetic variance in the direction of natural selection and potentially facilitating rapid adaptive divergence during an adaptive radiation.
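The alignment described in this record is commonly quantified by comparing the direction of divergence in trait means with the leading eigenvector of the genetic covariance (G) matrix; a minimal sketch of that comparison, with a made-up 3-trait G matrix and divergence vector rather than the study's data:

```python
import numpy as np

G = np.array([[2.0, 1.2, 0.4],
              [1.2, 1.5, 0.3],
              [0.4, 0.3, 0.8]])      # hypothetical genetic covariance matrix (3 traits)
d = np.array([1.0, 0.8, 0.1])        # hypothetical divergence in trait means between ecotypes

evals, evecs = np.linalg.eigh(G)
g_max = evecs[:, -1]                  # axis of greatest genetic variance

cos_angle = abs(g_max @ d) / (np.linalg.norm(g_max) * np.linalg.norm(d))
print("angle between divergence and g_max (degrees):", np.degrees(np.arccos(cos_angle)))
```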

  10. Confidence Interval Approximation For Treatment Variance In ...

    African Journals Online (AJOL)

    In a random effects model with a single factor, variation is partitioned into two as residual error variance and treatment variance. While a confidence interval can be imposed on the residual error variance, it is not possible to construct an exact confidence interval for the treatment variance. This is because the treatment ...

  11. Simulating variable-density flows with time-consistent integration of Navier-Stokes equations

    Science.gov (United States)

    Lu, Xiaoyi; Pantano, Carlos

    2017-11-01

    In this talk, we present several features of a high-order semi-implicit variable-density low-Mach Navier-Stokes solver. A new formulation to solve pressure Poisson-like equation of variable-density flows is highlighted. With this formulation of the numerical method, we are able to solve all variables with a uniform order of accuracy in time (consistent with the time integrator being used). The solver is primarily designed to perform direct numerical simulations for turbulent premixed flames. Therefore, we also address other important elements, such as energy-stable boundary conditions, synthetic turbulence generation, and flame anchoring method. Numerical examples include classical non-reacting constant/variable-density flows, as well as turbulent premixed flames.

  12. All-silicon thermal independent Mach-Zehnder interferometer with multimode waveguides

    DEFF Research Database (Denmark)

    Guan, Xiaowei; Frandsen, Lars Hagedorn

    2016-01-01

    A novel all-silicon thermal independent Mach-Zehnder interferometer consisting of two multimode waveguide arms having equal lengths and widths but transmitting different modes is proposed and experimentally demonstrated. The interferometer has a temperature sensitivity smaller than 8pm/°C in a wa...

  13. Portfolio optimization with mean-variance model

    Science.gov (United States)

    Hoe, Lam Weng; Siew, Lam Weng

    2016-06-01

    Investors wish to achieve the target rate of return at the minimum level of risk in their investment. Portfolio optimization is an investment strategy that can be used to minimize the portfolio risk and can achieve the target rate of return. The mean-variance model has been proposed in portfolio optimization. The mean-variance model is an optimization model that aims to minimize the portfolio risk which is the portfolio variance. The objective of this study is to construct the optimal portfolio using the mean-variance model. The data of this study consists of weekly returns of 20 component stocks of FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI). The results of this study show that the portfolio composition of the stocks is different. Moreover, investors can get the return at minimum level of risk with the constructed optimal mean-variance portfolio.
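For the equality-constrained Markowitz problem sketched in this record (minimize portfolio variance for a target return, with weights summing to one and no short-sale constraint), the solution follows from a small linear KKT system; a minimal sketch with random placeholder returns rather than the FBMKLCI data:

```python
import numpy as np

rng = np.random.default_rng(1)
returns = rng.normal(0.002, 0.02, size=(104, 5))   # placeholder weekly returns, 5 stocks
mu = returns.mean(axis=0)
Sigma = np.cov(returns, rowvar=False)

target = 0.002
n = len(mu)
ones = np.ones(n)
# KKT system for: minimize w' Sigma w  subject to  mu'w = target, 1'w = 1
A = np.block([[2.0 * Sigma, mu[:, None], ones[:, None]],
              [mu[None, :], np.zeros((1, 2))],
              [ones[None, :], np.zeros((1, 2))]])
b = np.concatenate([np.zeros(n), [target, 1.0]])
w = np.linalg.solve(A, b)[:n]
print("weights:", np.round(w, 3), " portfolio variance:", w @ Sigma @ w)
```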

  14. Adaptive multilevel mesh refinement method for the solution of low Mach number reactive flows; Methode adaptative de raffinement local multi-niveaux pour le calcul d'ecoulements reactifs a faible nombre de Mach

    Energy Technology Data Exchange (ETDEWEB)

    Core, X.

    2002-02-01

    The isobar approximation for the system of the balance equations of mass, momentum, energy and chemical species is a suitable approximation to represent low Mach number reactive flows. In this approximation, which neglects acoustics phenomena, the mixture is hydrodynamically incompressible and the thermodynamic effects lead to an uniform compression of the system. We present a novel numerical scheme for this approximation. An incremental projection method, which uses the original form of mass balance equation, discretizes in time the Navier-Stokes equations. Spatial discretization is achieved through a finite volume approach on MAC-type staggered mesh. A higher order de-centered scheme is used to compute the convective fluxes. We associate to this discretization a local mesh refinement method, based on Flux Interface Correction technique. A first application concerns a forced flow with variable density which mimics a combustion problem. The second application is natural convection with first small temperature variations and then beyond the limit of validity of the Boussinesq approximation. Finally, we treat a third application which is a laminar diffusion flame. For each of these test problems, we demonstrate the robustness of the proposed numerical scheme, notably for the density spatial variations. We analyze the gain in accuracy obtained with the local mesh refinement method. (author)

  15. Quantitative Global Heat Transfer in a Mach-6 Quiet Tunnel

    Science.gov (United States)

    Sullivan, John P.; Schneider, Steven P.; Liu, Tianshu; Rubal, Justin; Ward, Chris; Dussling, Joseph; Rice, Cody; Foley, Ryan; Cai, Zeimin; Wang, Bo; hide

    2012-01-01

    This project developed quantitative methods for obtaining heat transfer from temperature sensitive paint (TSP) measurements in the Mach-6 quiet tunnel at Purdue, which is a Ludwieg tube with a downstream valve, moderately-short flow duration and low levels of heat transfer. Previous difficulties with inferring heat transfer from TSP in the Mach-6 quiet tunnel were traced to (1) the large transient heat transfer that occurs during the unusually long tunnel startup and shutdown, (2) the non-uniform thickness of the insulating coating, (3) inconsistencies and imperfections in the painting process and (4) the low levels of heat transfer observed on slender models at typical stagnation temperatures near 430K. Repeated measurements were conducted on 7 degree-half-angle sharp circular cones at zero angle of attack in order to evaluate the techniques, isolate the problems and identify solutions. An attempt at developing a two-color TSP method is also summarized.

  16. Validation of consistency of Mendelian sampling variance.

    Science.gov (United States)

    Tyrisevä, A-M; Fikse, W F; Mäntysaari, E A; Jakobsen, J; Aamand, G P; Dürr, J; Lidauer, M H

    2018-03-01

    Experiences from international sire evaluation indicate that the multiple-trait across-country evaluation method is sensitive to changes in genetic variance over time. Top bulls from birth year classes with inflated genetic variance will benefit, hampering reliable ranking of bulls. However, none of the methods available today enable countries to validate their national evaluation models for heterogeneity of genetic variance. We describe a new validation method to fill this gap comprising the following steps: estimating within-year genetic variances using Mendelian sampling and its prediction error variance, fitting a weighted linear regression between the estimates and the years under study, identifying possible outliers, and defining a 95% empirical confidence interval for a possible trend in the estimates. We tested the specificity and sensitivity of the proposed validation method with simulated data using a real data structure. Moderate (M) and small (S) size populations were simulated under 3 scenarios: a control with homogeneous variance and 2 scenarios with yearly increases in phenotypic variance of 2 and 10%, respectively. Results showed that the new method was able to estimate genetic variance accurately enough to detect bias in genetic variance. Under the control scenario, the trend in genetic variance was practically zero in setting M. Testing cows with an average birth year class size of more than 43,000 in setting M showed that tolerance values are needed for both the trend and the outlier tests to detect only cases with a practical effect in larger data sets. Regardless of the magnitude (yearly increases in phenotypic variance of 2 or 10%) of the generated trend, it deviated statistically significantly from zero in all data replicates for both cows and bulls in setting M. In setting S with a mean of 27 bulls in a year class, the sampling error and thus the probability of a false-positive result clearly increased. Still, overall estimated genetic
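The regression-and-trend step of the proposed validation can be sketched as follows; the use of a pairs bootstrap for the 95% empirical interval and of user-supplied weights are assumptions made for illustration, not necessarily the exact choices of the authors:

```python
import numpy as np

def trend_test(years, var_estimates, weights, n_boot=2000, seed=0):
    """Weighted linear regression of within-year genetic variance estimates on year,
    with a bootstrap 95% interval for the slope (one simple way to get an empirical CI)."""
    rng = np.random.default_rng(seed)

    def wls_slope(y, v, w):
        X = np.column_stack([np.ones_like(y), y])
        beta = np.linalg.solve(X.T @ np.diag(w) @ X, X.T @ np.diag(w) @ v)
        return beta[1]

    slope = wls_slope(years, var_estimates, weights)
    idx = np.arange(len(years))
    boot = [wls_slope(years[s], var_estimates[s], weights[s])
            for s in (rng.choice(idx, size=len(idx), replace=True) for _ in range(n_boot))]
    lo, hi = np.percentile(boot, [2.5, 97.5])
    return slope, (lo, hi)   # a trend is flagged if the interval excludes zero

years = np.arange(2000, 2015, dtype=float)
est = 1.0 + 0.01 * (years - 2000) + np.random.default_rng(1).normal(0, 0.02, len(years))
print(trend_test(years, est, np.ones_like(years)))
```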

  17. Krypton tagging velocimetry in a turbulent Mach 2.7 boundary layer

    Science.gov (United States)

    Zahradka, D.; Parziale, N. J.; Smith, M. S.; Marineau, E. C.

    2016-05-01

The krypton tagging velocimetry (KTV) technique is applied to the turbulent boundary layer on the wall of the "Mach 3 Calibration Tunnel" at Arnold Engineering Development Complex (AEDC) White Oak. Profiles of velocity were measured with KTV and Pitot-pressure probes in the Mach 2.7 turbulent boundary layer comprised of 99% N2/1% Kr at momentum-thickness Reynolds numbers of Re_Theta = 800, 1400, and 2400. Agreement between the KTV- and Pitot-derived velocity profiles is excellent. The KTV and Pitot velocity data follow the law of the wall in the logarithmic region with application of the Van Driest I transformation. The velocity data are analyzed in the outer region of the boundary layer with the law of the wake and a velocity-defect law. KTV-derived streamwise velocity fluctuation measurements are reported and are consistent with data from the literature. To enable near-wall measurement with KTV (y/δ ≈ 0.1-0.2), an 800-nm longpass filter was used to block the 760.2-nm read-laser pulse. With the longpass filter, the 819.0-nm emission from the re-excited Kr can be imaged to track the displacement of the metastable tracer without imaging the reflection and scatter from the read-laser off of solid surfaces. To operate the Mach 3 AEDC Calibration Tunnel at several discrete unit Reynolds numbers, a modification was required and is described herein.
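The Van Driest I transformation mentioned here maps the compressible mean-velocity profile onto the incompressible law of the wall by weighting each velocity increment with the square root of the local-to-wall density ratio; a minimal numerical sketch with placeholder profile arrays (not the tunnel data):

```python
import numpy as np

def van_driest_transform(u, rho, rho_wall):
    """Van Driest I transform: u_vd = integral from 0 to u of sqrt(rho/rho_wall) du,
    evaluated here with a simple trapezoid-style cumulative sum."""
    factor = np.sqrt(0.5 * (rho[1:] + rho[:-1]) / rho_wall)
    return np.concatenate([[0.0], np.cumsum(factor * np.diff(u))])

# placeholder velocity and density profiles at matching wall-normal stations
u = np.linspace(0.0, 600.0, 50)      # m/s
rho = np.linspace(0.20, 0.08, 50)    # kg/m^3 (illustrative values only)
print(van_driest_transform(u, rho, rho_wall=rho[0])[-1])
```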

  18. Generation of cylindrically convergent shockwaves in water on the MACH facility

    Science.gov (United States)

    Bland, Simon; Krasik, Ya. E.; Yanuka, D.; Gardner, R.; MacDonald, J.; Virozub, A.; Efimov, S.; Gleizer, S.; Chaturvedi, N.

    2017-06-01

We report on the first experiments utilizing the MACH facility at Imperial College London to explode copper wire arrays in water, generating extremely symmetric, cylindrical convergent shockwaves. The experiments were carried out with 10 mm diameter arrays consisting of 60 × 130 μm wires, and currents >500 kA were achieved despite the high inductance load. Laser backlit framing images and streak photography of the implosion showed a highly uniform, stable shockwave that travelled towards the axis at velocities up to 7.5 km s-1. For the first time, imaging of the shock front has been carried out down to small radii; pressures above 1 Mbar are produced within 10 μm of the axis, with water densities of around 3 g cm-3 and temperatures of many thousands of kelvin. The results represent a significant step in the application of the technique to drive different material samples, and calculations of scaling the technique to larger pulsed power facilities are presented. This work was supported by the Institute of Shock Physics, funded by AWE Aldermaston, and the NNSA under DOE Cooperative Agreement Nos. DE-F03-02NA00057 and DE-SC-0001063.

  19. Si-nanowire-based multistage delayed Mach-Zehnder interferometer optical MUX/DeMUX fabricated by an ArF-immersion lithography process on a 300 mm SOI wafer.

    Science.gov (United States)

    Jeong, Seok-Hwan; Shimura, Daisuke; Simoyama, Takasi; Horikawa, Tsuyoshi; Tanaka, Yu; Morito, Ken

    2014-07-01

    We report good phase controllability and high production yield in Si-nanowire-based multistage delayed Mach-Zehnder interferometer-type optical multiplexers/demultiplexers (MUX/DeMUX) fabricated by an ArF-immersion lithography process on a 300 mm silicon-on-insulator (SOI) wafer. Three kinds of devices fabricated in this work exhibit clear 1×4 Ch wavelength filtering operations for various optical frequency spacing. These results are promising for their applications in high-density wavelength division multiplexing-based optical interconnects.

  20. Least-squares variance component estimation

    NARCIS (Netherlands)

    Teunissen, P.J.G.; Amiri-Simkooei, A.R.

    2007-01-01

    Least-squares variance component estimation (LS-VCE) is a simple, flexible and attractive method for the estimation of unknown variance and covariance components. LS-VCE is simple because it is based on the well-known principle of LS; it is flexible because it works with a user-defined weight

  1. Mach-Zehnder interferometer implementation for thermo-optical and Kerr effect study

    Science.gov (United States)

    Bundulis, Arturs; Nitiss, Edgars; Busenbergs, Janis; Rutkis, Martins

    2018-04-01

    In this paper, we propose the Mach-Zehnder interferometric method for third-order nonlinear optical and thermo-optical studies. Both effects manifest themselves as refractive index dependence on the incident light intensity and are widely employed for multiple opto-optical and thermo-optical applications. With the implemented method, we have measured the Kerr and thermo-optical coefficients of chloroform under CW, ns and ps laser irradiance. The application of lasers with different light wavelengths, pulse duration and energy allowed us to distinguish the processes responsible for refractive index changes in the investigated solution. Presented setup was also used for demonstration of opto-optical switching. Results from Mach-Zehnder experiment were compared to Z-scan data obtained in our previous studies. Based on this, a quality comparison of both methods was assessed and advantages and disadvantages of each method were analyzed.

  2. Low Mach number asymptotics for reacting compressible fluid flows

    Czech Academy of Sciences Publication Activity Database

    Feireisl, Eduard; Petzeltová, Hana

    2010-01-01

    Roč. 26, č. 2 (2010), s. 455-480 ISSN 1078-0947 R&D Projects: GA ČR GA201/05/0164 Institutional research plan: CEZ:AV0Z10190503 Keywords : low Mach number * Navier-Stokes-Fourier system * reacting fluids Subject RIV: BA - General Mathematics Impact factor: 0.986, year: 2010 http://www.aimsciences.org/journals/displayArticles.jsp?paperID=4660

  3. Study on the high speed scramjet characteristics at Mach 10 to 15 flight condition

    Science.gov (United States)

    Takahashi, M.; Itoh, K.; Tanno, H.; Komuro, T.; Sunami, T.; Sato, K.; Ueda, S.

    A scramjet engine model, designed to establish steady and strong combustion at free-stream conditions corresponding to Mach 12 flight, was tested in a large free-piston driven shock tunnel. Combustion tests of a previous engine model showed that combustion heat release obtained in the combustor was not sufficient to maintain strong combustion. For a new scramjet engine model, the inlet compression ratio was increased to raise the static temperature and density of the flow at the combustor entrance. As a result of the aerodynamic design change, the pressure rise due to combustion increased and the duration of strong combustion conditions in the combustor was extended. A hyper-mixer injector designed to enhance mixing and combustion by introducing streamwise vortices was applied to the new engine model. The results showed that the hyper mixer injector was very effective in promoting combustion heat release and establishing steady and strong combustion in the combustor.

  4. Rotating detectors and Mach's principle

    Energy Technology Data Exchange (ETDEWEB)

    Paola, R.D.M. de; Svaiter, N.F

    2000-08-01

In this work we consider a quantum version of Newton's bucket experiment in a flat spacetime: we take an Unruh-DeWitt detector in interaction with a real massless scalar field. We calculate the detector's excitation rate when it is uniformly rotating around some fixed point and the field is prepared in the Minkowski vacuum and also when the detector is inertial and the field is in the Trocheries-Takeno vacuum state. These results are compared and the relations with Mach's principle are discussed. (author)

  5. Genetic variants influencing phenotypic variance heterogeneity.

    Science.gov (United States)

    Ek, Weronica E; Rask-Andersen, Mathias; Karlsson, Torgny; Enroth, Stefan; Gyllensten, Ulf; Johansson, Åsa

    2018-03-01

    Most genetic studies identify genetic variants associated with disease risk or with the mean value of a quantitative trait. More rarely, genetic variants associated with variance heterogeneity are considered. In this study, we have identified such variance single-nucleotide polymorphisms (vSNPs) and examined if these represent biological gene × gene or gene × environment interactions or statistical artifacts caused by multiple linked genetic variants influencing the same phenotype. We have performed a genome-wide study, to identify vSNPs associated with variance heterogeneity in DNA methylation levels. Genotype data from over 10 million single-nucleotide polymorphisms (SNPs), and DNA methylation levels at over 430 000 CpG sites, were analyzed in 729 individuals. We identified vSNPs for 7195 CpG sites (P mean DNA methylation levels. We further showed that variance heterogeneity between genotypes mainly represents additional, often rare, SNPs in linkage disequilibrium (LD) with the respective vSNP and for some vSNPs, multiple low frequency variants co-segregating with one of the vSNP alleles. Therefore, our results suggest that variance heterogeneity of DNA methylation mainly represents phenotypic effects by multiple SNPs, rather than biological interactions. Such effects may also be important for interpreting variance heterogeneity of more complex clinical phenotypes.
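Scans for variance heterogeneity of this kind often compare the spread of the phenotype across the three genotype classes with a robust test such as the Brown-Forsythe (median-centred Levene) test; the study does not state its exact test, so the following per-site sketch is purely illustrative:

```python
import numpy as np
from scipy import stats

def vqtl_test(genotypes, methylation):
    """Brown-Forsythe test for unequal variances of a CpG methylation level
    across genotype groups coded 0/1/2."""
    groups = [methylation[genotypes == g] for g in (0, 1, 2) if np.any(genotypes == g)]
    return stats.levene(*groups, center='median')

rng = np.random.default_rng(2)
geno = rng.integers(0, 3, size=729)                       # 729 individuals, as in the study
meth = rng.normal(0.5, 0.05 + 0.03 * geno, size=729)      # spread increases with allele count
print(vqtl_test(geno, meth))
```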

  6. Simultaneous measurements of temperature and density in air flows using UV laser spectroscopy

    Science.gov (United States)

    Fletcher, D. G.; Mckenzie, R. L.

    1991-01-01

    The simultaneous measurement of temperature and density using laser-induced fluorescence of oxygen in combination with Q-branch Raman scattering of nitrogen and oxygen is demonstrated in a low-speed air flow. The lowest density and temperature measured in the experiment correspond to the freestream values at Mach 5 in the Ames 3.5-Foot Hypersonic Wind Tunnel for stagnation conditions of 100 atm and 1000 K. The experimental results demonstrate the viability of the optical technique for measurements that support the study of compressible turbulence and the validation of numerical codes in supersonic and hypersonic wind tunnel flows.

  7. Unraveling the genetic architecture of environmental variance of somatic cell score using high-density single nucleotide polymorphism and cow data from experimental farms

    NARCIS (Netherlands)

    Mulder, H.A.; Crump, R.E.; Calus, M.P.L.; Veerkamp, R.F.

    2013-01-01

    In recent years, it has been shown that not only is the phenotype under genetic control, but also the environmental variance. Very little, however, is known about the genetic architecture of environmental variance. The main objective of this study was to unravel the genetic architecture of the mean

  8. Analysis of gas turbine engines using water and oxygen injection to achieve high Mach numbers and high thrust

    Science.gov (United States)

    Henneberry, Hugh M.; Snyder, Christopher A.

    1993-01-01

    An analysis of gas turbine engines using water and oxygen injection to enhance performance by increasing Mach number capability and by increasing thrust is described. The liquids are injected, either separately or together, into the subsonic diffuser ahead of the engine compressor. A turbojet engine and a mixed-flow turbofan engine (MFTF) are examined, and in pursuit of maximum thrust, both engines are fitted with afterburners. The results indicate that water injection alone can extend the performance envelope of both engine types by one and one-half Mach numbers at which point water-air ratios reach 17 or 18 percent and liquid specific impulse is reduced to some 390 to 470 seconds, a level about equal to the impulse of a high energy rocket engine. The envelope can be further extended, but only with increasing sacrifices in liquid specific impulse. Oxygen-airflow ratios as high as 15 percent were investigated for increasing thrust. Using 15 percent oxygen in combination with water injection at high supersonic Mach numbers resulted in thrust augmentation as high as 76 percent without any significant decrease in liquid specific impulse. The stoichiometric afterburner exit temperature increased with increasing oxygen flow, reaching 4822 deg R in the turbojet engine at a Mach number of 3.5. At the transonic Mach number of 0.95 where no water injection is needed, an oxygen-air ratio of 15 percent increased thrust by some 55 percent in both engines, along with a decrease in liquid specific impulse of 62 percent. Afterburner temperature was approximately 4700 deg R at this high thrust condition. Water and/or oxygen injection are simple and straightforward strategies to improve engine performance and they will add little to engine weight. However, if large Mach number and thrust increases are required, liquid flows become significant, so that operation at these conditions will necessarily be of short duration.

  9. Measuring the Density of a Molecular Cluster Injector via Visible Emission from an Electron Beam

    Energy Technology Data Exchange (ETDEWEB)

    Lundberg, D. P.; Kaita, R.; Majeski, R. M.; Stotler, D. P.

    2010-06-28

A method to measure the density distribution of a dense hydrogen gas jet is presented. A Mach 5.5 nozzle is cooled to 80 K to form a flow capable of molecular cluster formation. A 250 V, 10 mA electron beam collides with the jet and produces Hα emission that is viewed by a fast camera. The high density of the jet, several 10^16 cm^-3, results in substantial electron depletion, which attenuates the Hα emission. The attenuated emission measurement, combined with a simplified electron-molecule collision model, allows us to determine the molecular density profile via a simple iterative calculation.

  10. Photonic entanglement-assisted quantum low-density parity-check encoders and decoders.

    Science.gov (United States)

    Djordjevic, Ivan B

    2010-05-01

    I propose encoder and decoder architectures for entanglement-assisted (EA) quantum low-density parity-check (LDPC) codes suitable for all-optical implementation. I show that two basic gates needed for EA quantum error correction, namely, controlled-NOT (CNOT) and Hadamard gates can be implemented based on Mach-Zehnder interferometer. In addition, I show that EA quantum LDPC codes from balanced incomplete block designs of unitary index require only one entanglement qubit to be shared between source and destination.

  11. Experimental evaluation of wall Mach number distributions of the octagonal test section proposed for NASA Lewis Research Center's altitude wind tunnel

    Science.gov (United States)

    Harrington, Douglas E.; Burley, Richard R.; Corban, Robert R.

    1986-01-01

Wall Mach number distributions were determined over a range of test-section free-stream Mach numbers from 0.2 to 0.92. The test section was slotted and had a nominal porosity of 11 percent. Reentry flaps located at the test-section exit were varied from 0 (fully closed) to 9 (fully open) degrees. Flow was bled through the test-section slots by means of a plenum evacuation system (PES) and varied from 0 to 3 percent of tunnel flow. Variations in reentry flap angle or PES flow rate had little or no effect on the Mach number distributions in the first 70 percent of the test section. However, in the aft region of the test section, flap angle and PES flow rate had a major impact on the Mach number distributions. Optimum PES flow rates were nominally 2 to 2.5 percent with the flaps fully closed and less than 1 percent when the flaps were fully open. The standard deviation of the test-section wall Mach numbers at the optimum PES flow rates was 0.003 or less.

  12. Speed Variance and Its Influence on Accidents.

    Science.gov (United States)

    Garber, Nicholas J.; Gadirau, Ravi

    A study was conducted to investigate the traffic engineering factors that influence speed variance and to determine to what extent speed variance affects accident rates. Detailed analyses were carried out to relate speed variance with posted speed limit, design speeds, and other traffic variables. The major factor identified was the difference…

  13. Penetration Characteristics of Air, Carbon Dioxide and Helium Transverse Sonic Jets in Mach 5 Cross Flow

    Directory of Open Access Journals (Sweden)

    Erinc Erdem

    2014-12-01

Full Text Available An experimental investigation of sonic air, CO2 and helium transverse jets in Mach 5 cross flow was carried out over a flat plate. The jet to freestream momentum flux ratio, J, was kept the same for all gases. The unsteady flow topology was examined using high speed schlieren visualisation and PIV. Schlieren visualisation provided information regarding oscillating jet shear layer structures and the bow shock, Mach disc and barrel shocks. Two-component PIV measurements at the centreline provided information regarding jet penetration trajectories. The barrel shocks and Mach disc forming the jet boundary were visualised and quantified, and the jet penetration boundaries were determined. Even though J was kept the same for all gases, the penetration patterns were found to be remarkably different in both the nearfield and the farfield. The air and CO2 jets resulted in similar nearfield and farfield penetration patterns, whereas the helium jet spread minimally in the nearfield.

  14. Internal structure of laser supported detonation waves by two-wavelength Mach-Zehnder interferometer

    International Nuclear Information System (INIS)

    Shimamura, Kohei; Kawamura, Koichi; Fukuda, Akio; Wang Bin; Yamaguchi, Toshikazu; Komurasaki, Kimiya; Hatai, Keigo; Fukui, Akihiro; Arakawa, Yoshihiro

    2011-01-01

Characteristics of the internal structure of the laser supported detonation (LSD) waves, such as the electron density n_e and the electron temperature T_e profiles behind the shock wave, were measured using a two-wavelength Mach-Zehnder interferometer along with emission spectroscopy. A TEA CO2 laser with energy of 10 J/pulse produced explosive laser heating in atmospheric air. Results show that the peak values of n_e and T_e were, respectively, about 2 x 10^24 m^-3 and 30 000 K during the LSD regime. The temporal variation of the laser absorption coefficient profile estimated from the measured properties reveals that the laser energy was absorbed perfectly in a thin layer behind the shock wave during the LSD regime, as predicted by Raizer's LSD model. However, the absorption layer was much thinner than the plasma layer, a situation not considered in Raizer's model. The measured n_e at the shock front was not zero while the LSD was supported, which implies that precursor electrons exist ahead of the shock wave.
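The reason two wavelengths are needed is that the free-electron contribution to the refractivity scales with the square of the wavelength while the neutral-gas contribution is nearly wavelength independent, so two phase measurements give a 2x2 linear system for the two line-integrated densities. A hedged sketch of that inversion follows; the probe wavelengths, neutral specific refractivity and path length are placeholders, not values from the paper:

```python
import numpy as np

R_E = 2.8179403262e-15   # classical electron radius, m

def invert_two_colour(phi1, phi2, lam1, lam2, k_neutral, path_length):
    """Recover line-averaged electron density n_e and neutral density N from phase
    shifts phi1, phi2 (rad) at wavelengths lam1, lam2 (m), assuming
    phi_i = -R_E*lam_i*L*n_e + (2*pi/lam_i)*L*k_neutral*N over a uniform path L."""
    L = path_length
    A = np.array([[-R_E * lam1 * L, 2 * np.pi * k_neutral * L / lam1],
                  [-R_E * lam2 * L, 2 * np.pi * k_neutral * L / lam2]])
    return np.linalg.solve(A, [phi1, phi2])

# self-consistency check with made-up line-averaged densities
lam1, lam2, L, k_n = 1064e-9, 532e-9, 2e-3, 1.0e-29
n_e_true, N_true = 1.0e24, 1.0e25
phi = [-R_E * lam * L * n_e_true + 2 * np.pi * k_n * L / lam * N_true for lam in (lam1, lam2)]
print(invert_two_colour(phi[0], phi[1], lam1, lam2, k_n, L))   # recovers ~ (1e24, 1e25)
```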

  15. Mach probe interpretation in the presence of supra-thermal electrons

    Czech Academy of Sciences Publication Activity Database

    Fuchs, Vladimír; Gunn, J. P.

    2007-01-01

Roč. 14, č. 3 (2007), 032501-1 ISSN 1070-664X R&D Projects: GA ČR GA202/04/0360 Institutional research plan: CEZ:AV0Z20430508 Keywords : Mach probes * supra-thermal electrons * quasi-neutral PIC codes Subject RIV: BL - Plasma and Gas Discharge Physics Impact factor: 2.325, year: 2007

  16. Software-aided discussion about classical picture of Mach-Zehnder interferometer

    Science.gov (United States)

    Cavalcanti, C. J. H.; Ostermann, F.; Lima, N. W.; Netto, J. S.

    2017-11-01

    The Mach-Zehnder interferometer has played an important role both in quantum and classical physics research over the years. In physics education, it has been used as a didactic tool for quantum physics teaching, allowing fundamental concepts, such as particle-wave duality, to be addressed from the very beginning. For a student to understand the novelties of the quantum scenario, it is first worth introducing the classical picture. In this paper, we introduce a new version of the software developed by our research group to deepen the discussion on the classical picture of the Mach-Zehnder interferometer. We present its equivalence with the double slit experiment and we derive the mathematical expressions relating to the interference pattern. We also explore the concept of visibility (which is very important for understanding wave-particle complementarity in quantum physics) to help students become familiar with this experiment and to enhance their knowledge of its counterintuitive aspects. We use the software articulated by the mathematical formalism and phenomenological features. We also present excerpts of the discursive interactions of students using the software in didactic situations.

  17. Automatic variance reduction for Monte Carlo simulations via the local importance function transform

    International Nuclear Information System (INIS)

    Turner, S.A.

    1996-02-01

The author derives a transformed transport problem that can be solved theoretically by analog Monte Carlo with zero variance. However, the Monte Carlo simulation of this transformed problem cannot be implemented in practice, so he develops a method for approximating it. The approximation to the zero variance method consists of replacing the continuous adjoint transport solution in the transformed transport problem by a piecewise continuous approximation containing local biasing parameters obtained from a deterministic calculation. He uses the transport and collision processes of the transformed problem to bias distance-to-collision and selection of post-collision energy groups and trajectories in a traditional Monte Carlo simulation of "real" particles. He refers to the resulting variance reduction method as the Local Importance Function Transform (LIFT) method. He demonstrates the efficiency of the LIFT method for several 3-D, linearly anisotropic scattering, one-group, and multigroup problems. In these problems the LIFT method is shown to be more efficient than the AVATAR scheme, which is one of the best variance reduction techniques currently available in a state-of-the-art Monte Carlo code. For most of the problems considered, the LIFT method produces higher figures of merit than AVATAR, even when the LIFT method is used as a "black box". There are some problems that cause trouble for most variance reduction techniques, and the LIFT method is no exception. For example, the author demonstrates that problems with voids, or low density regions, can cause a reduction in the efficiency of the LIFT method. However, the LIFT method still performs better than survival biasing and AVATAR in these difficult cases
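The zero-variance idea underlying the transform can be stated most simply for importance sampling of a non-negative integrand (the transport version replaces the integrand with an adjoint-weighted source and kernel):

```latex
I=\int f(x)\,p(x)\,dx,\qquad p^{*}(x)=\frac{f(x)\,p(x)}{I}
\;\;\Longrightarrow\;\;
\frac{f(X)\,p(X)}{p^{*}(X)} = I \quad\text{for every } X\sim p^{*},
```

so the estimator has zero variance; because the ideal biased density requires the unknown answer itself (in transport, the exact adjoint flux), practical schemes such as the one described here substitute an approximate, deterministically computed adjoint to set the biasing parameters.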

  18. Volatility and variance swaps : A comparison of quantitative models to calculate the fair volatility and variance strike

    OpenAIRE

    Röring, Johan

    2017-01-01

    Volatility is a common risk measure in the field of finance that describes the magnitude of an asset’s up and down movement. From only being a risk measure, volatility has become an asset class of its own and volatility derivatives enable traders to get an isolated exposure to an asset’s volatility. Two kinds of volatility derivatives are volatility swaps and variance swaps. The problem with volatility swaps and variance swaps is that they require estimations of the future variance and volati...
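For readers unfamiliar with the payoffs, a variance swap exchanges realized variance against the fair variance strike, and a volatility swap does the same in volatility units; a minimal sketch of the realized legs from a price path (the 252-day annualization and zero-mean convention are common assumptions, not necessarily those of the thesis):

```python
import numpy as np

def realized_legs(prices, periods_per_year=252):
    """Annualized realized variance and volatility from a series of closing prices."""
    log_ret = np.diff(np.log(prices))
    realized_var = periods_per_year * np.mean(log_ret ** 2)   # zero-mean convention
    return realized_var, np.sqrt(realized_var)

prices = 100.0 * np.exp(np.cumsum(np.random.default_rng(3).normal(0.0, 0.01, 252)))
var_leg, vol_leg = realized_legs(prices)

K_var, notional = 0.04, 1.0e6        # placeholder strike (in variance units) and notional
print("variance swap payoff:", notional * (var_leg - K_var))
```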

  19. Column Number Density Expressions Through M = 0 and M = 1 Point Source Plumes Along Any Straight Path

    Science.gov (United States)

    Woronowicz, Michael

    2016-01-01

    Analytical expressions for column number density (CND) are developed for optical line of sight paths through a variety of steady free molecule point source models including directionally-constrained effusion (Mach number M = 0) and flow from a sonic orifice (M = 1). Sonic orifice solutions are approximate, developed using a fair simulacrum fitted to the free molecule solution. Expressions are also developed for a spherically-symmetric thermal expansion (M = 0). CND solutions are found for the most general paths relative to these sources and briefly explored. It is determined that the maximum CND from a distant location through directed effusion and sonic orifice cases occurs along the path parallel to the source plane that intersects the plume axis. For the effusive case this value is exactly twice the CND found along the ray originating from that point of intersection and extending to infinity along the plume's axis. For sonic plumes this ratio is reduced to about 4/3. For high Mach number cases the maximum CND will be found along the axial centerline path. Keywords: column number density, plume flows, outgassing, free molecule flow.
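The quoted factor of two for the effusive case is easy to verify numerically: for a cosine-law point source the number density falls off as cos(theta)/r^2, and integrating it along a line parallel to the source plane through a point on the axis gives exactly twice the integral along the axial ray from that point to infinity. A small check (the source constant and standoff distance are arbitrary):

```python
import numpy as np
from scipy.integrate import quad

C, z0 = 1.0, 1.0    # arbitrary source strength and axial standoff distance

def density(x, z):
    """Effusive point-source density, n = C*cos(theta)/r^2 with cos(theta) = z/r."""
    r2 = x * x + z * z
    return C * z / r2 ** 1.5

cnd_parallel, _ = quad(lambda x: density(x, z0), -np.inf, np.inf)   # path parallel to source plane
cnd_axial, _ = quad(lambda z: density(0.0, z), z0, np.inf)          # axial ray outward
print(cnd_parallel / cnd_axial)   # -> 2.0
```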

  20. Local variances in biomonitoring

    International Nuclear Information System (INIS)

    Wolterbeek, H.T.

    1999-01-01

    The present study deals with the (larger-scaled) biomonitoring survey and specifically focuses on the sampling site. In most surveys, the sampling site is simply selected or defined as a spot of (geographical) dimensions which is small relative to the dimensions of the total survey area. Implicitly it is assumed that the sampling site is essentially homogeneous with respect to the investigated variation in survey parameters. As such, the sampling site is mostly regarded as 'the basic unit' of the survey. As a logical consequence, the local (sampling site) variance should also be seen as a basic and important characteristic of the survey. During the study, work is carried out to gain more knowledge of the local variance. Multiple sampling is carried out at a specific site (tree bark, mosses, soils), multi-elemental analyses are carried out by NAA, and local variances are investigated by conventional statistics, factor analytical techniques, and bootstrapping. Consequences of the outcomes are discussed in the context of sampling, sample handling and survey quality. (author)

  1. Performance Limiting Flow Processes in High-State Loading High-Mach Number Compressors

    National Research Council Canada - National Science Library

    Tan, Choon S

    2008-01-01

    In high-stage loading high-Mach number (HLM) compressors, counter-rotating pairs of discrete vortices are shed at the trailing edge of the upstream blade row at a frequency corresponding to the downstream rotor blade passing frequency...

  2. Physical and numerical modelling of low mach number compressible flows

    International Nuclear Information System (INIS)

    Paillerre, H.; Clerc, S.; Dabbene, F.; Cueto, O.

    1999-01-01

    This article reviews various physical models that may be used to describe compressible flow at low Mach numbers, as well as the numerical methods developed at DRN to discretize the different systems of equations. A selection of thermal-hydraulic applications illustrate the need to take into account compressibility and multidimensional effects as well as variable flow properties. (authors)

  3. A versatile all-optical modulator based on nonlinear Mach-Zehnder interferometers

    NARCIS (Netherlands)

    Krijnen, Gijsbertus J.M.; Villeneuve, A.; Stegeman, G.I.; Lambeck, Paul; Hoekstra, Hugo

    1994-01-01

    A device based on a Nonlinear Mach-Zehnder interferometer (NMI) which exploits cross-phase modulation of two co-propagating modes in bimodal branches has been described in this paper. The advantage of this device is that it becomes polarisation independent while keeping phase insensitive by using

  4. Experiments on a hot plume base flow interaction at Mach 2

    NARCIS (Netherlands)

    Blinde, P.L.; Schrijer, F.F.J.; Powell, S.J.; Werner, R.M.; Van Oudheusden, B.W.

    2015-01-01

    A wind tunnel model containing a solid rocket motor was tested at Mach 2 to assess the feasibility of investigating the interaction between a hot plume and a high-speed outer stream. In addition to Schlieren visualisation, the feasibility of applying PIV was explored. Recorded particle images

  5. Hypervelocity Wind Tunnel No. 9 Mach 7 Thermal Structural Facility Verification and Calibration

    National Research Council Canada - National Science Library

    Lafferty, John

    1996-01-01

    This report summarizes the verification and calibration of the new Mach 7 Thermal Structural Facility located at the White Oak, Maryland, site of the Dahlgren Division, Naval Surface Warfare Center...

  6. Assessment of a transitional boundary layer theory at low hypersonic Mach numbers

    Science.gov (United States)

    Shamroth, S. J.; Mcdonald, H.

    1972-01-01

An investigation was carried out to assess the accuracy of a transitional boundary layer theory in the low hypersonic Mach number regime. The theory is based upon the simultaneous numerical solution of the boundary layer partial differential equations for the mean motion and an integral form of the turbulence kinetic energy equation which controls the magnitude and development of the Reynolds stress. Comparisons with experimental data show the theory is capable of accurately predicting heat transfer and velocity profiles through the transitional regime and correctly predicts the effects of Mach number and wall cooling on transition Reynolds number. The procedure shows promise of predicting the initiation of transition for given free stream disturbance levels. The effects on transition predictions of the pressure dilatation term and of direct absorption of acoustic energy by the boundary layer were evaluated.

  7. Dynamic Mean-Variance Asset Allocation

    OpenAIRE

    Basak, Suleyman; Chabakauri, Georgy

    2009-01-01

    Mean-variance criteria remain prevalent in multi-period problems, and yet not much is known about their dynamically optimal policies. We provide a fully analytical characterization of the optimal dynamic mean-variance portfolios within a general incomplete-market economy, and recover a simple structure that also inherits several conventional properties of static models. We also identify a probability measure that incorporates intertemporal hedging demands and facilitates much tractability in ...

  8. Numerical simulation of low Mach number reacting flows

    International Nuclear Information System (INIS)

    Bell, J B; Aspden, A J; Day, M S; Lijewski, M J

    2007-01-01

    Using examples from active research areas in combustion and astrophysics, we demonstrate a computationally efficient numerical approach for simulating multiscale low Mach number reacting flows. The method enables simulations that incorporate an unprecedented range of temporal and spatial scales, while at the same time, allows an extremely high degree of reaction fidelity. Sample applications demonstrate the efficiency of the approach with respect to a traditional time-explicit integration method, and the utility of the methodology for studying the interaction of turbulence with terrestrial and astrophysical flame structures

  9. The Variance Composition of Firm Growth Rates

    Directory of Open Access Journals (Sweden)

    Luiz Artur Ledur Brito

    2009-04-01

    Full Text Available Firms exhibit a wide variability in growth rates. This can be seen as another manifestation of the fact that firms are different from one another in several respects. This study investigated this variability using the variance components technique previously used to decompose the variance of financial performance. The main source of variation in growth rates, responsible for more than 40% of total variance, corresponds to individual, idiosyncratic firm aspects and not to industry, country, or macroeconomic conditions prevailing in specific years. Firm growth, similar to financial performance, is mostly unique to specific firms and not an industry or country related phenomenon. This finding also justifies using growth as an alternative outcome of superior firm resources and as a complementary dimension of competitive advantage. This also links this research with the resource-based view of strategy. Country was the second source of variation with around 10% of total variance. The analysis was done using the Compustat Global database with 80,320 observations, comprising 13,221 companies in 47 countries, covering the years of 1994 to 2002. It also compared the variance structure of growth to the variance structure of financial performance in the same sample.

  10. Design of an Optical OR Gate using Mach-Zehnder Interferometers

    Science.gov (United States)

    Choudhary, Kuldeep; Kumar, Santosh

    2018-04-01

The optical switching phenomenon enhances the speed of optical communication systems and is widely used in wavelength division multiplexing (WDM). In this work, an optical OR gate is proposed using the Mach-Zehnder interferometer (MZI) structure. A detailed derivation of the mathematical expressions is presented. The analysis is carried out by simulating the proposed device with MATLAB and the beam propagation method.

  11. Ernst Mach and George Sarton's Successors: The Implicit Role Model of Teaching Science in USA and Elsewhere, Part II

    Science.gov (United States)

    Siemsen, Hayo

    2013-01-01

    George Sarton had a strong influence on modern history of science. The method he pursued throughout his life was the method he had discovered in Ernst Mach's "Mechanics" when he was a student in Ghent. Sarton was in fact throughout his life implementing a research program inspired by the epistemology of Mach. Sarton in turn inspired many…

  12. Dark matter versus Mach's principle.

    Science.gov (United States)

    von Borzeszkowski, H.-H.; Treder, H.-J.

    1998-02-01

    Empirical and theoretical evidence shows that the astrophysical problem of dark matter might be solved by a theory of Einstein-Mayer type. In this theory up to global Lorentz rotations the reference system is determined by the motion of cosmic matter. Thus one is led to a "Riemannian space with teleparallelism" realizing a geometric version of the Mach-Einstein doctrine. The field equations of this gravitational theory contain hidden matter terms where the existence of hidden matter is inferred safely from its gravitational effects. It is argued that in the nonrelativistic mechanical approximation they provide an inertia-free mechanics where the inertial mass of a body is induced by the gravitational action of the cosmic masses. Interpreted from the Newtonian point of view this mechanics shows that the effective gravitational mass of astrophysical objects depends on r such that one expects the existence of dark matter.

  13. Estimating the encounter rate variance in distance sampling

    Science.gov (United States)

    Fewster, R.M.; Buckland, S.T.; Burnham, K.P.; Borchers, D.L.; Jupp, P.E.; Laake, J.L.; Thomas, L.

    2009-01-01

    The dominant source of variance in line transect sampling is usually the encounter rate variance. Systematic survey designs are often used to reduce the true variability among different realizations of the design, but estimating the variance is difficult and estimators typically approximate the variance by treating the design as a simple random sample of lines. We explore the properties of different encounter rate variance estimators under random and systematic designs. We show that a design-based variance estimator improves upon the model-based estimator of Buckland et al. (2001, Introduction to Distance Sampling. Oxford: Oxford University Press, p. 79) when transects are positioned at random. However, if populations exhibit strong spatial trends, both estimators can have substantial positive bias under systematic designs. We show that poststratification is effective in reducing this bias. © 2008, The International Biometric Society.
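
    As a rough illustration of the design-based, line-to-line estimator discussed above, the sketch below implements the familiar weighted form of the encounter rate variance; the per-line counts and lengths are made-up numbers, and the estimator variants compared in the paper differ in their exact weighting:

```python
import numpy as np

def encounter_rate_variance(counts, lengths):
    """Design-based estimate of var(n/L) for line-transect data, treating
    lines as the sampling units (the weighted form discussed in the
    distance-sampling literature)."""
    counts = np.asarray(counts, dtype=float)
    lengths = np.asarray(lengths, dtype=float)
    k = counts.size                  # number of transect lines
    L = lengths.sum()                # total line length
    n = counts.sum()                 # total detections
    rate = n / L                     # overall encounter rate
    # length-weighted squared deviations of per-line rates from the overall rate
    dev = lengths**2 * (counts / lengths - rate) ** 2
    return k / (L**2 * (k - 1)) * dev.sum()

# toy example: 5 lines of unequal length
counts = [4, 7, 2, 9, 5]
lengths = [1.0, 2.0, 0.5, 2.5, 1.5]
print(encounter_rate_variance(counts, lengths))
```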

  14. Analisis Perbandingan Kinerja Mach-Zehnder berdasarkan Ragam Format Modulasi pada Jaringan FTTH

    Directory of Open Access Journals (Sweden)

    ZULIA NURUL KARIMAH

    2017-06-01

    Full Text Available ABSTRACT: In this work an FTTH link is modelled in the Optisystem 7.0 software to determine the influence of the Kerr effect, comparing the performance of glass and plastic optical fibre for the NRZ, RZ, RZ-DPSK, RZ-DQPSK and CSRZ modulation formats. Two scenarios are considered: in the first, the input variable that is changed is the modulation format applied in the Mach-Zehnder modulator, while in the second the changed variable is the optical fibre material, namely glass, plastic, or a glass-plastic hybrid. With linear and nonlinear effects included, the simulation results for the glass fibre show that the modulation formats giving the best network performance, with a Q factor above 6 and a BER below 10^-9, are (from best) NRZ, RZ, RZ-DPSK, CSRZ and RZ-DQPSK. With the PMMA (plastic) cable, good network performance is obtained for the G652D-G652D-PMMA configuration with the NRZ, RZ, RZ-DPSK and RZ-DQPSK modulation formats. The only nonlinear effects occurring in this network are SPM and XPM. Keywords: FTTH, Mach-Zehnder, modulation format, nonlinear effect, GOF, POF.

  16. Towards the ultimate variance-conserving convection scheme

    International Nuclear Information System (INIS)

    Os, J.J.A.M. van; Uittenbogaard, R.E.

    2004-01-01

    In the past various arguments have been used for applying kinetic energy-conserving advection schemes in numerical simulations of incompressible fluid flows. One argument is obeying the programmed dissipation by viscous stresses or by sub-grid stresses in Direct Numerical Simulation and Large Eddy Simulation, see e.g. [Phys. Fluids A 3 (7) (1991) 1766]. Another argument is that, according to e.g. [J. Comput. Phys. 6 (1970) 392; 1 (1966) 119], energy-conserving convection schemes are more stable i.e. by prohibiting a spurious blow-up of volume-integrated energy in a closed volume without external energy sources. In the above-mentioned references it is stated that nonlinear instability is due to spatial truncation rather than to time truncation and therefore these papers are mainly concerned with the spatial integration. In this paper we demonstrate that discretized temporal integration of a spatially variance-conserving convection scheme can induce non-energy conserving solutions. In this paper the conservation of the variance of a scalar property is taken as a simple model for the conservation of kinetic energy. In addition, the derivation and testing of a variance-conserving scheme allows for a clear definition of kinetic energy-conserving advection schemes for solving the Navier-Stokes equations. Consequently, we first derive and test a strictly variance-conserving space-time discretization for the convection term in the convection-diffusion equation. Our starting point is the variance-conserving spatial discretization of the convection operator presented by Piacsek and Williams [J. Comput. Phys. 6 (1970) 392]. In terms of its conservation properties, our variance-conserving scheme is compared to other spatially variance-conserving schemes as well as with the non-variance-conserving schemes applied in our shallow-water solver, see e.g. [Direct and Large-eddy Simulation Workshop IV, ERCOFTAC Series, Kluwer Academic Publishers, 2001, pp. 409-287
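
    The point that temporal discretization can spoil a spatially variance-conserving scheme is easy to reproduce in one dimension. The sketch below uses a constant advection velocity and a centred-difference (skew-symmetric) operator rather than the Piacsek-Williams form itself, and contrasts forward Euler with the implicit midpoint rule; all parameters are illustrative:

```python
import numpy as np

# 1-D periodic advection dc/dt = -u dc/dx with a skew-symmetric spatial
# operator, so the semi-discrete scheme conserves the scalar variance.
N, u, dx, dt, steps = 64, 1.0, 1.0 / 64, 0.005, 200
x = np.arange(N) * dx
c = np.exp(-100 * (x - 0.5) ** 2)          # initial scalar field

# centred-difference advection matrix A (skew-symmetric => c^T A c = 0)
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = -u / (2 * dx)
    A[i, (i - 1) % N] = +u / (2 * dx)

def variance(f):
    return np.sum(f ** 2) * dx

# forward Euler: c_{n+1} = c_n + dt*A c_n  -> variance grows every step
c_fe = c.copy()
for _ in range(steps):
    c_fe = c_fe + dt * (A @ c_fe)

# implicit midpoint: (I - dt/2 A) c_{n+1} = (I + dt/2 A) c_n
# -> variance conserved to round-off for skew-symmetric A
I = np.eye(N)
c_mp = c.copy()
for _ in range(steps):
    c_mp = np.linalg.solve(I - 0.5 * dt * A, (I + 0.5 * dt * A) @ c_mp)

print("initial variance       :", variance(c))
print("forward Euler variance :", variance(c_fe))
print("implicit midpoint var. :", variance(c_mp))
```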

  17. Numerical simulation of unsteady compressible low Mach number flow in a channel

    Czech Academy of Sciences Publication Activity Database

    Punčochářová-Pořízková, P.; Kozel, Karel; Horáček, Jaromír; Fürst, J.

    2010-01-01

    Roč. 17, č. 2 (2010), s. 83-97 ISSN 1802-1484 R&D Projects: GA MŠk OC09019 Institutional research plan: CEZ:AV0Z20760514 Keywords : CFD * finite volume method * unsteady flow * low Mach number Subject RIV: BI - Acoustics

  18. The three-grating Mach-Zehnder optical interferometer: a tutorial approach using particle optics

    International Nuclear Information System (INIS)

    Miffre, A; Delhuille, R; Viaris Lesegno, B de; Buechner, M; Rizzo, C; Vigue, J

    2002-01-01

    In this paper, we present a tutorial set-up based on an optical three-grating Mach-Zehnder interferometer. As this apparatus is very similar in its principle to the Mach-Zehnder interferometers used with matter waves (neutrons, atoms and molecules), it can be used to familiarize students with particle optics, and in our explanations, we use the complementary points of view of wave optics and particle optics. Finally, we have used this interferometer to measure the index of refraction of BK7 glass for red light at 633 nm, with a technique equivalent to the one used to measure the index of refraction of solid matter for thermal neutrons. The dimensions of this interferometer and its cost make it very interesting for laboratory courses and the experiment described here can be reproduced by students
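
    The index measurement rests on counting fringes as the extra optical path (n - 1)t is introduced in one arm of the interferometer. A minimal sketch of that relation follows; the wavelength matches the 633 nm quoted above, but the plate thickness and fringe count are assumed, not the paper's values:

```python
# Fringe-counting relation behind such an index measurement: a plate of
# thickness t and index n in one arm adds an optical path (n - 1) * t,
# i.e. N = (n - 1) * t / lam fringes, so n = 1 + N * lam / t.
lam = 633e-9          # red probe wavelength [m]
t = 10e-6             # plate (or wedge step) thickness [m], assumed
N = 8.2               # observed fringe shift [fringes], assumed

n = 1.0 + N * lam / t
print(f"inferred refractive index n = {n:.3f}")   # ~1.52, BK7-like glass
```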

  19. The Distribution of the Sample Minimum-Variance Frontier

    OpenAIRE

    Raymond Kan; Daniel R. Smith

    2008-01-01

    In this paper, we present a finite sample analysis of the sample minimum-variance frontier under the assumption that the returns are independent and multivariate normally distributed. We show that the sample minimum-variance frontier is a highly biased estimator of the population frontier, and we propose an improved estimator of the population frontier. In addition, we provide the exact distribution of the out-of-sample mean and variance of sample minimum-variance portfolios. This allows us t...

  20. Minimum variance and variance of outgoing quality limit MDS-1(c1, c2) plans

    Science.gov (United States)

    Raju, C.; Vidya, R.

    2016-06-01

    In this article, the outgoing quality (OQ) and total inspection (TI) of multiple deferred state sampling plans MDS-1(c1,c2) are studied. It is assumed that the inspection is rejection rectification. Procedures for designing MDS-1(c1,c2) sampling plans with minimum variance of OQ and TI are developed. A procedure for obtaining a plan for a designated upper limit for the variance of the OQ (VOQL) is outlined.

  1. Electrical crosstalk in integrated Mach-Zehnder modulators caused by a shared ground path

    NARCIS (Netherlands)

    Yao, W.; Gilardi, G.; Smit, M.K.; Wale, M.J.

    2015-01-01

    We show that the majority of electrical crosstalk between integrated Mach-Zehnder modulators can be caused by a shared ground path and demonstrate that in its absence crosstalk and related transmission penalty is greatly reduced.

  2. Genotypic-specific variance in Caenorhabditis elegans lifetime fecundity.

    Science.gov (United States)

    Diaz, S Anaid; Viney, Mark

    2014-06-01

    Organisms live in heterogeneous environments, so strategies that maximize fitness in such environments will evolve. Variation in traits is important because it is the raw material on which natural selection acts during evolution. Phenotypic variation is usually thought to be due to genetic variation and/or environmentally induced effects. Therefore, genetically identical individuals in a constant environment should have invariant traits. Clearly, genetically identical individuals do differ phenotypically, usually thought to be due to stochastic processes. It is now becoming clear, especially from studies of unicellular species, that phenotypic variance among genetically identical individuals in a constant environment can be genetically controlled and that therefore, in principle, this can be subject to selection. However, there has been little investigation of these phenomena in multicellular species. Here, we have studied the mean lifetime fecundity (thus a trait likely to be relevant to reproductive success), and variance in lifetime fecundity, in recently-wild isolates of the model nematode Caenorhabditis elegans. We found that these genotypes differed in their variance in lifetime fecundity: some had high variance in fecundity, others very low variance. We find that this variance in lifetime fecundity was negatively related to the mean lifetime fecundity of the lines, and that the variance of the lines was positively correlated between environments. We suggest that the variance in lifetime fecundity may be a bet-hedging strategy used by this species.

  3. Discrete and continuous time dynamic mean-variance analysis

    OpenAIRE

    Reiss, Ariane

    1999-01-01

    Contrary to static mean-variance analysis, very few papers have dealt with dynamic mean-variance analysis. Here, the mean-variance efficient self-financing portfolio strategy is derived for n risky assets in discrete and continuous time. In the discrete setting, the resulting portfolio is mean-variance efficient in a dynamic sense. It is shown that the optimal strategy for n risky assets may be dominated if the expected terminal wealth is constrained to exactly attain a certain goal instead o...

  4. Nonlinear Epigenetic Variance: Review and Simulations

    Science.gov (United States)

    Kan, Kees-Jan; Ploeger, Annemie; Raijmakers, Maartje E. J.; Dolan, Conor V.; van Der Maas, Han L. J.

    2010-01-01

    We present a review of empirical evidence that suggests that a substantial portion of phenotypic variance is due to nonlinear (epigenetic) processes during ontogenesis. The role of such processes as a source of phenotypic variance in human behaviour genetic studies is not fully appreciated. In addition to our review, we present simulation studies…

  5. Revision: Variance Inflation in Regression

    Directory of Open Access Journals (Sweden)

    D. R. Jensen

    2013-01-01

    the intercept; and (iv) variance deflation may occur, where ill-conditioned data yield smaller variances than their orthogonal surrogates. Conventional VIFs have all regressors linked, or none, often untenable in practice. Beyond these, our models enable the unlinking of regressors that can be unlinked, while preserving dependence among those intrinsically linked. Moreover, known collinearity indices are extended to encompass angles between subspaces of regressors. To reaccess ill-conditioned data, we consider case studies ranging from elementary examples to data from the literature.

  6. Study by the Prandtl-Glauert method of compressibility effects and critical Mach number for ellipsoids of various aspect ratios and thickness ratios

    Science.gov (United States)

    Hess, Robert V; Gardner, Clifford S

    1947-01-01

    By using the Prandtl-Glauert method that is valid for three-dimensional flow problems, the value of the maximum incremental velocity for compressible flow about thin ellipsoids at zero angle of attack is calculated as a function of the Mach number for various aspect ratios and thickness ratios. The critical Mach numbers of the various ellipsoids are also determined. The results indicate an increase in critical Mach number with decrease in aspect ratio which is large enough to explain experimental results on low-aspect-ratio wings at zero lift.
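
    A hedged sketch of the classical estimate that underlies such results: the incompressible minimum pressure coefficient is scaled by the Prandtl-Glauert factor and intersected with the critical (sonic) pressure coefficient. The incompressible Cp value below is assumed for illustration, and the scaling is the simple two-dimensional one rather than the paper's three-dimensional ellipsoid treatment:

```python
import numpy as np

gamma = 1.4
cp_min_inc = -0.43     # illustrative incompressible minimum Cp (assumed value)

def cp_pg(mach):
    """Prandtl-Glauert-corrected minimum pressure coefficient."""
    return cp_min_inc / np.sqrt(1.0 - mach**2)

def cp_crit(mach):
    """Pressure coefficient at which the local flow just reaches Mach 1."""
    ratio = (2.0 + (gamma - 1.0) * mach**2) / (gamma + 1.0)
    return 2.0 / (gamma * mach**2) * (ratio**(gamma / (gamma - 1.0)) - 1.0)

# critical free-stream Mach number: cp_pg(M) == cp_crit(M); simple bisection
lo, hi = 0.3, 0.95
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if cp_pg(mid) < cp_crit(mid):   # corrected Cp already below the sonic value
        hi = mid
    else:
        lo = mid
print(f"estimated critical Mach number ~ {0.5 * (lo + hi):.3f}")
```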

  7. Variance estimation for generalized Cavalieri estimators

    OpenAIRE

    Johanna Ziegel; Eva B. Vedel Jensen; Karl-Anton Dorph-Petersen

    2011-01-01

    The precision of stereological estimators based on systematic sampling is of great practical importance. This paper presents methods of data-based variance estimation for generalized Cavalieri estimators where errors in sampling positions may occur. Variance estimators are derived under perturbed systematic sampling, systematic sampling with cumulative errors and systematic sampling with random dropouts. Copyright 2011, Oxford University Press.

  8. Influence of Family Structure on Variance Decomposition

    DEFF Research Database (Denmark)

    Edwards, Stefan McKinnon; Sarup, Pernille Merete; Sørensen, Peter

    Partitioning genetic variance by sets of randomly sampled genes for complex traits in D. melanogaster and B. taurus, has revealed that population structure can affect variance decomposition. In fruit flies, we found that a high likelihood ratio is correlated with a high proportion of explained ge...... capturing pure noise. Therefore it is necessary to use both criteria, high likelihood ratio in favor of a more complex genetic model and proportion of genetic variance explained, to identify biologically important gene groups...

  9. Multiperiod Mean-Variance Portfolio Optimization via Market Cloning

    International Nuclear Information System (INIS)

    Ankirchner, Stefan; Dermoune, Azzouz

    2011-01-01

    The problem of finding the mean variance optimal portfolio in a multiperiod model can not be solved directly by means of dynamic programming. In order to find a solution we therefore first introduce independent market clones having the same distributional properties as the original market, and we replace the portfolio mean and variance by their empirical counterparts. We then use dynamic programming to derive portfolios maximizing a weighted sum of the empirical mean and variance. By letting the number of market clones converge to infinity we are able to solve the original mean variance problem.

  10. Multiperiod Mean-Variance Portfolio Optimization via Market Cloning

    Energy Technology Data Exchange (ETDEWEB)

    Ankirchner, Stefan, E-mail: ankirchner@hcm.uni-bonn.de [Rheinische Friedrich-Wilhelms-Universitaet Bonn, Institut fuer Angewandte Mathematik, Hausdorff Center for Mathematics (Germany); Dermoune, Azzouz, E-mail: Azzouz.Dermoune@math.univ-lille1.fr [Universite des Sciences et Technologies de Lille, Laboratoire Paul Painleve UMR CNRS 8524 (France)

    2011-08-15

    The problem of finding the mean variance optimal portfolio in a multiperiod model can not be solved directly by means of dynamic programming. In order to find a solution we therefore first introduce independent market clones having the same distributional properties as the original market, and we replace the portfolio mean and variance by their empirical counterparts. We then use dynamic programming to derive portfolios maximizing a weighted sum of the empirical mean and variance. By letting the number of market clones converge to infinity we are able to solve the original mean variance problem.

  11. Experimental Surface Pressure Data Obtained on 65 deg Delta Wing Across Reynolds Number and Mach Number Ranges. Volume 2; Small-Radius Leading Edge

    Science.gov (United States)

    Chu, Julio; Luckring, James M.

    1996-01-01

    An experimental wind tunnel test of a 65 deg. delta wing model with interchangeable leading edges was conducted in the Langley National Transonic Facility (NTF). The objective was to investigate the effects of Reynolds and Mach numbers on slender-wing leading-edge vortex flows with four values of wing leading-edge bluntness. Experimentally obtained pressure data are presented without analysis in tabulated and graphical formats across a Reynolds number range of 6 x 10(exp 6) to 84 x 10(exp 6) at a Mach number of 0.85 and across a Mach number range of 0.4 to 0.9 at Reynolds numbers of 6 x 10(exp 6) and 60 x 10(exp 6). Normal-force and pitching-moment coefficient plots for these Reynolds number and Mach number ranges are also presented.

  12. Minimum Variance Portfolios in the Brazilian Equity Market

    Directory of Open Access Journals (Sweden)

    Alexandre Rubesam

    2013-03-01

    Full Text Available We investigate minimum variance portfolios in the Brazilian equity market using different methods to estimate the covariance matrix, from the simple model of using the sample covariance to multivariate GARCH models. We compare the performance of the minimum variance portfolios to those of the following benchmarks: (i) the IBOVESPA equity index, (ii) an equally-weighted portfolio, (iii) the maximum Sharpe ratio portfolio and (iv) the maximum growth portfolio. Our results show that the minimum variance portfolio has higher returns with lower risk compared to the benchmarks. We also consider long-short 130/30 minimum variance portfolios and obtain similar results. The minimum variance portfolio invests in relatively few stocks with low βs measured with respect to the IBOVESPA index, being easily replicable by individual and institutional investors alike.
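
    For reference, the unconstrained, fully-invested minimum variance weights follow directly from the covariance matrix. The sketch below uses synthetic returns and the plain sample covariance, whereas the paper also considers GARCH-based estimates and 130/30 constraints:

```python
import numpy as np

# Minimum variance portfolio: w = S^-1 1 / (1' S^-1 1), with S a sample
# covariance matrix estimated from (here, synthetic) asset returns.
rng = np.random.default_rng(0)
returns = rng.normal(0.0005, 0.02, size=(500, 6))   # 500 days, 6 assets

S = np.cov(returns, rowvar=False)
ones = np.ones(S.shape[0])
w = np.linalg.solve(S, ones)
w /= w.sum()                        # weights sum to one

print("minimum variance weights:", np.round(w, 3))
print("portfolio variance      :", w @ S @ w)
```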

  13. Dynamic effects on the transition between two-dimensional regular and Mach reflection of shock waves in an ideal, steady supersonic free stream

    CSIR Research Space (South Africa)

    Naidoo, K

    2011-06-01

    Full Text Available The reflection of shock waves has been studied since the original research by Ernst Mach in 1878. The steady, two-dimensional transition criteria between regular and Mach reflection are well established. There has been little done to consider the dynamic effect of a rapidly rotating wedge on the transition between regular...

  14. Why risk is not variance: an expository note.

    Science.gov (United States)

    Cox, Louis Anthony Tony

    2008-08-01

    Variance (or standard deviation) of return is widely used as a measure of risk in financial investment risk analysis applications, where mean-variance analysis is applied to calculate efficient frontiers and undominated portfolios. Why, then, do health, safety, and environmental (HS&E) and reliability engineering risk analysts insist on defining risk more flexibly, as being determined by probabilities and consequences, rather than simply by variances? This note suggests an answer by providing a simple proof that mean-variance decision making violates the principle that a rational decisionmaker should prefer higher to lower probabilities of receiving a fixed gain, all else being equal. Indeed, simply hypothesizing a continuous increasing indifference curve for mean-variance combinations at the origin is enough to imply that a decisionmaker must find unacceptable some prospects that offer a positive probability of gain and zero probability of loss. Unlike some previous analyses of limitations of variance as a risk metric, this expository note uses only simple mathematics and does not require the additional framework of von Neumann Morgenstern utility theory.
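
    The dominance violation can be made concrete with a small worked example; the numbers are illustrative, and a linear mean-variance score stands in for the indifference-curve argument of the paper:

```python
# With any mean-variance score m - k*v (k > 0), a "free lottery ticket"
# that can only gain is eventually rejected in favour of doing nothing.
k, p, gain = 1.0, 0.01, 200.0

mean = p * gain                          # 2.0
var = p * (1 - p) * gain**2              # ~396
score_lottery = mean - k * var           # large and negative
score_nothing = 0.0                      # zero gain, zero variance

print(f"lottery : mean={mean:.1f}, var={var:.1f}, score={score_lottery:.1f}")
print(f"nothing : score={score_nothing:.1f}")
# The lottery never loses and sometimes gains, yet the mean-variance
# decision maker prefers "nothing" -- the dominance violation at issue.
```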

  15. All-optical negabinary adders using Mach-Zehnder interferometer

    Science.gov (United States)

    Cherri, A. K.

    2011-02-01

    In contrast to optoelectronics, all-optical adders are proposed where all-optical signals are used to represent the input numbers and the control signals. In addition, the all-optical adders use the negabinary modified signed-digit number representation (an extension of the negabinary number system) to represent the input digits. Further, the ultra-speed of the designed circuits is achieved due to the use of ultra-fast all-optical switching property of the semiconductor optical amplifier and Mach-Zehnder interferometer (SOA-MZI). Furthermore, two-bit per digit binary encoding scheme is employed to represent the trinary values of the negabinary modified signed-digits.

  16. Electron density measurement of a colliding plasma using soft x-ray laser interferometry

    International Nuclear Information System (INIS)

    Wan, A.S.; Back, C.A.; Barbee, T.W.Jr.; Cauble, R.; Celliers, P.; DaSilva, L.B.; Glenzer, S.; Moreno, J.C.; Rambo, P.W.; Stone, G.F.; Trebes, J.E.; Weber, F.

    1996-05-01

    The understanding of the collision and subsequent interaction of counter-streaming high-density plasmas is important for the design of indirectly-driven inertial confinement fusion (ICF) hohlraums. We have employed a soft x-ray Mach-Zehnder interferometer, using a Ne-like Y x-ray laser at 155 angstrom as the probe source, to study interpenetration and stagnation of two colliding plasmas. We observed a peaked density profile at the symmetry axis with a wide stagnation region with width of order 100 μm. We compare the measured density profile with density profiles calculated by the radiation hydrodynamic code LASNEX and a multi-specie fluid code which allows for interpenetration. The measured density profile falls in between the calculated profiles using collisionless and fluid approximations. By using different target materials and irradiation configurations, we can vary the collisionality of the plasma. We hope to use the soft x-ray laser interferometry as a mechanism to validate and benchmark our numerical codes used for the design and analysis of high-energy-density physics experiments
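
    For orientation, the interferometric measurement converts a fringe shift into a line-integrated electron density through the critical density at the probe wavelength. The sketch below uses the 155 angstrom wavelength quoted above but an assumed fringe shift and path length, so the numbers are only indicative:

```python
import numpy as np

# For an underdense plasma probed at wavelength lam, the fringe shift is
#   N_fringes = integral(n_e dl) / (2 * n_crit * lam),
# with n_crit = eps0 * m_e * omega^2 / e^2 the critical density.
eps0, m_e, e, c = 8.854e-12, 9.109e-31, 1.602e-19, 2.998e8
lam = 15.5e-9                                # 155 angstrom probe wavelength [m]
omega = 2 * np.pi * c / lam
n_crit = eps0 * m_e * omega**2 / e**2        # critical density [m^-3]

fringes = 0.5                                # assumed observed fringe shift
ne_dl = 2.0 * fringes * n_crit * lam         # line-integrated density [m^-2]
path = 100e-6                                # assumed plasma path length [m]
print(f"n_crit ~ {n_crit:.2e} m^-3")
print(f"<n_e>  ~ {ne_dl / path:.2e} m^-3 averaged over {path * 1e6:.0f} um")
```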

  17. Variance bias analysis for the Gelbard's batch method

    Energy Technology Data Exchange (ETDEWEB)

    Seo, Jae Uk; Shim, Hyung Jin [Seoul National Univ., Seoul (Korea, Republic of)

    2014-05-15

    In this paper, the variance and its bias are derived analytically for the case where Gelbard's batch method is applied, and the real variance estimated from this bias is compared with the real variance calculated from replicas. When the batch method is used to calculate the sample variance, the covariance terms between tallies within a batch are eliminated from the bias. For a 2-by-2 fission matrix problem, the real variance could be calculated whether or not the batch method was applied; however, as the batch size grew, the standard deviation of the real variance increased. In a Monte Carlo estimation, the sample variance is reported as the statistical uncertainty, but this value is smaller than the real variance because the sample variance is biased. To reduce this bias, Gelbard devised what is now called the Gelbard batch method, and it has been verified that the sample variance approaches the real variance when the batch method is applied; in other words, the bias is reduced. This fact is well known in the Monte Carlo field, but so far no analytical interpretation of it has been given.
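
    The bias at issue comes from ignoring cycle-to-cycle correlation, and the batching idea can be illustrated with generic correlated tallies. The sketch below uses a synthetic AR(1) series rather than an actual Monte Carlo eigenvalue calculation, so it shows the batch-means mechanism that motivates Gelbard's method, not the paper's derivation:

```python
import numpy as np

# When cycle-wise tallies are correlated, the naive sample variance of the
# mean is biased low; averaging cycles into batches before estimating the
# variance reduces that bias.
rng = np.random.default_rng(1)
n_cycles, rho = 20000, 0.8
x = np.empty(n_cycles)
x[0] = rng.normal()
for i in range(1, n_cycles):
    x[i] = rho * x[i - 1] + rng.normal() * np.sqrt(1 - rho**2)

def var_of_mean(samples):
    m = samples.size
    return samples.var(ddof=1) / m

naive = var_of_mean(x)                                    # cycles as independent
batched = var_of_mean(x.reshape(-1, 100).mean(axis=1))    # 100 cycles per batch

# true variance of the mean of an AR(1) series ~ (1+rho)/(1-rho) * sigma^2 / m
true = (1 + rho) / (1 - rho) * 1.0 / n_cycles
print(f"naive estimate   : {naive:.2e}")
print(f"batched estimate : {batched:.2e}")
print(f"approx. truth    : {true:.2e}")
```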

  18. Radiation Hard Silicon Photonics Mach-Zehnder Modulator for HEP applications: all-Synopsys SentaurusTM Pre-Irradiation Simulation

    CERN Document Server

    Cammarata, Simone

    2017-01-01

    Silicon Photonics may well provide the opportunity for new levels of integration between detectors and their readout electronics. This technology is thus being evaluated at CERN in order to assess its suitability for use in particle physics experiments. In order to check the agreement with measurements and the validity of previous device simulations, a pure Synopsys SentaurusTM simulation of an un-irradiated Mach-Zehnder silicon modulator has been carried out during the Summer Student project. Index Terms—Silicon Photonics, Mach-Zehnder modulator, electro-optic simulation, Synopsys SentaurusTM, electro-optic measurement, HEP.

  19. Regulation of bacterioplankton density and biomass in tropical shallow coastal lagoons

    Directory of Open Access Journals (Sweden)

    Fabiana MacCord

    Full Text Available AIM: Estimating bacterioplankton density and biomass and their regulating factors is important in order to evaluate aquatic systems' carrying capacity, regarding bacterial growth and the stock of matter in the bacterial community, which can be consumed by higher trophic levels. We aim to evaluate the limnological factors which regulate - in space and time - the bacterioplankton dynamics (abundance and biomass) in five tropical coastal lagoons in the state of Rio de Janeiro, Brazil. METHOD: The current study was carried out at the following lagoons: Imboassica, Cabiúnas, Comprida, Carapebus and Garças. They differ in morphology and in their main limnological factors. The limnological variables as well as bacterioplankton abundance and biomass were monthly sampled for 14 months. Model selection analyses were performed in order to evaluate the main variables regulating the bacterioplankton's dynamics in these lagoons. RESULT: The salt concentration and the "space" factor (i.e. different lagoons) explained great part of the bacterial density and biomass variance in the studied tropical coastal lagoons. When the lagoons were analyzed separately, salinity still explained great part of the variation of bacterial density and biomass in the Imboassica and Garças lagoons. On the other hand, phosphorus concentration was the main factor explaining the variance of bacterial density and biomass in the distrophic Cabiúnas, Comprida and Carapebus lagoons. There was a strong correlation between bacterial density and biomass (r² = 0.70, p < 0.05), indicating that bacterial biomass variations are highly dependent on bacterial density variations. CONCLUSION: (i) Different limnological variables regulate the bacterial density and biomass in the studied coastal lagoons, (ii) salt and phosphorus concentrations greatly explained the variation of bacterial density and biomass in the saline and distrophic lagoons, respectively, and (iii) N-nitrate and chlorophyll

  20. Boundary-Layer Instability Measurements in a Mach-6 Quiet Tunnel

    Science.gov (United States)

    Berridge, Dennis C.; Ward, Christopher, A. C.; Luersen, Ryan P. K.; Chou, Amanda; Abney, Andrew D.; Schneider, Steven P.

    2012-01-01

    Several experiments have been performed in the Boeing/AFOSR Mach-6 Quiet Tunnel at Purdue University. A 7 degree half angle cone at 6 degree angle of attack with temperature-sensitive paint (TSP) and PCB pressure transducers was tested under quiet flow. The stationary crossflow vortices appear to break down to turbulence near the lee ray for sufficiently high Reynolds numbers. Attempts to use roughness elements to control the spacing of hot streaks on a flared cone in quiet flow did not succeed. Roughness was observed to damp the second-mode waves in areas influenced by the roughness, and wide roughness spacing allowed hot streaks to form between the roughness elements. A forward-facing cavity was used for proof-of-concept studies for a laser perturber. The lowest density at which the freestream laser perturbations could be detected was 1.07 x 10(exp -2) kilograms per cubic meter. Experiments were conducted to determine the transition characteristics of a streamwise corner flow at hypersonic velocities. Quiet flow resulted in a delayed onset of hot streak spreading. Under low Reynolds number flow hot streak spreading did not occur along the model. A new shock tube has been built at Purdue. The shock tube is designed to create weak shocks suitable for calibrating sensors, particularly PCB-132 sensors. PCB-132 measurements in another shock tube show the shock response and a linear calibration over a moderate pressure range.

  1. Mach Number effects on turbulent superstructures in wall bounded flows

    Science.gov (United States)

    Kaehler, Christian J.; Bross, Matthew; Scharnowski, Sven

    2017-11-01

    Planar and three-dimensional flow field measurements along a flat plate boundary layer in the Trisonic Wind Tunnel Munich (TWM) are examined with the aim to characterize the scaling, spatial organization, and topology of large scale turbulent superstructures in compressible flow. This facility is ideal for this investigation as the ratio of boundary layer thickness to test-section spanwise extent is around 1/25, ensuring minimal sidewall and corner effects on turbulent structures in the center of the test section. A major difficulty in the experimental investigation of large scale features is the size of the superstructures, which can extend over many boundary layer thicknesses. Using multiple PIV systems, it was possible to capture the full spatial extent of large-scale structures over a range of Mach numbers from Ma = 0.3 - 3. To calculate the average large-scale structure length and spacing, the acquired vector fields were analyzed by statistical multi-point methods that show large scale structures with a correlation length of around 10 boundary layer thicknesses over the range of Mach numbers investigated. Furthermore, the average spacing between high and low momentum structures is on the order of a boundary layer thickness. This work is supported by the Priority Programme SPP 1881 Turbulent Superstructures of the Deutsche Forschungsgemeinschaft.

  2. Quantum nonlocality of photon pairs in interference in a Mach-Zehnder interferometer

    Czech Academy of Sciences Publication Activity Database

    Trojek, P.; Peřina ml., Jan

    2003-01-01

    Roč. 53, č. 4 (2003), s. 335-349 ISSN 0011-4626 R&D Projects: GA MŠk LN00A015 Institutional research plan: CEZ:AV0Z1010921 Keywords : entangled photon pairs * nonlocal interference * Mach-Zehnder interferometer Subject RIV: BH - Optics, Masers, Lasers Impact factor: 0.263, year: 2003

  3. Integrating Variances into an Analytical Database

    Science.gov (United States)

    Sanchez, Carlos

    2010-01-01

    For this project, I enrolled in numerous SATERN courses that taught the basics of database programming. These include: Basic Access 2007 Forms, Introduction to Database Systems, Overview of Database Design, and others. My main job was to create an analytical database that can handle many stored forms and make it easy to interpret and organize. Additionally, I helped improve an existing database and populate it with information. These databases were designed to be used with data from Safety Variances and DCR forms. The research consisted of analyzing the database and comparing the data to find out which entries were repeated the most. If an entry happened to be repeated several times in the database, that would mean that the rule or requirement targeted by that variance has been bypassed many times already and so the requirement may not really be needed, but rather should be changed to allow the variance's conditions permanently. This project did not only restrict itself to the design and development of the database system, but also worked on exporting the data from the database to a different format (e.g. Excel or Word) so it could be analyzed in a simpler fashion. Thanks to the change in format, the data was organized in a spreadsheet that made it possible to sort the data by categories or types and helped speed up searches. Once my work with the database was done, the records of variances could be arranged so that they were displayed in numerical order, or one could search for a specific document targeted by the variances and restrict the search to only include variances that modified a specific requirement. A great part that contributed to my learning was SATERN, NASA's resource for education. Thanks to the SATERN online courses I took over the summer, I was able to learn many new things about computers and databases and also go more in depth into topics I already knew about.

  4. Regional sensitivity analysis using revised mean and variance ratio functions

    International Nuclear Information System (INIS)

    Wei, Pengfei; Lu, Zhenzhou; Ruan, Wenbin; Song, Jingwen

    2014-01-01

    The variance ratio function, derived from the contribution to sample variance (CSV) plot, is a regional sensitivity index for studying how much the output deviates from the original mean of model output when the distribution range of one input is reduced, and for measuring the contribution of different distribution ranges of each input to the variance of model output. In this paper, the revised mean and variance ratio functions are developed for quantifying the actual change of the model output mean and variance, respectively, when one reduces the range of one input. The connection between the revised variance ratio function and the original one is derived and discussed. It is shown that compared with the classical variance ratio function, the revised one is more suitable to the evaluation of model output variance due to reduced ranges of model inputs. A Monte Carlo procedure, which needs only a set of samples for implementing it, is developed for efficiently computing the revised mean and variance ratio functions. The revised mean and variance ratio functions are compared with the classical ones by using the Ishigami function. Finally, they are applied to a planar 10-bar structure
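
    A crude Monte Carlo version of a variance ratio function can be obtained by conditioning samples of one input to a reduced range and comparing conditional and unconditional output variances. The sketch below does this for the Ishigami function mentioned above; it illustrates the quantity itself, not the paper's revised estimators:

```python
import numpy as np

# Ishigami test function: f = sin(x1) + a*sin(x2)^2 + b*x3^4*sin(x1),
# with x_i ~ U(-pi, pi); shrink the range of x1 symmetrically about zero
# and track the conditional output variance relative to the full one.
rng = np.random.default_rng(2)
a, b, n = 7.0, 0.1, 200_000
x = rng.uniform(-np.pi, np.pi, size=(n, 3))
y = np.sin(x[:, 0]) + a * np.sin(x[:, 1])**2 + b * x[:, 2]**4 * np.sin(x[:, 0])
v_full = y.var()

for frac in (1.0, 0.75, 0.5, 0.25):
    keep = np.abs(x[:, 0]) <= frac * np.pi      # reduced range of x1
    ratio = y[keep].var() / v_full
    print(f"range fraction {frac:4.2f}  ->  variance ratio {ratio:.3f}")
```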

  5. ACCOUNTING FOR COSMIC VARIANCE IN STUDIES OF GRAVITATIONALLY LENSED HIGH-REDSHIFT GALAXIES IN THE HUBBLE FRONTIER FIELD CLUSTERS

    Energy Technology Data Exchange (ETDEWEB)

    Robertson, Brant E.; Stark, Dan P. [Steward Observatory, University of Arizona, 933 North Cherry Avenue, Tucson, AZ 85721 (United States); Ellis, Richard S. [Department of Astronomy, California Institute of Technology, MS 249-17, Pasadena, CA 91125 (United States); Dunlop, James S.; McLure, Ross J.; McLeod, Derek, E-mail: brant@email.arizona.edu [Institute for Astronomy, University of Edinburgh, Royal Observatory, Edinburgh EH9 3HJ (United Kingdom)

    2014-12-01

    Strong gravitational lensing provides a powerful means for studying faint galaxies in the distant universe. By magnifying the apparent brightness of background sources, massive clusters enable the detection of galaxies fainter than the usual sensitivity limit for blank fields. However, this gain in effective sensitivity comes at the cost of a reduced survey volume and, in this Letter, we demonstrate that there is an associated increase in the cosmic variance uncertainty. As an example, we show that the cosmic variance uncertainty of the high-redshift population viewed through the Hubble Space Telescope Frontier Field cluster Abell 2744 increases from ∼35% at redshift z ∼ 7 to ≳ 65% at z ∼ 10. Previous studies of high-redshift galaxies identified in the Frontier Fields have underestimated the cosmic variance uncertainty that will affect the ultimate constraints on both the faint-end slope of the high-redshift luminosity function and the cosmic star formation rate density, key goals of the Frontier Field program.

  6. ACCOUNTING FOR COSMIC VARIANCE IN STUDIES OF GRAVITATIONALLY LENSED HIGH-REDSHIFT GALAXIES IN THE HUBBLE FRONTIER FIELD CLUSTERS

    International Nuclear Information System (INIS)

    Robertson, Brant E.; Stark, Dan P.; Ellis, Richard S.; Dunlop, James S.; McLure, Ross J.; McLeod, Derek

    2014-01-01

    Strong gravitational lensing provides a powerful means for studying faint galaxies in the distant universe. By magnifying the apparent brightness of background sources, massive clusters enable the detection of galaxies fainter than the usual sensitivity limit for blank fields. However, this gain in effective sensitivity comes at the cost of a reduced survey volume and, in this Letter, we demonstrate that there is an associated increase in the cosmic variance uncertainty. As an example, we show that the cosmic variance uncertainty of the high-redshift population viewed through the Hubble Space Telescope Frontier Field cluster Abell 2744 increases from ∼35% at redshift z ∼ 7 to ≳ 65% at z ∼ 10. Previous studies of high-redshift galaxies identified in the Frontier Fields have underestimated the cosmic variance uncertainty that will affect the ultimate constraints on both the faint-end slope of the high-redshift luminosity function and the cosmic star formation rate density, key goals of the Frontier Field program

  7. Accounting for Cosmic Variance in Studies of Gravitationally Lensed High-redshift Galaxies in the Hubble Frontier Field Clusters

    Science.gov (United States)

    Robertson, Brant E.; Ellis, Richard S.; Dunlop, James S.; McLure, Ross J.; Stark, Dan P.; McLeod, Derek

    2014-12-01

    Strong gravitational lensing provides a powerful means for studying faint galaxies in the distant universe. By magnifying the apparent brightness of background sources, massive clusters enable the detection of galaxies fainter than the usual sensitivity limit for blank fields. However, this gain in effective sensitivity comes at the cost of a reduced survey volume and, in this Letter, we demonstrate that there is an associated increase in the cosmic variance uncertainty. As an example, we show that the cosmic variance uncertainty of the high-redshift population viewed through the Hubble Space Telescope Frontier Field cluster Abell 2744 increases from ~35% at redshift z ~ 7 to >~ 65% at z ~ 10. Previous studies of high-redshift galaxies identified in the Frontier Fields have underestimated the cosmic variance uncertainty that will affect the ultimate constraints on both the faint-end slope of the high-redshift luminosity function and the cosmic star formation rate density, key goals of the Frontier Field program.

  8. Hypersonic research engine project. Phase 2: Preliminary report on the performance of the HRE/AIM at Mach 6

    Science.gov (United States)

    Sun, Y. H.; Sainio, W. C.

    1975-01-01

    Test results of the Aerothermodynamic Integration Model are presented. A program was initiated to develop a hydrogen-fueled research-oriented scramjet for operation between Mach 3 and 8. The primary objectives were to investigate the internal aerothermodynamic characteristics of the engine, to provide realistic design parameters for future hypersonic engine development as well as to evaluate the ground test facility and testing techniques. The engine was tested at the NASA hypersonic tunnel facility with synthetic air at Mach 5, 6, and 7. The hydrogen fuel was heated up to 1500 R prior to injection to simulate a regeneratively cooled system. The engine and component performance at Mach 6 is reported. Inlet performance compared very well both with theory and with subscale model tests. Combustor efficiencies up to 95 percent were attained at an equivalence ratio of unity. Nozzle performance was lower than expected. The overall engine performance was computed using two different methods. The performance was also compared with test data from other sources.

  9. The genotype-environment interaction variance in rice-seed protein determination

    International Nuclear Information System (INIS)

    Ismachin, M.

    1976-01-01

    Many environmental factors influence the protein content of cereal seed, which creates difficulties in breeding for protein. Yield is another trait influenced by many environmental factors. The length of time required by the plant to reach maturity is also affected by environmental factors, although less decisively. In this investigation the genotypic variance and the genotype-environment interaction variance, which together contribute to the total (phenotypic) variance, were analysed in order to indicate to the breeder how selection should be made. It was found that the genotype-environment interaction variance contributes more to the total variance of seed-protein determination and of yield than the genotypic variance does. In the analysis of the time required to reach maturity, the genotypic variance was found to be larger than the genotype-environment interaction variance. It is therefore clear why selection for time to maturity is much easier than selection for protein or yield. Protein selected in one location may differ from protein selected in other locations. (author)
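
    The decomposition described above follows the usual expected mean squares of a balanced two-way random model. The sketch below simulates a genotype-by-environment trial with assumed variance components and recovers them from the ANOVA mean squares; the dimensions and component values are illustrative:

```python
import numpy as np

# Balanced genotype x environment trial with r replicates; variance
# components recovered from expected mean squares:
#   E[MS_err] = s2_e, E[MS_GxE] = s2_e + r*s2_GE, E[MS_G] = s2_e + r*s2_GE + e*r*s2_G
rng = np.random.default_rng(3)
g, e, r = 20, 5, 3
sG, sGE, sE = 1.0, 2.0, 0.5          # true sigma^2_G, sigma^2_GxE, sigma^2_error
G = rng.normal(0, np.sqrt(sG), g)[:, None, None]
GE = rng.normal(0, np.sqrt(sGE), (g, e))[:, :, None]
y = G + GE + rng.normal(0, np.sqrt(sE), (g, e, r))

grand = y.mean()
mg, me, mge = y.mean(axis=(1, 2)), y.mean(axis=(0, 2)), y.mean(axis=2)
ss_g = e * r * ((mg - grand) ** 2).sum()
ss_ge = r * ((mge - mg[:, None] - me[None, :] + grand) ** 2).sum()
ss_err = ((y - mge[:, :, None]) ** 2).sum()

ms_g = ss_g / (g - 1)
ms_ge = ss_ge / ((g - 1) * (e - 1))
ms_err = ss_err / (g * e * (r - 1))

sigma2_err = ms_err
sigma2_ge = (ms_ge - ms_err) / r
sigma2_g = (ms_g - ms_ge) / (e * r)
print(f"sigma^2_G   ~ {sigma2_g:.2f}  (true {sG})")
print(f"sigma^2_GxE ~ {sigma2_ge:.2f}  (true {sGE})")
print(f"sigma^2_err ~ {sigma2_err:.2f}  (true {sE})")
```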

  10. Estimation of measurement variances

    International Nuclear Information System (INIS)

    Jaech, J.L.

    1984-01-01

    The estimation of measurement error parameters in safeguards systems is discussed. Both systematic and random errors are considered. A simple analysis of variances to characterize the measurement error structure with biases varying over time is presented

  11. Nonlinear theory of nonstationary low Mach number channel flows of freely cooling nearly elastic granular gases.

    Science.gov (United States)

    Meerson, Baruch; Fouxon, Itzhak; Vilenkin, Arkady

    2008-02-01

    We employ hydrodynamic equations to investigate nonstationary channel flows of freely cooling dilute gases of hard and smooth spheres with nearly elastic particle collisions. This work focuses on the regime where the sound travel time through the channel is much shorter than the characteristic cooling time of the gas. As a result, the gas pressure rapidly becomes almost homogeneous, while the typical Mach number of the flow drops well below unity. Eliminating the acoustic modes and employing Lagrangian coordinates, we reduce the hydrodynamic equations to a single nonlinear and nonlocal equation of a reaction-diffusion type. This equation describes a broad class of channel flows and, in particular, can follow the development of the clustering instability from a weakly perturbed homogeneous cooling state to strongly nonlinear states. If the heat diffusion is neglected, the reduced equation becomes exactly soluble, and the solution develops a finite-time density blowup. The blowup has the same local features at singularity as those exhibited by the recently found family of exact solutions of the full set of ideal hydrodynamic equations [I. Fouxon, Phys. Rev. E 75, 050301(R) (2007); I. Fouxon,Phys. Fluids 19, 093303 (2007)]. The heat diffusion, however, always becomes important near the attempted singularity. It arrests the density blowup and brings about previously unknown inhomogeneous cooling states (ICSs) of the gas, where the pressure continues to decay with time, while the density profile becomes time-independent. The ICSs represent exact solutions of the full set of granular hydrodynamic equations. Both the density profile of an ICS and the characteristic relaxation time toward it are determined by a single dimensionless parameter L that describes the relative role of the inelastic energy loss and heat diffusion. At L>1 the intermediate cooling dynamics proceeds as a competition between "holes": low-density regions of the gas. This competition resembles Ostwald

  12. The interaction between the spatial distribution of resource patches and population density: consequences for intraspecific growth and morphology.

    Science.gov (United States)

    Jacobson, Bailey; Grant, James W A; Peres-Neto, Pedro R

    2015-07-01

    How individuals within a population distribute themselves across resource patches of varying quality has been an important focus of ecological theory. The ideal free distribution predicts equal fitness amongst individuals in a 1 : 1 ratio with resources, whereas resource defence theory predicts different degrees of monopolization (fitness variance) as a function of temporal and spatial resource clumping and population density. One overlooked landscape characteristic is the spatial distribution of resource patches, altering the equitability of resource accessibility and thereby the effective number of competitors. While much work has investigated the influence of morphology on competitive ability for different resource types, less is known regarding the phenotypic characteristics conferring relative ability for a single resource type, particularly when exploitative competition predominates. Here we used young-of-the-year rainbow trout (Oncorhynchus mykiss) to test whether and how the spatial distribution of resource patches and population density interact to influence the level and variance of individual growth, as well as if functional morphology relates to competitive ability. Feeding trials were conducted within stream channels under three spatial distributions of nine resource patches (distributed, semi-clumped and clumped) at two density levels (9 and 27 individuals). Average trial growth was greater in high-density treatments with no effect of resource distribution. Within-trial growth variance had opposite patterns across resource distributions. Here, variance decreased at low-population, but increased at high-population densities as patches became increasingly clumped as the result of changes in the levels of interference vs. exploitative competition. Within-trial growth was related to both pre- and post-trial morphology where competitive individuals were those with traits associated with swimming capacity and efficiency: larger heads/bodies/caudal fins

  13. 29 CFR 1905.5 - Effect of variances.

    Science.gov (United States)

    2010-07-01

    ...-STEIGER OCCUPATIONAL SAFETY AND HEALTH ACT OF 1970 General § 1905.5 Effect of variances. All variances... Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR... concerning a proposed penalty or period of abatement is pending before the Occupational Safety and Health...

  14. Realized range-based estimation of integrated variance

    DEFF Research Database (Denmark)

    Christensen, Kim; Podolskij, Mark

    2007-01-01

    We provide a set of probabilistic laws for estimating the quadratic variation of continuous semimartingales with the realized range-based variance-a statistic that replaces every squared return of the realized variance with a normalized squared range. If the entire sample path of the process is a...
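
    As a rough illustration of the statistic, the sketch below compares the ordinary realized variance with a range-based analogue on a simulated log-price path; the 4 ln 2 scaling is the continuous-record value, so the finite-sampling correction used in the paper is ignored here:

```python
import numpy as np

# Each intraday interval contributes either its squared return (realized
# variance) or its squared log high-low range divided by 4*ln(2), the
# second moment of the range of a standard Brownian motion.
rng = np.random.default_rng(4)
n_int, m = 78, 100                    # intervals per day, steps per interval
sigma2 = 0.0001                       # true integrated (daily) variance
dW = rng.normal(0.0, np.sqrt(sigma2 / (n_int * m)), size=(n_int, m))
logp = np.concatenate(([0.0], np.cumsum(dW.ravel())))

# per-interval open, high, low, close of the log-price
path = logp[1:].reshape(n_int, m)
opens = logp[:-1].reshape(n_int, m)[:, 0]
closes = path[:, -1]
highs = np.maximum(path.max(axis=1), opens)
lows = np.minimum(path.min(axis=1), opens)

rv = np.sum((closes - opens) ** 2)                    # realized variance
rrv = np.sum((highs - lows) ** 2) / (4 * np.log(2))   # range-based variance
print(f"true sigma^2 : {sigma2:.2e}")
print(f"realized var : {rv:.2e}")
print(f"range-based  : {rrv:.2e}")
```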

  15. Variance Function Partially Linear Single-Index Models.

    Science.gov (United States)

    Lian, Heng; Liang, Hua; Carroll, Raymond J

    2015-01-01

    We consider heteroscedastic regression models where the mean function is a partially linear single index model and the variance function depends upon a generalized partially linear single index model. We do not insist that the variance function depend only upon the mean function, as happens in the classical generalized partially linear single index model. We develop efficient and practical estimation methods for the variance function and for the mean function. Asymptotic theory for the parametric and nonparametric parts of the model is developed. Simulations illustrate the results. An empirical example involving ozone levels is used to further illustrate the results, and is shown to be a case where the variance function does not depend upon the mean function.

  16. The realization of an integrated Mach-Zehnder waveguide immunosensor in silicon technology

    NARCIS (Netherlands)

    Schipper, E.F.; Schipper, E.F.; Brugman, A.M.; Lechuga, L.M.; Lechuga, L.M.; Kooyman, R.P.H.; Greve, Jan; Dominguez, C.

    1997-01-01

    We describe the realization of a symmetric integrated channel waveguide Mach-Zehnder sensor which uses the evanescent field to detect small refractive-index changes (Δn_min ≈ 1 × 10^-4) near the guiding-layer surface. This guiding layer consists of ridge structures with a height of 3 nm and a width of

  17. Surfing and drift acceleration at high mach number quasi-perpendicular shocks

    International Nuclear Information System (INIS)

    Amano, T.

    2008-01-01

    Electron acceleration in high Mach number collisionless shocks relevant to supernova remnant is discussed. By performing one- and two-dimensional particle-in-cell simulations of quasi-perpendicular shocks, we find that energetic electrons are quickly generated in the shock transition region through shock surfing and drift acceleration. The electron energization is strong enough to account for the observed injection at supernova remnant shocks. (author)

  18. Discrete time and continuous time dynamic mean-variance analysis

    OpenAIRE

    Reiss, Ariane

    1999-01-01

    Contrary to static mean-variance analysis, very few papers have dealt with dynamic mean-variance analysis. Here, the mean-variance efficient self-financing portfolio strategy is derived for n risky assets in discrete and continuous time. In the discrete setting, the resulting portfolio is mean-variance efficient in a dynamic sense. It is shown that the optimal strategy for n risky assets may be dominated if the expected terminal wealth is constrained to exactly attain a certain goal instead o...

  19. The Dynamics of Very High Alfvén Mach Number Shocks in Space Plasmas

    Energy Technology Data Exchange (ETDEWEB)

    Sundberg, Torbjörn; Burgess, David [School of Physics and Astronomy, Queen Mary University of London, London, E1 4NS (United Kingdom); Scholer, Manfred [Max-Planck-Institut für extraterrestrische Physik, Garching (Germany); Masters, Adam [The Blackett Laboratory, Imperial College London, Prince Consort Road, London, SW7 2AZ (United Kingdom); Sulaiman, Ali H., E-mail: torbjorn.sundberg@gmail.com [Department of Physics and Astronomy, University of Iowa, Iowa City, Iowa 52242 (United States)

    2017-02-10

    Astrophysical shocks, such as planetary bow shocks or supernova remnant shocks, are often in the high or very-high Mach number regime, and the structure of such shocks is crucial for understanding particle acceleration and plasma heating, as well as being inherently interesting. Recent magnetic field observations at Saturn’s bow shock, for Alfvén Mach numbers greater than about 25, have provided evidence for periodic non-stationarity, although the details of the ion- and electron-scale processes remain unclear due to limited plasma data. High-resolution, multi-spacecraft data are available for the terrestrial bow shock, but here the very high Mach number regime is only attained on extremely rare occasions. Here we present magnetic field and particle data from three such quasi-perpendicular shock crossings observed by the four-spacecraft Cluster mission. Although both ion reflection and the shock profile are modulated at the upstream ion gyroperiod timescale, the dominant wave growth in the foot takes place at sub-proton length scales and is consistent with being driven by the ion Weibel instability. The observed large-scale behavior depends strongly on cross-scale coupling between ion and electron processes, with ion reflection never fully suppressed, and this suggests a model of the shock dynamics that is in conflict with previous models of non-stationarity. Thus, the observations offer insight into the conditions prevalent in many inaccessible astrophysical environments, and provide important constraints for acceleration processes at such shocks.
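
    For context, the regime is set by the Alfvén Mach number, which follows from the upstream field, density and flow speed. The values below are illustrative outer-heliosphere numbers, not taken from the paper:

```python
import numpy as np

# Alfven Mach number: M_A = u_sh / v_A, with v_A = B / sqrt(mu0 * rho).
mu0, m_p = 4e-7 * np.pi, 1.673e-27
B = 0.2e-9                    # upstream magnetic field [T], assumed
n = 0.05e6                    # proton number density [m^-3], assumed
u_sh = 450e3                  # flow speed relative to the shock [m/s], assumed

v_A = B / np.sqrt(mu0 * n * m_p)
print(f"v_A ~ {v_A / 1e3:.1f} km/s,  M_A ~ {u_sh / v_A:.1f}")   # very-high-M_A regime
```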

  20. Dominance genetic variance for traits under directional selection in Drosophila serrata.

    Science.gov (United States)

    Sztepanacz, Jacqueline L; Blows, Mark W

    2015-05-01

    In contrast to our growing understanding of patterns of additive genetic variance in single- and multi-trait combinations, the relative contribution of nonadditive genetic variance, particularly dominance variance, to multivariate phenotypes is largely unknown. While mechanisms for the evolution of dominance genetic variance have been, and to some degree remain, subject to debate, the pervasiveness of dominance is widely recognized and may play a key role in several evolutionary processes. Theoretical and empirical evidence suggests that the contribution of dominance variance to phenotypic variance may increase with the correlation between a trait and fitness; however, direct tests of this hypothesis are few. Using a multigenerational breeding design in an unmanipulated population of Drosophila serrata, we estimated additive and dominance genetic covariance matrices for multivariate wing-shape phenotypes, together with a comprehensive measure of fitness, to determine whether there is an association between directional selection and dominance variance. Fitness, a trait unequivocally under directional selection, had no detectable additive genetic variance, but significant dominance genetic variance contributing 32% of the phenotypic variance. For single and multivariate morphological traits, however, no relationship was observed between trait-fitness correlations and dominance variance. A similar proportion of additive and dominance variance was found to contribute to phenotypic variance for single traits, and double the amount of additive compared to dominance variance was found for the multivariate trait combination under directional selection. These data suggest that for many fitness components a positive association between directional selection and dominance genetic variance may not be expected. Copyright © 2015 by the Genetics Society of America.

  1. CMB-S4 and the hemispherical variance anomaly

    Science.gov (United States)

    O'Dwyer, Márcio; Copi, Craig J.; Knox, Lloyd; Starkman, Glenn D.

    2017-09-01

    Cosmic microwave background (CMB) full-sky temperature data show a hemispherical asymmetry in power nearly aligned with the Ecliptic. In real space, this anomaly can be quantified by the temperature variance in the Northern and Southern Ecliptic hemispheres, with the Northern hemisphere displaying an anomalously low variance while the Southern hemisphere appears unremarkable [consistent with expectations from the best-fitting theory, Lambda Cold Dark Matter (ΛCDM)]. While this is a well-established result in temperature, the low signal-to-noise ratio in current polarization data prevents a similar comparison. This will change with a proposed ground-based CMB experiment, CMB-S4. With that in mind, we generate realizations of polarization maps constrained by the temperature data and predict the distribution of the hemispherical variance in polarization considering two different sky coverage scenarios possible in CMB-S4: full Ecliptic north coverage and just the portion of the North that can be observed from a ground-based telescope at the high Chilean Atacama plateau. We find that even in the set of realizations constrained by the temperature data, the low Northern hemisphere variance observed in temperature is not expected in polarization. Therefore, observing an anomalously low variance in polarization would make the hypothesis that the temperature anomaly is simply a statistical fluke more unlikely and thus increase the motivation for physical explanations. We show, within ΛCDM, how variance measurements in both sky coverage scenarios are related. We find that the variance makes for a good statistic in cases where the sky coverage is limited, however, full northern coverage is still preferable.

  2. Expected Stock Returns and Variance Risk Premia

    DEFF Research Database (Denmark)

    Bollerslev, Tim; Zhou, Hao

    risk premium with the P/E ratio results in an R2 for the quarterly returns of more than twenty-five percent. The results depend crucially on the use of "model-free", as opposed to standard Black-Scholes, implied variances, and realized variances constructed from high-frequency intraday, as opposed...

  3. Allowable variance set on left ventricular function parameter

    International Nuclear Information System (INIS)

    Zhou Li'na; Qi Zhongzhi; Zeng Yu; Ou Xiaohong; Li Lin

    2010-01-01

    Purpose: To evaluate the influence of allowable variance settings on left ventricular function parameters in arrhythmia patients during gated myocardial perfusion imaging. Method: 42 patients with evident arrhythmia underwent myocardial perfusion SPECT; three different allowable variance settings (20%, 60%, 100%) were set before acquisition for every patient, and the acquisitions were performed simultaneously. After reconstruction with Astonish, end-diastolic volume (EDV), end-systolic volume (ESV) and left ventricular ejection fraction (LVEF) were computed with Quantitative Gated SPECT (QGS). The EDV, ESV and EF values were compared by analysis of variance using SPSS software. Result: there was no statistical difference between the three groups. Conclusion: when arrhythmia patients undergo gated myocardial perfusion imaging, the allowable variance setting has no statistically significant effect on the EDV, ESV and EF values. (authors)

  4. In-pipe aerodynamic characteristics of a projectile in comparison with free flight for transonic Mach numbers

    Science.gov (United States)

    Hruschka, R.; Klatt, D.

    2018-03-01

    The transient shock dynamics and drag characteristics of a projectile flying through a pipe 3.55 times larger than its diameter at transonic speed are analyzed by means of time-of-flight and pipe wall pressure measurements as well as computational fluid dynamics (CFD). In addition, free-flight drag of the 4.5-mm-pellet-type projectile was also measured in a Mach number range between 0.5 and 1.5, providing a means for comparison against in-pipe data and CFD. The flow is categorized into five typical regimes the in-pipe projectile experiences. When projectile speed and hence compressibility effects are low, the presence of the pipe has little influence on the drag. Between Mach 0.5 and 0.8, however, there is a strong drag increase due to the presence of the pipe, up to a value of about two times the free-flight drag. This is exactly where the nose-to-base pressure ratio of the projectile becomes critical for locally sonic speed, allowing the drag to be estimated by equations describing choked flow through a converging-diverging nozzle. For even higher projectile Mach numbers, the drag coefficient decreases again, to a value slightly below the free-flight drag at Mach 1.5. This behavior is explained by a velocity-independent base pressure coefficient in the pipe, as opposed to base pressure decreasing with velocity in free flight. The drag calculated by CFD simulations agreed largely with the measurements within their experimental uncertainty, with some discrepancies remaining for free-flying projectiles at supersonic speed. Wall pressure measurements as well as measured speeds of both leading and trailing shocks caused by the projectile in the pipe also agreed well with CFD.
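
    As a quick illustration of the choked-flow argument above, the sketch below (not taken from the paper) evaluates the isentropic critical pressure ratio at which a converging-diverging passage reaches sonic conditions at its throat; the ratio of specific heats and the example nose-to-base pressure ratio are illustrative assumptions.

    ```python
    # Minimal sketch (not from the paper): isentropic critical pressure ratio for choked flow.
    def critical_pressure_ratio(gamma: float = 1.4) -> float:
        """Stagnation-to-throat pressure ratio p0/p* at which a converging-diverging passage chokes."""
        return ((gamma + 1.0) / 2.0) ** (gamma / (gamma - 1.0))

    gamma = 1.4                                      # assumed ratio of specific heats for air
    p0_over_pstar = critical_pressure_ratio(gamma)   # ~1.893 for air
    nose_to_base = 2.1                               # illustrative nose-to-base pressure ratio (made up)
    print(f"critical p0/p* = {p0_over_pstar:.3f}; flow choked: {nose_to_base >= p0_over_pstar}")
    ```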

  5. Deviation of the Variances of Classical Estimators and Negative Integer Moment Estimator from Minimum Variance Bound with Reference to Maxwell Distribution

    Directory of Open Access Journals (Sweden)

    G. R. Pasha

    2006-07-01

    Full Text Available In this paper, we present how much the variances of the classical estimators, namely the maximum likelihood estimator and the moment estimator, deviate from the minimum variance bound when estimating the parameter of the Maxwell distribution. We also sketch this difference for the negative integer moment estimator. We note the poor performance of the negative integer moment estimator in this respect, while the maximum likelihood estimator attains the minimum variance bound and is therefore an attractive choice.
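
    A minimal simulation sketch (our own, not from the paper) of the comparison it describes: the empirical variances of the maximum likelihood and moment estimators of the Maxwell scale parameter are compared over repeated samples; the true parameter, sample size and number of replicates are arbitrary illustrative choices.

    ```python
    # Minimal sketch (not from the paper): empirical variance of MLE vs. moment estimator
    # of the Maxwell scale parameter `a`.
    import numpy as np
    from scipy.stats import maxwell

    rng = np.random.default_rng(0)
    a_true, n, reps = 2.0, 50, 5000          # illustrative values

    mle, mom = [], []
    for _ in range(reps):
        x = maxwell.rvs(scale=a_true, size=n, random_state=rng)
        mle.append(np.sqrt(np.sum(x**2) / (3 * n)))     # MLE: a^2 = sum(x^2) / (3n)
        mom.append(np.mean(x) * np.sqrt(np.pi / 8.0))   # moment estimator from E[X] = 2a*sqrt(2/pi)

    print("var(MLE)    =", np.var(mle))
    print("var(moment) =", np.var(mom))      # typically larger than var(MLE)
    ```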

  6. Towards a mathematical foundation of minimum-variance theory

    Energy Technology Data Exchange (ETDEWEB)

    Feng Jianfeng [COGS, Sussex University, Brighton (United Kingdom); Zhang Kewei [SMS, Sussex University, Brighton (United Kingdom); Wei Gang [Mathematical Department, Baptist University, Hong Kong (China)

    2002-08-30

    The minimum-variance theory which accounts for arm and eye movements with noise signal inputs was proposed by Harris and Wolpert (1998 Nature 394 780-4). Here we present a detailed theoretical analysis of the theory and analytical solutions of the theory are obtained. Furthermore, we propose a new version of the minimum-variance theory, which is more realistic for a biological system. For the new version we show numerically that the variance is considerably reduced. (author)

  7. Direct encoding of orientation variance in the visual system.

    Science.gov (United States)

    Norman, Liam J; Heywood, Charles A; Kentridge, Robert W

    2015-01-01

    Our perception of regional irregularity, an example of which is orientation variance, seems effortless when we view two patches of texture that differ in this attribute. Little is understood, however, of how the visual system encodes a regional statistic like orientation variance, but there is some evidence to suggest that it is directly encoded by populations of neurons tuned broadly to high or low levels. The present study shows that selective adaptation to low or high levels of variance results in a perceptual aftereffect that shifts the perceived level of variance of a subsequently viewed texture in the direction away from that of the adapting stimulus (Experiments 1 and 2). Importantly, the effect is durable across changes in mean orientation, suggesting that the encoding of orientation variance is independent of global first moment orientation statistics (i.e., mean orientation). In Experiment 3 it was shown that the variance-specific aftereffect did not show signs of being encoded in a spatiotopic reference frame, similar to the equivalent aftereffect of adaptation to the first moment orientation statistic (the tilt aftereffect), which is represented in the primary visual cortex and exists only in retinotopic coordinates. Experiment 4 shows that a neuropsychological patient with damage to ventral areas of the cortex but spared intact early areas retains sensitivity to orientation variance. Together these results suggest that orientation variance is encoded directly by the visual system and possibly at an early cortical stage.

  8. Network Structure and Biased Variance Estimation in Respondent Driven Sampling.

    Science.gov (United States)

    Verdery, Ashton M; Mouw, Ted; Bauldry, Shawn; Mucha, Peter J

    2015-01-01

    This paper explores bias in the estimation of sampling variance in Respondent Driven Sampling (RDS). Prior methodological work on RDS has focused on its problematic assumptions and the biases and inefficiencies of its estimators of the population mean. Nonetheless, researchers have given only slight attention to the topic of estimating sampling variance in RDS, despite the importance of variance estimation for the construction of confidence intervals and hypothesis tests. In this paper, we show that the estimators of RDS sampling variance rely on a critical assumption that the network is First Order Markov (FOM) with respect to the dependent variable of interest. We demonstrate, through intuitive examples, mathematical generalizations, and computational experiments that current RDS variance estimators will always underestimate the population sampling variance of RDS in empirical networks that do not conform to the FOM assumption. Analysis of 215 observed university and school networks from Facebook and Add Health indicates that the FOM assumption is violated in every empirical network we analyze, and that these violations lead to substantially biased RDS estimators of sampling variance. We propose and test two alternative variance estimators that show some promise for reducing biases, but which also illustrate the limits of estimating sampling variance with only partial information on the underlying population social network.

  9. The Interaction of Boltzmann with Mach, Ostwald and Planck, and his influence on Nernst and Einstein

    International Nuclear Information System (INIS)

    Broda, E.

    1981-01-01

    Boltzmann esteemed both Mach and Ostwald personally and as experimentalists, but consistently fought them in epistemology. He represented atomism and realism against energism and positivism. In the early period Boltzmann also had to struggle against Planck as a phenomenologist, but he welcomed his quantum hypothesis. As a scientist Nernst was also under Boltzmann's influence. Einstein learned atomism from (Maxwell and) Boltzmann. After Einstein had overcome Mach's positivist influence, he unknowingly approached Boltzmann's philosophical views. Some sociopolitical aspects of the lives of the great physicists will be discussed. It will be shown how they all, and many of Boltzmann's most eminent students, in one way or other conflicted with evil tendencies and developments in existing society. (author)

  10. Local variances in biomonitoring

    International Nuclear Information System (INIS)

    Wolterbeek, H.Th; Verburg, T.G.

    2001-01-01

    The present study was undertaken to explore possibilities to judge survey quality on the basis of a limited and restricted number of a-priori observations. Here, quality is defined as the ratio between survey and local variance (signal-to-noise ratio). The results indicate that the presented surveys do not permit such judgement; the discussion also suggests that the 5-fold local sampling strategies do not merit any sound judgement. As it stands, uncertainties in local determinations may largely obscure possibilities to judge survey quality. The results further imply that surveys will benefit from procedures, controls and approaches in sampling and sample handling, to assess the average, the variance and the nature of the distribution of elemental concentrations in local sites. This reasoning is compatible with the idea of the site as a basic homogeneous survey unit, which is implicitly and conceptually underlying any survey performed. (author)

  11. Some variance reduction methods for numerical stochastic homogenization.

    Science.gov (United States)

    Blanc, X; Le Bris, C; Legoll, F

    2016-04-28

    We give an overview of a series of recent studies devoted to variance reduction techniques for numerical stochastic homogenization. Numerical homogenization requires that a set of problems is solved at the microscale, the so-called corrector problems. In a random environment, these problems are stochastic and therefore need to be repeatedly solved, for several configurations of the medium considered. An empirical average over all configurations is then performed using the Monte Carlo approach, so as to approximate the effective coefficients necessary to determine the macroscopic behaviour. Variance severely affects the accuracy and the cost of such computations. Variance reduction approaches, borrowed from other contexts in the engineering sciences, can be useful. Some of these variance reduction techniques are presented, studied and tested here. © 2016 The Author(s).

  12. variance components and genetic parameters for live weight

    African Journals Online (AJOL)

    admin

    Against this background the present study estimated the (co)variance .... Starting values for the (co)variance components of two-trait models were ..... Estimates of genetic parameters for weaning weight of beef accounting for direct-maternal.

  13. Acoustic-hydrodynamic-flame coupling—A new perspective for zero and low Mach number flows

    Science.gov (United States)

    Pulikkottil, V. V.; Sujith, R. I.

    2017-04-01

    A combustion chamber has a hydrodynamic field that convects the incoming fuel and oxidizer into the chamber, thereby causing the mixture to react and produce heat energy. This heat energy can, in turn, modify the hydrodynamic and acoustic fields by acting as a source and thereby, establish a positive feedback loop. Subsequent growth in the amplitude of the acoustic field variables and their eventual saturation to a limit cycle is generally known as thermo-acoustic instability. Mathematical representation of these phenomena, by a set of equations, is the subject of this paper. In contrast to the ad hoc models, an explanation of the flame-acoustic-hydrodynamic coupling, based on fundamental laws of conservation of mass, momentum, and energy, is presented in this paper. In this paper, we use a convection reaction diffusion equation, which, in turn, is derived from the fundamental laws of conservation to explain the flame-acoustic coupling. The advantage of this approach is that the physical variables such as hydrodynamic velocity and heat release rate are coupled based on the conservation of energy and not based on an ad hoc model. Our approach shows that the acoustic-hydrodynamic interaction arises from the convection of acoustic velocity fluctuations by the hydrodynamic field and vice versa. This is a linear mechanism, mathematically represented as a convection operator. This mechanism resembles the non-normal mechanism studied in hydrodynamic theory. We propose that this mechanism could relate the instability mechanisms of hydrodynamic and thermo-acoustic systems. Furthermore, the acoustic-hydrodynamic interaction is shown to be responsible for the convection of entropy disturbances from the inlet of the chamber. The theory proposed in this paper also unifies the observations in the fields of low Mach number flows and zero Mach number flows. In contrast to the previous findings, where compressibility is shown to be causing different physics for zero and low Mach

  14. Restricted Variance Interaction Effects

    DEFF Research Database (Denmark)

    Cortina, Jose M.; Köhler, Tine; Keeler, Kathleen R.

    2018-01-01

    Although interaction hypotheses are increasingly common in our field, many recent articles point out that authors often have difficulty justifying them. The purpose of this article is to describe a particular type of interaction: the restricted variance (RV) interaction. The essence of the RV int...

  15. Variance Swaps in BM&F: Pricing and Viability of Hedge

    Directory of Open Access Journals (Sweden)

    Richard John Brostowicz Junior

    2010-07-01

    Full Text Available A variance swap can theoretically be priced with an infinite set of vanilla call and put options, considering that the realized variance follows a purely diffusive process with continuous monitoring. In this article we analyze the possible differences in pricing when realized variance is monitored discretely. We also analyze the pricing of variance swaps with payoff in dollars, since there is an OTC market that works this way and that potentially serves as a hedge for the variance swaps traded in BM&F. Additionally, we test the feasibility of hedging variance swaps when there is liquidity in just a few exercise prices, as is the case for FX options traded in BM&F. To this end, portfolios containing variance swaps and their replicating portfolios were assembled using the available exercise prices, as proposed in Demeterfi et al. (1999). With these portfolios, the effectiveness of the hedge was not robust in most of the tests conducted in this work.
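
    The replication idea referred to above can be sketched as follows (a simplified illustration in the spirit of Demeterfi et al. (1999), not the authors' implementation): the fair variance-swap strike is approximated by a discretized sum of out-of-the-money option prices weighted by 1/K²; the strikes, option prices, rate and maturity below are hypothetical, and the small adjustment for the strike nearest the forward is ignored.

    ```python
    # Minimal sketch (not from the paper): discretized static-replication estimate of the
    # fair variance-swap strike from OTM option prices weighted by 1/K^2.
    import numpy as np

    def fair_variance_strike(strikes, otm_prices, r, T):
        """K_var ~ (2 e^{rT} / T) * sum( Q(K) / K^2 * dK ) over OTM puts (K < F) and calls (K > F)."""
        strikes = np.asarray(strikes, dtype=float)
        otm_prices = np.asarray(otm_prices, dtype=float)
        dK = np.gradient(strikes)                    # strike spacing
        return 2.0 * np.exp(r * T) / T * np.sum(otm_prices / strikes**2 * dK)

    # Hypothetical 3-month option chain: OTM puts below the forward (F ~ 100), OTM calls above.
    strikes = np.arange(60.0, 145.0, 5.0)
    otm_prices = np.array([0.10, 0.20, 0.35, 0.60, 1.00, 1.70, 2.80, 4.50,          # puts, K = 60..95
                           4.30, 2.60, 1.50, 0.80, 0.40, 0.20, 0.10, 0.05, 0.02])   # calls, K = 100..140

    k_var = fair_variance_strike(strikes, otm_prices, r=0.02, T=0.25)
    print("fair variance strike ~", round(k_var, 4), "| implied vol ~", round(float(np.sqrt(k_var)), 4))
    ```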

  16. Genetic and environmental variances of bone microarchitecture and bone remodeling markers: a twin study.

    Science.gov (United States)

    Bjørnerem, Åshild; Bui, Minh; Wang, Xiaofang; Ghasem-Zadeh, Ali; Hopper, John L; Zebaze, Roger; Seeman, Ego

    2015-03-01

    All genetic and environmental factors contributing to differences in bone structure between individuals mediate their effects through the final common cellular pathway of bone modeling and remodeling. We hypothesized that genetic factors account for most of the population variance of cortical and trabecular microstructure, in particular intracortical porosity and medullary size - void volumes (porosity), which establish the internal bone surface areas or interfaces upon which modeling and remodeling deposit or remove bone to configure bone microarchitecture. Microarchitecture of the distal tibia and distal radius and remodeling markers were measured for 95 monozygotic (MZ) and 66 dizygotic (DZ) white female twin pairs aged 40 to 61 years. Images obtained using high-resolution peripheral quantitative computed tomography were analyzed using StrAx1.0, a nonthreshold-based software that quantifies cortical matrix and porosity. Genetic and environmental components of variance were estimated under the assumptions of the classic twin model. The data were consistent with the proportion of variance accounted for by genetic factors being: 72% to 81% (standard errors ∼18%) for the distal tibial total, cortical, and medullary cross-sectional area (CSA); 67% and 61% for total cortical porosity, before and after adjusting for total CSA, respectively; 51% for trabecular volumetric bone mineral density (vBMD; all p < 0.001). At the distal radius, genetic factors accounted for 47% to 68% of the variance (all p ≤ 0.001). Cross-twin cross-trait correlations between tibial cortical porosity and medullary CSA were higher for MZ (rMZ = 0.49) than DZ (rDZ = 0.27) pairs before (p = 0.024), but not after (p = 0.258), adjusting for total CSA. For the remodeling markers, the data were consistent with genetic factors accounting for 55% to 62% of the variance. We infer that middle-aged women differ in their bone microarchitecture and remodeling markers more because of differences in their genetic factors than

  17. Experimental investigation of liquid jet injection into Mach 6 hypersonic crossflow

    Energy Technology Data Exchange (ETDEWEB)

    Beloki Perurena, J. [von Karman Institute for Fluid Dynamics, Rhode-Saint-Genese (Belgium)]|[RWTH Aachen University, Shock Wave Laboratory, Aachen (Germany); Asma, C.O. [von Karman Institute for Fluid Dynamics, Rhode-Saint-Genese (Belgium)]|[Ghent University, Department of Flow, Heat and Combustion Mechanics, Ghent (Belgium); Theunissen, R. [von Karman Institute for Fluid Dynamics, Rhode-Saint-Genese (Belgium)]|[Delft University of Technology, Faculty of Aerospace Engineering, Delft (Netherlands); Chazot, O. [von Karman Institute for Fluid Dynamics, Rhode-Saint-Genese (Belgium)

    2009-03-15

    The injection of a liquid jet into a crossing Mach 6 air flow is investigated. Experiments were conducted on a sharp leading edge flat plate with flush mounted injectors. Water jets were introduced through different nozzle shapes at relevant jet-to-air momentum-flux ratios. Sufficient temporal resolution to capture small scale effects was obtained by high-speed recording, while directional illumination allowed variation in field of view. Shock pattern and flow topology were visualized by the Schlieren technique. Correlations are proposed relating water jet penetration height and lateral extension with the injection ratio and orifice diameter for circular injector jets. Penetration height and lateral extension are compared for different injector shapes at relevant jet-to-air momentum-flux ratios, showing that penetration height and lateral extension decrease and increase, respectively, with the injector's aspect ratio. Probability density function analysis has shown that the mixing of the jet with the crossflow is completed at a distance of x/d_j ∼ 40, independent of the momentum-flux ratio. Mean velocity profiles related to the liquid jet have been extracted by means of an ensemble correlation PIV algorithm. Finally, frequency analyses of the jet breakup and fluctuating shock pattern are performed using a fast Fourier algorithm and characteristic Strouhal numbers of St=0.18 for the liquid jet breakup and of St=0.011 for the separation shock fluctuation are obtained. (orig.)

  18. High-energy-density physics researches based on pulse power technology

    International Nuclear Information System (INIS)

    Horioka, Kazuhiko; Nakajima, Mitsuo; Kawamura, Tohru; Sasaki, Toru; Kondo, Kotaro; Yano, Yuuri

    2006-01-01

    Plasmas driven by pulsed power devices are of interest for research on high-energy-density (HED) physics. Dense plasmas are produced using pulse-power-driven exploding wire discharges in water. Experimental results show that the wire plasma is tamped and stabilized by the surrounding water and evolves through a strongly coupled plasma state. A shock-wave-heated, high-temperature plasma is produced in a compact pulse power device. Experimental results show that strong shock waves can be produced in the device. In particular, at low initial pressure conditions, the shock Mach number reaches 250, which indicates that the shock-heated region is dominated by radiation processes. (author)

  19. Integrating mean and variance heterogeneities to identify differentially expressed genes.

    Science.gov (United States)

    Ouyang, Weiwei; An, Qiang; Zhao, Jinying; Qin, Huaizhen

    2016-12-06

    In functional genomics studies, tests on mean heterogeneity have been widely employed to identify differentially expressed genes with distinct mean expression levels under different experimental conditions. Variance heterogeneity (aka, the difference between condition-specific variances) of gene expression levels is simply neglected or calibrated for as an impediment. The mean heterogeneity in the expression level of a gene reflects one aspect of its distribution alteration; and variance heterogeneity induced by condition change may reflect another aspect. Change in condition may alter both mean and some higher-order characteristics of the distributions of expression levels of susceptible genes. In this report, we put forth a conception of mean-variance differentially expressed (MVDE) genes, whose expression means and variances are sensitive to the change in experimental condition. We mathematically proved the null independence of existent mean heterogeneity tests and variance heterogeneity tests. Based on the independence, we proposed an integrative mean-variance test (IMVT) to combine gene-wise mean heterogeneity and variance heterogeneity induced by condition change. The IMVT outperformed its competitors under comprehensive simulations of normality and Laplace settings. For moderate samples, the IMVT well controlled type I error rates, and so did existent mean heterogeneity test (i.e., the Welch t test (WT), the moderated Welch t test (MWT)) and the procedure of separate tests on mean and variance heterogeneities (SMVT), but the likelihood ratio test (LRT) severely inflated type I error rates. In presence of variance heterogeneity, the IMVT appeared noticeably more powerful than all the valid mean heterogeneity tests. Application to the gene profiles of peripheral circulating B raised solid evidence of informative variance heterogeneity. After adjusting for background data structure, the IMVT replicated previous discoveries and identified novel experiment
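
    The idea of exploiting the null independence of mean and variance tests can be illustrated with a generic sketch (not the authors' IMVT): a Welch t test p-value and a Levene variance-test p-value for one gene are combined with Fisher's method; the simulated data and group sizes are illustrative.

    ```python
    # Generic sketch (not the authors' IMVT): combine mean- and variance-heterogeneity p-values.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(6)
    # Hypothetical expression levels of one gene under two conditions: same mean, different variance.
    x = rng.normal(loc=5.0, scale=1.0, size=30)
    y = rng.normal(loc=5.0, scale=2.5, size=30)

    _, p_mean = stats.ttest_ind(x, y, equal_var=False)            # Welch t test (mean heterogeneity)
    _, p_var = stats.levene(x, y)                                 # variance heterogeneity
    _, p_comb = stats.combine_pvalues([p_mean, p_var], method="fisher")

    print(f"p_mean={p_mean:.3g}  p_var={p_var:.3g}  combined={p_comb:.3g}")
    ```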

  20. Simultaneous Monte Carlo zero-variance estimates of several correlated means

    International Nuclear Information System (INIS)

    Booth, T.E.

    1998-01-01

    Zero-variance biasing procedures are normally associated with estimating a single mean or tally. In particular, a zero-variance solution occurs when every sampling is made proportional to the product of the true probability multiplied by the expected score (importance) subsequent to the sampling; i.e., the zero-variance sampling is importance weighted. Because every tally has a different importance function, a zero-variance biasing for one tally cannot be a zero-variance biasing for another tally (unless the tallies are perfectly correlated). The way to optimize the situation when the required tallies have positive correlation is shown
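
    The importance-weighted sampling idea can be made concrete with a small sketch (ours, not from the paper): for a single one-step tally with positive scores, drawing each outcome with probability proportional to (true probability × score) makes every weighted sample return the same value, i.e. a zero-variance estimate of the mean; the probabilities and scores are arbitrary.

    ```python
    # Minimal sketch (not from the paper): zero-variance importance sampling for one discrete tally.
    import numpy as np

    rng = np.random.default_rng(1)
    p = np.array([0.5, 0.3, 0.2])        # true outcome probabilities (illustrative)
    s = np.array([1.0, 4.0, 10.0])       # scores (tally contributions), all positive
    mean_true = np.sum(p * s)

    q = p * s / mean_true                # biased sampling proportional to probability x score
    draws = rng.choice(len(p), size=10000, p=q)
    estimates = s[draws] * p[draws] / q[draws]   # weight p/q restores unbiasedness

    print("true mean:", mean_true)
    print("estimate :", estimates.mean(), " variance:", estimates.var())   # variance ~ 0
    ```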

  1. Low Mach number analysis of idealized thermoacoustic engines with numerical solution.

    Science.gov (United States)

    Hireche, Omar; Weisman, Catherine; Baltean-Carlès, Diana; Le Quéré, Patrick; Bauwens, Luc

    2010-12-01

    A model of an idealized thermoacoustic engine is formulated, coupling nonlinear flow and heat exchange in the heat exchangers and stack with a simple linear acoustic model of the resonator and load. Correct coupling results in an asymptotically consistent global model, in the small Mach number approximation. A well-resolved numerical solution is obtained for two-dimensional heat exchangers and stack. The model assumes that the heat exchangers and stack are shorter than the overall length by a factor of the order of a representative Mach number. The model is well-suited for simulation of the entire startup process, whereby as a result of some excitation, an initially specified temperature profile in the stack evolves toward a near-steady profile, eventually reaching stationary operation. A validation analysis is presented, together with results showing the early amplitude growth and approach of a stationary regime. Two types of initial excitation are used: Random noise and a small periodic wave. The set of assumptions made leads to a heat-exchanger section that acts as a source of volume but is transparent to pressure and to a local heat-exchanger model characterized by a dynamically incompressible flow to which a locally spatially uniform acoustic pressure fluctuation is superimposed.

  2. Comparing estimates of genetic variance across different relationship models.

    Science.gov (United States)

    Legarra, Andres

    2016-02-01

    Use of relationships between individuals to estimate genetic variances and heritabilities via mixed models is standard practice in human, plant and livestock genetics. Different models or information for relationships may give different estimates of genetic variances. However, comparing these estimates across different relationship models is not straightforward as the implied base populations differ between relationship models. In this work, I present a method to compare estimates of variance components across different relationship models. I suggest referring genetic variances obtained using different relationship models to the same reference population, usually a set of individuals in the population. Expected genetic variance of this population is the estimated variance component from the mixed model times a statistic, Dk, which is the average self-relationship minus the average (self- and across-) relationship. For most typical models of relationships, Dk is close to 1. However, this is not true for very deep pedigrees, for identity-by-state relationships, or for non-parametric kernels, which tend to overestimate the genetic variance and the heritability. Using mice data, I show that heritabilities from identity-by-state and kernel-based relationships are overestimated. Weighting these estimates by Dk scales them to a base comparable to genomic or pedigree relationships, avoiding wrong comparisons, for instance, "missing heritabilities". Copyright © 2015 Elsevier Inc. All rights reserved.
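
    A minimal sketch (assuming a small, hypothetical relationship matrix) of the rescaling described in the abstract: the statistic Dk is the average self-relationship minus the average of all relationships among the reference individuals, and the estimated genetic variance is multiplied by Dk to refer it to that reference population.

    ```python
    # Minimal sketch (not from the paper): compute Dk and rescale an estimated genetic variance.
    import numpy as np

    def dk_statistic(K: np.ndarray) -> float:
        """Average self-relationship minus the average of all (self- and across-) relationships."""
        return float(np.mean(np.diag(K)) - np.mean(K))

    K = np.array([[1.00, 0.50, 0.25],
                  [0.50, 1.00, 0.25],
                  [0.25, 0.25, 1.00]])       # hypothetical relationship matrix (pedigree, genomic, or kernel)

    sigma2_g = 10.0                          # genetic variance estimated under this relationship model
    sigma2_ref = sigma2_g * dk_statistic(K)  # expected genetic variance in the reference population
    print("Dk =", dk_statistic(K), " rescaled variance =", sigma2_ref)
    ```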

  3. Variance estimation in the analysis of microarray data

    KAUST Repository

    Wang, Yuedong

    2009-04-01

    Microarrays are one of the most widely used high throughput technologies. One of the main problems in the area is that conventional estimates of the variances that are required in the t-statistic and other statistics are unreliable owing to the small number of replications. Various methods have been proposed in the literature to overcome this lack of degrees of freedom problem. In this context, it is commonly observed that the variance increases proportionally with the intensity level, which has led many researchers to assume that the variance is a function of the mean. Here we concentrate on estimation of the variance as a function of an unknown mean in two models: the constant coefficient of variation model and the quadratic variance-mean model. Because the means are unknown and estimated with few degrees of freedom, naive methods that use the sample mean in place of the true mean are generally biased because of the errors-in-variables phenomenon. We propose three methods for overcoming this bias. The first two are variations on the theme of the so-called heteroscedastic simulation-extrapolation estimator, modified to estimate the variance function consistently. The third class of estimators is entirely different, being based on semiparametric information calculations. Simulations show the power of our methods and their lack of bias compared with the naive method that ignores the measurement error. The methodology is illustrated by using microarray data from leukaemia patients.

  4. Variance computations for functional of absolute risk estimates.

    Science.gov (United States)

    Pfeiffer, R M; Petracci, E

    2011-07-01

    We present a simple influence function based approach to compute the variances of estimates of absolute risk and functions of absolute risk. We apply this approach to criteria that assess the impact of changes in the risk factor distribution on absolute risk for an individual and at the population level. As an illustration we use an absolute risk prediction model for breast cancer that includes modifiable risk factors in addition to standard breast cancer risk factors. Influence function based variance estimates for absolute risk and the criteria are compared to bootstrap variance estimates.

  5. 76 FR 78698 - Proposed Revocation of Permanent Variances

    Science.gov (United States)

    2011-12-19

    ... Administration (``OSHA'' or ``the Agency'') granted permanent variances to 24 companies engaged in the... DEPARTMENT OF LABOR Occupational Safety and Health Administration [Docket No. OSHA-2011-0054] Proposed Revocation of Permanent Variances AGENCY: Occupational Safety and Health Administration (OSHA...

  6. Ernst Mach and George Sarton's Successors: The Implicit Role Model of Teaching Science in USA and Elsewhere, Part II

    Science.gov (United States)

    Siemsen, Hayo

    2013-05-01

    George Sarton had a strong influence on modern history of science. The method he pursued throughout his life was the method he had discovered in Ernst Mach's Mechanics when he was a student in Ghent. Sarton was in fact throughout his life implementing a research program inspired by the epistemology of Mach. Sarton in turn inspired many others in several generations (James Conant, Thomas Kuhn, Gerald Holton, etc.). What were the origins of these ideas in Mach and what can this origin tell us about the history of science and science education nowadays? Which ideas proved to be successful and which ones need to be improved upon? The following article will elaborate the epistemological questions, which Charles Darwin's "Origin" raised concerning human knowledge and scientific knowledge and which led Mach to adapt the concept of what is "empirical" in contrast to metaphysical a priori assumptions a second time after Galileo. On this basis Sarton proposed "genesis and development" as the major goal of his journal Isis. Mach had elaborated this epistemology in La Connaissance et l'Erreur ( Knowledge and Error), which Sarton read in 1911 (Hiebert in Knowledge and error. Reidel, Dordrecht, 1976; de Mey in George Sarton centennial. Communication & Cognition, Ghent, pp. 3-6, 1984). Accordingly for Sarton, history becomes not only a subject of science, but a method of science education. Culture—and science as part of culture—is a result of a genetic process. History of science shapes and is shaped by science and science education in a reciprocal process. Its epistemology needs to be adapted to scientific facts and the philosophy of science. Sarton was well aware of the need to develop the history of science and the philosophy of science along the lines of this reciprocal process. It was a very fruitful basis, but a specific part of it Sarton did not elaborate further, namely the erkenntnis-theory and psychology of science education. This proved to be a crucial missing

  7. Diagnostic checking in linear processes with infinite variance

    OpenAIRE

    Krämer, Walter; Runde, Ralf

    1998-01-01

    We consider empirical autocorrelations of residuals from infinite variance autoregressive processes. Unlike the finite-variance case, it emerges that the limiting distribution, after suitable normalization, is not always more concentrated around zero when residuals rather than true innovations are employed.

  8. RR-Interval variance of electrocardiogram for atrial fibrillation detection

    Science.gov (United States)

    Nuryani, N.; Solikhah, M.; Nugoho, A. S.; Afdala, A.; Anzihory, E.

    2016-11-01

    Atrial fibrillation is a serious heart problem originating in the upper chamber of the heart. The common indication of atrial fibrillation is irregularity of the R-peak-to-R-peak time interval, commonly called the RR interval. This irregularity can be represented by the variance, or spread, of the RR intervals. This article presents a system to detect atrial fibrillation using such variances. Using clinical data from patients with atrial fibrillation attacks, it is shown that the variance of the electrocardiographic RR intervals is higher during atrial fibrillation than during normal rhythm. Utilizing a simple detection technique based on the variance of RR intervals, we obtain good atrial fibrillation detection performance.
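
    A minimal sketch (not the authors' system) of the underlying computation: the variance of successive RR intervals is computed from R-peak times and compared with a threshold; the peak times and the threshold value are made-up illustrative numbers that would have to be tuned on clinical data.

    ```python
    # Minimal sketch (not from the paper): flag possible atrial fibrillation from RR-interval variance.
    import numpy as np

    def rr_variance(r_peak_times_s: np.ndarray) -> float:
        """Variance of successive R-peak-to-R-peak intervals (seconds^2)."""
        rr = np.diff(r_peak_times_s)
        return float(np.var(rr))

    # Hypothetical R-peak times (seconds): regular rhythm vs. irregular rhythm.
    regular = np.cumsum(np.full(20, 0.80))
    irregular = np.cumsum(0.80 + 0.25 * np.random.default_rng(2).standard_normal(20))

    THRESHOLD = 0.01   # made-up variance threshold in s^2
    for name, peaks in [("regular", regular), ("irregular", irregular)]:
        v = rr_variance(peaks)
        print(f"{name:9s} RR variance = {v:.4f}  ->  AF suspected: {v > THRESHOLD}")
    ```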

  9. Continuous-Time Mean-Variance Portfolio Selection under the CEV Process

    OpenAIRE

    Ma, Hui-qiang

    2014-01-01

    We consider a continuous-time mean-variance portfolio selection model when stock price follows the constant elasticity of variance (CEV) process. The aim of this paper is to derive an optimal portfolio strategy and the efficient frontier. The mean-variance portfolio selection problem is formulated as a linearly constrained convex program problem. By employing the Lagrange multiplier method and stochastic optimal control theory, we obtain the optimal portfolio strategy and mean-variance effici...

  10. Variance based OFDM frame synchronization

    Directory of Open Access Journals (Sweden)

    Z. Fedra

    2012-04-01

    Full Text Available The paper deals with a new frame synchronization scheme for OFDM systems and calculates the complexity of this scheme. The scheme is based on computing the variance of a detection window. The variance is computed at two delayed time instants, so a modified Early-Late loop is used for frame position detection. The proposed algorithm handles different variants of OFDM parameters, including the guard interval and cyclic prefix, and has good properties regarding the choice of the algorithm's parameters, since they may be chosen within a wide range without strongly influencing system performance. The functionality of the proposed algorithm has been verified in a development environment using universal software radio peripheral (USRP) hardware.
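
    A loose, generic sketch of the principle described above (not the paper's algorithm): the variance of a sliding detection window is used to locate a low-power gap in a synthetic baseband stream, and an Early-Late style error term is formed from the variance evaluated at two delayed positions; the window length, delay, and test signal are all illustrative assumptions.

    ```python
    # Generic sketch (not the paper's algorithm): sliding-window variance with an Early-Late error term.
    import numpy as np

    rng = np.random.default_rng(3)
    N, gap_start, gap_len = 2048, 900, 64
    signal = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
    signal[gap_start:gap_start + gap_len] *= 0.05          # low-power guard/null region

    def window_variance(x: np.ndarray, start: int, length: int) -> float:
        w = x[start:start + length]
        return float(np.mean(np.abs(w - w.mean()) ** 2))

    variances = np.array([window_variance(signal, k, gap_len) for k in range(N - gap_len)])
    coarse = int(np.argmin(variances))                      # coarse frame-position estimate

    delta = 8                                               # early/late offset in samples
    error = window_variance(signal, coarse + delta, gap_len) \
          - window_variance(signal, coarse - delta, gap_len)   # drives a tracking loop toward 0
    print("coarse position:", coarse, " early-late error:", error)
    ```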

  11. PWFA plasma source - interferometric diagnostics for Li vapor density measurements

    International Nuclear Information System (INIS)

    Sivakumaran, V.; Mohandas, K.K.; Singh, Sneha; Ravi Kumar, A.V.

    2015-01-01

    A prototype (40 cm long) plasma source based on a Li heat pipe oven has been developed for the Plasma Wakefield Acceleration (PWFA) experiments at the Institute for Plasma Research (IPR), Gujarat, as a part of the ongoing Accelerator Programme. Li vapor in the oven is produced by heating solid Li in a helium buffer gas. A uniform column of Li plasma is generated by UV photoionization (193 nm) of the Li vapor in the heat pipe oven. In these experiments, an accurate measurement of the Li vapor density is important as it has a direct bearing on the plasma electron density. In the present experiment, the vapor density is measured optically using the Hook method (spectrally resolved white light interferometry). The hook-like structure formed in the vicinity of the Li 670.8 nm resonance line was recorded with a white light Mach Zehnder interferometer crossed with an imaging spectrograph to estimate the Li vapor density. The vapor density measurements have been carried out as a function of the external oven temperature and the He buffer gas pressure. This technique has the advantages of being insensitive to line broadening and line shape, and of a high dynamic range even with optically thick absorption lines. Here, we present the line-integrated lithium vapor density measurement using the Hook method and also compare it with other optical diagnostic techniques (white light absorption and UV absorption) for Li vapor density measurements. (author)

  12. Influences of mach number and flow incidence on aerodynamic losses of steam turbine blade

    International Nuclear Information System (INIS)

    Yoo, Seok Jae; Ng, Wing Fai

    2000-01-01

    An experiment was conducted to investigate the aerodynamic losses of a high pressure steam turbine nozzle (526A) subjected to a large range of incidence angles (-34° to 26°) and exit Mach numbers (0.6 and 1.15). Measurements included downstream pitot probe traverses, upstream total pressure, and endwall static pressures. Flow visualization techniques such as shadowgraph and color oil flow visualization were performed to complement the measured data. When the exit Mach number of the nozzles increased from 0.9 to 1.1, the total pressure loss coefficient increased by a factor of 7 as compared to the total pressure losses measured at subsonic conditions (M2 < 0.9). For the range of incidence tested, the effect of flow incidence on the total pressure losses is less pronounced. Based on the shadowgraphs taken during the experiment, it is believed that the large increase in losses at transonic conditions is due to a strong shock/boundary-layer interaction that may lead to flow separation on the blade suction surface

  13. Beyond the Mean: Sensitivities of the Variance of Population Growth.

    Science.gov (United States)

    Trotter, Meredith V; Krishna-Kumar, Siddharth; Tuljapurkar, Shripad

    2013-03-01

    Populations in variable environments are described by both a mean growth rate and a variance of stochastic population growth. Increasing variance will increase the width of confidence bounds around estimates of population size, growth, probability of and time to quasi-extinction. However, traditional sensitivity analyses of stochastic matrix models only consider the sensitivity of the mean growth rate. We derive an exact method for calculating the sensitivity of the variance in population growth to changes in demographic parameters. Sensitivities of the variance also allow a new sensitivity calculation for the cumulative probability of quasi-extinction. We apply this new analysis tool to an empirical dataset on at-risk polar bears to demonstrate its utility in conservation biology. We find that in many cases a change in life history parameters will increase both the mean and variance of population growth of polar bears. This counterintuitive behaviour of the variance complicates predictions about overall population impacts of management interventions. Sensitivity calculations for cumulative extinction risk factor in changes to both mean and variance, providing a highly useful quantitative tool for conservation management. The mean stochastic growth rate and its sensitivities do not fully describe the dynamics of population growth. The use of variance sensitivities gives a more complete understanding of population dynamics and facilitates the calculation of new sensitivities for extinction processes.

  14. Evaluation of Mean and Variance Integrals without Integration

    Science.gov (United States)

    Joarder, A. H.; Omar, M. H.

    2007-01-01

    The mean and variance of some continuous distributions, in particular the exponentially decreasing probability distribution and the normal distribution, are considered. Since they involve integration by parts, many students do not feel comfortable. In this note, a technique is demonstrated for deriving mean and variance through differential…
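
    One standard instance of such a technique (our illustration, using the exponential density f(x) = λe^{-λx} as the "exponentially decreasing probability distribution" mentioned in the abstract): differentiating the normalization identity with respect to the parameter yields the required moment integrals without integration by parts.

    ```latex
    % Normalization identity and its derivatives with respect to \lambda:
    \int_0^\infty e^{-\lambda x}\,dx = \frac{1}{\lambda}
    \quad\Longrightarrow\quad
    \int_0^\infty x\,e^{-\lambda x}\,dx = \frac{1}{\lambda^{2}},
    \qquad
    \int_0^\infty x^{2}\,e^{-\lambda x}\,dx = \frac{2}{\lambda^{3}}.

    % Hence, for the density f(x) = \lambda e^{-\lambda x}:
    \mathrm{E}[X] = \lambda\cdot\frac{1}{\lambda^{2}} = \frac{1}{\lambda},
    \qquad
    \mathrm{E}[X^{2}] = \lambda\cdot\frac{2}{\lambda^{3}} = \frac{2}{\lambda^{2}},
    \qquad
    \operatorname{Var}(X) = \frac{2}{\lambda^{2}} - \frac{1}{\lambda^{2}} = \frac{1}{\lambda^{2}}.
    ```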

  15. Use of Vortex Generators to Reduce Distortion for Mach 1.6 Streamline-Traced Supersonic Inlets

    Science.gov (United States)

    Baydar, Ezgihan; Lu, Frank; Slater, John W.; Trefny, Chuck

    2016-01-01

    The objective is to reduce the total pressure distortion at the engine-fan face due to low-momentum flow caused by the interaction of the external terminal shock with the turbulent boundary layer along a streamline-traced external-compression (STEX) inlet for Mach 1.6.

  16. Approximate zero-variance Monte Carlo estimation of Markovian unreliability

    International Nuclear Information System (INIS)

    Delcoux, J.L.; Labeau, P.E.; Devooght, J.

    1997-01-01

    Monte Carlo simulation has become an important tool for the estimation of reliability characteristics, since conventional numerical methods are no longer efficient when the size of the system to be solved increases. However, evaluating the probability of occurrence of very rare events by simulation means playing a very large number of histories of the system, which leads to unacceptable computation times. Acceleration and variance reduction techniques therefore have to be worked out. We show in this paper how to write the equations of Markovian reliability as a transport problem, and how the well-known zero-variance scheme can be adapted to this application. Such a method, however, is always specific to the estimation of one quantity, while a Monte Carlo simulation allows several quantities to be estimated simultaneously. Consequently, the estimation of one of them could be made more accurate while at the same time degrading the variance of the other estimates. We propound here a method to reduce the variance for several quantities simultaneously, by using probability laws that would lead to zero variance in the estimation of a mean of these quantities. Just like the zero-variance scheme, the method we propound is impossible to perform exactly. However, we show that simple approximations of it may be very efficient. (author)

  17. Analytical and Experimental Investigation of Inlet-engine Matching for Turbojet-powered Aircraft at Mach Numbers up to 2.0

    Science.gov (United States)

    Esenwein, Fred T; Schueller, Carl F

    1952-01-01

    An analysis of inlet-turbojet-engine matching for a range of Mach numbers up to 2.0 indicates large performance penalties when fixed-geometry inlets are used. Use of variable-geometry inlets, however, nearly eliminates these penalties. The analysis was confirmed experimentally by investigating, at Mach numbers of 0, 0.63, and 1.5 to 2.0, two single oblique-shock-type inlets of different compression-ramp angles, which simulated a variable-geometry configuration. The experimental investigation indicated that total-pressure recoveries comparable with those attainable with well designed nose inlets were obtained with the side inlets when all the boundary layer ahead of the inlets was removed. Serious drag penalties resulted at a Mach number of 2.0 from the use of blunt-cowl leading edges. However, sharp-lip inlets produced large losses in thrust for the take-off condition. These thrust penalties, which are associated with the low-speed operation of the sharp-lip inlet designs, can probably be avoided without impairing the supersonic performance of the inlet by the use of auxiliary inlets or blow-in doors.

  18. Continuous-Time Mean-Variance Portfolio Selection under the CEV Process

    Directory of Open Access Journals (Sweden)

    Hui-qiang Ma

    2014-01-01

    Full Text Available We consider a continuous-time mean-variance portfolio selection model when stock price follows the constant elasticity of variance (CEV process. The aim of this paper is to derive an optimal portfolio strategy and the efficient frontier. The mean-variance portfolio selection problem is formulated as a linearly constrained convex program problem. By employing the Lagrange multiplier method and stochastic optimal control theory, we obtain the optimal portfolio strategy and mean-variance efficient frontier analytically. The results show that the mean-variance efficient frontier is still a parabola in the mean-variance plane, and the optimal strategies depend not only on the total wealth but also on the stock price. Moreover, some numerical examples are given to analyze the sensitivity of the efficient frontier with respect to the elasticity parameter and to illustrate the results presented in this paper. The numerical results show that the price of risk decreases as the elasticity coefficient increases.

  19. Plasma wave profiles of Earth's bow shock at low Mach number: ISEE 3 observations on the far flank

    International Nuclear Information System (INIS)

    Greenstadt, E.W.; Coroniti, F.V.; Moses, S.L.; Smith, E.J.

    1992-01-01

    The Earth's bow shock is weak along its distant flanks where the projected component of solar wind velocity normal to the hyperboloidal surface is only a fraction of the total free stream velocity, severely reducing the local Mach number. The authors present a survey of selected crossings far downstream from the subsolar shock, delineating the overall plasma wave (pw) behavior of a selected set of nearly perpendicular crossings and another set of limited Mach number but broad geometry; they include their immediate upstream regions. The result is a generalizable pw signature, or signatures, of low Mach number shocks and some likely implications of those signatures for the weak shock's plasma physical processes on the flank. They find the data consistent with the presence of ion beam interactions producing noise ahead of the shock in the ion acoustic frequency range. One subcritical case was found whose pw noise was presumably related to a reflected ion population just as in stronger events. The presence or absence, and the amplitudes, of pw activity are explainable by the presence or absence of a population of upstream ions controlled by the component of interplanetary magnetic field normal to the solar wind flow

  20. Single-pulse measurement of density and temperature in a turbulent, supersonic flow using UV laser spectroscopy

    Science.gov (United States)

    Fletcher, D. G.; Mckenzie, R. L.

    1992-01-01

    Nonintrusive measurements of density and temperature and their turbulent fluctuation levels have been obtained in the boundary layer of an unseeded, Mach 2 wind tunnel flow. The spectroscopic technique that was used to make the measurements is based on the combination of laser-induced oxygen fluorescence and Raman scattering by oxygen and nitrogen from the same laser pulse. Results from this demonstration experiment compare favorably with previous measurements obtained in the same facility from conventional probes and an earlier spectroscopic technique.

  1. Variance in binary stellar population synthesis

    Science.gov (United States)

    Breivik, Katelyn; Larson, Shane L.

    2016-03-01

    In the years preceding LISA, Milky Way compact binary population simulations can be used to inform the science capabilities of the mission. Galactic population simulation efforts generally focus on high fidelity models that require extensive computational power to produce a single simulated population for each model. Each simulated population represents an incomplete sample of the functions governing compact binary evolution, thus introducing variance from one simulation to another. We present a rapid Monte Carlo population simulation technique that can simulate thousands of populations in less than a week, thus allowing a full exploration of the variance associated with a binary stellar evolution model.

  2. On-chip Mach-Zehnder interferometer for OCT systems

    Science.gov (United States)

    van Leeuwen, Ton G.; Akca, Imran B.; Angelou, Nikolaos; Weiss, Nicolas; Hoekman, Marcel; Leinse, Arne; Heideman, Rene G.

    2018-04-01

    By using integrated optics, it is possible to reduce the size and cost of a bulky optical coherence tomography (OCT) system. One of the OCT components that can be implemented on-chip is the interferometer. In this work, we present the design and characterization of a Mach-Zehnder interferometer consisting of wavelength-independent splitters and an on-chip reference arm. Si3N4 was chosen as the material platform because it provides low losses while keeping the device size small. The device was characterized using a home-built swept-source OCT system. A sensitivity of 83 dB, an axial resolution of 15.2 μm (in air) and a depth range of 2.5 mm (in air) were obtained.

  3. A Mean variance analysis of arbitrage portfolios

    Science.gov (United States)

    Fang, Shuhong

    2007-03-01

    Based on the careful analysis of the definition of arbitrage portfolio and its return, the author presents a mean-variance analysis of the return of arbitrage portfolios, which implies that Korkie and Turtle's results ( B. Korkie, H.J. Turtle, A mean-variance analysis of self-financing portfolios, Manage. Sci. 48 (2002) 427-443) are misleading. A practical example is given to show the difference between the arbitrage portfolio frontier and the usual portfolio frontier.

  4. Mean-Variance Optimization in Markov Decision Processes

    OpenAIRE

    Mannor, Shie; Tsitsiklis, John N.

    2011-01-01

    We consider finite horizon Markov decision processes under performance measures that involve both the mean and the variance of the cumulative reward. We show that either randomized or history-based policies can improve performance. We prove that the complexity of computing a policy that maximizes the mean reward under a variance constraint is NP-hard for some cases, and strongly NP-hard for others. We finally offer pseudo-polynomial exact and approximation algorithms.

  5. Gender Variance and Educational Psychology: Implications for Practice

    Science.gov (United States)

    Yavuz, Carrie

    2016-01-01

    The area of gender variance appears to be more visible in both the media and everyday life. Within educational psychology literature gender variance remains underrepresented. The positioning of educational psychologists working across the three levels of child and family, school or establishment and education authority/council, means that they are…

  6. Demonstration of a zero-variance based scheme for variance reduction to a mini-core Monte Carlo calculation

    Energy Technology Data Exchange (ETDEWEB)

    Christoforou, Stavros, E-mail: stavros.christoforou@gmail.com [Kirinthou 17, 34100, Chalkida (Greece); Hoogenboom, J. Eduard, E-mail: j.e.hoogenboom@tudelft.nl [Department of Applied Sciences, Delft University of Technology (Netherlands)

    2011-07-01

    A zero-variance based scheme is implemented and tested in the MCNP5 Monte Carlo code. The scheme is applied to a mini-core reactor using the adjoint function obtained from a deterministic calculation for biasing the transport kernels. It is demonstrated that the variance of the k_eff estimate is halved compared to a standard criticality calculation. In addition, the biasing does not affect source distribution convergence of the system. However, since the code lacked optimisations for speed, we were not able to demonstrate an appropriate increase in the efficiency of the calculation, because of the higher CPU time cost. (author)

  7. Variance-in-Mean Effects of the Long Forward-Rate Slope

    DEFF Research Database (Denmark)

    Christiansen, Charlotte

    2005-01-01

    This paper contains an empirical analysis of the dependence of the long forward-rate slope on the long-rate variance. The long forward-rate slope and the long rate are described by a bivariate GARCH-in-mean model. In accordance with theory, a negative long-rate variance-in-mean effect for the long...... forward-rate slope is documented. Thus, the greater the long-rate variance, the steeper the long forward-rate curve slopes downward (the long forward-rate slope is negative). The variance-in-mean effect is both statistically and economically significant....

  8. Variance-based sensitivity indices for models with dependent inputs

    International Nuclear Information System (INIS)

    Mara, Thierry A.; Tarantola, Stefano

    2012-01-01

    Computational models are intensively used in engineering for risk analysis or prediction of future outcomes. Uncertainty and sensitivity analyses are of great help in these purposes. Although several methods exist to perform variance-based sensitivity analysis of model output with independent inputs only a few are proposed in the literature in the case of dependent inputs. This is explained by the fact that the theoretical framework for the independent case is set and a univocal set of variance-based sensitivity indices is defined. In the present work, we propose a set of variance-based sensitivity indices to perform sensitivity analysis of models with dependent inputs. These measures allow us to distinguish between the mutual dependent contribution and the independent contribution of an input to the model response variance. Their definition relies on a specific orthogonalisation of the inputs and ANOVA-representations of the model output. In the applications, we show the interest of the new sensitivity indices for model simplification setting. - Highlights: ► Uncertainty and sensitivity analyses are of great help in engineering. ► Several methods exist to perform variance-based sensitivity analysis of model output with independent inputs. ► We define a set of variance-based sensitivity indices for models with dependent inputs. ► Inputs mutual contributions are distinguished from their independent contributions. ► Analytical and computational tests are performed and discussed.
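
    For reference, the classical independent-input setting that the abstract contrasts with can be sketched as follows (our illustration, with an arbitrary test model): a pick-freeze Monte Carlo estimate of the first-order index S_i = Var(E[Y|X_i]) / Var(Y).

    ```python
    # Minimal sketch (not from the paper): pick-freeze estimate of first-order variance-based
    # sensitivity indices for a model with *independent* inputs.
    import numpy as np

    def model(x):                      # illustrative model: Y = X1 + 2*X2 + X1*X3
        return x[:, 0] + 2.0 * x[:, 1] + x[:, 0] * x[:, 2]

    rng = np.random.default_rng(4)
    n, d = 200_000, 3
    A = rng.standard_normal((n, d))
    B = rng.standard_normal((n, d))

    yA = model(A)
    var_y = yA.var()
    for i in range(d):
        Bi = B.copy()
        Bi[:, i] = A[:, i]             # "freeze" input i, resample the others
        yBi = model(Bi)
        cov = np.mean(yA * yBi) - np.mean(yA) * np.mean(yBi)   # estimates Var(E[Y|X_i])
        print(f"S_{i + 1} ~ {cov / var_y:.3f}")                # expected ~0.167, 0.667, 0.0
    ```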

  9. Simultaneous Monte Carlo zero-variance estimates of several correlated means

    International Nuclear Information System (INIS)

    Booth, T.E.

    1997-08-01

    Zero variance procedures have been in existence since the dawn of Monte Carlo. Previous works all treat the problem of zero variance solutions for a single tally. One often wants to get low variance solutions to more than one tally. When the sets of random walks needed for two tallies are similar, it is more efficient to do zero variance biasing for both tallies in the same Monte Carlo run, instead of two separate runs. The theory presented here correlates the random walks of particles by the similarity of their tallies. Particles with dissimilar tallies rapidly become uncorrelated whereas particles with similar tallies will stay correlated through most of their random walk. The theory herein should allow practitioners to make efficient use of zero-variance biasing procedures in practical problems

  10. Asymmetric Mach-Zehnder Interferometer Based Biosensors for Aflatoxin M1 Detection.

    Science.gov (United States)

    Chalyan, Tatevik; Guider, Romain; Pasquardini, Laura; Zanetti, Manuela; Falke, Floris; Schreuder, Erik; Heideman, Rene G; Pederzolli, Cecilia; Pavesi, Lorenzo

    2016-01-06

    In this work, we present a study of Aflatoxin M1 detection by photonic biosensors based on a Si₃N₄ Asymmetric Mach-Zehnder Interferometer (aMZI) functionalized with antibody fragments (Fab'). We measured a best volumetric sensitivity of 10⁴ rad/RIU, leading to a Limit of Detection below 5 × 10⁻⁷ RIU. On sensors functionalized with Fab', we performed specific and non-specific sensing measurements at various toxin concentrations. Reproducibility of the measurements and re-usability of the sensor were also investigated.

  11. Variance swap payoffs, risk premia and extreme market conditions

    DEFF Research Database (Denmark)

    Rombouts, Jeroen V.K.; Stentoft, Lars; Violante, Francesco

    This paper estimates the Variance Risk Premium (VRP) directly from synthetic variance swap payoffs. Since variance swap payoffs are highly volatile, we extract the VRP by using signal extraction techniques based on a state-space representation of our model in combination with a simple economic....... The latter variables and the VRP generate different return predictability on the major US indices. A factor model is proposed to extract a market VRP which turns out to be priced when considering Fama and French portfolios....

  12. Consistency of the Mach principle and the gravitational-to-inertial mass equivalence principle

    International Nuclear Information System (INIS)

    Granada, Kh.K.; Chubykalo, A.E.

    1990-01-01

    Kinematics of the system, composed of two bodies interacting with each other according to an inverse-square law, was investigated. It is shown that the Mach principle, earlier rejected by general relativity theory, can be used as an alternative to the absolute-space concept if it is proposed that the distant star background dictates both the inertial and the gravitational mass of a body

  13. Tunable microwave photonic filter free from baseband and carrier suppression effect not requiring single sideband modulation using a Mach-Zehnder configuration.

    Science.gov (United States)

    Mora, José; Ortigosa-Blanch, Arturo; Pastor, Daniel; Capmany, José

    2006-08-21

    We present a full theoretical and experimental analysis of a novel all-optical microwave photonic filter combining a mode-locked fiber laser and a Mach-Zehnder structure in cascade to a 2x1 electro-optic modulator. The filter is free from the carrier suppression effect and thus it does not require single sideband modulation. Positive and negative coefficients are obtained inherently in the system and the tunability is achieved by controlling the optical path difference of the Mach-Zehnder structure.

  14. Estimating quadratic variation using realized variance

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Shephard, N.

    2002-01-01

    with a rather general SV model - which is a special case of the semimartingale model. Then QV is integrated variance and we can derive the asymptotic distribution of the RV and its rate of convergence. These results do not require us to specify a model for either the drift or volatility functions, although we...... have to impose some weak regularity assumptions. We illustrate the use of the limit theory on some exchange rate data and some stock data. We show that even with large values of M the RV is sometimes a quite noisy estimator of integrated variance. Copyright © 2002 John Wiley & Sons, Ltd....
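    A minimal sketch of the realized variance estimator discussed above: the sum of squared intraday returns over one "day" of simulated log-prices; the sampling grid and volatility are illustrative assumptions (Python):

        import numpy as np

        rng = np.random.default_rng(1)
        M = 288                                    # intraday returns, e.g. a 5-minute grid
        sigma_daily = 0.01                         # assumed daily volatility of the log-price
        log_price = np.cumsum(rng.normal(0.0, sigma_daily / np.sqrt(M), size=M))

        intraday_returns = np.diff(log_price, prepend=0.0)
        rv = np.sum(intraday_returns ** 2)         # realized variance, estimates integrated variance
        print(f"RV = {rv:.3e}, true integrated variance = {sigma_daily ** 2:.3e}")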

  15. Dynamics of Variance Risk Premia, Investors' Sentiment and Return Predictability

    DEFF Research Database (Denmark)

    Rombouts, Jerome V.K.; Stentoft, Lars; Violante, Francesco

    We develop a joint framework linking the physical variance and its risk neutral expectation implying variance risk premia that are persistent, appropriately reacting to changes in level and variability of the variance and naturally satisfying the sign constraint. Using option market data and real...... events and only marginally by the premium associated with normal price fluctuations....

  16. Ulysses observations of a 'density hole' in the high-speed solar wind

    International Nuclear Information System (INIS)

    Riley, P.; Gosling, J.T.; McComas, D.J.; Forsyth, R.J.

    1998-01-01

    Ulysses observations at mid and high heliographic latitudes have revealed a solar wind devoid of the large variations in density, temperature, and speed that are commonly observed at low latitudes. One event, however, observed on May 1, 1996, while Ulysses was located at ∼3.7 AU and 38.5°, stands out in the plasma data set. The structure, which is unique in the Ulysses high-latitude data set, is seen as a drop in proton density of almost an order of magnitude and a comparable rise in proton temperature. The event lasts ∼3.5 hours, giving the structure a size of ∼9.6×10⁶ km (0.06 AU) along the spacecraft trajectory. Minimum variance analysis of this interval indicates that the angle between the average magnetic field direction and the minimum variance direction is ∼92°, suggesting that the 'density hole' may be approximated by a series of planar slabs separated by several tangential discontinuities. We discuss several possible explanations for the origin of this structure, but ultimately the origin of the density hole remains unknown. copyright 1998 American Geophysical Union
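    A minimal sketch of the minimum variance analysis used above: the minimum variance direction is the eigenvector of the field covariance matrix with the smallest eigenvalue, and its angle to the mean field is then computed; the synthetic field values are placeholders for the magnetometer data (Python):

        import numpy as np

        rng = np.random.default_rng(2)
        B0 = np.array([4.0, 1.0, 0.5])                                     # assumed mean field (nT)
        B = B0 + rng.normal(size=(500, 3)) * np.array([0.2, 2.0, 5.0])     # synthetic fluctuations

        cov = np.cov(B, rowvar=False)
        eigvals, eigvecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
        n_min = eigvecs[:, 0]                         # minimum variance direction
        b_hat = B.mean(axis=0) / np.linalg.norm(B.mean(axis=0))
        angle = np.degrees(np.arccos(abs(n_min @ b_hat)))
        print(f"angle between <B> and the minimum variance direction: {angle:.1f} deg")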

  17. A note on minimum-variance theory and beyond

    Energy Technology Data Exchange (ETDEWEB)

    Feng Jianfeng [Department of Informatics, Sussex University, Brighton, BN1 9QH (United Kingdom); Tartaglia, Giangaetano [Physics Department, Rome University 'La Sapienza', Rome 00185 (Italy); Tirozzi, Brunello [Physics Department, Rome University 'La Sapienza', Rome 00185 (Italy)

    2004-04-30

    We revisit the minimum-variance theory proposed by Harris and Wolpert (1998 Nature 394 780-4), discuss the implications of the theory on modelling the firing patterns of single neurons and analytically find the optimal control signals, trajectories and velocities. Under the rate coding assumption, input control signals employed in the minimum-variance theory should be Fitts processes rather than Poisson processes. Only if information is coded by interspike intervals, Poisson processes are in agreement with the inputs employed in the minimum-variance theory. For the integrate-and-fire model with Fitts process inputs, interspike intervals of efferent spike trains are very irregular. We introduce diffusion approximations to approximate neural models with renewal process inputs and present theoretical results on calculating moments of interspike intervals of the integrate-and-fire model. Results in Feng, et al (2002 J. Phys. A: Math. Gen. 35 7287-304) are generalized. In conclusion, we present a complete picture on the minimum-variance theory ranging from input control signals, to model outputs, and to its implications on modelling firing patterns of single neurons.

  18. A note on minimum-variance theory and beyond

    International Nuclear Information System (INIS)

    Feng Jianfeng; Tartaglia, Giangaetano; Tirozzi, Brunello

    2004-01-01

    We revisit the minimum-variance theory proposed by Harris and Wolpert (1998 Nature 394 780-4), discuss the implications of the theory on modelling the firing patterns of single neurons and analytically find the optimal control signals, trajectories and velocities. Under the rate coding assumption, input control signals employed in the minimum-variance theory should be Fitts processes rather than Poisson processes. Only if information is coded by interspike intervals, Poisson processes are in agreement with the inputs employed in the minimum-variance theory. For the integrate-and-fire model with Fitts process inputs, interspike intervals of efferent spike trains are very irregular. We introduce diffusion approximations to approximate neural models with renewal process inputs and present theoretical results on calculating moments of interspike intervals of the integrate-and-fire model. Results in Feng, et al (2002 J. Phys. A: Math. Gen. 35 7287-304) are generalized. In conclusion, we present a complete picture on the minimum-variance theory ranging from input control signals, to model outputs, and to its implications on modelling firing patterns of single neurons

  19. Estimating High-Frequency Based (Co-) Variances: A Unified Approach

    DEFF Research Database (Denmark)

    Voev, Valeri; Nolte, Ingmar

    We propose a unified framework for estimating integrated variances and covariances based on simple OLS regressions, allowing for a general market microstructure noise specification. We show that our estimators can outperform, in terms of the root mean squared error criterion, the most recent...... and commonly applied estimators, such as the realized kernels of Barndorff-Nielsen, Hansen, Lunde & Shephard (2006), the two-scales realized variance of Zhang, Mykland & Aït-Sahalia (2005), the Hayashi & Yoshida (2005) covariance estimator, and the realized variance and covariance with the optimal sampling...

  20. One-state vector formalism for the evolution of a quantum state through nested Mach-Zehnder interferometers

    Czech Academy of Sciences Publication Activity Database

    Bartkiewicz, K.; Černoch, A.; Javůrek, D.; Lemr, K.; Soubusta, Jan; Svozilík, J.

    2015-01-01

    Roč. 91, č. 1 (2015), "012103-1"-"012103-4" ISSN 1050-2947 Institutional support: RVO:68378271 Keywords : one-state vector * quantum state * Mach-Zehnder interferometer Subject RIV: BH - Optics, Masers, Lasers Impact factor: 2.808, year: 2014

  1. The Red Rectangle: An Astronomical Example of Mach Bands?

    Science.gov (United States)

    Brecher, K.

    2005-12-01

    Recently, the Hubble Space Telescope (HST) produced spectacular images of the "Red Rectangle". This appears to be a binary star system undergoing recurrent mass loss episodes. The image-processed HST photographs display distinctive diagonal lightness enhancements. Some of the visual appearance undoubtedly arises from actual variations in the luminosity distribution of the light of the nebula itself, i.e., due to limb brightening. Psychophysical enhancement similar to the Vasarely or pyramid effect also seems to be involved in the visual impression conveyed by the HST images. This effect is related to Mach bands (as well as to the Chevreul and Craik-O'Brien-Cornsweet effects). The effect can be produced by stacking concentric squares (or other geometrical figures such as rectangles or hexagons) of linearly increasing or decreasing size and lightness, one on top of another. We have constructed controllable Flash applets of this effect as part of the NSF supported "Project LITE: Light Inquiry Through Experiments". They can be found in the vision section of the LITE web site at http://lite.bu.edu. Mach band effects have previously been seen in medical x-ray images. Here we report for the first time the possibility that such effects play a role in the interpretation of astronomical images. Specifically, we examine to what extent the visual impressions of the Red Rectangle and other extended astronomical objects are purely physical (photometric) in origin and to what degree they are enhanced by psychophysical processes. To help assess the relative physical and psychophysical contributions to the perceived lightness effects, we have made use of a center-surround (Difference of Gaussians) filter we developed for MatLab. We conclude that local (lateral inhibition) and longer range human visual perception effects probably do contribute to the lightness features seen in astronomical objects like the Red Rectangle. Project LITE is supported by NSF Grant # DUE-0125992.

  2. Demonstration of a zero-variance based scheme for variance reduction to a mini-core Monte Carlo calculation

    International Nuclear Information System (INIS)

    Christoforou, Stavros; Hoogenboom, J. Eduard

    2011-01-01

    A zero-variance based scheme is implemented and tested in the MCNP5 Monte Carlo code. The scheme is applied to a mini-core reactor using the adjoint function obtained from a deterministic calculation for biasing the transport kernels. It is demonstrated that the variance of the k_eff estimate is halved compared to a standard criticality calculation. In addition, the biasing does not affect source distribution convergence of the system. However, since the code lacked optimisations for speed, we were not able to demonstrate an appropriate increase in the efficiency of the calculation, because of the higher CPU time cost. (author)

  3. Angular dependence of high Mach number plasma interactions

    International Nuclear Information System (INIS)

    Thomas, V.A.; Brecht, S.H.

    1987-01-01

    In this paper a 2-1/2-dimensional hybrid code is used to examine the collisionless large spatial scale (kc/ω_pi ∼ 1) low-frequency (ω ∼ ω_ci) interaction initiated by a plasma shell of finite width traveling at high Alfven Mach number relative to a uniform background plasma. Particular attention is given to the angle of the relative velocity relative to the ambient magnetic field for the range of angles 0 < θ < π/2. An attempt is made to parameterize some of the important physics including the Alfven ion cyclotron instability, the field-aligned electromagnetic ion counterstreaming instability, mixing of the plasma shell with the background ions, and structuring of the interaction region. These results are applicable to various astrophysical interactions such as bow shocks and interplanetary shocks

  4. Spatio-temporal variance and meteorological drivers of the urban heat island in a European city

    Science.gov (United States)

    Arnds, Daniela; Böhner, Jürgen; Bechtel, Benjamin

    2017-04-01

    Urban areas are especially vulnerable to high temperatures, which will intensify in the future due to climate change. Therefore, both good knowledge about the local urban climate as well as simple and robust methods for its projection are needed. This study has analysed the spatio-temporal variance of the mean nocturnal urban heat island (UHI) of Hamburg, with observations from 40 stations from different suppliers. The UHI showed a radial gradient with about 2 K in the centre, mostly corresponding to the urban densities. Temporally, it has a strong seasonal cycle with the highest values between April and September and an inter-annual variability of approximately 0.5 K. Further, synoptic meteorological drivers of the UHI were analysed; the UHI generally is most pronounced under calm and cloud-free conditions. Considered were meteorological parameters such as relative humidity, wind speed, cloud cover and objective weather types. For the stations with the highest UHI intensities, up to 68.7 % of the variance could be explained by seasonal empirical models and even up to 76.6 % by monthly models.

  5. Single Mode SU8 Polymer Based Mach-Zehnder Interferometer for Bio-Sensing Application

    Science.gov (United States)

    Boiragi, Indrajit; Kundu, Sushanta; Makkar, Roshan; Chalapathi, Krishnamurthy

    2011-10-01

    This paper explains the influence of different parameters on the sensitivity of an optical waveguide Mach-Zehnder Interferometer (MZI) for real time detection of biomolecules. The sensing principle is based on the interaction of the evanescent field with the biomolecules that get immobilized on the sensing arm. The sensitivity has been calculated by varying the sensing window length, wavelength and concentration of bio-analyte. The maximum attainable sensitivity for the preferred design is of the order of 10⁻⁸ RIU at 840 nm wavelength with a sensing window length of 1 cm. All the simulation work has been carried out with Opti-BPMCAD for the optimization of MZI device parameters. The SU8 polymers are used as a core and clad material to fabricate the waveguide. The refractive index of the cladding layer is optimized by varying the curing temperature for a fixed time period, and the achieved index difference between core and clad is Δn = 0.0151. The fabricated MZI device has been characterized with a LASER beam profiler at 840 nm wavelength. This study demonstrates the effectiveness of the different parameters on the sensitivity of a single mode optical waveguide Mach-Zehnder Interferometer for bio-sensing application.

  6. The Genealogical Consequences of Fecundity Variance Polymorphism

    Science.gov (United States)

    Taylor, Jesse E.

    2009-01-01

    The genealogical consequences of within-generation fecundity variance polymorphism are studied using coalescent processes structured by genetic backgrounds. I show that these processes have three distinctive features. The first is that the coalescent rates within backgrounds are not jointly proportional to the infinitesimal variance, but instead depend only on the frequencies and traits of genotypes containing each allele. Second, the coalescent processes at unlinked loci are correlated with the genealogy at the selected locus; i.e., fecundity variance polymorphism has a genomewide impact on genealogies. Third, in diploid models, there are infinitely many combinations of fecundity distributions that have the same diffusion approximation but distinct coalescent processes; i.e., in this class of models, ancestral processes and allele frequency dynamics are not in one-to-one correspondence. Similar properties are expected to hold in models that allow for heritable variation in other traits that affect the coalescent effective population size, such as sex ratio or fecundity and survival schedules. PMID:19433628

  7. About the parametric interplay between ionic mach number, body-size, and satellite potential in determining the ion depletion in the wake of the S3-2 Satellite

    International Nuclear Information System (INIS)

    Samir, U.; Wildman, P.J.; Rich, F.; Brinton, H.C.; Sagalyn, R.C.

    1981-01-01

    Measurements of ion current, electron temperature, and density and values of satellite potential from the U.S. Air Force Satellite S3-2, together with ion composition measurements from the Atmosphere Explorer (AE-E) satellite, were used to examine the variation of the ratio α = I₊(wake)/I₊(ambient) (where I₊ is the ion current) with altitude and to examine the significance of the parametric interplay between ionic Mach number, normalized body size R_D (= R₀/λ_D, where R₀ is the satellite radius and λ_D is the ambient Debye length) and normalized body potential φ_N (= eφ_s/kT_e, where φ_s is the satellite potential, T_e is the electron temperature, and e and k are constants). It was possible to separate the influence of R_D and φ_N on α for a specific range of parameters. Uncertainty, however, remains regarding the competition between R_D and the ionic Mach numbers S(H⁺) and S(O⁺) in determining the ion distribution in the nearest vicinity of the satellite surface. A brief discussion relevant to future experiments in the area of body-plasma flow interactions, to be conducted on board the Shuttle/Spacelab facility, is also included

  8. Simulation-Based Stochastic Sensitivity Analysis of a Mach 4.5 Mixed-Compression Intake Performance

    Science.gov (United States)

    Kato, H.; Ito, K.

    2009-01-01

    A sensitivity analysis of a supersonic mixed-compression intake of a variable-cycle turbine-based combined cycle (TBCC) engine is presented. The TBCC engine is designed to power a long-range Mach 4.5 transport capable of antipodal missions studied in the framework of an EU FP6 project, LAPCAT. The nominal intake geometry was designed using DLR abpi cycle analysis program by taking into account various operating requirements of a typical mission profile. The intake consists of two movable external compression ramps followed by an isolator section with bleed channel. The compressed air is then diffused through a rectangular-to-circular subsonic diffuser. A multi-block Reynolds-averaged Navier-Stokes (RANS) solver with Srinivasan-Tannehill equilibrium air model was used to compute the total pressure recovery and mass capture fraction. While RANS simulation of the nominal intake configuration provides more realistic performance characteristics of the intake than the cycle analysis program, the intake design must also take into account in-flight uncertainties for robust intake performance. In this study, we focus on the effects of the geometric uncertainties on pressure recovery and mass capture fraction, and propose a practical approach to simulation-based sensitivity analysis. The method begins by constructing a light-weight analytical model, a radial-basis function (RBF) network, trained via adaptively sampled RANS simulation results. Using the RBF network as the response surface approximation, stochastic sensitivity analysis is performed using analysis of variance (ANOVA) technique by Sobol. This approach makes it possible to perform a generalized multi-input-multi-output sensitivity analysis based on high-fidelity RANS simulation. The resulting Sobol's influence indices allow the engineer to identify dominant parameters as well as the degree of interaction among multiple parameters, which can then be fed back into the design cycle.
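    A minimal sketch of the surrogate-plus-ANOVA idea described above: an RBF response surface is trained on a small set of "simulation" samples and first-order Sobol' indices are then estimated cheaply on the surrogate; the response function, parameter ranges and sample sizes are illustrative stand-ins for the RANS data (Python):

        import numpy as np
        from scipy.interpolate import RBFInterpolator

        def expensive_model(x):                     # stand-in for one RANS evaluation
            return np.cos(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2

        rng = np.random.default_rng(3)
        X_train = rng.uniform(-1, 1, size=(80, 2))  # sampled geometric uncertainties
        surrogate = RBFInterpolator(X_train, expensive_model(X_train), kernel="thin_plate_spline")

        n = 50_000
        A = rng.uniform(-1, 1, size=(n, 2))
        B = rng.uniform(-1, 1, size=(n, 2))
        yA, yB = surrogate(A), surrogate(B)
        var_y = yA.var()
        for i in range(2):
            ABi = A.copy()
            ABi[:, i] = B[:, i]                                 # pick-freeze resampling of input i
            S_i = np.mean(yB * (surrogate(ABi) - yA)) / var_y   # first-order Sobol' index from the surrogate
            print(f"S_{i + 1} ~ {S_i:.3f}")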

  9. On Mean-Variance Analysis

    OpenAIRE

    Li, Yang; Pirvu, Traian A

    2011-01-01

    This paper considers the mean variance portfolio management problem. We examine portfolios which contain both primary and derivative securities. The challenge in this context is due to portfolio's nonlinearities. The delta-gamma approximation is employed to overcome it. Thus, the optimization problem is reduced to a well posed quadratic program. The methodology developed in this paper can be also applied to pricing and hedging in incomplete markets.

  10. Highly stable polarization independent Mach-Zehnder interferometer

    Energy Technology Data Exchange (ETDEWEB)

    Mičuda, Michal, E-mail: micuda@optics.upol.cz; Doláková, Ester; Straka, Ivo; Miková, Martina; Dušek, Miloslav; Fiurášek, Jaromír; Ježek, Miroslav, E-mail: jezek@optics.upol.cz [Department of Optics, Faculty of Science, Palacký University, 17. listopadu 1192/12, 77146 Olomouc (Czech Republic)

    2014-08-15

    We experimentally demonstrate an optical Mach-Zehnder interferometer utilizing a displaced Sagnac configuration to enhance its phase stability. The interferometer, with a footprint of 27×40 cm, offers individually accessible paths and shows a phase deviation of less than 0.4° during a 250 s long measurement. The phase drift, evaluated by means of the Allan deviation, stays below 3° or 7 nm for 1.5 h without any active stabilization. The polarization insensitive design is verified by measuring interference visibility as a function of input polarization. For both of the interferometer's output ports and all tested polarization states the visibility stays above 93%. The discrepancy of about 3.5% in visibility between horizontal and vertical polarization is caused mainly by an undesired polarization dependence of the splitting ratio of the beam splitter used. The presented interferometer device is suitable for quantum-information and other sensitive applications where active stabilization is complicated and a common-mode interferometer is not an option, as both interferometer arms have to be accessible individually.
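    A minimal sketch of the Allan-deviation figure of merit quoted above, computed for a sampled phase record; the synthetic drifting phase trace and the sampling rate are placeholders (Python):

        import numpy as np

        def allan_deviation(y, taus, fs):
            """y: sampled phase (deg); taus: averaging times (s); fs: sampling rate (Hz)."""
            out = []
            for tau in taus:
                m = int(round(tau * fs))                 # samples per averaging bin
                n_bins = len(y) // m
                if n_bins < 2:
                    out.append(np.nan)
                    continue
                means = y[: n_bins * m].reshape(n_bins, m).mean(axis=1)
                out.append(np.sqrt(0.5 * np.mean(np.diff(means) ** 2)))
            return np.array(out)

        fs = 10.0                                        # Hz, assumed sampling rate
        t = np.arange(0, 600, 1 / fs)
        phase = 0.2 * np.random.default_rng(4).normal(size=t.size) + 1e-3 * t   # deg, noise plus slow drift
        print(allan_deviation(phase, taus=[1, 10, 100], fs=fs))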

  11. Modelling volatility by variance decomposition

    DEFF Research Database (Denmark)

    Amado, Cristina; Teräsvirta, Timo

    In this paper, we propose two parametric alternatives to the standard GARCH model. They allow the variance of the model to have a smooth time-varying structure of either additive or multiplicative type. The suggested parameterisations describe both nonlinearity and structural change in the condit...

  12. Variance heterogeneity in Saccharomyces cerevisiae expression data: trans-regulation and epistasis.

    Science.gov (United States)

    Nelson, Ronald M; Pettersson, Mats E; Li, Xidan; Carlborg, Örjan

    2013-01-01

    Here, we describe the results from the first variance heterogeneity Genome Wide Association Study (VGWAS) on yeast expression data. Using this forward genetics approach, we show that the genetic regulation of gene-expression in the budding yeast, Saccharomyces cerevisiae, includes mechanisms that can lead to variance heterogeneity in the expression between genotypes. Additionally, we performed a mean effect association study (GWAS). Comparing the mean and variance heterogeneity analyses, we find that the mean expression level is under genetic regulation from a larger absolute number of loci but that a higher proportion of the variance controlling loci were trans-regulated. Both mean and variance regulating loci cluster in regulatory hotspots that affect a large number of phenotypes; a single variance-controlling locus, mapping close to DIA2, was found to be involved in more than 10% of the significant associations. It has been suggested in the literature that variance-heterogeneity between the genotypes might be due to genetic interactions. We therefore screened the multi-locus genotype-phenotype maps for several traits where multiple associations were found, for indications of epistasis. Several examples of two and three locus genetic interactions were found to involve variance-controlling loci, with reports from the literature corroborating the functional connections between the loci. By using a new analytical approach to re-analyze a powerful existing dataset, we are thus able to both provide novel insights to the genetic mechanisms involved in the regulation of gene-expression in budding yeast and experimentally validate epistasis as an important mechanism underlying genetic variance-heterogeneity between genotypes.
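    A minimal sketch of a single-marker variance-heterogeneity test of the kind a VGWAS applies genome-wide: the Brown-Forsythe (median-centred Levene) test comparing expression variance between genotype classes; genotypes and expression values are simulated stand-ins (Python):

        import numpy as np
        from scipy.stats import levene

        rng = np.random.default_rng(5)
        genotype = rng.integers(0, 2, size=400)          # haploid yeast segregants: 0/1 alleles
        # allele 1 keeps the same mean but doubles the spread (a variance-controlling locus)
        expression = np.where(genotype == 1,
                              rng.normal(10.0, 2.0, size=400),
                              rng.normal(10.0, 1.0, size=400))

        stat, p = levene(expression[genotype == 0], expression[genotype == 1], center="median")
        print(f"Brown-Forsythe statistic = {stat:.2f}, p = {p:.2e}")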

  13. Effect of Pressure Gradients on the Initiation of PBX-9502 via Irregular (Mach) Reflection of Low Pressure Curved Shock Waves

    Energy Technology Data Exchange (ETDEWEB)

    Hull, Lawrence Mark [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Miller, Phillip Isaac [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Moro, Erik Allan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-11-28

    In the instance of multiple fragment impact on cased explosive, isolated curved shocks are generated in the explosive. These curved shocks propagate and may interact and form irregular or Mach reflections along the interaction loci, thereby producing a single shock that may be sufficient to initiate PBX-9501. However, the incident shocks are divergent and their intensity generally decreases as they expand, and the regions behind the Mach stem interaction loci are generally unsupported and allow release waves to rapidly affect the flow. The effects of release waves and divergent shocks may be considered theoretically through a “Shock Change Equation”.

  14. Classical and quantum non-linear optical applications using the Mach-Zehnder interferometer

    Science.gov (United States)

    Prescod, Andru

    Mach Zehnder (MZ) modulators are widely employed in a variety of applications, such as optical communications, optical imaging, metrology and encryption. In this dissertation, we explore two non-linear MZ applications; one classified as classical and one as quantum, in which the Mach Zehnder interferometer is used. In the first application, a classical non-linear application, we introduce and study a new electro-optic highly linear (e.g., >130 dB) modulator configuration. This modulator makes use of a phase modulator (PM) in one arm of the MZ interferometer (MZI) and a ring resonator (RR) located on the other arm. The modulator performance is obtained through the control of a combination of internal and external parameters. These parameters include the RR-coupling ratio (internal parameter); the RF power split ratio and the RF phase bias (external parameters). Results show the unique and superior features, such as high linearity (SFDR˜133 dB), modulation bandwidth extension (as much as 70%) over the previously proposed and demonstrated Resonator-Assisted Mach Zehnder (RAMZ) design. Furthermore the proposed electro-optic modulator of this dissertation also provides an inherent SFDR compensation capability, even in cases where a significant waveguide optical loss exists. This design also shows potential for increased flexibility, practicality and ease of use. In the second application, a quantum non-linear application, we experimentally demonstrate quantum optical coherence tomography (QOCT) using a type II non-linear crystal (periodically-poled potassium titanyl phosphate (KTiOPO4) or PPKTP). There have been several publications discussing the merits and disadvantages of QOCT compared to OCT and other imaging techniques. First, we discuss the issues and solutions for increasing the efficiency of the quantum entangled photons. Second, we use a free space QOCT experiment to generate a high flux of these quantum entangled photons in two orthogonal polarizations, by

  15. Genetic Variance in Homophobia: Evidence from Self- and Peer Reports.

    Science.gov (United States)

    Zapko-Willmes, Alexandra; Kandler, Christian

    2018-01-01

    The present twin study combined self- and peer assessments of twins' general homophobia targeting gay men in order to replicate previous behavior genetic findings across different rater perspectives and to disentangle self-rater-specific variance from common variance in self- and peer-reported homophobia (i.e., rater-consistent variance). We hypothesized rater-consistent variance in homophobia to be attributable to genetic and nonshared environmental effects, and self-rater-specific variance to be partially accounted for by genetic influences. A sample of 869 twins and 1329 peer raters completed a seven item scale containing cognitive, affective, and discriminatory homophobic tendencies. After correction for age and sex differences, we found most of the genetic contributions (62%) and significant nonshared environmental contributions (16%) to individual differences in self-reports on homophobia to be also reflected in peer-reported homophobia. A significant genetic component, however, was self-report-specific (38%), suggesting that self-assessments alone produce inflated heritability estimates to some degree. Different explanations are discussed.

  16. NASA Administrator Sean O'Keefe, left, learned about the Mach 10 X-43 research vehicle from manager

    Science.gov (United States)

    2002-01-01

    NASA Administrator Sean O'Keefe, left, learned about the Mach 10 X-43 research vehicle from manager Joel Sitz during O'Keefe's visit to the NASA Dryden Flight Research Center, Edwards, California, on January 31, 2002.

  17. Measurements of density, temperature, and their fluctuations in turbulent supersonic flow using UV laser spectroscopy

    Science.gov (United States)

    Fletcher, Douglas G.; Mckenzie, R. L.

    1992-01-01

    Nonintrusive measurements of density, temperature, and their turbulent fluctuation levels were obtained in the boundary layer of an unseeded, Mach 2 wind tunnel flow. The spectroscopic technique that was used to make the measurements is based on the combination of laser-induced oxygen fluorescence and Raman scattering by oxygen and nitrogen from the same laser pulse. Results from this demonstration experiment are compared with previous measurements obtained in the same facility using conventional probes and an earlier spectroscopic technique. Densities and temperatures measured with the current technique agree with the previous surveys to within 3 percent and 2 percent, respectively. The fluctuation amplitudes for both variables agree with the measurements obtained using the earlier spectroscopic technique and show evidence of an unsteady, weak shock wave that perturbs the boundary layer.

  18. Decomposition of Variance for Spatial Cox Processes.

    Science.gov (United States)

    Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus

    2013-03-01

    Spatial Cox point processes provide a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models with additive or log linear random intensity functions. We moreover consider a new and flexible class of pair correlation function models given in terms of normal variance mixture covariance functions. The proposed methodology is applied to point pattern data sets of locations of tropical rain forest trees.

  19. Grammatical and lexical variance in English

    CERN Document Server

    Quirk, Randolph

    2014-01-01

    Written by one of Britain's most distinguished linguists, this book is concerned with the phenomenon of variance in English grammar and vocabulary across regional, social, stylistic and temporal space.

  20. Variance decomposition in stochastic simulators.

    Science.gov (United States)

    Le Maître, O P; Knio, O M; Moraes, A

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
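    A minimal sketch of the random-time-change idea underlying this decomposition: a birth-death network simulated with one independent unit-rate Poisson stream per reaction channel (a modified next-reaction scheme), so each channel's stochasticity can be fixed or resampled separately; rates, horizon and seeds are illustrative (Python):

        import numpy as np

        def birth_death(T, x0, birth_rate, death_coeff, streams):
            """Simulate X on [0, T]; streams = (rng_birth, rng_death) drive the two channels."""
            t, x = 0.0, x0
            next_fire = [rng.exponential() for rng in streams]   # next firing time of each unit-rate Poisson process
            internal = [0.0, 0.0]                                # internal times of the two channels
            while True:
                rates = [birth_rate, death_coeff * x]
                waits = [(next_fire[k] - internal[k]) / rates[k] if rates[k] > 0 else np.inf
                         for k in range(2)]
                k = int(np.argmin(waits))
                if t + waits[k] > T:
                    return x
                t += waits[k]
                internal = [internal[i] + rates[i] * waits[k] for i in range(2)]
                next_fire[k] += streams[k].exponential()
                x += 1 if k == 0 else -1

        streams = (np.random.default_rng(10), np.random.default_rng(11))
        print(birth_death(T=10.0, x0=5, birth_rate=2.0, death_coeff=0.1, streams=streams))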

  1. Variance decomposition in stochastic simulators

    Science.gov (United States)

    Le Maître, O. P.; Knio, O. M.; Moraes, A.

    2015-06-01

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  2. Variance decomposition in stochastic simulators

    Energy Technology Data Exchange (ETDEWEB)

    Le Maître, O. P., E-mail: olm@limsi.fr [LIMSI-CNRS, UPR 3251, Orsay (France); Knio, O. M., E-mail: knio@duke.edu [Department of Mechanical Engineering and Materials Science, Duke University, Durham, North Carolina 27708 (United States); Moraes, A., E-mail: alvaro.moraesgutierrez@kaust.edu.sa [King Abdullah University of Science and Technology, Thuwal (Saudi Arabia)

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  3. Variance-based Salt Body Reconstruction

    KAUST Repository

    Ovcharenko, Oleg

    2017-05-26

    Seismic inversions of salt bodies are challenging when updating velocity models based on Born approximation-inspired gradient methods. We propose a variance-based method for velocity model reconstruction in regions complicated by massive salt bodies. The novel idea lies in retrieving useful information from simultaneous updates corresponding to different single frequencies. Instead of the commonly used averaging of single-iteration monofrequency gradients, our algorithm iteratively reconstructs salt bodies in an outer loop based on updates from a set of multiple frequencies after a few iterations of full-waveform inversion. The variance among these updates is used to identify areas where considerable cycle-skipping occurs. In such areas, we update velocities by interpolating maximum velocities within a certain region. The result of several recursive interpolations is later used as a new starting model to improve results of conventional full-waveform inversion. An application on part of the BP 2004 model highlights the evolution of the proposed approach and demonstrates its effectiveness.

  4. Variance decomposition in stochastic simulators

    KAUST Repository

    Le Maître, O. P.; Knio, O. M.; Moraes, Alvaro

    2015-01-01

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  5. Growth rates and variances of unexploited wolf populations in dynamic equilibria

    Science.gov (United States)

    Mech, L. David; Fieberg, John

    2015-01-01

    Several states have begun harvesting gray wolves (Canis lupus), and these states and various European countries are closely monitoring their wolf populations. To provide appropriate perspective for determining unusual or extreme fluctuations in their managed wolf populations, we analyzed natural, long-term, wolf-population-density trajectories totaling 130 years of data from 3 areas: Isle Royale National Park in Lake Superior, Michigan, USA; the east-central Superior National Forest in northeastern Minnesota, USA; and Denali National Park, Alaska, USA. Ratios between minimum and maximum annual sizes for 2 mainland populations (n = 28 and 46 yr) varied from 2.5–2.8, whereas for Isle Royale (n = 56 yr), the ratio was 6.3. The interquartile range (25th percentile, 75th percentile) for annual growth rates, Nt+1/Nt, was (0.88, 1.14), (0.92, 1.11), and (0.86, 1.12) for Denali, Superior National Forest, and Isle Royale respectively. We fit a density-independent model and a Ricker model to each time series, and in both cases we considered the potential for observation error. Mean growth rates from the density-independent model were close to 0 for all 3 populations, with 95% credible intervals including 0. We view the estimated model parameters, including those describing annual variability or process variance, as providing useful summaries of the trajectories of these populations. The estimates of these natural wolf population parameters can serve as benchmarks for comparison with those of recovering wolf populations. Because our study populations were all from circumscribed areas, fluctuations in them represent fluctuations in densities (i.e., changes in numbers are not confounded by changes in occupied area as would be the case with populations expanding their range, as are wolf populations in many states).
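    A minimal sketch of the descriptive statistics reported above (annual growth rates N_{t+1}/N_t, their interquartile range, and the ratio of extreme counts); the count series is a made-up stand-in for one of the long-term trajectories (Python):

        import numpy as np

        counts = np.array([22, 24, 31, 28, 40, 50, 44, 34, 23, 25, 29, 41, 50, 43, 30])
        growth = counts[1:] / counts[:-1]

        q25, q75 = np.percentile(growth, [25, 75])
        print(f"max/min ratio of counts: {counts.max() / counts.min():.1f}")
        print(f"growth-rate IQR: ({q25:.2f}, {q75:.2f})")
        print(f"mean log growth rate: {np.mean(np.log(growth)):.3f}")   # near 0 for a population in dynamic equilibrium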

  6. Minimum variance Monte Carlo importance sampling with parametric dependence

    International Nuclear Information System (INIS)

    Ragheb, M.M.H.; Halton, J.; Maynard, C.W.

    1981-01-01

    An approach for Monte Carlo Importance Sampling with parametric dependence is proposed. It depends upon obtaining by proper weighting over a single stage the overall functional dependence of the variance on the importance function parameter over a broad range of its values. Results corresponding to minimum variance are adapted and other results rejected. Numerical calculations for the estimation of integrals are compared to Crude Monte Carlo. Results explain the occurrences of the effective biases (even though the theoretical bias is zero) and infinite variances which arise in calculations involving severe biasing and a moderate number of histories. Extension to particle transport applications is briefly discussed. The approach constitutes an extension of a theory on the application of Monte Carlo for the calculation of functional dependences introduced by Frolov and Chentsov to biasing, or importance sampling, calculations; and is a generalization which avoids nonconvergence to the optimal values in some cases of a multistage method for variance reduction introduced by Spanier. (orig.)
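    A minimal sketch of scanning an importance-sampling parameter for minimum variance, here for a Gaussian tail probability with a mean-shifted proposal; the integrand, shift values and sample size are illustrative (Python):

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(6)
        a, n = 4.0, 200_000                               # estimate P(X > a) for X ~ N(0, 1)

        for mu in [0.0, 2.0, 4.0, 6.0]:                   # candidate importance-function parameters
            x = rng.normal(mu, 1.0, size=n)
            w = norm.pdf(x) / norm.pdf(x, loc=mu)         # likelihood-ratio weights
            est = (x > a) * w
            print(f"mu={mu:3.1f}: estimate={est.mean():.3e}, variance of estimator={est.var() / n:.3e}")

        print("exact value:", norm.sf(a))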

  7. Host nutrition alters the variance in parasite transmission potential.

    Science.gov (United States)

    Vale, Pedro F; Choisy, Marc; Little, Tom J

    2013-04-23

    The environmental conditions experienced by hosts are known to affect their mean parasite transmission potential. How different conditions may affect the variance of transmission potential has received less attention, but is an important question for disease management, especially if specific ecological contexts are more likely to foster a few extremely infectious hosts. Using the obligate-killing bacterium Pasteuria ramosa and its crustacean host Daphnia magna, we analysed how host nutrition affected the variance of individual parasite loads, and, therefore, transmission potential. Under low food, individual parasite loads showed similar mean and variance, following a Poisson distribution. By contrast, among well-nourished hosts, parasite loads were right-skewed and overdispersed, following a negative binomial distribution. Abundant food may, therefore, yield individuals causing potentially more transmission than the population average. Measuring both the mean and variance of individual parasite loads in controlled experimental infections may offer a useful way of revealing risk factors for potential highly infectious hosts.
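    A minimal sketch of the dispersion comparison described above: check whether parasite loads are over-dispersed (variance exceeding the mean) and, if so, fit a negative binomial by the method of moments; the simulated counts for the two food treatments are placeholders for the experimental data (Python):

        import numpy as np

        rng = np.random.default_rng(7)
        low_food = rng.poisson(5.0, size=200)                              # mean roughly equals variance
        high_food = rng.negative_binomial(n=2, p=2 / (2 + 20), size=200)   # mean 20, over-dispersed

        for label, loads in [("low food", low_food), ("high food", high_food)]:
            m, v = loads.mean(), loads.var(ddof=1)
            print(f"{label}: mean={m:.1f}, variance={v:.1f}, dispersion index={v / m:.2f}")
            if v > m:   # method-of-moments negative binomial fit (defined only when variance > mean)
                size = m ** 2 / (v - m)
                print(f"  negative binomial fit: size={size:.2f}, p={size / (size + m):.3f}")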

  8. The temporal variability of species densities

    International Nuclear Information System (INIS)

    Redfearn, A.; Pimm, S.L.

    1993-01-01

    Ecologists use the term 'stability' to mean a number of different things (Pimm 1984a). One use is to equate stability with low variability in population density over time (henceforth, temporal variability). Temporal variability varies greatly from species to species, so what affects it? There are at least three sets of factors: the variability of extrinsic abiotic factors, food web structure, and the intrinsic features of the species themselves. We can measure temporal variability using at least three statistics: the coefficient of variation of density (CV); the standard deviation of the logarithms of density (SDL); and the variance in the differences between logarithms of density for pairs of consecutive years (called annual variability, hence AV, by Wolda 1978). There are advantages and disadvantages to each measure (Williamson 1984), though in our experience, the measures are strongly correlated across sets of taxonomically related species. The increasing availability of long-term data sets allows one to calculate these statistics for many species and so to begin to understand the various causes of species differences in temporal variability

  9. Exploring variance in residential electricity consumption: Household features and building properties

    International Nuclear Information System (INIS)

    Bartusch, Cajsa; Odlare, Monica; Wallin, Fredrik; Wester, Lars

    2012-01-01

    Highlights: ► Statistical analyses of variance are of considerable value in identifying key indicators for policy update. ► Variance in residential electricity use is partly explained by household features. ► Variance in residential electricity use is partly explained by building properties. ► Household behavior has a profound impact on individual electricity use. -- Abstract: Improved means of controlling electricity consumption plays an important part in boosting energy efficiency in the Swedish power market. Developing policy instruments to that end requires more in-depth statistics on electricity use in the residential sector, among other things. The aim of the study has accordingly been to assess the extent of variance in annual electricity consumption in single-family homes as well as to estimate the impact of household features and building properties in this respect using independent samples t-tests and one-way as well as univariate independent samples analyses of variance. Statistically significant variances associated with geographic area, heating system, number of family members, family composition, year of construction, electric water heater and electric underfloor heating have been established. The overall result of the analyses is nevertheless that variance in residential electricity consumption cannot be fully explained by independent variables related to household and building characteristics alone. As for the methodological approach, the results further suggest that methods for statistical analysis of variance are of considerable value in identifying key indicators for policy update and development.
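    A minimal sketch of the kind of one-way analysis of variance applied above, testing whether annual electricity use differs between heating-system groups; the consumption figures (kWh/year) are invented placeholders (Python):

        import numpy as np
        from scipy.stats import f_oneway

        rng = np.random.default_rng(8)
        district_heating = rng.normal(6_000, 1_500, size=60)
        heat_pump = rng.normal(9_000, 2_000, size=60)
        direct_electric = rng.normal(14_000, 3_000, size=60)

        f_stat, p_value = f_oneway(district_heating, heat_pump, direct_electric)
        print(f"F = {f_stat:.1f}, p = {p_value:.2e}")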

  10. Thermodynamic analysis on optimum performance of scramjet engine at high Mach numbers

    International Nuclear Information System (INIS)

    Zhang, Duo; Yang, Shengbo; Zhang, Silong; Qin, Jiang; Bao, Wen

    2015-01-01

    In order to predict the maximum performance of a scramjet engine at flight conditions with high freestream Mach numbers, a thermodynamic model of the Brayton cycle was utilized to analyze the effects of inlet pressure ratio, fuel equivalence ratio and the upper limit of gas temperature on the specific thrust and the fuel impulse of the scramjet, considering the characteristics of non-isentropic compression in the inlet. The results show that both the inlet efficiency and the temperature limit in the combustor have remarkable effects on the overall engine performance. Unlike ideal Brayton cycles, which assume isentropic compression with no upper limit on gas temperature, both the maximum specific thrust and the maximum fuel impulse of a scramjet present non-monotonic trends against the fuel equivalence ratio in this study. Considering the empirical design efficiencies of the inlet, there is a wide range of fuel equivalence ratios in which the fuel impulse remains at high values. Moreover, the maximum specific thrust can also be achieved with a fuel equivalence ratio near this range. Therefore, it is possible to achieve overall high performance in a scramjet at high Mach numbers. - Highlights: • Thermodynamic analysis with the Brayton cycle of the overall performance of a scramjet. • The compression loss in the inlet was considered in predicting scram-mode operation. • Non-monotonic trends of engine performance against fuel equivalence ratio.
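    A minimal sketch of the ideal (isentropic) Brayton-cycle baseline that the study relaxes: specific thrust and fuel specific impulse of an ideal ramjet versus fuel equivalence ratio, with a cap on combustor total temperature; gas properties, fuel heating value and flight conditions are textbook-style assumptions, not the paper's values (Python):

        import numpy as np

        gamma, R, cp = 1.4, 287.0, 1004.5            # J/(kg K)
        T0, M0 = 220.0, 6.0                          # ambient temperature (K), flight Mach number
        h_pr, f_stoich = 120e6, 0.029                # hydrogen heating value (J/kg), stoichiometric fuel/air
        Tt_max, g0 = 2500.0, 9.81                    # assumed combustor temperature limit (K), gravity (m/s^2)

        a0 = np.sqrt(gamma * R * T0)
        tau_r = 1 + 0.5 * (gamma - 1) * M0 ** 2      # ram temperature ratio

        for phi in [0.2, 0.4, 0.6, 0.8, 1.0]:
            f = phi * f_stoich
            Tt4 = min(T0 * tau_r + f * h_pr / cp, Tt_max)               # energy balance, capped at the limit
            tau_lam = Tt4 / T0
            spec_thrust = a0 * M0 * (np.sqrt(tau_lam / tau_r) - 1.0)    # N per (kg/s) of air, ideal ramjet
            isp = spec_thrust / (f * g0)                                # fuel specific impulse (s)
            print(f"phi={phi:.1f}: F/mdot={spec_thrust:6.0f} N*s/kg, Isp={isp:6.0f} s")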

  11. Discussion on variance reduction technique for shielding

    Energy Technology Data Exchange (ETDEWEB)

    Maekawa, Fujio [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1998-03-01

    As the task of the engineering design activity of the international thermonuclear fusion experimental reactor (ITER), on 316 type stainless steel (SS316) and the compound system of SS316 and water, the shielding experiment using the D-T neutron source of FNS in Japan Atomic Energy Research Institute has been carried out. However, in these analyses, enormous working time and computing time were required for determining the Weight Window parameter. Limitation or complication was felt when the variance reduction by Weight Window method of MCNP code was carried out. For the purpose of avoiding this difficulty, investigation was performed on the effectiveness of the variance reduction by cell importance method. The conditions of calculation in all cases are shown. As the results, the distribution of fractional standard deviation (FSD) related to neutrons and gamma-ray flux in the direction of shield depth is reported. There is the optimal importance change, and when importance was increased at the same rate as that of the attenuation of neutron or gamma-ray flux, the optimal variance reduction can be done. (K.I.)

  12. Capturing option anomalies with a variance-dependent pricing kernel

    NARCIS (Netherlands)

    Christoffersen, P.; Heston, S.; Jacobs, K.

    2013-01-01

    We develop a GARCH option model with a variance premium by combining the Heston-Nandi (2000) dynamic with a new pricing kernel that nests Rubinstein (1976) and Brennan (1979). While the pricing kernel is monotonic in the stock return and in variance, its projection onto the stock return is

  13. 29 CFR 1904.38 - Variances from the recordkeeping rule.

    Science.gov (United States)

    2010-07-01

    ..., DEPARTMENT OF LABOR RECORDING AND REPORTING OCCUPATIONAL INJURIES AND ILLNESSES Other OSHA Injury and Illness... he or she finds appropriate. (iv) If the Assistant Secretary grants your variance petition, OSHA will... Secretary is reviewing your variance petition. (4) If I have already been cited by OSHA for not following...

  14. Double-pass Mach-Zehnder fiber interferometer pH sensor.

    Science.gov (United States)

    Tou, Zhi Qiang; Chan, Chi Chiu; Hong, Jesmond; Png, Shermaine; Eddie, Khay Ming Tan; Tan, Terence Aik Huang

    2014-04-01

    A biocompatible fiber-optic pH sensor based on a unique double-pass Mach-Zehnder interferometer is proposed. A pH-responsive poly(2-hydroxyethyl methacrylate-co-2-(dimethylamino)ethyl methacrylate) hydrogel coating on the fiber swells/deswells in response to local pH, leading to refractive index changes that manifest as shifting of interference dips in the optical spectrum. The pH sensor is tested in spiked phosphate buffer saline and demonstrates a high sensitivity of 1.71 nm/pH and a limit of detection of 0.004 pH units, with good responsiveness, repeatability, and stability. The proposed sensor has been successfully applied in monitoring the media pH in cell culture experiments to investigate the relationship between pH and cancer cell growth.

  15. On the Use of a Virtual Mach-Zehnder Interferometer in the Teaching of Quantum Mechanics

    Science.gov (United States)

    Pereira, Alexsandro; Ostermann, Fernanda; Cavalcanti, Claudio

    2009-01-01

    For many students, the conceptual learning of quantum mechanics can be rather painful owing to the counter-intuitive nature of quantum phenomena. In order to enhance students' understanding of the odd behaviour of photons and electrons, we introduce a computational simulation of the Mach-Zehnder interferometer, developed by our research group. An…

  16. Analysis of ulnar variance as a risk factor for developing scaphoid nonunion.

    Science.gov (United States)

    Lirola-Palmero, S; Salvà-Coll, G; Terrades-Cladera, F J

    2015-01-01

    Ulnar variance may be a risk factor for developing scaphoid non-union. A review was made of the posteroanterior wrist radiographs of 95 patients who were diagnosed with scaphoid fracture. All fractures with displacement less than 1 mm treated conservatively were included. Ulnar variance was measured in standard posteroanterior wrist radiographs of all 95 patients. Eighteen patients (19%) developed scaphoid nonunion, with a mean ulnar variance of -1.34 (±0.85) mm (CI -2.25 - 0.41). Seventy-seven patients (81%) healed correctly, with a mean ulnar variance of -0.04 (±1.85) mm (CI -0.46 - 0.38). A significant difference was observed in the distribution of ulnar variance between the group with ulnar variance less than -1 mm and the group with ulnar variance greater than -1 mm. Patients with ulnar variance less than -1 mm had an OR of 4.58 (CI 1.51 to 13.89), with p<.007, and thus a greater risk of developing scaphoid nonunion. Copyright © 2014 SECOT. Published by Elsevier España. All rights reserved.
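    A minimal sketch of the odds-ratio arithmetic behind a figure like OR 4.58 (CI 1.51 to 13.89): a 2x2 table split at ulnar variance -1 mm with a Woolf (log) confidence interval; the cell counts are assumed for illustration and are not the study's actual counts (Python):

        import numpy as np

        a, b = 9, 14     # ulnar variance < -1 mm: nonunion / healed   (assumed counts)
        c, d = 9, 63     # ulnar variance >= -1 mm: nonunion / healed  (assumed counts)

        or_hat = (a * d) / (b * c)
        se_log = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
        lo, hi = np.exp(np.log(or_hat) + np.array([-1.96, 1.96]) * se_log)
        print(f"OR = {or_hat:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")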

  17. Decomposition of variance in terms of conditional means

    Directory of Open Access Journals (Sweden)

    Alessandro Figà Talamanca

    2013-05-01

    Full Text Available Two different sets of data are used to test an apparently new approach to the analysis of the variance of a numerical variable which depends on qualitative variables. We suggest that this approach be used to complement other existing techniques to study the interdependence of the variables involved. According to our method, the variance is expressed as a sum of orthogonal components, obtained as differences of conditional means, with respect to the qualitative characters. The resulting expression for the variance depends on the ordering in which the characters are considered. We suggest an algorithm which leads to an ordering which is deemed natural. The first set of data concerns the score achieved by a population of students on an entrance examination based on a multiple choice test with 30 questions. In this case the qualitative characters are dyadic and correspond to correct or incorrect answers to each question. The second set of data concerns the delay to obtain the degree for a population of graduates of Italian universities. The variance in this case is analyzed with respect to a set of seven specific qualitative characters of the population studied (gender, previous education, working condition, parent's educational level, field of study, etc.).
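    A minimal sketch of the decomposition described above for two dyadic characters taken in the order Q1, Q2: the variance splits into orthogonal components built from differences of conditional means plus a residual term; the simulated "scores" are stand-ins for the examination data (Python):

        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(9)
        df = pd.DataFrame({"Q1": rng.integers(0, 2, 1000), "Q2": rng.integers(0, 2, 1000)})
        df["score"] = 10 + 4 * df["Q1"] + 2 * df["Q2"] + rng.normal(0, 1, 1000)

        total = df["score"].var(ddof=0)
        m1 = df.groupby("Q1")["score"].transform("mean")             # conditional mean given Q1
        m12 = df.groupby(["Q1", "Q2"])["score"].transform("mean")    # conditional mean given Q1 and Q2
        comp1 = ((m1 - df["score"].mean()) ** 2).mean()              # component explained by Q1
        comp2 = ((m12 - m1) ** 2).mean()                             # component added by Q2 given Q1
        resid = ((df["score"] - m12) ** 2).mean()                    # residual within (Q1, Q2) cells
        print(total, comp1 + comp2 + resid)                          # the three components sum to the variance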

  18. 42 CFR 456.522 - Content of request for variance.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Content of request for variance. 456.522 Section 456.522 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN... perform UR within the time requirements for which the variance is requested and its good faith efforts to...

  19. On the Endogeneity of the Mean-Variance Efficient Frontier.

    Science.gov (United States)

    Somerville, R. A.; O'Connell, Paul G. J.

    2002-01-01

    Explains that the endogeneity of the efficient frontier in the mean-variance model of portfolio selection is commonly obscured in portfolio selection literature and in widely used textbooks. Demonstrates endogeneity and discusses the impact of parameter changes on the mean-variance efficient frontier and on the beta coefficients of individual…

  20. Assessment of ulnar variance: a radiological investigation in a Dutch population

    Energy Technology Data Exchange (ETDEWEB)

    Schuurman, A.H. [Dept. of Plastic, Reconstructive and Hand Surgery, University Medical Centre, Utrecht (Netherlands); Dept. of Plastic Surgery, University Medical Centre, Utrecht (Netherlands); Maas, M.; Dijkstra, P.F. [Dept. of Radiology, Univ. of Amsterdam (Netherlands); Kauer, J.M.G. [Dept. of Anatomy and Embryology, Univ. of Nijmegen (Netherlands)

    2001-11-01

    Objective: A radiological study was performed to evaluate ulnar variance in 68 Dutch patients using an electronic digitizer compared with Palmer's concentric circle method. Using the digitizer method only, the effect of different wrist positions and grip on ulnar variance was then investigated. Finally the distribution of ulnar variance in the selected patients was investigated also using the digitizer method. Design and patients: All radiographs were performed with the wrist in a standard zero-rotation position (posteroanterior) and in supination (anteroposterior). Palmer's concentric circle method and an electronic digitizer connected to a personal computer were used to measure ulnar variance. The digitizer consists of a Plexiglas plate with an electronically activated grid beneath it. A radiograph is placed on the plate and a cursor activates a point on the grid. Three plots are marked on the radius and one plot on the most distal part of the ulnar head. The digitizer then determines the difference between a radius passing through the radius plots and the ulnar plot. Results and conclusions: Using the concentric circle method we found an ulna plus predominance, but an ulna minus predominance when using the digitizer method. Overall the ulnar variance distribution for Palmer's method was 41.9% ulna plus, 25.7% neutral and 32.4% ulna minus variance, and for the digitizer method was 40.4% ulna plus, 1.5% neutral and 58.1% ulna minus. The percentage ulnar variance greater than 1 mm on standard radiographs increased from 23% to 58% using the digitizer, with maximum grip, clearly demonstrating the (dynamic) effect of grip on ulnar variance. This almost threefold increase was found to be a significant difference. Significant differences were found between ulnar variance when different wrist positions were compared. (orig.)

  1. Genetic control of residual variance of yearling weight in Nellore beef cattle.

    Science.gov (United States)

    Iung, L H S; Neves, H H R; Mulder, H A; Carvalheiro, R

    2017-04-01

    There is evidence for genetic variability in residual variance of livestock traits, which offers the potential for selection for increased uniformity of production. Different statistical approaches have been employed to study this topic; however, little is known about the concordance between them. The aim of our study was to investigate the genetic heterogeneity of residual variance on yearling weight (YW; 291.15 ± 46.67) in a Nellore beef cattle population; to compare the results of the statistical approaches, the two-step approach and the double hierarchical generalized linear model (DHGLM); and to evaluate the effectiveness of power transformation to accommodate scale differences. The comparison was based on genetic parameters, accuracy of EBV for residual variance, and cross-validation to assess predictive performance of both approaches. A total of 194,628 yearling weight records from 625 sires were used in the analysis. The results supported the hypothesis of genetic heterogeneity of residual variance on YW in Nellore beef cattle and the opportunity of selection, measured through the genetic coefficient of variation of residual variance (0.10 to 0.12 for the two-step approach and 0.17 for DHGLM, using an untransformed data set). However, low estimates of genetic variance associated with positive genetic correlations between mean and residual variance (about 0.20 for two-step and 0.76 for DHGLM for an untransformed data set) limit the genetic response to selection for uniformity of production while simultaneously increasing YW itself. Moreover, large sire families are needed to obtain accurate estimates of genetic merit for residual variance, as indicated by the low heritability estimates. Box-Cox transformation was able to decrease the dependence of the variance on the mean and decreased the estimates of genetic parameters for residual variance. The transformation reduced but did not eliminate all the genetic heterogeneity of residual variance, highlighting

  2. Variance and covariance calculations for nuclear materials accounting using ''MAVARIC''

    International Nuclear Information System (INIS)

    Nasseri, K.K.

    1987-07-01

    Determination of the detection sensitivity of a materials accounting system to the loss of special nuclear material (SNM) requires (1) obtaining a relation for the variance of the materials balance by propagation of the instrument errors for the measured quantities that appear in the materials balance equation and (2) substituting measured values and their error standard deviations into this relation and calculating the variance of the materials balance. MAVARIC (Materials Accounting VARIance Calculations) is a custom spreadsheet, designed using the second release of Lotus 1-2-3, that significantly reduces the effort required to make the necessary variance (and covariance) calculations needed to determine the detection sensitivity of a materials accounting system. Predefined macros within the spreadsheet allow the user to carry out long, tedious procedures with only a few keystrokes. MAVARIC requires that the user enter the following data into one of four data tables, depending on the type of the term in the materials balance equation; the SNM concentration, the bulk mass (or solution volume), the measurement error standard deviations, and the number of measurements made during an accounting period. The user can also specify if there are correlations between transfer terms. Based on these data entries, MAVARIC can calculate the variance of the materials balance and the square root of this variance, from which the detection sensitivity of the accounting system can be determined
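
    The variance propagation that the spreadsheet automates can be illustrated with a short script. The sketch below uses purely hypothetical concentrations, bulk masses and relative error standard deviations, and assumes uncorrelated measurement errors (MAVARIC itself also handles correlated transfer terms).

        import numpy as np

        # Hypothetical materials-balance terms: each is an SNM concentration times a bulk mass.
        # Fields: concentration (g/kg), bulk mass (kg), rel. SD of conc., rel. SD of mass, sign.
        terms = {
            "beginning inventory": (2.0, 400.0, 0.01, 0.005, +1),
            "receipts":            (2.1, 300.0, 0.01, 0.005, +1),
            "ending inventory":    (2.0, 380.0, 0.01, 0.005, -1),
            "shipments":           (2.1, 310.0, 0.01, 0.005, -1),
        }

        balance, variance = 0.0, 0.0
        for conc, mass, rel_sd_conc, rel_sd_mass, sign in terms.values():
            amount = conc * mass                      # SNM content of this term (g)
            balance += sign * amount
            # first-order error propagation for a product of two uncorrelated measurements
            variance += amount ** 2 * (rel_sd_conc ** 2 + rel_sd_mass ** 2)

        print(f"materials balance = {balance:.1f} g, sigma(MB) = {np.sqrt(variance):.1f} g")

    The detection sensitivity of the accounting system is then typically quoted as a small multiple of sigma(MB).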

  3. A versatile omnibus test for detecting mean and variance heterogeneity.

    Science.gov (United States)

    Cao, Ying; Wei, Peng; Bailey, Matthew; Kauwe, John S K; Maxwell, Taylor J

    2014-01-01

    Recent research has revealed loci that display variance heterogeneity through various means such as biological disruption, linkage disequilibrium (LD), gene-by-gene (G × G), or gene-by-environment interaction. We propose a versatile likelihood ratio test that allows joint testing for mean and variance heterogeneity (LRT(MV)) or either effect alone (LRT(M) or LRT(V)) in the presence of covariates. Using extensive simulations for our method and others, we found that all parametric tests were sensitive to nonnormality regardless of any trait transformations. Coupling our test with the parametric bootstrap solves this issue. Using simulations and empirical data from a known mean-only functional variant, we demonstrate how LD can produce variance-heterogeneity loci (vQTL) in a predictable fashion based on differential allele frequencies, high D', and relatively low r² values. We propose that a joint test for mean and variance heterogeneity is more powerful than a variance-only test for detecting vQTL. This takes advantage of loci that also have mean effects without sacrificing much power to detect variance only effects. We discuss using vQTL as an approach to detect G × G interactions and also how vQTL are related to relationship loci, and how both can create prior hypothesis for each other and reveal the relationships between traits and possibly between components of a composite trait.
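
    A stripped-down version of such a joint test can be written in a few lines. The sketch below (synthetic data, no covariates, and no parametric bootstrap, so it is only an illustration of the LRT(MV) idea) compares a single-normal null against genotype-specific means and variances.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        n = 600
        g = rng.integers(0, 3, n)                          # genotype coded 0/1/2
        y = 0.3 * g + rng.normal(0.0, 1.0 + 0.25 * g, n)   # mean and variance both depend on genotype

        def loglik(x):
            """Gaussian log-likelihood of a sample evaluated at its own MLE mean and SD."""
            return np.sum(stats.norm.logpdf(x, x.mean(), x.std()))

        ll_null = loglik(y)                                    # one common mean and variance
        ll_alt = sum(loglik(y[g == k]) for k in np.unique(g))  # genotype-specific means and variances

        lrt_mv = 2.0 * (ll_alt - ll_null)
        p_value = stats.chi2.sf(lrt_mv, df=4)   # 4 extra parameters (two extra means, two extra variances)
        print(f"LRT_MV = {lrt_mv:.1f}, p = {p_value:.2e}")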

  5. Global Variance Risk Premium and Forex Return Predictability

    OpenAIRE

    Aloosh, Arash

    2014-01-01

    In a long-run risk model with stochastic volatility and frictionless markets, I express expected forex returns as a function of consumption growth variances and stock variance risk premiums (VRPs)—the difference between the risk-neutral and statistical expectations of market return variation. This provides a motivation for using the forward-looking information available in stock market volatility indices to predict forex returns. Empirically, I find that stock VRPs predict forex returns at a ...

  6. Refractometric sensor based on all-fiber coaxial Michelson and Mach-Zehnder interferometers for ethanol detection in fuel

    International Nuclear Information System (INIS)

    Mosquera, L; Osorio, Jonas H; Hayashi, Juliano G; Cordeiro, Cristiano M B

    2011-01-01

    A refractometric sensor based on mechanically induced interferometers formed with long period gratings is reported. Two different setups, based on a Michelson and a Mach-Zehnder interferometer, are also shown, together with their application to measuring ethanol concentration in gasoline.

  7. Linear and nonlinear development of controlled disturbances in the supersonic boundary layer on a swept wing at Mach 2.5

    International Nuclear Information System (INIS)

    Kolosov, G L; Kosinov, A D

    2016-01-01

    Experimental data on the linear and nonlinear wave train development in a 3D supersonic boundary layer over a 45° swept wing at Mach number 2.5 are presented. Travelling artificial disturbances were introduced in the boundary layer by a periodic glow discharge at frequencies of 10 and 20 kHz. The spatial-temporal and spectral-wave characteristics of the wave train of unstable disturbances in the linear region are obtained. It is shown that additional peaks in the β'-spectra arise for both subharmonic and fundamental frequencies. The experiments indicate the presence of a subharmonic resonance mechanism in the 3D boundary layer at Mach number 2.5. (paper)

  8. Variance components for body weight in Japanese quails (Coturnix japonica)

    Directory of Open Access Journals (Sweden)

    RO Resende

    2005-03-01

    Full Text Available The objective of this study was to estimate the variance components for body weight in Japanese quails by Bayesian procedures. The body weight at hatch (BWH and at 7 (BW07, 14 (BW14, 21 (BW21 and 28 days of age (BW28 of 3,520 quails was recorded from August 2001 to June 2002. A multiple-trait animal model with additive genetic, maternal environment and residual effects was implemented by Gibbs sampling methodology. A single Gibbs sampling with 80,000 rounds was generated by the program MTGSAM (Multiple Trait Gibbs Sampling in Animal Model. Normal and inverted Wishart distributions were used as prior distributions for the random effects and the variance components, respectively. Variance components were estimated based on the 500 samples that were left after elimination of 30,000 rounds in the burn-in period and 100 rounds of each thinning interval. The posterior means of additive genetic variance components were 0.15; 4.18; 14.62; 27.18 and 32.68; the posterior means of maternal environment variance components were 0.23; 1.29; 2.76; 4.12 and 5.16; and the posterior means of residual variance components were 0.084; 6.43; 22.66; 31.21 and 30.85, at hatch, 7, 14, 21 and 28 days old, respectively. The posterior means of heritability were 0.33; 0.35; 0.36; 0.43 and 0.47 at hatch, 7, 14, 21 and 28 days old, respectively. These results indicate that heritability increased with age. On the other hand, after hatch there was a marked reduction in the maternal environment variance proportion of the phenotypic variance, whose estimates were 0.50; 0.11; 0.07; 0.07 and 0.08 for BWH, BW07, BW14, BW21 and BW28, respectively. The genetic correlation between weights at different ages was high, except for those estimates between BWH and weight at other ages. Changes in body weight of quails can be efficiently achieved by selection.
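
    The heritabilities and maternal proportions quoted above follow directly from the reported posterior mean variance components; for example:

        # posterior means of the variance components reported above (hatch, 7, 14, 21, 28 days)
        additive = [0.15, 4.18, 14.62, 27.18, 32.68]
        maternal = [0.23, 1.29, 2.76, 4.12, 5.16]
        residual = [0.084, 6.43, 22.66, 31.21, 30.85]

        for age, va, vm, ve in zip((0, 7, 14, 21, 28), additive, maternal, residual):
            vp = va + vm + ve                       # phenotypic variance
            print(f"day {age:2d}: h2 = {va / vp:.2f}, maternal proportion = {vm / vp:.2f}")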

  9. A measurement of the turbulence-driven density distribution in a non-star-forming molecular cloud

    Energy Technology Data Exchange (ETDEWEB)

    Ginsburg, Adam; Darling, Jeremy [CASA, University of Colorado, 389-UCB, Boulder, CO 80309 (United States); Federrath, Christoph, E-mail: Adam.G.Ginsburg@gmail.com [Monash Centre for Astrophysics, School of Mathematical Sciences, Monash University, Vic 3800 (Australia)

    2013-12-10

    Molecular clouds are supersonically turbulent. This turbulence governs the initial mass function and the star formation rate. In order to understand the details of star formation, it is therefore essential to understand the properties of turbulence, in particular the probability distribution of density in turbulent clouds. We present H₂CO volume density measurements of a non-star-forming cloud along the line of sight toward W49A. We use these measurements in conjunction with total mass estimates from ¹³CO to infer the shape of the density probability distribution function. This method is complementary to measurements of turbulence via the column density distribution and should be applicable to any molecular cloud with detected CO. We show that turbulence in this cloud is probably compressively driven, with a compressive-to-total Mach number ratio b = M_C/M > 0.4. We measure the standard deviation of the density distribution, constraining it to the range 1.5 < σ_s < 1.9, assuming that the density is lognormally distributed. This measurement represents an essential input into star formation laws. The method of averaging over different excitation conditions to produce a model of emission from a turbulent cloud is generally applicable to optically thin line observations.
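
    A common way to connect such a measurement to the turbulence parameters is the lognormal variance-Mach number relation, σ_s² = ln(1 + b²M²). The sketch below simply inverts that standard relation for the values quoted above; it is an illustration of the formula, not a reproduction of the authors' analysis.

        import numpy as np

        def mach_from_sigma_s(sigma_s, b):
            """Sonic Mach number implied by sigma_s^2 = ln(1 + b^2 M^2)."""
            return np.sqrt(np.expm1(sigma_s ** 2)) / b

        for sigma_s in (1.5, 1.9):          # range constrained by the H2CO measurement
            for b in (0.4, 1.0):            # driving parameter (b = 1 corresponds to purely compressive forcing)
                print(f"sigma_s = {sigma_s}, b = {b}: M ~ {mach_from_sigma_s(sigma_s, b):.0f}")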

  10. Application of soft x-ray laser interferometry to study large-scale-length, high-density plasmas

    International Nuclear Information System (INIS)

    Wan, A.S.; Barbee, T.W., Jr.; Cauble, R.

    1996-01-01

    We have employed a Mach-Zehnder interferometer, using a Ne-like Y x-ray laser at 155 Angstrom as the probe source, to study large-scale-length, high-density colliding plasmas and exploding foils. The measured density profile of counter-streaming high-density colliding plasmas falls in between the profiles calculated using collisionless and fluid approximations with the radiation hydrodynamic code LASNEX. We have also performed simultaneous measurements of the local gain and electron density of a Y x-ray laser amplifier. Measured gains in the amplifier were found to be between 10 and 20 cm⁻¹, similar to predictions and indicating that refraction is the major cause of signal loss in long line-focus lasers. Images showed that high gain was produced in spots with dimensions of ∼ 10 μm, which we believe is caused by intensity variations in the optical drive laser. Measured density variations were smooth on the 10-μm scale, so that temperature variations were likely the cause of the localized gain regions. We are now using the interferometry technique as a mechanism to validate and benchmark our numerical codes used for the design and analysis of high-energy-density physics experiments. 11 refs., 6 figs

  11. 29 CFR 1920.2 - Variances.

    Science.gov (United States)

    2010-07-01

    ...) PROCEDURE FOR VARIATIONS FROM SAFETY AND HEALTH REGULATIONS UNDER THE LONGSHOREMEN'S AND HARBOR WORKERS...) or 6(d) of the Williams-Steiger Occupational Safety and Health Act of 1970 (29 U.S.C. 655). The... under the Williams-Steiger Occupational Safety and Health Act of 1970, and any variance from §§ 1910.13...

  12. Zero-intelligence realized variance estimation

    NARCIS (Netherlands)

    Gatheral, J.; Oomen, R.C.A.

    2010-01-01

    Given a time series of intra-day tick-by-tick price data, how can realized variance be estimated? The obvious estimator—the sum of squared returns between trades—is biased by microstructure effects such as bid-ask bounce and so in the past, practitioners were advised to drop most of the data and

  13. Application of Higher Order Fission Matrix for Real Variance Estimation in McCARD Monte Carlo Eigenvalue Calculation

    Energy Technology Data Exchange (ETDEWEB)

    Park, Ho Jin [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Shim, Hyung Jin [Seoul National University, Seoul (Korea, Republic of)

    2015-05-15

    In a Monte Carlo (MC) eigenvalue calculation, it is well known that the apparent variance of a local tally such as pin power differs considerably from the real variance. The MC method in eigenvalue calculations uses a power iteration method, in which the fission matrix (FM) and fission source density (FSD) serve as the operator and the solution. The FM is useful for estimating a variance and covariance because it can be calculated from a few cycles, even inactive cycles. Recently, S. Carney implemented higher order fission matrix (HOFM) capabilities in the MCNP6 MC code in order to extend the perturbation theory to second order. In this study, the HOFM capability based on the Hotelling deflation method was implemented in McCARD and used to predict the behavior of the real-to-apparent SD ratio. In simple 1D slab problems, Endo's theoretical model predicts the real-to-apparent SD ratio well. It was noted that Endo's theoretical model with the McCARD higher-mode FS solutions from the HOFM yields a much better real-to-apparent SD ratio than that with the analytic solutions. In the near future, the application to a high dominance ratio problem such as the BEAVRS benchmark will be conducted.
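
    The gap between apparent and real variance comes from cycle-to-cycle correlation of the fission source. A toy model (an AR(1) stand-in for a correlated cycle-wise tally, nothing to do with the fission-matrix machinery itself) shows the effect:

        import numpy as np

        rng = np.random.default_rng(0)
        n_runs, n_cycles, rho = 200, 300, 0.8      # rho mimics a high dominance ratio

        run_means, apparent_sd = np.empty(n_runs), np.empty(n_runs)
        for i in range(n_runs):
            x = np.empty(n_cycles)
            x[0] = rng.normal()
            for c in range(1, n_cycles):           # AR(1) correlated "cycle tally"
                x[c] = rho * x[c - 1] + np.sqrt(1.0 - rho ** 2) * rng.normal()
            run_means[i] = x.mean()
            apparent_sd[i] = x.std(ddof=1) / np.sqrt(n_cycles)   # treats cycles as independent

        real_sd = run_means.std(ddof=1)            # scatter of truly independent runs
        print(f"real-to-apparent SD ratio ~ {real_sd / apparent_sd.mean():.1f}")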

  14. Inflation of type I error rates by unequal variances associated with parametric, nonparametric, and Rank-Transformation Tests

    Directory of Open Access Journals (Sweden)

    Donald W. Zimmerman

    2004-01-01

    Full Text Available It is well known that the two-sample Student t test fails to maintain its significance level when the variances of treatment groups are unequal, and, at the same time, sample sizes are unequal. However, introductory textbooks in psychology and education often maintain that the test is robust to variance heterogeneity when sample sizes are equal. The present study discloses that, for a wide variety of non-normal distributions, especially skewed distributions, the Type I error probabilities of both the t test and the Wilcoxon-Mann-Whitney test are substantially inflated by heterogeneous variances, even when sample sizes are equal. The Type I error rate of the t test performed on ranks replacing the scores (rank-transformed data is inflated in the same way and always corresponds closely to that of the Wilcoxon-Mann-Whitney test. For many probability densities, the distortion of the significance level is far greater after transformation to ranks and, contrary to known asymptotic properties, the magnitude of the inflation is an increasing function of sample size. Although nonparametric tests of location also can be sensitive to differences in the shape of distributions apart from location, the Wilcoxon-Mann-Whitney test and rank-transformation tests apparently are influenced mainly by skewness that is accompanied by specious differences in the means of ranks.
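
    The inflation is easy to reproduce with a small simulation. The distributions below are illustrative choices (skewed, equal means, unequal variances, equal sample sizes) and not the exact designs of the study:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        n, reps, alpha = 25, 5000, 0.05
        reject_t = reject_w = 0

        for _ in range(reps):
            a = rng.exponential(1.0, n)            # skewed, mean 1, variance 1
            b = rng.exponential(4.0, n) - 3.0      # skewed, mean 1, variance 16
            reject_t += stats.ttest_ind(a, b, equal_var=True).pvalue < alpha
            reject_w += stats.mannwhitneyu(a, b, alternative="two-sided").pvalue < alpha

        print(f"t test:                empirical Type I error = {reject_t / reps:.3f}")
        print(f"Wilcoxon-Mann-Whitney: empirical Type I error = {reject_w / reps:.3f}")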

  15. Self-referencing Mach-Zehnder interferometer as a laser system diagnostic: Active and adaptive optical systems

    International Nuclear Information System (INIS)

    Feldman, M.; Mockler, D.J.; English, R.E. Jr.; Byrd, J.L.; Salmon, J.T.

    1991-01-01

    We are incorporating a novel self-referencing Mach-Zehnder interferometer into a large-scale laser system as a real-time, interactive diagnostic tool for wavefront measurement. The instrument is capable of absolute wavefront measurements accurate to better than λ/10 pv over a wavelength range > 300 nm without readjustment of the optical components. This performance is achieved through the design of both the refractive optics and a catadioptric collimator to achromatize the Mach-Zehnder reference arm. Other features include polarization insensitivity through the use of low angles of incidence on all beamsplitters as well as an equal path length configuration that allows measurement of either broad-band or closely spaced laser-line sources. Instrument accuracy is periodically monitored in place by means of a thermally and mechanically stable wavefront reference source that is calibrated off-line with a phase conjugate interferometer. Video interferograms are analyzed using Fourier transform techniques on a computer that includes a dedicated array processor. Computer and video networks maintain distributed interferometers under the control of a single analysis computer with multiple user access. 7 refs., 11 figs

  16. A Variance Distribution Model of Surface EMG Signals Based on Inverse Gamma Distribution.

    Science.gov (United States)

    Hayashi, Hideaki; Furui, Akira; Kurita, Yuichi; Tsuji, Toshio

    2017-11-01

    Objective: This paper describes the formulation of a surface electromyogram (EMG) model capable of representing the variance distribution of EMG signals. Methods: In the model, EMG signals are handled based on a Gaussian white noise process with a mean of zero for each variance value. EMG signal variance is taken as a random variable that follows inverse gamma distribution, allowing the representation of noise superimposed onto this variance. Variance distribution estimation based on marginal likelihood maximization is also outlined in this paper. The procedure can be approximated using rectified and smoothed EMG signals, thereby allowing the determination of distribution parameters in real time at low computational cost. Results: A simulation experiment was performed to evaluate the accuracy of distribution estimation using artificially generated EMG signals, with results demonstrating that the proposed model's accuracy is higher than that of maximum-likelihood-based estimation. Analysis of variance distribution using real EMG data also suggested a relationship between variance distribution and signal-dependent noise. Conclusion: The study reported here was conducted to examine the performance of a proposed surface EMG model capable of representing variance distribution and a related distribution parameter estimation method. Experiments using artificial and real EMG data demonstrated the validity of the model. Significance: Variance distribution estimated using the proposed model exhibits potential in the estimation of muscle force.
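
    A rough numerical picture of the model (simulation plus a simple distribution fit, not the marginal-likelihood procedure described in the paper) can be obtained as follows:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)

        # simulate: the variance of each short EMG segment is drawn from an inverse gamma distribution,
        # and the samples within a segment are zero-mean Gaussian given that variance
        alpha, beta, n_seg, seg_len = 4.0, 3.0, 400, 50
        true_var = stats.invgamma.rvs(alpha, scale=beta, size=n_seg, random_state=rng)
        emg = rng.normal(0.0, np.sqrt(true_var)[:, None], (n_seg, seg_len))

        # segment-wise variance estimates: a crude stand-in for "rectified and smoothed" processing
        var_est = (emg ** 2).mean(axis=1)

        # fit the inverse gamma variance distribution to those estimates
        a_hat, _, b_hat = stats.invgamma.fit(var_est, floc=0)
        print(f"true (alpha, beta) = ({alpha}, {beta}), fitted ~ ({a_hat:.1f}, {b_hat:.1f})")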

  17. The mean and variance of phylogenetic diversity under rarefaction.

    Science.gov (United States)

    Nipperess, David A; Matsen, Frederick A

    2013-06-01

    Phylogenetic diversity (PD) depends on sampling depth, which complicates the comparison of PD between samples of different depth. One approach to dealing with differing sample depth for a given diversity statistic is to rarefy, which means to take a random subset of a given size of the original sample. Exact analytical formulae for the mean and variance of species richness under rarefaction have existed for some time but no such solution exists for PD. We have derived exact formulae for the mean and variance of PD under rarefaction. We confirm that these formulae are correct by comparing exact solution mean and variance to that calculated by repeated random (Monte Carlo) subsampling of a dataset of stem counts of woody shrubs of Toohey Forest, Queensland, Australia. We also demonstrate the application of the method using two examples: identifying hotspots of mammalian diversity in Australasian ecoregions, and characterising the human vaginal microbiome. There is a very high degree of correspondence between the analytical and random subsampling methods for calculating mean and variance of PD under rarefaction, although the Monte Carlo method requires a large number of random draws to converge on the exact solution for the variance. Rarefaction of mammalian PD of ecoregions in Australasia to a common standard of 25 species reveals very different rank orderings of ecoregions, indicating quite different hotspots of diversity than those obtained for unrarefied PD. The application of these methods to the vaginal microbiome shows that a classical score used to quantify bacterial vaginosis is correlated with the shape of the rarefaction curve. The analytical formulae for the mean and variance of PD under rarefaction are both exact and more efficient than repeated subsampling. Rarefaction of PD allows for many applications where comparisons of samples of different depth are required.
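
    For comparison, the classical species-richness analogue mentioned above has a compact closed form; the sketch below implements that formula with hypothetical stem counts (species richness only, since the PD formulae themselves are the subject of the paper):

        import numpy as np
        from scipy.special import gammaln

        def log_comb(n, k):
            return gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)

        def expected_richness(counts, k):
            """Exact expected species richness in a random subsample of k individuals."""
            counts = np.asarray(counts, dtype=float)
            n = counts.sum()
            # P(species i entirely missed) = C(n - n_i, k) / C(n, k), zero when n_i > n - k
            log_p_absent = log_comb(n - counts, k) - log_comb(n, k)
            p_absent = np.where(counts > n - k, 0.0, np.exp(log_p_absent))
            return float(np.sum(1.0 - p_absent))

        counts = [120, 45, 30, 8, 3, 1, 1]   # hypothetical stem counts per species
        print(expected_richness(counts, 50))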

  18. Using variances to comply with resource conservation and recovery act treatment standards

    International Nuclear Information System (INIS)

    Ranek, N.L.

    2002-01-01

    When a waste generated, treated, or disposed of at a site in the United States is classified as hazardous under the Resource Conservation and Recovery Act and is destined for land disposal, the waste manager responsible for that site must select an approach to comply with land disposal restrictions (LDR) treatment standards. This paper focuses on the approach of obtaining a variance from existing, applicable LDR treatment standards. It describes the types of available variances, which include (1) determination of equivalent treatment (DET); (2) treatability variance; and (3) treatment variance for contaminated soil. The process for obtaining each type of variance is also described. Data are presented showing that historically the U.S. Environmental Protection Agency (EPA) processed DET petitions within one year of their date of submission. However, a 1999 EPA policy change added public participation to the DET petition review, which may lengthen processing time in the future. Regarding site-specific treatability variances, data are presented showing an EPA processing time of between 10 and 16 months. Only one generically applicable treatability variance has been granted, which took 30 months to process. No treatment variances for contaminated soil, which were added to the federal LDR program in 1998, are identified as having been granted.

  19. Gini estimation under infinite variance

    NARCIS (Netherlands)

    A. Fontanari (Andrea); N.N. Taleb (Nassim Nicholas); P. Cirillo (Pasquale)

    2018-01-01

    We study the problems related to the estimation of the Gini index in the presence of a fat-tailed data generating process, i.e. one in the stable distribution class with finite mean but infinite variance (i.e. with tail index α∈(1,2)). We show that, in such a case, the Gini coefficient

  20. High accuracy microwave frequency measurement based on single-drive dual-parallel Mach-Zehnder modulator

    DEFF Research Database (Denmark)

    Zhao, Ying; Pang, Xiaodan; Deng, Lei

    2011-01-01

    A novel approach for broadband microwave frequency measurement by employing a single-drive dual-parallel Mach-Zehnder modulator is proposed and experimentally demonstrated. Based on bias manipulations of the modulator, conventional frequency-to-power mapping technique is developed by performing a...... 10−3 relative error. This high accuracy frequency measurement technique is a promising candidate for high-speed electronic warfare and defense applications....

  1. Goedel, Penrose, anti-Mach: extra supersymmetries of time-dependent plane waves

    International Nuclear Information System (INIS)

    Blau, Matthias; O'Loughlin, Martin; Meessen, Patrick

    2003-01-01

    We prove that M-theory plane waves with extra supersymmetries are necessarily homogeneous (but possibly time-dependent), and we show by explicit construction that such time-dependent plane waves can admit extra supersymmetries. To that end we study the Penrose limits of Goedel-like metrics, show that the Penrose limit of the M-theory Goedel metric (with 20 supercharges) is generically a time-dependent homogeneous plane wave of the anti-Mach type, and display the four extra Killing spinors in that case. We conclude with some general remarks on the Killing spinor equations for homogeneous plane waves. (author)

  2. The Influence of Ernst Mach and Ludwig Boltzmann on Albert Einstein

    International Nuclear Information System (INIS)

    Broda, E.

    1979-01-01

    This document, written by Engelbert Broda in 1979, analyses the influence of Ernst Mach and Ludwig Boltzmann on Albert Einstein. Broda describes how Einstein and his scientific thinking benefited from Mach's criticism of classical mechanics and its basic concepts such as absolute time and absolute space. This criticism encouraged Einstein during the time he worked on special relativity. On the other hand, Broda writes about the influence of Ludwig Boltzmann, an atomist, whose scientific work and research prepared the ground for Einstein's work on the quantum structure of electromagnetic radiation and the discovery of the photoelectric effect. (nowak)

  4. Variance analysis of forecasted streamflow maxima in a wet temperate climate

    Science.gov (United States)

    Al Aamery, Nabil; Fox, James F.; Snyder, Mark; Chandramouli, Chandra V.

    2018-05-01

    Coupling global climate models, hydrologic models and extreme value analysis provides a method to forecast streamflow maxima; however, the elusive variance structure of the results hinders confidence in application. Directly correcting the bias of forecasts using the relative change between forecast and control simulations has been shown to marginalize hydrologic uncertainty, reduce model bias, and remove systematic variance when predicting mean monthly and mean annual streamflow, prompting our investigation for streamflow maxima. We assess the variance structure of streamflow maxima using realizations of emission scenario, global climate model type and project phase, downscaling methods, bias correction, extreme value methods, and hydrologic model inputs and parameterization. Results show that the relative change of streamflow maxima was not dependent on systematic variance from the annual maxima versus peak over threshold method applied, although we stress that researchers must strictly adhere to the rules from extreme value theory when applying the peak over threshold method. Regardless of which method is applied, extreme value model fitting does add variance to the projection, and the variance is an increasing function of the return period. Unlike the relative change of mean streamflow, results show that the variance of the maxima's relative change was dependent on all climate model factors tested as well as hydrologic model inputs and calibration. Ensemble projections forecast an increase of streamflow maxima for 2050 with pronounced forecast standard error, including an increase of +30(±21), +38(±34) and +51(±85)% for the 2-, 20- and 100-year streamflow events for the wet temperate region studied. The variance of maxima projections was dominated by climate model factors and extreme value analyses.
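
    For the annual-maxima branch of the analysis, the basic building block is a GEV fit and its return levels. A minimal sketch with synthetic data (none of the bias correction, ensemble bookkeeping or variance decomposition of the study) is:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        # synthetic 60-year annual maxima series (m^3/s); the parameters are arbitrary
        annual_maxima = stats.genextreme.rvs(c=-0.1, loc=300.0, scale=80.0,
                                             size=60, random_state=rng)

        c, loc, scale = stats.genextreme.fit(annual_maxima)   # GEV fit to the annual maxima

        for T in (2, 20, 100):                                 # return periods in years
            level = stats.genextreme.ppf(1.0 - 1.0 / T, c, loc, scale)
            print(f"{T:3d}-year event ~ {level:.0f} m^3/s")

    Resampling the series and refitting would show the spread of the estimated return levels growing with the return period, which is the variance inflation noted above.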

  5. Phenotypic variance explained by local ancestry in admixed African Americans.

    Science.gov (United States)

    Shriner, Daniel; Bentley, Amy R; Doumatey, Ayo P; Chen, Guanjie; Zhou, Jie; Adeyemo, Adebowale; Rotimi, Charles N

    2015-01-01

    We surveyed 26 quantitative traits and disease outcomes to understand the proportion of phenotypic variance explained by local ancestry in admixed African Americans. After inferring local ancestry as the number of African-ancestry chromosomes at hundreds of thousands of genotyped loci across all autosomes, we used a linear mixed effects model to estimate the variance explained by local ancestry in two large independent samples of unrelated African Americans. We found that local ancestry at major and polygenic effect genes can explain up to 20 and 8% of phenotypic variance, respectively. These findings provide evidence that most but not all additive genetic variance is explained by genetic markers undifferentiated by ancestry. These results also inform the proportion of health disparities due to genetic risk factors and the magnitude of error in association studies not controlling for local ancestry.

  6. Continuous-Time Mean-Variance Portfolio Selection: A Stochastic LQ Framework

    International Nuclear Information System (INIS)

    Zhou, X.Y.; Li, D.

    2000-01-01

    This paper is concerned with a continuous-time mean-variance portfolio selection model that is formulated as a bicriteria optimization problem. The objective is to maximize the expected terminal return and minimize the variance of the terminal wealth. By putting weights on the two criteria one obtains a single objective stochastic control problem which is however not in the standard form due to the variance term involved. It is shown that this nonstandard problem can be 'embedded' into a class of auxiliary stochastic linear-quadratic (LQ) problems. The stochastic LQ control model proves to be an appropriate and effective framework to study the mean-variance problem in light of the recent development on general stochastic LQ problems with indefinite control weighting matrices. This gives rise to the efficient frontier in a closed form for the original portfolio selection problem

  7. Replica approach to mean-variance portfolio optimization

    Science.gov (United States)

    Varga-Haszonits, Istvan; Caccioli, Fabio; Kondor, Imre

    2016-12-01

    We consider the problem of mean-variance portfolio optimization for a generic covariance matrix subject to the budget constraint and the constraint for the expected return, with the application of the replica method borrowed from the statistical physics of disordered systems. We find that the replica symmetry of the solution does not need to be assumed, but emerges as the unique solution of the optimization problem. We also check the stability of this solution and find that the eigenvalues of the Hessian are positive for r = N/T < 1. The optimal in-sample variance is found to vanish at the critical point, inversely proportional to the divergent estimation error.
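
    The underlying optimization problem (before any replica analysis) has a closed-form solution whenever the covariance matrix is invertible. A minimal sketch with simulated returns, keeping r = N/T well below 1:

        import numpy as np

        def mean_variance_weights(mu, cov, target):
            """Minimize w' cov w subject to w'1 = 1 and w'mu = target (Lagrange multipliers)."""
            ones = np.ones(len(mu))
            inv = np.linalg.inv(cov)
            a, b, c = ones @ inv @ ones, ones @ inv @ mu, mu @ inv @ mu
            d = a * c - b * b
            lam = (c - b * target) / d
            gam = (a * target - b) / d
            return inv @ (lam * ones + gam * mu)

        rng = np.random.default_rng(0)
        n_assets, t_obs = 10, 50                    # r = N/T = 0.2
        returns = rng.normal(0.001, 0.02, (t_obs, n_assets))
        mu = returns.mean(axis=0)
        cov = np.cov(returns, rowvar=False)

        w = mean_variance_weights(mu, cov, target=0.001)
        print(w.round(3), "sum =", round(w.sum(), 6), "in-sample var =", float(w @ cov @ w))

    As r = N/T approaches 1 the sample covariance matrix becomes ill-conditioned and the estimated in-sample variance collapses, which is the regime the replica calculation characterizes.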

  8. Measurement of the speed of sound by observation of the Mach cones in a complex plasma under microgravity conditions

    Energy Technology Data Exchange (ETDEWEB)

    Zhukhovitskii, D. I., E-mail: dmr@ihed.ras.ru; Fortov, V. E.; Molotkov, V. I.; Lipaev, A. M.; Naumkin, V. N. [Joint Institute of High Temperatures, Russian Academy of Sciences, Izhorskaya 13, Bd. 2, 125412 Moscow (Russian Federation); Thomas, H. M. [Research Group Complex Plasma, DLR, Oberpfaffenhofen, 82234 Wessling (Germany); Ivlev, A. V.; Morfill, G. E. [Max-Planck-Institut für extraterrestrische Physik, Giessenbachstrasse, 85748 Garching (Germany); Schwabe, M. [Department of Chemical and Biomolecular Engineering, Graves Lab, D75 Tan Hall, University of California, Berkeley, CA 94720 (United States)

    2015-02-15

    We report the first observation of the Mach cones excited by a larger microparticle (projectile) moving through a cloud of smaller microparticles (dust) in a complex plasma with neon as a buffer gas under microgravity conditions. A collective motion of the dust particles occurs as propagation of the contact discontinuity. The corresponding speed of sound was measured by a special method of the Mach cone visualization. The measurement results are incompatible with the theory of ion acoustic waves. The estimate for the pressure in a strongly coupled Coulomb system and a scaling law for the complex plasma make it possible to derive an evaluation for the speed of sound, which is in a reasonable agreement with the experiments in complex plasmas.

  10. Stable Control of Firing Rate Mean and Variance by Dual Homeostatic Mechanisms.

    Science.gov (United States)

    Cannon, Jonathan; Miller, Paul

    2017-12-01

    Homeostatic processes that provide negative feedback to regulate neuronal firing rates are essential for normal brain function. Indeed, multiple parameters of individual neurons, including the scale of afferent synapse strengths and the densities of specific ion channels, have been observed to change on homeostatic time scales to oppose the effects of chronic changes in synaptic input. This raises the question of whether these processes are controlled by a single slow feedback variable or multiple slow variables. A single homeostatic process providing negative feedback to a neuron's firing rate naturally maintains a stable homeostatic equilibrium with a characteristic mean firing rate; but the conditions under which multiple slow feedbacks produce a stable homeostatic equilibrium have not yet been explored. Here we study a highly general model of homeostatic firing rate control in which two slow variables provide negative feedback to drive a firing rate toward two different target rates. Using dynamical systems techniques, we show that such a control system can be used to stably maintain a neuron's characteristic firing rate mean and variance in the face of perturbations, and we derive conditions under which this happens. We also derive expressions that clarify the relationship between the homeostatic firing rate targets and the resulting stable firing rate mean and variance. We provide specific examples of neuronal systems that can be effectively regulated by dual homeostasis. One of these examples is a recurrent excitatory network, which a dual feedback system can robustly tune to serve as an integrator.

  11. Realized Variance and Market Microstructure Noise

    DEFF Research Database (Denmark)

    Hansen, Peter R.; Lunde, Asger

    2006-01-01

    We study market microstructure noise in high-frequency data and analyze its implications for the realized variance (RV) under a general specification for the noise. We show that kernel-based estimators can unearth important characteristics of market microstructure noise and that a simple kernel-based estimator dominates the RV for the estimation of integrated variance (IV). An empirical analysis of the Dow Jones Industrial Average stocks reveals that market microstructure noise is time-dependent and correlated with increments in the efficient price. This has important implications for volatility estimation based on high-frequency data. Finally, we apply cointegration techniques to decompose transaction prices and bid-ask quotes into an estimate of the efficient price and noise. This framework enables us to study the dynamic effects on transaction prices and quotes caused by changes in the efficient...
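
    The bias that motivates kernel-based estimators is easy to see by simulation. The sketch below uses i.i.d. noise and plain sparse sampling rather than the kernels of the paper:

        import numpy as np

        rng = np.random.default_rng(0)
        n = 23400                                    # one "day" of 1-second efficient prices
        daily_vol = 0.01
        efficient = np.cumsum(rng.normal(0.0, daily_vol / np.sqrt(n), n))
        observed = efficient + rng.normal(0.0, 5e-4, n)      # add i.i.d. microstructure noise

        def realized_variance(prices, step=1):
            returns = np.diff(prices[::step])
            return float(np.sum(returns ** 2))

        print("true IV              :", daily_vol ** 2)
        print("RV, every tick       :", realized_variance(observed))        # dominated by the noise
        print("RV, 5-minute sampling:", realized_variance(observed, 300))   # much closer to the IV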

  12. Investigation of Mach-Zehnder interferometer properties based on PLC technology

    Science.gov (United States)

    Ren, Mei-zhen; Zhang, Jia-shun; An, Jun-ming; Wang, Yue; Wang, Liang-liang; Li, Jian-guang; Wu, Yuan-da; Yin, Xiao-jie; Hu, Xiong-wei

    2018-05-01

    We report investigations of three types of silica-based thermo-optic modulating Mach-Zehnder interferometers (MZIs). They are widely used in optical communication and quantum photonics. Three types of MZIs are fabricated. The waveguide structure and fabrication process are paid special attention. The power consumption is less than 250 mW for all MZIs. The polarization dependent loss (PDL) at the same attenuation using the upper heater is less than that using the lower heater for the three types of MZIs. In addition, it is found that the PDL at the same attenuation increases gradually for π, 2π and 0 phase differences. The measured response time of the three types of MZIs is less than 1.8 ms.

  13. Numerical solutions of unsteady flows with low inlet Mach numbers

    Czech Academy of Sciences Publication Activity Database

    Punčochářová, Petra; Furst, Jiří; Horáček, Jaromír; Kozel, Karel

    2010-01-01

    Roč. 80, č. 8 (2010), s. 1795-1805 ISSN 0378-4754 R&D Projects: GA AV ČR IAA200760613 Institutional research plan: CEZ:AV0Z20760514 Keywords : finite volume method * unsteady flow * low Mach number * viscous compressible fluid Subject RIV: BI - Acoustics Impact factor: 0.812, year: 2010 http://www.sciencedirect.com/science?_ob=MImg&_imagekey=B6V0T-4Y0D67D-1-R&_cdi=5655&_user=640952&_pii=S0378475409003607&_origin=search&_coverDate=04%2F30%2F2010&_sk=999199991&view=c&wchp=dGLbVlb-zSkzk&md5=ed6eaf0a050968ee978714fd54e7f131&ie=/sdarticle.pdf

  14. Spot Variance Path Estimation and its Application to High Frequency Jump Testing

    NARCIS (Netherlands)

    Bos, C.S.; Janus, P.; Koopman, S.J.

    2012-01-01

    This paper considers spot variance path estimation from datasets of intraday high-frequency asset prices in the presence of diurnal variance patterns, jumps, leverage effects, and microstructure noise. We rely on parametric and nonparametric methods. The estimated spot variance path can be used to

  15. ANALISIS PORTOFOLIO RESAMPLED EFFICIENT FRONTIER BERDASARKAN OPTIMASI MEAN-VARIANCE

    OpenAIRE

    Abdurakhman, Abdurakhman

    2008-01-01

    Appropriate asset allocation decisions in portfolio investment can maximize return and/or minimize risk. The method most often used in portfolio optimization is the Markowitz mean-variance method. In practice, this method has the weakness of not being very stable: small changes in the estimated input parameters cause large changes in the portfolio composition. For this reason, a portfolio optimization method was developed that can overcome the instability of the mean-variance method ...

  16. Mean-variance Optimal Reinsurance-investment Strategy in Continuous Time

    OpenAIRE

    Daheng Peng; Fang Zhang

    2017-01-01

    In this paper, the Lagrange method is used to solve the continuous-time mean-variance reinsurance-investment problem. Proportional reinsurance, multiple risky assets and a risk-free asset are considered jointly in the optimal strategy for insurers. By solving the backward stochastic differential equation for the Lagrange multiplier, we get the mean-variance optimal reinsurance-investment strategy and its efficient frontier in explicit forms.

  17. The asymptotic variance of departures in critically loaded queues

    NARCIS (Netherlands)

    Al Hanbali, Ahmad; Mandjes, M.R.H.; Nazarathy, Y.; Whitt, W.

    2011-01-01

    We consider the asymptotic variance of the departure counting process D(t) of the GI/G/1 queue; D(t) denotes the number of departures up to time t. We focus on the case where the system load ϱ equals 1, and prove that the asymptotic variance rate satisfies lim_{t→∞} Var D(t) / t = λ(1 − 2/π)(c_a² + c_s²), where c_a² and c_s² are the squared coefficients of variation of the interarrival and service times.

  18. Coupled bias-variance tradeoff for cross-pose face recognition.

    Science.gov (United States)

    Li, Annan; Shan, Shiguang; Gao, Wen

    2012-01-01

    Subspace-based face representation can be viewed as a regression problem. From this viewpoint, we first revisit the problem of recognizing faces across pose differences, which is a bottleneck in face recognition. Then, we propose a new approach for cross-pose face recognition using a regressor with a coupled bias-variance tradeoff. We found that striking a coupled balance between bias and variance in regression for different poses could improve the regressor-based cross-pose face representation, i.e., the regressor can be more stable against a pose difference. Based on this idea, ridge regression and lasso regression are explored. Experimental results on the CMU PIE, FERET, and Multi-PIE face databases show that the proposed bias-variance tradeoff achieves a considerable improvement in recognition performance.

  19. Monte Carlo variance reduction approaches for non-Boltzmann tallies

    International Nuclear Information System (INIS)

    Booth, T.E.

    1992-12-01

    Quantities that depend on the collective effects of groups of particles cannot be obtained from the standard Boltzmann transport equation. Monte Carlo estimates of these quantities are called non-Boltzmann tallies and have become increasingly important recently. Standard Monte Carlo variance reduction techniques were designed for tallies based on individual particles rather than groups of particles. Experience with non-Boltzmann tallies and analog Monte Carlo has demonstrated the severe limitations of analog Monte Carlo for many non-Boltzmann tallies. In fact, many calculations absolutely require variance reduction methods to achieve practical computation times. Three different approaches to variance reduction for non-Boltzmann tallies are described and shown to be unbiased. The advantages and disadvantages of each of the approaches are discussed

  20. An accurate Rb density measurement method for a plasma wakefield accelerator experiment using a novel Rb reservoir

    CERN Document Server

    Öz, E.; Muggli, P.

    2016-01-01

    A method to accurately measure the density of Rb vapor is described. We plan on using this method for the Advanced Wakefield (AWAKE) project at CERN, which will be the world's first proton-driven plasma wakefield experiment. The method is similar to the hook method and has been described in great detail in the work by W. Tendell Hill et al. In this method a cosine fit is applied to the interferogram to obtain a relative accuracy on the order of 1% for the vapor density-length product. A single-mode, fiber-based Mach-Zehnder interferometer will be built and used near the ends of the 10 meter-long AWAKE plasma source to be able to make accurate relative density measurements between these two locations. This can then be used to infer the vapor density gradient along the AWAKE plasma source and also change it to the value desired for the plasma wakefield experiment. Here we describe the plan in detail and show preliminary results obtained using a prot...

  1. The factors controlling species density in herbaceous plant communities: An assessment

    Science.gov (United States)

    Grace, J.B.

    1999-01-01

    This paper evaluates both the ideas and empirical evidence pertaining to the control of species density in herbaceous plant communities. While most theoretical discussions of species density have emphasized the importance of habitat productivity and disturbance regimes, many other factors (e.g. species pools, plant litter accumulation, plant morphology) have been proposed to be important. A review of literature presenting observations on the density of species in small plots (in the vicinity of a few square meters or less), as well as experimental studies, suggests several generalizations: (1) Available data are consistent with an underlying unimodal relationship between species density and total community biomass. While variance in species density is often poorly explained by predictor variables, there is strong evidence that high levels of community biomass are antagonistic to high species density. (2) Community biomass is just one of several factors affecting variations in species density. Multivariate analyses typically explain more than twice as much variance in species density as can be explained by community biomass alone. (3) Disturbance has important and sometimes complex effects on species density. In general, the evidence is consistent with the intermediate disturbance hypothesis but exceptions exist and effects can be complex. (4) Gradients in the species pool can have important influences on patterns of species density. Evidence is mounting that a considerable amount of the observed variability in species density within a landscape or region may result from environmental effects on the species pool. (5) Several additional factors deserve greater consideration, including time lags, species composition, plant morphology, plant density and soil microbial effects. Based on the available evidence, a conceptual model of the primary factors controlling species density is presented here. This model suggests that species density is controlled by the effects of

  2. Temperature effects of Mach-Zehnder interferometer using a liquid crystal-filled fiber

    DEFF Research Database (Denmark)

    Ho, Bo-Yan; Su, Hsien-Pin; Tseng, Yu-Pei

    2015-01-01

    We demonstrated a simple and cost-effective method to fabricate an all-fiber Mach-Zehnder interferometer (MZI) based on cascading a short section of liquid crystal (LC)-filled hollow-optic fiber (HOF) between two single-mode fibers by using an automatic splicing technique. The transmission spectra of the proposed MZI with different LC-infiltrated lengths were measured and the temperature-induced wavelength shifts of the interference fringes were recorded. Both blue shift and red shift were observed, depending on the temperature range. Based on our experimental results, interference fringe was observed...

  3. Signal transmission in a human body medium-based body sensor network using a Mach-Zehnder electro-optical sensor.

    Science.gov (United States)

    Song, Yong; Hao, Qun; Zhang, Kai; Wang, Jingwen; Jin, Xuefeng; Sun, He

    2012-11-30

    The signal transmission technology based on the human body medium offers significant advantages in Body Sensor Networks (BSNs) used for healthcare and the other related fields. In previous works we have proposed a novel signal transmission method based on the human body medium using a Mach-Zehnder electro-optical (EO) sensor. In this paper, we present a signal transmission system based on the proposed method, which consists of a transmitter, a Mach-Zehnder EO sensor and a corresponding receiving circuit. Meanwhile, in order to verify the frequency response properties and determine the suitable parameters of the developed system, in-vivo measurements have been implemented under conditions of different carrier frequencies, baseband frequencies and signal transmission paths. Results indicate that the proposed system will help to achieve reliable and high speed signal transmission of BSN based on the human body medium.

  4. An elementary components of variance analysis for multi-center quality control

    International Nuclear Information System (INIS)

    Munson, P.J.; Rodbard, D.

    1977-01-01

    The serious variability of RIA results from different laboratories indicates the need for multi-laboratory collaborative quality control (QC) studies. Statistical analysis methods for such studies using an 'analysis of variance with components of variance estimation' are discussed. This technique allocates the total variance into components corresponding to between-laboratory, between-assay, and residual or within-assay variability. Components of variance analysis also provides an intelligent way to combine the results of several QC samples run at different levels, from which we may decide if any component varies systematically with dose level; if not, pooling of estimates becomes possible. We consider several possible relationships of standard deviation to the laboratory mean. Each relationship corresponds to an underlying statistical model, and an appropriate analysis technique. Tests for homogeneity of variance may be used to determine if an appropriate model has been chosen, although the exact functional relationship of standard deviation to lab mean may be difficult to establish. Appropriate graphical display of the data aids in visual understanding of the data. A plot of the ranked standard deviation vs. ranked laboratory mean is a convenient way to summarize a QC study. This plot also allows determination of the rank correlation, which indicates a net relationship of variance to laboratory mean. (orig.)
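
    For the balanced one-way layout (laboratories only, ignoring the assay level for brevity), the classical method-of-moments decomposition looks like this with made-up data:

        import numpy as np

        rng = np.random.default_rng(0)
        n_labs, n_results = 8, 10
        lab_bias = rng.normal(0.0, 2.0, n_labs)                                 # true between-lab SD = 2
        data = lab_bias[:, None] + rng.normal(0.0, 1.0, (n_labs, n_results))    # true within-lab SD = 1

        lab_means = data.mean(axis=1)
        msb = n_results * np.sum((lab_means - data.mean()) ** 2) / (n_labs - 1)        # between-lab mean square
        msw = np.sum((data - lab_means[:, None]) ** 2) / (n_labs * (n_results - 1))    # within-lab mean square

        within_var = msw
        between_var = max((msb - msw) / n_results, 0.0)       # ANOVA estimator of the laboratory component
        print(f"between-lab variance ~ {between_var:.2f}, within-lab variance ~ {within_var:.2f}")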

  5. Comparison between Hydrogen and Methane Fuels in a 3-D Scramjet at Mach 8

    Science.gov (United States)

    2016-06-24

    scramjet using a cavity-based flame holder in the T4 shock tunnel at The University of Queensland, as well as a companion fundamental CFD study. (Report title: Comparison between hydrogen, methane and ethylene fuels in a 3-D Scramjet at Mach 8; Professor Michael K. Smart, Chair of Hypersonic Propulsion.)

  6. Explicit formulas for the variance of discounted life-cycle cost

    International Nuclear Information System (INIS)

    Noortwijk, Jan M. van

    2003-01-01

    In life-cycle costing analyses, optimal design is usually achieved by minimising the expected value of the discounted costs. As well as the expected value, the corresponding variance may be useful for estimating, for example, the uncertainty bounds of the calculated discounted costs. However, general explicit formulas for calculating the variance of the discounted costs over an unbounded time horizon are not yet available. In this paper, explicit formulas for this variance are presented. They can be easily implemented in software to optimise structural design and maintenance management. The use of the mathematical results is illustrated with some examples
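
    For the simplest special case, renewals arriving as a Poisson process with a fixed cost per renewal, both moments have elementary closed forms (Campbell's theorem), which a Monte Carlo check reproduces; the paper itself treats the general renewal setting:

        import numpy as np

        rng = np.random.default_rng(0)
        rate, cost, r = 0.2, 100.0, 0.05      # renewals per year, cost per renewal, discount rate
        horizon, reps = 200.0, 20000          # horizon long enough to stand in for "unbounded"

        totals = np.empty(reps)
        for i in range(reps):
            n = rng.poisson(rate * horizon)
            times = rng.uniform(0.0, horizon, n)           # Poisson event times (order irrelevant here)
            totals[i] = np.sum(cost * np.exp(-r * times))  # continuously discounted total cost

        print(f"simulated:   mean = {totals.mean():.0f}, variance = {totals.var():.0f}")
        print(f"closed form: mean = {cost * rate / r:.0f}, variance = {cost**2 * rate / (2 * r):.0f}")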

  7. How does variance in fertility change over the demographic transition?

    Science.gov (United States)

    Hruschka, Daniel J; Burger, Oskar

    2016-04-19

    Most work on the human fertility transition has focused on declines in mean fertility. However, understanding changes in the variance of reproductive outcomes can be equally important for evolutionary questions about the heritability of fertility, individual determinants of fertility and changing patterns of reproductive skew. Here, we document how variance in completed fertility among women (45-49 years) differs across 200 surveys in 72 low- to middle-income countries where fertility transitions are currently in progress at various stages. Nearly all (91%) of samples exhibit variance consistent with a Poisson process of fertility, which places systematic, and often severe, theoretical upper bounds on the proportion of variance that can be attributed to individual differences. In contrast to the pattern of total variance, these upper bounds increase from high- to mid-fertility samples, then decline again as samples move from mid to low fertility. Notably, the lowest fertility samples often deviate from a Poisson process. This suggests that as populations move to low fertility their reproduction shifts from a rate-based process to a focus on an ideal number of children. We discuss the implications of these findings for predicting completed fertility from individual-level variables. © 2016 The Author(s).

  8. A two-dimensional, TVD numerical scheme for inviscid, high Mach number flows in chemical equilibrium

    Science.gov (United States)

    Eberhardt, S.; Palmer, G.

    1986-01-01

    A new algorithm has been developed for hypervelocity flows in chemical equilibrium. Solutions have been achieved for Mach numbers up to 15 with no adverse effect on convergence. Two methods of coupling an equilibrium chemistry package have been tested, with the simpler method proving to be more robust. Improvements in boundary conditions are still required for a production-quality code.

  9. Mean-variance Optimal Reinsurance-investment Strategy in Continuous Time

    Directory of Open Access Journals (Sweden)

    Daheng Peng

    2017-10-01

    Full Text Available In this paper, Lagrange method is used to solve the continuous-time mean-variance reinsurance-investment problem. Proportional reinsurance, multiple risky assets and risk-free asset are considered synthetically in the optimal strategy for insurers. By solving the backward stochastic differential equation for the Lagrange multiplier, we get the mean-variance optimal reinsurance-investment strategy and its effective frontier in explicit forms.

  10. Increased gender variance in autism spectrum disorders and attention deficit hyperactivity disorder.

    Science.gov (United States)

    Strang, John F; Kenworthy, Lauren; Dominska, Aleksandra; Sokoloff, Jennifer; Kenealy, Laura E; Berl, Madison; Walsh, Karin; Menvielle, Edgardo; Slesaransky-Poe, Graciela; Kim, Kyung-Eun; Luong-Tran, Caroline; Meagher, Haley; Wallace, Gregory L

    2014-11-01

    Evidence suggests over-representation of autism spectrum disorders (ASDs) and behavioral difficulties among people referred for gender issues, but rates of the wish to be the other gender (gender variance) among different neurodevelopmental disorders are unknown. This chart review study explored rates of gender variance as reported by parents on the Child Behavior Checklist (CBCL) in children with different neurodevelopmental disorders: ASD (N = 147, 24 females and 123 males), attention deficit hyperactivity disorder (ADHD; N = 126, 38 females and 88 males), or a medical neurodevelopmental disorder (N = 116, 57 females and 59 males), were compared with two non-referred groups [control sample (N = 165, 61 females and 104 males) and non-referred participants in the CBCL standardization sample (N = 1,605, 754 females and 851 males)]. Significantly greater proportions of participants with ASD (5.4%) or ADHD (4.8%) had parent reported gender variance than in the combined medical group (1.7%) or non-referred comparison groups (0-0.7%). As compared to non-referred comparisons, participants with ASD were 7.59 times more likely to express gender variance; participants with ADHD were 6.64 times more likely to express gender variance. The medical neurodevelopmental disorder group did not differ from non-referred samples in likelihood to express gender variance. Gender variance was related to elevated emotional symptoms in ADHD, but not in ASD. After accounting for sex ratio differences between the neurodevelopmental disorder and non-referred comparison groups, gender variance occurred equally in females and males.

  11. An elementary components of variance analysis for multi-centre quality control

    International Nuclear Information System (INIS)

    Munson, P.J.; Rodbard, D.

    1978-01-01

    The serious variability of RIA results from different laboratories indicates the need for multi-laboratory collaborative quality-control (QC) studies. Simple graphical display of data in the form of histograms is useful but insufficient. The paper discusses statistical analysis methods for such studies using an 'analysis of variance with components of variance estimation'. This technique allocates the total variance into components corresponding to between-laboratory, between-assay, and residual or within-assay variability. Problems with RIA data, e.g. severe non-uniformity of variance and/or departure from a normal distribution, violate some of the usual assumptions underlying analysis of variance. In order to correct these problems, it is often necessary to transform the data before analysis by using a logarithmic, square-root, percentile, ranking, RIDIT, 'Studentizing' or other transformation. Non-metric transformations such as ranks or percentiles protect against the undue influence of outlying observations, but discard much intrinsic information. Several possible relationships of standard deviation to the laboratory mean are considered. Each relationship corresponds to an underlying statistical model and an appropriate analysis technique. Tests for homogeneity of variance may be used to determine whether an appropriate model has been chosen, although the exact functional relationship of standard deviation to laboratory mean may be difficult to establish. Appropriate graphical display aids visual understanding of the data. A plot of the ranked standard deviation versus ranked laboratory mean is a convenient way to summarize a QC study. This plot also allows determination of the rank correlation, which indicates a net relationship of variance to laboratory mean.

  12. Using variance structure to quantify responses to perturbation in fish catches

    Science.gov (United States)

    Vidal, Tiffany E.; Irwin, Brian J.; Wagner, Tyler; Rudstam, Lars G.; Jackson, James R.; Bence, James R.

    2017-01-01

    We present a case study evaluation of gill-net catches of Walleye Sander vitreus to assess potential effects of large-scale changes in Oneida Lake, New York, including the disruption of trophic interactions by double-crested cormorants Phalacrocorax auritus and invasive dreissenid mussels. We used the empirical long-term gill-net time series and a negative binomial linear mixed model to partition the variability in catches into spatial and coherent temporal variance components, hypothesizing that variance partitioning can help quantify spatiotemporal variability and determine whether variance structure differs before and after large-scale perturbations. We found that the mean catch and the total variability of catches decreased following perturbation but that not all sampling locations responded in a consistent manner. There was also evidence of some spatial homogenization concurrent with a restructuring of the relative productivity of individual sites. Specifically, offshore sites generally became more productive following the estimated break point in the gill-net time series. These results provide support for the idea that variance structure is responsive to large-scale perturbations; therefore, variance components have potential utility as statistical indicators of response to a changing environment more broadly. The modeling approach described herein is flexible and would be transferable to other systems and metrics. For example, variance partitioning could be used to examine responses to alternative management regimes, to compare variability across physiographic regions, and to describe differences among climate zones. Understanding how individual variance components respond to perturbation may yield finer-scale insights into ecological shifts than focusing on patterns in the mean responses or total variability alone.

  13. A mean–variance objective for robust production optimization in uncertain geological scenarios

    DEFF Research Database (Denmark)

    Capolei, Andrea; Suwartadi, Eka; Foss, Bjarne

    2014-01-01

    directly. In the mean–variance bi-criterion objective function, risk appears directly; the formulation also considers an ensemble of reservoir models and has robust optimization as a special extreme case. The mean–variance objective is common for portfolio optimization problems in finance. The Markowitz portfolio...... optimization problem is the original and simplest example of a mean–variance criterion for mitigating risk. Risk is mitigated in oil production by including both the expected NPV (mean of NPV) and the risk (variance of NPV) for the ensemble of possible reservoir models. With the inclusion of the risk...
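    The following fragment is a minimal sketch of such a mean–variance bi-criterion objective over an ensemble of reservoir models; the simulator stand-in npv_per_model, the ensemble parameters and the risk-aversion weight are hypothetical placeholders, not anything taken from the paper.

```python
# Maximize mean(NPV) - lambda * var(NPV) across an ensemble of model realizations.
import numpy as np
from scipy.optimize import minimize

def npv_per_model(u, model_params):
    # Toy stand-in for an ensemble of reservoir simulations (quadratic response in u).
    return np.array([10.0 - (p * (u - 1.0) ** 2).sum() for p in model_params])

def mean_variance_objective(u, model_params, risk_aversion=0.5):
    npvs = npv_per_model(u, model_params)
    # Reward expected NPV, penalize its spread across the ensemble.
    return npvs.mean() - risk_aversion * npvs.var(ddof=1)

model_params = np.random.default_rng(1).uniform(0.5, 1.5, size=(20, 3))  # 20 ensemble members
res = minimize(lambda u: -mean_variance_objective(u, model_params), x0=np.zeros(3))
print("optimal controls:", res.x, "objective:", -res.fun)
```

    Sweeping the risk_aversion weight traces out the trade-off between expected NPV and its ensemble variance; setting it to zero recovers the purely expectation-based (risk-neutral) formulation that the abstract contrasts with.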

  14. Third-order linearization for self-beating filtered microwave photonic systems using a dual parallel Mach-Zehnder modulator.

    Science.gov (United States)

    Pérez, Daniel; Gasulla, Ivana; Capmany, José; Fandiño, Javier S; Muñoz, Pascual; Alavi, Hossein

    2016-09-05

    We develop, analyze and apply a linearization technique based on a dual parallel Mach-Zehnder modulator to self-beating microwave photonics systems. The approach enables broadband low-distortion transmission and reception at the expense of a moderate electrical power penalty, yielding a small optical power penalty (<1 dB).

  15. Asymptotic variance of grey-scale surface area estimators

    DEFF Research Database (Denmark)

    Svane, Anne Marie

    Grey-scale local algorithms have been suggested as a fast way of estimating surface area from grey-scale digital images. Their asymptotic mean has already been described. In this paper, the asymptotic behaviour of the variance is studied in isotropic and sufficiently smooth settings, resulting...... in a general asymptotic bound. For compact convex sets with nowhere vanishing Gaussian curvature, the asymptotics can be described more explicitly. As in the case of volume estimators, the variance is decomposed into a lattice sum and an oscillating term of at most the same magnitude....

  16. The dawn–dusk asymmetry of ion density in the dayside magnetosheath and its annual variability measured by THEMIS

    Directory of Open Access Journals (Sweden)

    A. P. Dimmock

    2016-05-01

    Full Text Available The local and global plasma properties in the magnetosheath play a fundamental role in regulating solar wind–magnetosphere coupling processes. However, the magnetosheath is a complex region to characterise as it has been shown theoretically, observationally and through simulations that plasma properties are inhomogeneous, non-isotropic and asymmetric about the Sun-Earth line. To complicate matters, dawn–dusk asymmetries are sensitive to various changes in the upstream conditions on an array of timescales. The present paper focuses exclusively on dawn–dusk asymmetries, in particular that of ion density. We present a statistical study using THEMIS data of the dawn–dusk asymmetry of ion density in the dayside magnetosheath and its long-term variations between 2009 and 2015. Our data suggest that, in general, the dawn-side densities are higher, and the asymmetry grows from noon towards the terminator. This trend was only observed close to the magnetopause and not in the central magnetosheath. In addition, between 2009 and 2015, the largest asymmetry occurred around 2009, decreasing thereafter. We also concluded that no single parameter such as the Alfvén Mach number, plasma velocity, or the interplanetary magnetic field strength could exclusively account for the observed asymmetry. Interestingly, the dependence on Alfvén Mach number differed between data sets from different time periods. The asymmetry obtained in the THEMIS data set is consistent with previous studies, but the solar cycle dependence was opposite to an analysis based on IMP-8 data. We discuss the physical mechanisms for this asymmetry and its temporal variation. We also put the current results into context with the existing literature in order to relate THEMIS era measurements to those made during earlier solar cycles.

  17. Spatial mapping of humeral head bone density.

    Science.gov (United States)

    Alidousti, Hamidreza; Giles, Joshua W; Emery, Roger J H; Jeffers, Jonathan

    2017-09-01

    Short-stem humeral replacements achieve fixation by anchoring to the metaphyseal trabecular bone. Fixing the implant in high-density bone can provide strong fixation and reduce the risk of loosening. However, there is a lack of data mapping the bone density distribution in the proximal humerus. The aim of the study was to investigate the bone density in the proximal humerus. Eight computed tomography scans of healthy cadaveric humeri were used to map bone density distribution in the humeral head. The proximal humeral head was divided into 12 slices parallel to the humeral anatomic neck. Each slice was then divided into 4 concentric circles. The slices below the anatomic neck, where short-stem implants have their fixation features, were further divided into radial sectors. The average bone density for each of these regions was calculated, and regions of interest were compared using a repeated-measures analysis of variance with a preset level of significance. Bone density was found to decrease from proximal to distal regions, with the majority of higher bone density proximal to the anatomic neck of the humerus. Bone density increases from central to peripheral regions, where cortical bone eventually occupies the space. A higher bone density distribution in the medial calcar region was also observed. This study indicates that it is advantageous with respect to implant fixation to preserve some bone above the anatomic neck and epiphyseal plate and to use the denser bone at the periphery. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  18. Digital integrated control of a Mach 2.5 mixed-compression supersonic inlet and an augmented mixed-flow turbofan engine

    Science.gov (United States)

    Batterton, P. G.; Arpasi, D. J.; Baumbick, R. J.

    1974-01-01

    A digitally implemented integrated inlet-engine control system was designed and tested on a mixed-compression, axisymmetric, Mach 2.5, supersonic inlet with 45 percent internal supersonic area contraction and a TF30-P-3 augmented turbofan engine. The control matched engine airflow to available inlet airflow. By monitoring inlet terminal shock position and over-board bypass door command, the control adjusted engine speed so that in steady state, the shock would be at the desired location and the overboard bypass doors would be closed. During engine-induced transients, such as augmentor light-off and cutoff, the inlet operating point was momentarily changed to a more supercritical point to minimize unstarts. The digital control also provided automatic inlet restart. A variable inlet throat bleed control, based on throat Mach number, provided additional inlet stability margin.

  19. Estimation of noise-free variance to measure heterogeneity.

    Directory of Open Access Journals (Sweden)

    Tilo Winkler

    Full Text Available Variance is a statistical parameter used to characterize heterogeneity or variability in data sets. However, measurements commonly include noise, as random errors superimposed on the actual value, which may substantially increase the variance compared to a noise-free data set. Our aim was to develop and validate a method to estimate noise-free spatial heterogeneity of pulmonary perfusion using dynamic positron emission tomography (PET) scans. On theoretical grounds, we demonstrate a linear relationship between the total variance of a data set derived from averages of n multiple measurements, and the reciprocal of n. Using multiple measurements with varying n yields estimates of the linear relationship including the noise-free variance as the constant parameter. In PET images, n is proportional to the number of registered decay events, and the variance of the image is typically normalized by the square of its mean value, yielding a coefficient of variation squared (CV²). The method was evaluated with a Jaszczak phantom as reference spatial heterogeneity (CVr²) for comparison with our estimate of noise-free or 'true' heterogeneity (CVt²). We found that CVt² was only 5.4% higher than CVr². Additional evaluations were conducted on 38 PET scans of pulmonary perfusion using 13NN-saline injection. The mean CVt² was 0.10 (range: 0.03-0.30), while the mean CV² including noise was 0.24 (range: 0.10-0.59). CVt² was on average 41.5% of the CV² measured including noise (range: 17.8-71.2%). The reproducibility of CVt² was evaluated using three repeated PET scans from five subjects. Individual CVt² were within 16% of each subject's mean and paired t-tests revealed no difference among the results from the three consecutive PET scans. In conclusion, our method provides reliable noise-free estimates of CVt² in PET scans, and may be useful for similar statistical problems in experimental data.
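    A hedged sketch of the regression idea (simulated data and a made-up noise level, not the authors' PET processing): the measured CV² of an n-average image is linear in 1/n, so the intercept of a straight-line fit against 1/n estimates the noise-free heterogeneity.

```python
# Fit CV^2 against 1/n; the intercept approximates the noise-free CV^2.
import numpy as np

rng = np.random.default_rng(2)
true_signal = rng.gamma(shape=20.0, scale=1.0, size=5000)   # "true" regional values
cv2_true = true_signal.var() / true_signal.mean() ** 2

counts = np.array([1, 2, 4, 8, 16])          # number of averaged measurements n
cv2_measured = []
for n in counts:
    # Averaging n noisy measurements shrinks the noise variance by 1/n.
    noisy = true_signal + rng.normal(0.0, 3.0 / np.sqrt(n), true_signal.size)
    cv2_measured.append(noisy.var() / noisy.mean() ** 2)

slope, intercept = np.polyfit(1.0 / counts, cv2_measured, deg=1)
print(f"true CV^2 = {cv2_true:.4f}, estimated noise-free CV^2 = {intercept:.4f}")
```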

  20. Optical Coupling Structures of Fiber-Optic Mach-Zehnder Interferometers Using CO2 Laser Irradiation

    Directory of Open Access Journals (Sweden)

    Chien-Hsing Chen

    2014-01-01

    Full Text Available The Mach-Zehnder interferometer (MZI) can be used to test changes in the refractive index of sucrose solutions at different concentrations. However, the popularity of this measurement tool is limited by its substantial size and lack of portability. Therefore, the MZI was integrated with a small fiber-optic waveguide component to develop an interferometer with fiber-optic characteristics, specifically a fiber-optic Mach-Zehnder interferometer (FO-MZI). Optical fiber must be processed to fabricate two optical coupling structures. The two optical coupling structures are a duplicate of the beam splitter, an optical component of the interferometer. Therefore, when the sensor length and the two optical coupling structures vary, the time or path for optical transmission in the sensor changes, thereby influencing the back-end interference signals. The researchers successfully developed an asymmetrical FO-MZI with sensing abilities. The sensor length and the spacing between the troughs of the interference signal exhibited an inverse relationship. In addition, image analysis was employed to examine the size-matching relationship between various sensor lengths and the coupling and decoupling structure. Furthermore, the spectral wavelength shift results measured using a refractive index sensor indicate that FO-MZIs with a sensor length of 38 mm exhibited excellent sensitivity, measuring 59.7 nm/RIU.

  1. A characterization of optimal portfolios under the tail mean-variance criterion

    OpenAIRE

    Owadally, I.; Landsman, Z.

    2013-01-01

    The tail mean–variance model was recently introduced for use in risk management and portfolio choice; it involves a criterion that focuses on the risk of rare but large losses, which is particularly important when losses have heavy-tailed distributions. If returns or losses follow a multivariate elliptical distribution, the use of risk measures that satisfy certain well-known properties is equivalent to risk management in the classical mean–variance framework. The tail mean–variance criterion...

  2. Gender variance in childhood and sexual orientation in adulthood: a prospective study.

    Science.gov (United States)

    Steensma, Thomas D; van der Ende, Jan; Verhulst, Frank C; Cohen-Kettenis, Peggy T

    2013-11-01

    Several retrospective and prospective studies have reported on the association between childhood gender variance and sexual orientation and gender discomfort in adulthood. In most of the retrospective studies, samples were drawn from the general population. The samples in the prospective studies consisted of clinically referred children. In understanding the extent to which the association applies for the general population, prospective studies using random samples are needed. This prospective study examined the association between childhood gender variance, and sexual orientation and gender discomfort in adulthood in the general population. In 1983, we measured childhood gender variance, in 406 boys and 473 girls. In 2007, sexual orientation and gender discomfort were assessed. Childhood gender variance was measured with two items from the Child Behavior Checklist/4-18. Sexual orientation was measured for four parameters of sexual orientation (attraction, fantasy, behavior, and identity). Gender discomfort was assessed by four questions (unhappiness and/or uncertainty about one's gender, wish or desire to be of the other gender, and consideration of living in the role of the other gender). For both men and women, the presence of childhood gender variance was associated with homosexuality for all four parameters of sexual orientation, but not with bisexuality. The report of adulthood homosexuality was 8 to 15 times higher for participants with a history of gender variance (10.2% to 12.2%), compared to participants without a history of gender variance (1.2% to 1.7%). The presence of childhood gender variance was not significantly associated with gender discomfort in adulthood. This study clearly showed a significant association between childhood gender variance and a homosexual sexual orientation in adulthood in the general population. In contrast to the findings in clinically referred gender-variant children, the presence of a homosexual sexual orientation in

  3. Relaxation of potential, flows, and density in the edge plasma of Castor tokamak

    International Nuclear Information System (INIS)

    Hron, M.; Weinzettl, V.; Dufkova, E.; Duran, I.; Stoeckel, J.; Hidalgo, C.

    2004-01-01

    Decay times of plasma flows and plasma profiles have been measured after a sudden biasing switch-off in experiments on the Castor tokamak. A biased electrode has been used to polarize the edge plasma. The edge plasma potential and flows have been characterized by means of Langmuir and Mach probes, the radiation was measured using an array of bolometers. Potential profiles and poloidal flows can be well fitted by an exponential decay time in the range of 10 - 30 μs when the electrode biasing is turned off in the Castor tokamak. The radiation shows a slower time scale (about 1 ms), which is linked to the evolution in the plasma density and particle confinement. (authors)

  4. 29 CFR 1926.2 - Variances from safety and health standards.

    Science.gov (United States)

    2010-07-01

    ...from safety and health standards. (a) Variances from standards which are, or may be, published in this... (29 CFR 1926.2, Labor Regulations Relating to Labor (Continued), Occupational Safety and Health Administration, 2010-07-01)

  5. Allowing variance may enlarge the safe operating space for exploited ecosystems.

    Science.gov (United States)

    Carpenter, Stephen R; Brock, William A; Folke, Carl; van Nes, Egbert H; Scheffer, Marten

    2015-11-17

    Variable flows of food, water, or other ecosystem services complicate planning. Management strategies that decrease variability and increase predictability may therefore be preferred. However, actions to decrease variance over short timescales (2-4 y), when applied continuously, may lead to long-term ecosystem changes with adverse consequences. We investigated the effects of managing short-term variance in three well-understood models of ecosystem services: lake eutrophication, harvest of a wild population, and yield of domestic herbivores on a rangeland. In all cases, actions to decrease variance can increase the risk of crossing critical ecosystem thresholds, resulting in less desirable ecosystem states. Managing to decrease short-term variance creates ecosystem fragility by changing the boundaries of safe operating spaces, suppressing information needed for adaptive management, cancelling signals of declining resilience, and removing pressures that may build tolerance of stress. Thus, the management of variance interacts strongly and inseparably with the management of resilience. By allowing for variation, learning, and flexibility while observing change, managers can detect opportunities and problems as they develop while sustaining the capacity to deal with them.

  6. Temperature variance study in Monte-Carlo photon transport theory

    International Nuclear Information System (INIS)

    Giorla, J.

    1985-10-01

    We study different Monte-Carlo methods for solving radiative transfer problems, and particularly Fleck's Monte-Carlo method. We first give the different time-discretization schemes and the corresponding stability criteria. Then we write the temperature variance as a function of the variances of temperature and absorbed energy at the previous time step. Finally, we obtain some stability criteria for the Monte-Carlo method in the stationary case.

  7. Nozzle design study for a quasi-axisymmetric scramjet-powered vehicle at Mach 7.9 flight conditions

    Science.gov (United States)

    Tanimizu, Katsuyoshi; Mee, David J.; Stalker, Raymond J.; Jacobs, Peter A.

    2013-09-01

    A nozzle shape optimization study for a quasi-axisymmetric scramjet has been performed for a Mach 7.9 operating condition with hydrogen fuel, aiming at the application of a hypersonic airbreathing vehicle. In this study, the nozzle geometry which is parameterized by a set of design variables, is optimized for the single objective of maximum net thrust using an in-house CFD solver for inviscid flowfields with a simple force prediction methodology. The combustion is modelled using a simple chemical reaction code. The effects of the nozzle design on the overall vehicle performance are discussed. For the present geometry, net thrust is achieved for the optimized vehicle design. The results of the nozzle-optimization study show that performance is limited by the nozzle area ratio that can be incorporated into the vehicle without leading to too large a base diameter of the vehicle and increasing the external drag of the vehicle. This study indicates that it is very difficult to achieve positive thrust at Mach 7.9 using the basic geometry investigated.

  8. Study of the variance of a Monte Carlo calculation. Application to weighting; Etude de la variance d'un calcul de Monte Carlo. Application a la ponderation

    Energy Technology Data Exchange (ETDEWEB)

    Lanore, Jeanne-Marie [Commissariat a l' Energie Atomique - CEA, Centre d' Etudes Nucleaires de Fontenay-aux-Roses, Direction des Piles Atomiques, Departement des Etudes de Piles, Service d' Etudes de Protections de Piles (France)

    1969-04-15

    One of the main difficulties in Monte Carlo computations is the estimation of the variance of the results. Generally, only an apparent variance can be observed over a few calculations, often very different from the actual variance. By studying a large number of short calculations, the authors have tried to evaluate the real variance, and then to apply the obtained results to the optimization of the computations. The program used is the Poker one-dimensional Monte Carlo program. Calculations are performed in two types of fictitious environments: a body with constant cross section, without absorption, where all collisions are elastic and isotropic; a body with variable cross section (presenting a very pronounced peak and hole), with anisotropy for high-energy elastic collisions, and with the possibility of inelastic collisions (this body presents all the features that can appear in a real case)

  9. Beyond CMB cosmic variance limits on reionization with the polarized Sunyaev-Zel'dovich effect

    Science.gov (United States)

    Meyers, Joel; Meerburg, P. Daniel; van Engelen, Alexander; Battaglia, Nicholas

    2018-05-01

    Upcoming cosmic microwave background (CMB) surveys will soon make the first detection of the polarized Sunyaev-Zel'dovich effect, the linear polarization generated by the scattering of CMB photons on the free electrons present in collapsed objects. Measurement of this polarization along with knowledge of the electron density of the objects allows a determination of the quadrupolar temperature anisotropy of the CMB as viewed from the space-time location of the objects. Maps of these remote temperature quadrupoles have several cosmological applications. Here we propose a new application: the reconstruction of the cosmological reionization history. We show that with quadrupole measurements out to redshift 3, constraints on the mean optical depth can be improved by an order of magnitude beyond the CMB cosmic variance limit.

  10. Adjustment of heterogenous variances and a calving year effect in ...

    African Journals Online (AJOL)

    Data at the beginning and at the end of the lactation period have higher variances than tests in the middle of the lactation. Furthermore, first lactations have a lower mean and variance compared to second and third lactations. This is a deviation from the basic assumptions required for the application of repeatability models.

  11. Estimating Predictive Variance for Statistical Gas Distribution Modelling

    International Nuclear Information System (INIS)

    Lilienthal, Achim J.; Asadi, Sahar; Reggente, Matteo

    2009-01-01

    Recent publications in statistical gas distribution modelling have proposed algorithms that model mean and variance of a distribution. This paper argues that estimating the predictive concentration variance entails not only a gradual improvement but is rather a significant step to advance the field. This is, first, since the models much better fit the particular structure of gas distributions, which exhibit strong fluctuations with considerable spatial variations as a result of the intermittent character of gas dispersal. Second, because estimating the predictive variance allows to evaluate the model quality in terms of the data likelihood. This offers a solution to the problem of ground truth evaluation, which has always been a critical issue for gas distribution modelling. It also enables solid comparisons of different modelling approaches, and provides the means to learn meta parameters of the model, to determine when the model should be updated or re-initialised, or to suggest new measurement locations based on the current model. We also point out directions of related ongoing or potential future research work.
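    As a small illustration of the evaluation idea promoted above (all numbers invented, not the authors' models or data), predictions that include a variance can be scored by the Gaussian log-likelihood they assign to held-out concentration measurements:

```python
# Score two hypothetical gas-distribution models by predictive log-likelihood.
import numpy as np

def gaussian_log_likelihood(y, pred_mean, pred_var):
    pred_var = np.maximum(pred_var, 1e-12)          # guard against zero variance
    return -0.5 * np.sum(np.log(2.0 * np.pi * pred_var) + (y - pred_mean) ** 2 / pred_var)

# Held-out measurements and two competing models' predictions (made up).
y = np.array([0.8, 1.2, 0.4, 2.8])
model_a = dict(mean=np.array([0.9, 1.1, 0.5, 1.8]), var=np.array([0.05, 0.05, 0.05, 0.30]))
model_b = dict(mean=np.array([0.9, 1.1, 0.5, 1.8]), var=np.full(4, 0.05))

for name, m in [("A (heteroscedastic)", model_a), ("B (constant variance)", model_b)]:
    print(name, gaussian_log_likelihood(y, m["mean"], m["var"]))
```

    With these made-up values, the model that widens its predictive variance where its error is large attains the higher likelihood, which is the behaviour a variance-aware gas distribution model is rewarded for under this criterion.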

  12. SIERRA Low Mach Module: Fuego Theory Manual Version 4.44

    Energy Technology Data Exchange (ETDEWEB)

    Sierra Thermal/Fluid Team

    2017-04-01

    The SIERRA Low Mach Module: Fuego along with the SIERRA Participating Media Radiation Module: Syrinx, henceforth referred to as Fuego and Syrinx, respectively, are the key elements of the ASCI fire environment simulation project. The fire environment simulation project is directed at characterizing both open large-scale pool fires and building enclosure fires. Fuego represents the turbulent, buoyantly-driven incompressible flow, heat transfer, mass transfer, combustion, soot, and absorption coefficient model portion of the simulation software. Syrinx represents the participating-media thermal radiation mechanics. This project is an integral part of the SIERRA multi-mechanics software development project. Fuego depends heavily upon the core architecture developments provided by SIERRA for massively parallel computing, solution adaptivity, and mechanics coupling on unstructured grids.

  13. SIERRA Low Mach Module: Fuego Theory Manual Version 4.46.

    Energy Technology Data Exchange (ETDEWEB)

    Sierra Thermal/Fluid Team

    2017-09-01

    The SIERRA Low Mach Module: Fuego along with the SIERRA Participating Media Radiation Module: Syrinx, henceforth referred to as Fuego and Syrinx, respectively, are the key elements of the ASCI fire environment simulation project. The fire environment simulation project is directed at characterizing both open large-scale pool fires and building enclosure fires. Fuego represents the turbulent, buoyantly-driven incompressible flow, heat transfer, mass transfer, combustion, soot, and absorption coefficient model portion of the simulation software. Syrinx represents the participating-media thermal radiation mechanics. This project is an integral part of the SIERRA multi-mechanics software development project. Fuego depends heavily upon the core architecture developments provided by SIERRA for massively parallel computing, solution adaptivity, and mechanics coupling on unstructured grids.

  14. SIERRA Low Mach Module: Fuego User Manual Version 4.44

    Energy Technology Data Exchange (ETDEWEB)

    Sierra Thermal/Fluid Team

    2017-04-01

    The SIERRA Low Mach Module: Fuego along with the SIERRA Participating Media Radiation Module: Syrinx, henceforth referred to as Fuego and Syrinx, respectively, are the key elements of the ASCI fire environment simulation project. The fire environment simulation project is directed at characterizing both open large-scale pool fires and building enclosure fires. Fuego represents the turbulent, buoyantly-driven incompressible flow, heat transfer, mass transfer, combustion, soot, and absorption coefficient model portion of the simulation software. Syrinx represents the participating-media thermal radiation mechanics. This project is an integral part of the SIERRA multi-mechanics software development project. Fuego depends heavily upon the core architecture developments provided by SIERRA for massively parallel computing, solution adaptivity, and mechanics coupling on unstructured grids.

  15. SIERRA Low Mach Module: Fuego User Manual Version 4.46.

    Energy Technology Data Exchange (ETDEWEB)

    Sierra Thermal/Fluid Team

    2017-09-01

    The SIERRA Low Mach Module: Fuego along with the SIERRA Participating Media Radiation Module: Syrinx, henceforth referred to as Fuego and Syrinx, respectively, are the key elements of the ASCI fire environment simulation project. The fire environment simulation project is directed at characterizing both open large-scale pool fires and building enclosure fires. Fuego represents the turbulent, buoyantly-driven incompressible flow, heat transfer, mass transfer, combustion, soot, and absorption coefficient model portion of the simulation software. Syrinx represents the participating-media thermal radiation mechanics. This project is an integral part of the SIERRA multi-mechanics software development project. Fuego depends heavily upon the core architecture developments provided by SIERRA for massively parallel computing, solution adaptivity, and mechanics coupling on unstructured grids.

  16. Estimating integrated variance in the presence of microstructure noise using linear regression

    Science.gov (United States)

    Holý, Vladimír

    2017-07-01

    Using financial high-frequency data for estimation of the integrated variance of asset prices is beneficial, but as the number of observations increases, so-called microstructure noise occurs. This noise can significantly bias the realized variance estimator. We propose a method for estimating the integrated variance that is robust to microstructure noise, as well as for testing for the presence of the noise. Our method utilizes linear regression in which realized variances estimated from different data subsamples act as the dependent variable while the number of observations acts as the explanatory variable. We compare the proposed estimator with other methods on simulated data for several microstructure noise structures.
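    A toy version of the regression idea (my own simulation, not the paper's exact estimator): with i.i.d. microstructure noise, the realized variance built from n returns behaves approximately like the integrated variance plus 2n times the noise variance, so regressing subsampled realized variances on the number of observations roughly recovers the integrated variance as the intercept.

```python
# Regress subsampled realized variances on the number of returns; the
# intercept approximates the integrated variance (IV) of the latent price.
import numpy as np

rng = np.random.default_rng(3)
n_total, sigma, noise_sd = 23_400, 0.01, 0.0002
efficient = np.cumsum(rng.normal(0.0, sigma / np.sqrt(n_total), n_total))  # latent log-price
observed = efficient + rng.normal(0.0, noise_sd, n_total)                  # microstructure noise added

n_obs, rv = [], []
for k in [1, 5, 10, 20, 30, 60]:             # subsampling steps
    sub = observed[::k]
    n_obs.append(len(sub) - 1)               # number of returns used
    rv.append(np.sum(np.diff(sub) ** 2))     # realized variance at this frequency

slope, intercept = np.polyfit(n_obs, rv, deg=1)
print(f"true IV = {sigma**2:.6f}, regression intercept = {intercept:.6f}")
```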

  17. CFD Validation Experiment of a Mach 2.5 Axisymmetric Shock-Wave Boundary-Layer Interaction

    Science.gov (United States)

    Davis, David O.

    2015-01-01

    Preliminary results of an experimental investigation of a Mach 2.5 two-dimensional axisymmetric shock-wave/boundary-layer interaction (SWBLI) are presented. The purpose of the investigation is to create a SWBLI dataset specifically for CFD validation purposes. Presented herein are the details of the facility and preliminary measurements characterizing the facility and interaction region. The results will serve to define the region of interest where more detailed mean and turbulence measurements will be made.

  18. CFD Validation Experiment of a Mach 2.5 Axisymmetric Shock-Wave/Boundary-Layer Interaction

    Science.gov (United States)

    Davis, David Owen

    2015-01-01

    Preliminary results of an experimental investigation of a Mach 2.5 two-dimensional axisymmetric shock-wave/ boundary-layer interaction (SWBLI) are presented. The purpose of the investigation is to create a SWBLI dataset specifically for CFD validation purposes. Presented herein are the details of the facility and preliminary measurements characterizing the facility and interaction region. These results will serve to define the region of interest where more detailed mean and turbulence measurements will be made.

  19. Individual and collective bodies: using measures of variance and association in contextual epidemiology.

    Science.gov (United States)

    Merlo, J; Ohlsson, H; Lynch, K F; Chaix, B; Subramanian, S V

    2009-12-01

    Social epidemiology investigates both individuals and their collectives. Although the limits that define the individual bodies are very apparent, the collective body's geographical or cultural limits (eg "neighbourhood") are more difficult to discern. Also, epidemiologists normally investigate causation as changes in group means. However, many variables of interest in epidemiology may cause a change in the variance of the distribution of the dependent variable. In spite of that, variance is normally considered a measure of uncertainty or a nuisance rather than a source of substantive information. This reasoning is also true in many multilevel investigations, whereas understanding the distribution of variance across levels should be fundamental. This means-centric reductionism is mostly concerned with risk factors and creates a paradoxical situation, as social medicine is not only interested in increasing the (mean) health of the population, but also in understanding and decreasing inappropriate health and health care inequalities (variance). Critical essay and literature review. The present study promotes (a) the application of measures of variance and clustering to evaluate the boundaries one uses in defining collective levels of analysis (eg neighbourhoods), (b) the combined use of measures of variance and means-centric measures of association, and (c) the investigation of causes of health variation (variance-altering causation). Both measures of variance and means-centric measures of association need to be included when performing contextual analyses. The variance approach, a new aspect of contextual analysis that cannot be interpreted in means-centric terms, allows perspectives to be expanded.

  20. Genetic heterogeneity of within-family variance of body weight in Atlantic salmon (Salmo salar).

    Science.gov (United States)

    Sonesson, Anna K; Odegård, Jørgen; Rönnegård, Lars

    2013-10-17

    Canalization is defined as the stability of a genotype against minor variations in both environment and genetics. Genetic variation in degree of canalization causes heterogeneity of within-family variance. The aims of this study are twofold: (1) quantify genetic heterogeneity of (within-family) residual variance in Atlantic salmon and (2) test whether the observed heterogeneity of (within-family) residual variance can be explained by simple scaling effects. Analysis of body weight in Atlantic salmon using a double hierarchical generalized linear model (DHGLM) revealed substantial heterogeneity of within-family variance. The 95% prediction interval for within-family variance ranged from ~0.4 to 1.2 kg2, implying that the within-family variance of the most extreme high families is expected to be approximately three times larger than the extreme low families. For cross-sectional data, DHGLM with an animal mean sub-model resulted in severe bias, while a corresponding sire-dam model was appropriate. Heterogeneity of variance was not sensitive to Box-Cox transformations of phenotypes, which implies that heterogeneity of variance exists beyond what would be expected from simple scaling effects. Substantial heterogeneity of within-family variance was found for body weight in Atlantic salmon. A tendency towards higher variance with higher means (scaling effects) was observed, but heterogeneity of within-family variance existed beyond what could be explained by simple scaling effects. For cross-sectional data, using the animal mean sub-model in the DHGLM resulted in biased estimates of variance components, which differed substantially both from a standard linear mean animal model and a sire-dam DHGLM model. Although genetic differences in canalization were observed, selection for increased canalization is difficult, because there is limited individual information for the variance sub-model, especially when based on cross-sectional data. Furthermore, potential macro

  1. The derivative based variance sensitivity analysis for the distribution parameters and its computation

    International Nuclear Information System (INIS)

    Wang, Pan; Lu, Zhenzhou; Ren, Bo; Cheng, Lei

    2013-01-01

    The output variance is an important measure for the performance of a structural system, and it is always influenced by the distribution parameters of inputs. In order to identify the influential distribution parameters and make it clear that how those distribution parameters influence the output variance, this work presents the derivative based variance sensitivity decomposition according to Sobol′s variance decomposition, and proposes the derivative based main and total sensitivity indices. By transforming the derivatives of various orders variance contributions into the form of expectation via kernel function, the proposed main and total sensitivity indices can be seen as the “by-product” of Sobol′s variance based sensitivity analysis without any additional output evaluation. Since Sobol′s variance based sensitivity indices have been computed efficiently by the sparse grid integration method, this work also employs the sparse grid integration method to compute the derivative based main and total sensitivity indices. Several examples are used to demonstrate the rationality of the proposed sensitivity indices and the accuracy of the applied method

  2. A Mean-Variance Criterion for Economic Model Predictive Control of Stochastic Linear Systems

    DEFF Research Database (Denmark)

    Sokoler, Leo Emil; Dammann, Bernd; Madsen, Henrik

    2014-01-01

    , the tractability of the resulting optimal control problem is addressed. We use a power management case study to compare different variations of the mean-variance strategy with EMPC based on the certainty equivalence principle. The certainty equivalence strategy is much more computationally efficient than the mean......-variance strategies, but it does not account for the variance of the uncertain parameters. Openloop simulations suggest that a single-stage mean-variance approach yields a significantly lower operating cost than the certainty equivalence strategy. In closed-loop, the single-stage formulation is overly conservative...... be modified to perform almost as well as the two-stage mean-variance formulation. Nevertheless, we argue that the mean-variance approach can be used both as a strategy for evaluating less computational demanding methods such as the certainty equivalence method, and as an individual control strategy when...

  3. Homogeneity Study of UO2 Pellet Density for Quality Control

    International Nuclear Information System (INIS)

    Moon, Je Seon; Park, Chang Je; Kang, Kwon Ho; Moon, Heung Soo; Song, Kee Chan

    2005-01-01

    A homogeneity study has been performed on UO2 pellets of various densities as part of quality control work. The densities of the UO2 pellets are distributed randomly owing to several factors such as the milling conditions and sintering environments. After sintering, a total of fourteen bottles were chosen for UO2 density measurement, each containing three samples. With these bottles, the between-bottle and within-bottle homogeneity were investigated via analysis of variance (ANOVA). From the ANOVA results, the calculated F-value is used to determine whether the distribution is accepted or rejected with respect to homogeneity at a given confidence level. All the homogeneity checks followed the International Standard Guide 35.
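    A minimal sketch of such a between-bottle homogeneity check using a one-way ANOVA F-test (the bottle count matches the abstract, but the densities are invented and the acceptance criteria of the cited Guide 35 are not reproduced):

```python
# One-way ANOVA F-test for between-bottle homogeneity of pellet density.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n_bottles, n_per_bottle = 14, 3
# Hypothetical pellet densities (g/cm^3): 14 bottles x 3 samples each.
densities = rng.normal(10.45, 0.02, size=(n_bottles, n_per_bottle))

f_value, p_value = stats.f_oneway(*densities)            # rows are the bottle groups
f_crit = stats.f.ppf(0.95, n_bottles - 1, n_bottles * (n_per_bottle - 1))
print(f"F = {f_value:.2f}, critical F(0.95) = {f_crit:.2f}, p = {p_value:.3f}")
print("no between-bottle effect detected" if f_value < f_crit else "between-bottle effect detected")
```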

  4. Investigating the minimum achievable variance in a Monte Carlo criticality calculation

    Energy Technology Data Exchange (ETDEWEB)

    Christoforou, Stavros; Eduard Hoogenboom, J. [Delft University of Technology, Mekelweg 15, 2629 JB Delft (Netherlands)

    2008-07-01

    The sources of variance in a Monte Carlo criticality calculation are identified and their contributions analyzed. A zero-variance configuration is initially simulated using analytically calculated adjoint functions for biasing. From there, the various sources are analyzed. It is shown that the minimum threshold comes from the fact that the fission source is approximated. In addition, the merits of a simple variance reduction method, such as implicit capture, are shown when compared to an analog simulation. Finally, it is shown that when non-exact adjoint functions are used for biasing, the variance reduction is rather insensitive to the quality of the adjoints, suggesting that the generation of the adjoints should have as low a CPU cost as possible, in order to offset the CPU cost of implementing the biasing in a simulation. (authors)

  5. Automatic Bayes Factors for Testing Equality- and Inequality-Constrained Hypotheses on Variances.

    Science.gov (United States)

    Böing-Messing, Florian; Mulder, Joris

    2018-05-03

    In comparing characteristics of independent populations, researchers frequently expect a certain structure of the population variances. These expectations can be formulated as hypotheses with equality and/or inequality constraints on the variances. In this article, we consider the Bayes factor for testing such (in)equality-constrained hypotheses on variances. Application of Bayes factors requires specification of a prior under every hypothesis to be tested. However, specifying subjective priors for variances based on prior information is a difficult task. We therefore consider so-called automatic or default Bayes factors. These methods avoid the need for the user to specify priors by using information from the sample data. We present three automatic Bayes factors for testing variances. The first is a Bayes factor with equal priors on all variances, where the priors are specified automatically using a small share of the information in the sample data. The second is the fractional Bayes factor, where a fraction of the likelihood is used for automatic prior specification. The third is an adjustment of the fractional Bayes factor such that the parsimony of inequality-constrained hypotheses is properly taken into account. The Bayes factors are evaluated by investigating different properties such as information consistency and large sample consistency. Based on this evaluation, it is concluded that the adjusted fractional Bayes factor is generally recommendable for testing equality- and inequality-constrained hypotheses on variances.

  6. UV spectral fingerprinting and analysis of variance-principal component analysis: a useful tool for characterizing sources of variance in plant materials.

    Science.gov (United States)

    Luthria, Devanand L; Mukhopadhyay, Sudarsan; Robbins, Rebecca J; Finley, John W; Banuelos, Gary S; Harnly, James M

    2008-07-23

    UV spectral fingerprints, in combination with analysis of variance-principal components analysis (ANOVA-PCA), can differentiate between cultivars and growing conditions (or treatments) and can be used to identify sources of variance. Broccoli samples, composed of two cultivars, were grown under seven different conditions or treatments (four levels of Se-enriched irrigation waters, organic farming, and conventional farming with 100 and 80% irrigation based on crop evaporation and transpiration rate). Freeze-dried powdered samples were extracted with methanol-water (60:40, v/v) and analyzed with no prior separation. Spectral fingerprints were acquired for the UV region (220-380 nm) using a 50-fold dilution of the extract. ANOVA-PCA was used to construct subset matrices that permitted easy verification of the hypothesis that cultivar and treatment contributed to a difference in the chemical expression of the broccoli. The sums of the squares of the same matrices were used to show that cultivar, treatment, and analytical repeatability contributed 30.5, 68.3, and 1.2% of the variance, respectively.
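    The variance bookkeeping behind ANOVA-PCA can be sketched as follows (toy data standing in for the broccoli measurements; all effect sizes are invented): the centred data matrix is split into factor-mean matrices plus a residual, and each part's sum of squares gives its percentage share of the total variance.

```python
# Split a centred sample-by-wavelength matrix into cultivar, treatment and
# residual parts (balanced design), then report each part's share of variance.
import numpy as np

rng = np.random.default_rng(5)
cultivar = np.repeat([0, 1], 21)                       # 2 hypothetical cultivars
treatment = np.tile(np.repeat(np.arange(7), 3), 2)     # 7 treatments, 3 replicates each
X = (0.5 * cultivar[:, None] + 0.2 * treatment[:, None]
     + rng.normal(0.0, 0.05, (42, 50)))                # 42 samples x 50 "wavelengths"

Xc = X - X.mean(axis=0)                                # remove the grand mean

def factor_means(Z, labels):
    M = np.zeros_like(Z)
    for g in np.unique(labels):
        M[labels == g] = Z[labels == g].mean(axis=0)
    return M

M_cult = factor_means(Xc, cultivar)
M_trt = factor_means(Xc - M_cult, treatment)
resid = Xc - M_cult - M_trt

total_ss = (Xc ** 2).sum()
for name, part in [("cultivar", M_cult), ("treatment", M_trt), ("residual", resid)]:
    print(f"{name}: {100.0 * (part ** 2).sum() / total_ss:.1f}% of variance")
```

    In full ANOVA-PCA, each factor-mean matrix is then added back to the residual and submitted to PCA, so the scores separate by that factor only when its effect exceeds the analytical repeatability.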

  7. The formation of cosmic structure in a texture-seeded cold dark matter cosmogony

    Science.gov (United States)

    Gooding, Andrew K.; Park, Changbom; Spergel, David N.; Turok, Neil; Gott, Richard, III

    1992-01-01

    The growth of density fluctuations induced by global texture in an Omega = 1 cold dark matter (CDM) cosmogony is calculated. The resulting power spectra are in good agreement with each other, with more power on large scales than in the standard inflation plus CDM model. Calculation of related statistics (two-point correlation functions, mass variances, cosmic Mach number) indicates that the texture plus CDM model compares more favorably than standard CDM with observations of large-scale structure. Texture produces coherent velocity fields on large scales, as observed. Excessive small-scale velocity dispersions, and voids less empty than those observed may be remedied by including baryonic physics. The topology of the cosmic structure agrees well with observation. The non-Gaussian texture induced density fluctuations lead to earlier nonlinear object formation than in Gaussian models and may also be more compatible with recent evidence that the galaxy density field is non-Gaussian on large scales. On smaller scales the density field is strongly non-Gaussian, but this appears to be primarily due to nonlinear gravitational clustering. The velocity field on smaller scales is surprisingly Gaussian.

  8. Levine's guide to SPSS for analysis of variance

    CERN Document Server

    Braver, Sanford L; Page, Melanie

    2003-01-01

    A greatly expanded and heavily revised second edition, this popular guide provides instructions and clear examples for running analyses of variance (ANOVA) and several other related statistical tests of significance with SPSS. No other guide offers the program statements required for the more advanced tests in analysis of variance. All of the programs in the book can be run using any version of SPSS, including versions 11 and 11.5. A table at the end of the preface indicates where each type of analysis (e.g., simple comparisons) can be found for each type of design (e.g., mixed two-factor desi

  9. The Experimental Measurement of Aerodynamic Heating About Complex Shapes at Supersonic Mach Numbers

    Science.gov (United States)

    Neumann, Richard D.; Freeman, Delma C.

    2011-01-01

    In 2008 a wind tunnel test program was implemented to update the experimental data available for predicting protuberance heating at supersonic Mach numbers. For this test the Langley Unitary Wind Tunnel was also used. The significant differences for this current test were the advances in the state-of-the-art in model design, fabrication techniques, instrumentation and data acquisition capabilities. This current paper provides a focused discussion of the results of an in depth analysis of unique measurements of recovery temperature obtained during the test.

  10. SPIROMETRIC EVALUATION OF LUNG FUNCTION OF COAL WORKERS WORKING AT MACH (BOLAN DISTRICT)

    OpenAIRE

    Ghulam Sarwar, Muhammad Younis, Shafi Muhammad, Tanzeel Ahmed*, Muhammad Siddique, Bashir Ahmed, Munir Ahmed, Jahanzaib

    2017-01-01

    To evaluate the effect of coal dust on lung function among coal workers and non-coal workers. This was a case-control study. A total of 144 male coal workers and non-coal workers, aged 20-50 years and with more than one year of work experience, were selected. The study was carried out in Mach, Bolan district, Balochistan, Pakistan. A spirometer and a self-designed survey form were used. Interviews were conducted, information was documented in the survey form, and spirometry was performed for coal workers and non-coal w...

  11. An efficient sampling approach for variance-based sensitivity analysis based on the law of total variance in the successive intervals without overlapping

    Science.gov (United States)

    Yun, Wanying; Lu, Zhenzhou; Jiang, Xian

    2018-06-01

    To efficiently execute variance-based global sensitivity analysis, the law of total variance over successive non-overlapping intervals is first proved, and an efficient space-partition sampling-based approach is subsequently proposed on this basis. By partitioning the sample points of the output into different subsets according to the different inputs, the proposed approach can efficiently evaluate all the main effects concurrently from one group of sample points. In addition, there is no need to optimize the partition scheme in the proposed approach. The maximum length of the subintervals decreases as the number of sample points of the model input variables increases, which guarantees the convergence condition of the space-partition approach. Furthermore, a new interpretation of the partitioning idea is given from the perspective of the variance ratio function. Finally, three test examples and one engineering application are employed to demonstrate the accuracy, efficiency and robustness of the proposed approach.
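    A hedged sketch of the space-partition estimate of the main effects (using the standard Ishigami test function rather than the paper's examples): a single Monte Carlo sample of outputs is binned by each input in turn, and the law of total variance gives Var(E[Y|Xi]) from the bin means.

```python
# First-order (main-effect) sensitivity indices from one sample by binning
# the output on each input and applying the law of total variance.
import numpy as np

rng = np.random.default_rng(6)
N, n_bins = 100_000, 50
X = rng.uniform(-np.pi, np.pi, size=(N, 3))
Y = np.sin(X[:, 0]) + 7.0 * np.sin(X[:, 1]) ** 2 + 0.1 * X[:, 2] ** 4 * np.sin(X[:, 0])

var_y = Y.var()
for i in range(3):
    # Partition the sample into equal-probability intervals of X_i.
    edges = np.quantile(X[:, i], np.linspace(0.0, 1.0, n_bins + 1))
    idx = np.clip(np.digitize(X[:, i], edges[1:-1]), 0, n_bins - 1)
    cond_means = np.array([Y[idx == b].mean() for b in range(n_bins)])
    weights = np.array([(idx == b).mean() for b in range(n_bins)])
    s_i = np.sum(weights * (cond_means - Y.mean()) ** 2) / var_y   # Var(E[Y|X_i]) / Var(Y)
    print(f"S_{i + 1} ~ {s_i:.3f}")
```

    For the Ishigami function the analytic first-order indices are roughly 0.31, 0.44 and 0, so the printed estimates can be checked directly.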

  12. Aerodynamic Characteristics of a Revised Target Drone Vehicle at Mach Numbers from 1.60 to 2.86

    Science.gov (United States)

    Blair, A. B., Jr.; Babb, C. Donald

    1968-01-01

    An investigation has been conducted in the Langley Unitary Plan wind tunnel to determine the aerodynamic characteristics of a revised target drone vehicle through a Mach number range from 1.60 to 2.86. The vehicle had canard surfaces and a swept clipped-delta wing with twin tip-mounted vertical tails.

  13. Genotype by environment interaction for adult body weights of shrimp Penaeus vannamei when grown at low and high densities

    Directory of Open Access Journals (Sweden)

    Famula Thomas R

    2008-09-01

    Full Text Available Shrimp is one of the few marine species cultured worldwide for which several selective breeding programs are being conducted. One environmental factor that can affect the response to selection in breeding programs is the density at which the shrimp are cultured (low-medium-high). Phenotypic plasticity in the growth response to different densities might be accompanied by a significant genotype by environment interaction, evidenced by a change in heritabilities between environments and by a genetic correlation less than one for a unique trait between environments. Our goal was to understand whether different growth densities affect estimates of those genetic parameters for adult body weight (BW) in the Pacific white shrimp (Penaeus vannamei). BW heritabilities were significantly different between environments, with the largest at high density. These differences resulted from both an increased additive genetic variance and a decreased environmental variance when grown at high density. The genetic correlation between BWs at the two environmental conditions was significantly less than one. Although these results might suggest carrying out shrimp selective breeding for BW under high-density conditions, further understanding of genetic correlations between growth and reproductive traits within a given environment is necessary, as there are indications of reduced reproductive fitness for shrimp grown at high densities.

  14. A load factor based mean-variance analysis for fuel diversification

    Energy Technology Data Exchange (ETDEWEB)

    Gotham, Douglas; Preckel, Paul; Ruangpattana, Suriya [State Utility Forecasting Group, Purdue University, West Lafayette, IN (United States); Muthuraman, Kumar [McCombs School of Business, University of Texas, Austin, TX (United States); Rardin, Ronald [Department of Industrial Engineering, University of Arkansas, Fayetteville, AR (United States)

    2009-03-15

    Fuel diversification implies the selection of a mix of generation technologies for long-term electricity generation. The goal is to strike a good balance between reduced costs and reduced risk. The method of analysis that has been advocated and adopted for such studies is the mean-variance portfolio analysis pioneered by Markowitz (Markowitz, H., 1952. Portfolio selection. Journal of Finance 7(1) 77-91). However, the standard mean-variance methodology does not account for the ability of various fuels/technologies to adapt to varying loads. Such analysis often provides results that are easily dismissed by regulators and practitioners as unacceptable, since load cycles play critical roles in fuel selection. To account for such issues and still retain the convenience and elegance of the mean-variance approach, we propose a variant of the mean-variance analysis using the decomposition of the load into various types and utilizing the load factors of each load type. We also illustrate the approach using data for the state of Indiana and demonstrate the ability of the model in providing useful insights. (author)
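
    For readers unfamiliar with the underlying Markowitz machinery, the sketch below traces points on a standard mean-variance frontier for a small set of hypothetical generation technologies. The expected costs and covariance matrix are made-up numbers, not the Indiana data used in the study, and the load-factor decomposition that is the paper's contribution is not modelled.

```python
import numpy as np

# Hypothetical expected generation costs (per MWh) and their covariance
# for four technologies; illustrative numbers only.
mu = np.array([45.0, 60.0, 35.0, 80.0])          # coal, gas, nuclear, peaking
sigma = np.array([                                # cost covariance matrix
    [25.0,  5.0,  2.0,   8.0],
    [ 5.0, 90.0,  1.0,  40.0],
    [ 2.0,  1.0, 10.0,   2.0],
    [ 8.0, 40.0,  2.0, 160.0],
])

inv = np.linalg.inv(sigma)
ones = np.ones_like(mu)
A = ones @ inv @ ones
B = ones @ inv @ mu
C = mu @ inv @ mu
D = A * C - B ** 2

def frontier_point(target_cost):
    """Minimum-variance mix achieving a given expected cost
    (weights sum to one; short positions are not excluded here)."""
    lam = (A * target_cost - B) / D
    gam = (C - B * target_cost) / D
    w = inv @ (lam * mu + gam * ones)
    return w, w @ sigma @ w

for m in (40.0, 50.0, 60.0):
    w, var = frontier_point(m)
    print(f"target cost {m:5.1f}: variance {var:7.2f}, mix {np.round(w, 2)}")
```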

  15. Density functional theory investigation of two-dimensional dipolar fermions in a harmonic trap

    International Nuclear Information System (INIS)

    Ustunel, Hande; Abedinpour, Saeed H; Tanatar, B

    2014-01-01

    We investigate the behavior of polarized dipolar fermions in a two-dimensional harmonic trap in the framework of the density functional theory (DFT) formalism using the local density approximation. We treat only a few particles interacting moderately. Important results were deduced concerning key characteristics of the system such as total energy and particle density. Our results indicate that, at variance with Coulombic systems, the exchange-correlation component was found to provide a large contribution to the total energy for a large range of interaction strengths and particle numbers. In addition, the density profiles of the dipoles are shown to display important features around the origin that are not possible to capture by earlier, simpler treatments of such systems.

  16. All-Optical Regenerative OTDM Add-Drop Multiplexing at 40 Gb/s using Monolithic InP Mach-Zehnder Interferometer

    DEFF Research Database (Denmark)

    Fischer, St.; Dülk, M.; Gamper, E.

    2000-01-01

    We present a novel method for all-optical add-drop multiplexing having regenerative capability for 40-Gb/s optical time-division multiplexed (OTDM) data using a semiconductor optical amplifier (SOA) based, monolithic Mach-Zehnder interferometer (MZI). Simultaneous dropping of one 10-Gb/s channel ...

  17. Analysis of Gene Expression Variance in Schizophrenia Using Structural Equation Modeling

    Directory of Open Access Journals (Sweden)

    Anna A. Igolkina

    2018-06-01

    Full Text Available Schizophrenia (SCZ) is a psychiatric disorder of unknown etiology. There is evidence suggesting that aberrations in neurodevelopment are a significant attribute of schizophrenia pathogenesis and progression. To identify biologically relevant molecular abnormalities affecting neurodevelopment in SCZ we used cultured neural progenitor cells derived from olfactory neuroepithelium (CNON) cells. Here, we tested the hypothesis that variance in gene expression differs between individuals from SCZ and control groups. In CNON cells, variance in gene expression was significantly higher in SCZ samples in comparison with control samples. Variance in gene expression was enriched in five molecular pathways: serine biosynthesis, PI3K-Akt, MAPK, neurotrophin and focal adhesion. More than 14% of variance in disease status was explained within the logistic regression model (C-value = 0.70) by predictors accounting for gene expression in 69 genes from these five pathways. Structural equation modeling (SEM) was applied to explore how the structure of these five pathways was altered between SCZ patients and controls. Four out of five pathways showed differences in the estimated relationships among genes: between KRAS and NF1, and KRAS and SOS1 in the MAPK pathway; between PSPH and SHMT2 in serine biosynthesis; between AKT3 and TSC2 in the PI3K-Akt signaling pathway; and between CRK and RAPGEF1 in the focal adhesion pathway. Our analysis provides evidence that variance in gene expression is an important characteristic of SCZ, and SEM is a promising method for uncovering altered relationships between specific genes thus suggesting affected gene regulation associated with the disease. We identified altered gene-gene interactions in pathways enriched for genes with increased variance in expression in SCZ. These pathways and loci were previously implicated in SCZ, providing further support for the hypothesis that gene expression variance plays an important role in the etiology

  18. Mixed emotions: Sensitivity to facial variance in a crowd of faces.

    Science.gov (United States)

    Haberman, Jason; Lee, Pegan; Whitney, David

    2015-01-01

    The visual system automatically represents summary information from crowds of faces, such as the average expression. This is a useful heuristic insofar as it provides critical information about the state of the world, not simply information about the state of one individual. However, the average alone is not sufficient for making decisions about how to respond to a crowd. The variance or heterogeneity of the crowd--the mixture of emotions--conveys information about the reliability of the average, essential for determining whether the average can be trusted. Despite its importance, the representation of variance within a crowd of faces has yet to be examined. This is addressed here in three experiments. In the first experiment, observers viewed a sample set of faces that varied in emotion, and then adjusted a subsequent set to match the variance of the sample set. To isolate variance as the summary statistic of interest, the average emotion of both sets was random. Results suggested that observers had information regarding crowd variance. The second experiment verified that this was indeed a uniquely high-level phenomenon, as observers were unable to derive the variance of an inverted set of faces as precisely as an upright set of faces. The third experiment replicated and extended the first two experiments using method-of-constant-stimuli. Together, these results show that the visual system is sensitive to emergent information about the emotional heterogeneity, or ambivalence, in crowds of faces.

  19. On Stabilizing the Variance of Dynamic Functional Brain Connectivity Time Series.

    Science.gov (United States)

    Thompson, William Hedley; Fransson, Peter

    2016-12-01

    Assessment of dynamic functional brain connectivity based on functional magnetic resonance imaging (fMRI) data is an increasingly popular strategy to investigate temporal dynamics of the brain's large-scale network architecture. Current practice when deriving connectivity estimates over time is to use the Fisher transformation, which aims to stabilize the variance of correlation values that fluctuate around varying true correlation values. It is, however, unclear how well the stabilization of signal variance performed by the Fisher transformation works for each connectivity time series, when the true correlation is assumed to be fluctuating. This is of importance because many subsequent analyses either assume or perform better when the time series have stable variance or adheres to an approximate Gaussian distribution. In this article, using simulations and analysis of resting-state fMRI data, we analyze the effect of applying different variance stabilization strategies on connectivity time series. We focus our investigation on the Fisher transformation, the Box-Cox (BC) transformation and an approach that combines both transformations. Our results show that, if the intention of stabilizing the variance is to use metrics on the time series, where stable variance or a Gaussian distribution is desired (e.g., clustering), the Fisher transformation is not optimal and may even skew connectivity time series away from being Gaussian. Furthermore, we show that the suboptimal performance of the Fisher transformation can be substantially improved by including an additional BC transformation after the dynamic functional connectivity time series has been Fisher transformed.
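
    As a small illustration of the transformations discussed above, the sketch below computes sliding-window correlations between two signals and applies the Fisher transformation, followed by an optional Box-Cox step via SciPy; the signals, window length and shift applied before the Box-Cox step are synthetic choices for demonstration only, not the fMRI pipeline of the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, win = 2000, 100

# Two synthetic regional signals whose coupling drifts slowly over time.
shared = rng.standard_normal(n)
x = shared + rng.standard_normal(n)
y = shared * np.linspace(0.2, 1.5, n) + rng.standard_normal(n)

# Sliding-window Pearson correlations (the "dynamic connectivity" series).
r = np.array([np.corrcoef(x[i:i + win], y[i:i + win])[0, 1]
              for i in range(n - win)])

z = np.arctanh(r)                                  # Fisher r-to-z transformation

# Optional additional Box-Cox step (requires strictly positive input,
# hence the shift; lambda is estimated by maximum likelihood).
z_bc, lam = stats.boxcox(z - z.min() + 1e-6)

print("var(r) =", round(r.var(), 4), " var(z) =", round(z.var(), 4),
      " Box-Cox lambda =", round(lam, 3))
```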

  20. Origin and consequences of the relationship between protein mean and variance.

    Science.gov (United States)

    Vallania, Francesco Luigi Massimo; Sherman, Marc; Goodwin, Zane; Mogno, Ilaria; Cohen, Barak Alon; Mitra, Robi David

    2014-01-01

    Cell-to-cell variance in protein levels (noise) is a ubiquitous phenomenon that can increase fitness by generating phenotypic differences within clonal populations of cells. An important challenge is to identify the specific molecular events that control noise. This task is complicated by the strong dependence of a protein's cell-to-cell variance on its mean expression level through a power-law like relationship (σ² ∝ μ^1.69). Here, we dissect the nature of this relationship using a stochastic model parameterized with experimentally measured values. This framework naturally recapitulates the power-law like relationship (σ² ∝ μ^1.6) and accurately predicts protein variance across the yeast proteome (r² = 0.935). Using this model we identified two distinct mechanisms by which protein variance can be increased. Variables that affect promoter activation, such as nucleosome positioning, increase protein variance by changing the exponent of the power-law relationship. In contrast, variables that affect processes downstream of promoter activation, such as mRNA and protein synthesis, increase protein variance in a mean-dependent manner following the power-law. We verified our findings experimentally using an inducible gene expression system in yeast. We conclude that the power-law-like relationship between noise and protein mean is due to the kinetics of promoter activation. Our results provide a framework for understanding how molecular processes shape stochastic variation across the genome.
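
    To make the power-law relationship above concrete, the sketch below fits the exponent of a σ² ∝ μ^b relation by ordinary least squares on log-transformed mean/variance pairs; the data are simulated with a known exponent, not the yeast measurements analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate per-gene mean expression and noise following sigma^2 ~ mu^1.69.
mu = 10 ** rng.uniform(0, 4, size=500)                  # mean protein levels
var = 2.0 * mu ** 1.69 * rng.lognormal(0.0, 0.3, 500)   # scatter around the power law

# Fit log10(var) = log10(a) + b * log10(mu) by least squares.
b, log_a = np.polyfit(np.log10(mu), np.log10(var), 1)
print(f"estimated exponent b = {b:.2f} (simulated truth: 1.69)")
```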

  1. Variance Swap Replication: Discrete or Continuous?

    Directory of Open Access Journals (Sweden)

    Fabien Le Floc’h

    2018-02-01

    Full Text Available The popular replication formula to price variance swaps assumes continuity of traded option strikes. In practice, however, there is only a discrete set of option strikes traded on the market. We present here different discrete replication strategies and explain why the continuous replication price is more relevant.
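
    The sketch below shows one common discrete approximation of the static replication integral, a CBOE/VIX-style strike sum; the option quotes, rate, forward and maturity are invented numbers, and the paper's own refined discrete replication strategies are not reproduced here.

```python
import numpy as np

# Hypothetical out-of-the-money option quotes (strike -> mid price);
# puts below the forward, calls above. Invented numbers for illustration.
strikes = np.array([80., 85., 90., 95., 100., 105., 110., 115., 120.])
quotes  = np.array([0.4, 0.8, 1.6, 3.0,  4.9,  3.1,  1.7,  0.9,  0.5])

r, T, forward = 0.02, 0.25, 101.0          # rate, maturity (years), forward price
k0 = strikes[strikes <= forward].max()     # first strike at or below the forward

# Strike spacing: central differences in the interior, one-sided at the edges.
dk = np.gradient(strikes)

# Discrete sum approximating (2/T) * integral of Q(K)/K^2 dK, with the usual
# correction term for the gap between the forward and the nearest strike.
var_strike = (2.0 / T) * np.sum(dk / strikes ** 2 * np.exp(r * T) * quotes) \
             - (1.0 / T) * (forward / k0 - 1.0) ** 2

print(f"replicated variance strike: {var_strike:.4f} "
      f"(volatility {np.sqrt(var_strike):.2%})")
```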

  2. The Impact of Jump Distributions on the Implied Volatility of Variance

    DEFF Research Database (Denmark)

    Nicolato, Elisa; Pisani, Camilla; Pedersen, David Sloth

    2017-01-01

    We consider a tractable affine stochastic volatility model that generalizes the seminal Heston (1993) model by augmenting it with jumps in the instantaneous variance process. In this framework, we consider both realized variance options and VIX options, and we examine the impact of the distribution...... of jumps on the associated implied volatility smile. We provide sufficient conditions for the asymptotic behavior of the implied volatility of variance for small and large strikes. In particular, by selecting alternative jump distributions, we show that one can obtain fundamentally different shapes...

  3. Replication Variance Estimation under Two-phase Sampling in the Presence of Non-response

    Directory of Open Access Journals (Sweden)

    Muqaddas Javed

    2014-09-01

    Full Text Available Kim and Yu (2011) discussed a replication variance estimator for two-phase stratified sampling. In this paper, estimators for the mean are proposed in two-phase stratified sampling for different situations of non-response at the first phase and the second phase. The expressions of the variances of these estimators have been derived. Furthermore, replication-based jackknife variance estimators of these variances have also been derived. A simulation study has been conducted to investigate the performance of the suggested estimators.
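
    As background for the replication-based estimators mentioned above, the sketch below computes a delete-one jackknife variance estimate for a simple sample mean; it illustrates the generic jackknife recipe only, with synthetic data, and does not reproduce the two-phase, non-response-adjusted estimators of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
y = rng.normal(50.0, 10.0, size=200)        # synthetic survey responses

theta_hat = y.mean()                        # full-sample estimate
n = y.size

# Delete-one jackknife replicates of the mean.
replicates = np.array([np.delete(y, i).mean() for i in range(n)])

# Jackknife variance estimator: (n-1)/n * sum (theta_(i) - theta_bar)^2
v_jack = (n - 1) / n * np.sum((replicates - replicates.mean()) ** 2)

print(f"mean = {theta_hat:.3f}, jackknife SE = {np.sqrt(v_jack):.3f}, "
      f"textbook SE = {y.std(ddof=1) / np.sqrt(n):.3f}")
```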

  4. Nonsimilar Solution for Shock Waves in a Rotational Axisymmetric Perfect Gas with a Magnetic Field and Exponentially Varying Density

    Science.gov (United States)

    Nath, G.; Sinha, A. K.

    2017-01-01

    The propagation of a cylindrical shock wave in an ideal gas in the presence of a constant azimuthal magnetic field with consideration for the axisymmetric rotational effects is investigated. The ambient medium is assumed to have the radial, axial, and azimuthal velocity components. The fluid velocities and density of the ambient medium are assumed to vary according to an exponential law. Nonsimilar solutions are obtained by taking into account the vorticity vector and its components. The dependences of the characteristics of the problem on the Alfven-Mach number and time are obtained. It is shown that the presence of a magnetic field has a decaying effect on the shock wave. The pressure and density are shown to vanish at the inner surface (piston), and hence a vacuum forms at the line of symmetry.

  5. A Mathematical Framework for Critical Transitions: Normal Forms, Variance and Applications

    Science.gov (United States)

    Kuehn, Christian

    2013-06-01

    Critical transitions occur in a wide variety of applications including mathematical biology, climate change, human physiology and economics. Therefore it is highly desirable to find early-warning signs. We show that it is possible to classify critical transitions by using bifurcation theory and normal forms in the singular limit. Based on this elementary classification, we analyze stochastic fluctuations and calculate scaling laws of the variance of stochastic sample paths near critical transitions for fast-subsystem bifurcations up to codimension two. The theory is applied to several models: the Stommel-Cessi box model for the thermohaline circulation from geoscience, an epidemic-spreading model on an adaptive network, an activator-inhibitor switch from systems biology, a predator-prey system from ecology and to the Euler buckling problem from classical mechanics. For the Stommel-Cessi model we compare different detrending techniques to calculate early-warning signs. In the epidemics model we show that link densities could be better variables for prediction than population densities. The activator-inhibitor switch demonstrates effects in three time-scale systems and points out that excitable cells and molecular units have information for subthreshold prediction. In the predator-prey model explosive population growth near a codimension-two bifurcation is investigated and we show that early-warnings from normal forms can be misleading in this context. In the biomechanical model we demonstrate that early-warning signs for buckling depend crucially on the control strategy near the instability which illustrates the effect of multiplicative noise.
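
    The variance scaling laws above are typically probed numerically by tracking the variance of a detrended signal in a sliding window as a bifurcation parameter drifts. The sketch below does this for a generic fold (saddle-node) normal form with additive noise; the parameter drift, noise level and window length are illustrative choices, not one of the specific models analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
dt, n, win = 0.01, 60_000, 2000

# Fold normal form x' = a - x^2 with the parameter a drifting slowly toward 0.
a = np.linspace(1.0, 0.05, n)
x = np.empty(n)
x[0] = np.sqrt(a[0])
for i in range(n - 1):
    drift = a[i] - x[i] ** 2
    x[i + 1] = x[i] + drift * dt + 0.03 * np.sqrt(dt) * rng.standard_normal()

def window_variance(seg):
    """Variance of the linearly detrended segment."""
    t = np.arange(seg.size)
    resid = seg - np.polyval(np.polyfit(t, seg, 1), t)
    return resid.var()

var_trend = [window_variance(x[i:i + win]) for i in range(0, n - win, win)]
print(np.round(var_trend, 5))   # rising variance: early-warning sign of the transition
```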

  6. How the Weak Variance of Momentum Can Turn Out to be Negative

    Science.gov (United States)

    Feyereisen, M. R.

    2015-05-01

    Weak values are average quantities, therefore investigating their associated variance is crucial in understanding their place in quantum mechanics. We develop the concept of a position-postselected weak variance of momentum as cohesively as possible, building primarily on material from Moyal (Mathematical Proceedings of the Cambridge Philosophical Society, Cambridge University Press, Cambridge, 1949) and Sonego (Found Phys 21(10):1135, 1991) . The weak variance is defined in terms of the Wigner function, using a standard construction from probability theory. We show this corresponds to a measurable quantity, which is not itself a weak value. It also leads naturally to a connection between the imaginary part of the weak value of momentum and the quantum potential. We study how the negativity of the Wigner function causes negative weak variances, and the implications this has on a class of `subquantum' theories. We also discuss the role of weak variances in studying determinism, deriving the classical limit from a variational principle.

  7. Variance gradients and uncertainty budgets for nonlinear measurement functions with independent inputs

    International Nuclear Information System (INIS)

    Campanelli, Mark; Kacker, Raghu; Kessel, Rüdiger

    2013-01-01

    A novel variance-based measure for global sensitivity analysis, termed a variance gradient (VG), is presented for constructing uncertainty budgets under the Guide to the Expression of Uncertainty in Measurement (GUM) framework for nonlinear measurement functions with independent inputs. The motivation behind VGs is the desire of metrologists to understand which inputs' variance reductions would most effectively reduce the variance of the measurand. VGs are particularly useful when the application of the first supplement to the GUM is indicated because of the inadequacy of measurement function linearization. However, VGs reduce to a commonly understood variance decomposition in the case of a linear(ized) measurement function with independent inputs for which the original GUM readily applies. The usefulness of VGs is illustrated by application to an example from the first supplement to the GUM, as well as to the benchmark Ishigami function. A comparison of VGs to other available sensitivity measures is made. (paper)
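
    The sketch below is not the variance gradient itself, but the familiar first-order GUM decomposition u_c²(y) ≈ Σ (∂f/∂x_i)² u²(x_i) that, as the abstract notes, VGs reduce to for a linear(ized) measurement function with independent inputs; the measurement function and input uncertainties are purely illustrative.

```python
import numpy as np

def f(x1, x2, x3):
    """Hypothetical nonlinear measurement function (illustrative only)."""
    return x1 * np.exp(x2) / (1.0 + x3 ** 2)

x = np.array([2.0, 0.3, 0.5])      # best estimates of the inputs
u = np.array([0.05, 0.02, 0.04])   # standard uncertainties, independent inputs

# Sensitivity coefficients c_i = df/dx_i by central finite differences.
eps = 1e-6
c = np.empty(3)
for i in range(3):
    dx = np.zeros(3)
    dx[i] = eps
    c[i] = (f(*(x + dx)) - f(*(x - dx))) / (2 * eps)

contrib = (c * u) ** 2              # per-input variance contributions
u_c = np.sqrt(contrib.sum())        # combined standard uncertainty
for i, ci in enumerate(contrib, 1):
    print(f"x{i}: {100 * ci / contrib.sum():5.1f}% of combined variance")
print(f"u_c(y) = {u_c:.4f}")
```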

  8. Variance in parametric images: direct estimation from parametric projections

    International Nuclear Information System (INIS)

    Maguire, R.P.; Leenders, K.L.; Spyrou, N.M.

    2000-01-01

    Recent work has shown that it is possible to apply linear kinetic models to dynamic projection data in PET in order to calculate parameter projections. These can subsequently be back-projected to form parametric images - maps of parameters of physiological interest. Critical to the application of these maps, to test for significant changes between normal and pathophysiology, is an assessment of the statistical uncertainty. In this context, parametric images also include simple integral images from, e.g., [O-15]-water used to calculate statistical parametric maps (SPMs). This paper revisits the concept of parameter projections and presents a more general formulation of the parameter projection derivation as well as a method to estimate parameter variance in projection space, showing which analysis methods (models) can be used. Using simulated pharmacokinetic image data we show that a method based on an analysis in projection space inherently calculates the mathematically rigorous pixel variance. This results in an estimation which is as accurate as either estimating variance in image space during model fitting, or estimation by comparison across sets of parametric images - as might be done between individuals in a group pharmacokinetic PET study. The method based on projections has, however, a higher computational efficiency, and is also shown to be more precise, as reflected in smooth variance distribution images when compared to the other methods. (author)

  9. Computational Fluid Dynamics (CFD) Image of Hyper-X Research Vehicle at Mach 7 with Engine Operating

    Science.gov (United States)

    1997-01-01

    This computational fluid dynamics (CFD) image shows the Hyper-X vehicle at a Mach 7 test condition with the engine operating. The solution includes both internal (scramjet engine) and external flow fields, including the interaction between the engine exhaust and vehicle aerodynamics. The image illustrates surface heat transfer on the vehicle surface (red is highest heating) and flowfield contours at local Mach number. The last contour illustrates the engine exhaust plume shape. This solution approach is one method of predicting the vehicle performance, and the best method for determination of vehicle structural, pressure and thermal design loads. The Hyper-X program is an ambitious series of experimental flights to expand the boundaries of high-speed aeronautics and develop new technologies for space access. When the first of three aircraft flies, it will be the first time a non-rocket engine has powered a vehicle in flight at hypersonic speeds--speeds above Mach 5, equivalent to about one mile per second or approximately 3,600 miles per hour at sea level. Hyper-X, the flight vehicle for which is designated as X-43A, is an experimental flight-research program seeking to demonstrate airframe-integrated, 'air-breathing' engine technologies that promise to increase payload capacity for future vehicles, including hypersonic aircraft (faster than Mach 5) and reusable space launchers. This multiyear program is currently underway at NASA Dryden Flight Research Center, Edwards, California. The Hyper-X schedule calls for its first flight later this year (2000). Hyper-X is a joint program, with Dryden sharing responsibility with NASA's Langley Research Center, Hampton, Virginia. Dryden's primary role is to fly three unpiloted X-43A research vehicles to validate engine technologies and hypersonic design tools as well as the hypersonic test facility at Langley. Langley manages the program and leads the technology development effort. The Hyper-X Program seeks to significantly

  10. A geometric approach to multiperiod mean variance optimization of assets and liabilities

    OpenAIRE

    Leippold, Markus; Trojani, Fabio; Vanini, Paolo

    2005-01-01

    We present a geometric approach to discrete time multiperiod mean variance portfolio optimization that largely simplifies the mathematical analysis and the economic interpretation of such model settings. We show that multiperiod mean variance optimal policies can be decomposed in an orthogonal set of basis strategies, each having a clear economic interpretation. This implies that the corresponding multi period mean variance frontiers are spanned by an orthogonal basis of dynamic returns. Spec...

  11. Comparison between Hydrogen, Methane and Ethylene Fuels in a 3-D Scramjet at Mach 8

    Science.gov (United States)

    2016-06-24

    scramjet using a cavity-based flame holder in the T4 shock tunnel at The University of Queensland, as well as a companion fundamental CFD study. Subject terms: Airbreathing Engines, Hypersonics, Propulsion, AOARD. Report title: Comparison between hydrogen, methane and ethylene fuels in a 3-D Scramjet at Mach 8; Professor Michael K. Smart, Chair of Hypersonic Propulsion.

  12. Fast wave experiments in LAPD: RF sheaths, convective cells and density modifications

    Science.gov (United States)

    Carter, T. A.; van Compernolle, B.; Martin, M.; Gekelman, W.; Pribyl, P.; van Eester, D.; Crombe, K.; Perkins, R.; Lau, C.; Martin, E.; Caughman, J.; Tripathi, S. K. P.; Vincena, S.

    2017-10-01

    An overview is presented of recent work on ICRF physics at the Large Plasma Device (LAPD) at UCLA. The LAPD has typical plasma parameters ne ~ 10^12-10^13 cm^-3, Te ~ 1-10 eV and B ~ 1000 G. A new high-power (~150 kW) RF system and fast wave antenna have been developed for LAPD. The source runs at a frequency of 2.4 MHz, corresponding to 1-7 fci, depending on plasma parameters. Evidence of rectified RF sheaths is seen in large increases (~10 Te) in the plasma potential on field lines connected to the antenna. The rectified potential scales linearly with antenna current. The rectified RF sheaths set up convective cells of local E × B flows, measured indirectly by potential measurements, and measured directly with Mach probes. At high antenna powers substantial modifications of the density profile were observed. The plasma density profile initially exhibits transient low frequency oscillations (~10 kHz). The amplitude of the fast wave fields in the core plasma is modulated at the same low frequency, suggesting fast wave coupling is affected by the density rearrangement. Work performed at the Basic Plasma Science Facility, supported jointly by the National Science Foundation and the Department of Energy.

  13. All-optical signal regeneration at 40 Gbit/s using a Mach-Zehnder Interferometer based on semiconductor optical amplifiers

    DEFF Research Database (Denmark)

    Bischoff, Svend; Mørk, Jesper

    2000-01-01

    Summary form only given. All-optical signal regeneration and processing are interesting for high bit-rate transmission systems. The Mach-Zehnder interferometer (MZI) is a promising device for functionalities like all-optical add/drop and signal regeneration. Wavelength conversion up to 20 Gbit...... and optimization issues....

  14. Mean-variance portfolio selection and efficient frontier for defined contribution pension schemes

    DEFF Research Database (Denmark)

    Højgaard, Bjarne; Vigna, Elena

    We solve a mean-variance portfolio selection problem in the accumulation phase of a defined contribution pension scheme. The efficient frontier, which is found for the 2 asset case as well as the n + 1 asset case, gives the member the possibility to decide his own risk/reward profile. The mean...... as a mean-variance optimization problem. It is shown that the corresponding mean and variance of the final fund belong to the efficient frontier and also the opposite, that each point on the efficient frontier corresponds to a target-based optimization problem. Furthermore, numerical results indicate...... that the largely adopted lifestyle strategy seems to be very far from being efficient in the mean-variance setting....

  15. A combined volume-of-fluid method and low-Mach-number approach for DNS of evaporating droplets in turbulence

    Science.gov (United States)

    Dodd, Michael; Ferrante, Antonino

    2017-11-01

    Our objective is to perform DNS of finite-size droplets that are evaporating in isotropic turbulence. This requires fully resolving the process of momentum, heat, and mass transfer between the droplets and surrounding gas. We developed a combined volume-of-fluid (VOF) method and low-Mach-number approach to simulate this flow. The two main novelties of the method are: (i) the VOF algorithm captures the motion of the liquid-gas interface in the presence of mass transfer due to evaporation and condensation without requiring a projection step for the liquid velocity, and (ii) the low-Mach-number approach allows for local volume changes caused by phase change while the total volume of the liquid-gas system is constant. The method is verified against an analytical solution for a Stefan flow problem, and the D2 law is verified for a single droplet in quiescent gas. We also demonstrate the scheme's robustness when performing DNS of an evaporating droplet in forced isotropic turbulence.

  16. ASYMMETRY OF MARKET RETURNS AND THE MEAN VARIANCE FRONTIER

    OpenAIRE

    SENGUPTA, Jati K.; PARK, Hyung S.

    1994-01-01

    The hypothesis that the skewness and asymmetry have no significant impact on the mean variance frontier is found to be strongly violated by monthly U.S. data over the period January 1965 through December 1974. This result raises serious doubts whether the common market portfolios such as SP 500, value weighted and equal weighted returns can serve as suitable proxies for mean-variance efficient portfolios in the CAPM framework. A new test for assessing the impact of skewness on the variance fr...

  17. Femtosecond laser writing of a flat-top interleaver via cascaded Mach-Zehnder interferometers.

    Science.gov (United States)

    Ng, Jason C; Li, Chengbo; Herman, Peter R; Qian, Li

    2012-07-30

    A flat-top interleaver consisting of cascaded Mach-Zehnder interferometers (MZIs) was fabricated in bulk glass by femtosecond laser direct writing. Spectral contrast ratios of greater than 15 dB were demonstrated over a 30 nm bandwidth for 3 nm channel spacing. The observed spectral response agreed well with a standard transfer matrix model generated from responses of individual optical components, demonstrating the possibility for multi-component optical design as well as sufficient process accuracy and fabrication consistency for femtosecond laser writing of advanced optical circuits in three dimensions.

  18. Global Gravity Wave Variances from Aura MLS: Characteristics and Interpretation

    Science.gov (United States)

    2008-12-01

    slight longitudinal variations, with secondary high-latitude peaks occurring over Greenland and Europe. As the QBO changes to the westerly phase, the...equatorial GW temperature variances from suborbital data (e.g., Eckermann et al. 1995). The extratropical wave variances are generally larger in the...emanating from tropopause altitudes, presumably radiated from tropospheric jet stream instabilities associated with baroclinic storm systems that

  19. A New Approach for Predicting the Variance of Random Decrement Functions

    DEFF Research Database (Denmark)

    Asmussen, J. C.; Brincker, Rune

    mean Gaussian distributed processes the RD functions are proportional to the correlation functions of the processes. If a linear structure is loaded by Gaussian white noise the modal parameters can be extracted from the correlation functions of the response, only. One of the weaknesses of the RD...... technique is that no consistent approach to estimate the variance of the RD functions is known. Only approximate relations are available, which can only be used under special conditions. The variance of the RD functions contains valuable information about accuracy of the estimates. Furthermore, the variance...... can be used as a basis for a decision about how many time lags from the RD functions should be used in the modal parameter extraction procedure. This paper suggests a new method for estimating the variance of the RD functions. The method is consistent in the sense that the accuracy of the approach...
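
    For readers unfamiliar with the Random Decrement (RD) technique referred to above, the sketch below estimates an RD function from a synthetic single-degree-of-freedom response using a simple level-crossing triggering condition; it illustrates the basic conditional averaging only, on made-up data, and does not implement the variance estimation method proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
n, dt = 100_000, 0.01

# Synthetic response: lightly damped SDOF system driven by white noise.
wn, zeta = 2 * np.pi * 1.0, 0.02
x = np.zeros(n)
v = np.zeros(n)
for i in range(n - 1):
    acc = (-2 * zeta * wn * v[i] - wn ** 2 * x[i]
           + rng.standard_normal() / np.sqrt(dt))
    v[i + 1] = v[i] + acc * dt
    x[i + 1] = x[i] + v[i + 1] * dt

# RD estimate: average segments following each up-crossing of the level a0.
a0, nlag = x.std(), 1000
triggers = np.where((x[:-nlag - 1] < a0) & (x[1:-nlag] >= a0))[0] + 1
segments = np.stack([x[t:t + nlag] for t in triggers])
rd = segments.mean(axis=0)   # for zero-mean Gaussian input, ~ correlation function

print(f"{len(triggers)} triggering points, RD(0) = {rd[0]:.3f} (level {a0:.3f})")
```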

  20. Use of genomic models to study genetic control of environmental variance

    DEFF Research Database (Denmark)

    Yang, Ye; Christensen, Ole Fredslund; Sorensen, Daniel

    2011-01-01

    . The genomic model commonly found in the literature, with marker effects affecting mean only, is extended to investigate putative effects at the level of the environmental variance. Two classes of models are proposed and their behaviour, studied using simulated data, indicates that they are capable...... of detecting genetic variation at the level of mean and variance. Implementation is via Markov chain Monte Carlo (McMC) algorithms. The models are compared in terms of a measure of global fit, in their ability to detect QTL effects and in terms of their predictive power. The models are subsequently fitted...... to back fat thickness data in pigs. The analysis of back fat thickness shows that the data support genomic models with effects on the mean but not on the variance. The relative sizes of experiment necessary to detect effects on mean and variance is discussed and an extension of the McMC algorithm...

  1. A New Approach for Predicting the Variance of Random Decrement Functions

    DEFF Research Database (Denmark)

    Asmussen, J. C.; Brincker, Rune

    1998-01-01

    mean Gaussian distributed processes the RD functions are proportional to the correlation functions of the processes. If a linear structure is loaded by Gaussian white noise the modal parameters can be extracted from the correlation functions of the response, only. One of the weaknesses of the RD...... technique is that no consistent approach to estimate the variance of the RD functions is known. Only approximate relations are available, which can only be used under special conditions. The variance of the RD functions contains valuable information about accuracy of the estimates. Furthermore, the variance...... can be used as a basis for a decision about how many time lags from the RD functions should be used in the modal parameter extraction procedure. This paper suggests a new method for estimating the variance of the RD functions. The method is consistent in the sense that the accuracy of the approach...

  2. Engineering method for aero-propulsive characteristics at hypersonic Mach numbers

    Science.gov (United States)

    Goradia, Suresh; Torres, Abel O.; Stack, Sharon H.; Everhart, Joel L.

    1991-01-01

    An engineering method has been developed for the rapid analysis of external aerodynamics and propulsive performance characteristics of airbreathing vehicles at hypersonic Mach numbers. This method, based on the theory of characteristics, has been developed to analyze fuselage-wing body combinations and body flaps with blunt or sharp leading/trailing edges. Arbitrary ratio of specific heat for the flowing medium can be specified in the program. Furthermore, the capability exists in the code to compute the inviscid inlet mass capture and momentum flux. The method is under development for computations of pressure distribution, and flow characteristics in the inlet, along with the effect of viscosity. Correlative studies have been performed for representative hypersonic configurations using the current method. The results of these correlations for various aerodynamics parameters are encouraging.

  3. Effects of positron density and temperature on large amplitude ion-acoustic waves in an electron-positron-ion plasma

    International Nuclear Information System (INIS)

    Nejoh, Y.N.

    1997-01-01

    The nonlinear wave structures of large amplitude ion-acoustic waves are studied in a plasma with positrons. We have presented the region of existence of the ion-acoustic waves by analysing the structure of the pseudopotential. The region of existence sensitively depends on the positron to electron density ratio, the ion to electron mass ratio and the positron to electron temperature ratio. It is shown that the maximum Mach number increases as the positron temperature increases and the region of existence of the ion-acoustic waves spreads as the positron temperature increases. 12 refs., 6 figs

  4. Some novel inequalities for fuzzy variables on the variance and its rational upper bound

    Directory of Open Access Journals (Sweden)

    Xiajie Yi

    2016-02-01

    Full Text Available Abstract Variance is of great significance in measuring the degree of deviation, which has gained extensive usage in many fields in practical scenarios. The definition of the variance on the basis of the credibility measure was first put forward in 2002. Following this idea, the calculation of the accurate value of the variance for some special fuzzy variables, like the symmetric and asymmetric triangular fuzzy numbers and the Gaussian fuzzy numbers, is presented in this paper, which turns out to be far more complicated. Thus, in order to better implement variance in real-life projects like risk control and quality management, we suggest a rational upper bound of the variance based on an inequality, together with its calculation formula, which can largely simplify the calculation process within a reasonable range. Meanwhile, some discussions between the variance and its rational upper bound are presented to show the rationality of the latter. Furthermore, two inequalities regarding the rational upper bound of variance and standard deviation of the sum of two fuzzy variables and their individual variances and standard deviations are proved. Subsequently, some numerical examples are illustrated to show the effectiveness and the feasibility of the proposed inequalities.

  5. Nonlinear waves in electron–positron–ion plasmas including charge ...

    Indian Academy of Sciences (India)

    The effects of the driving electric field, ion temperature, positron density, ion drift, Mach number and propagation angle are investigated. It is shown that depending on the driving electric field, ion temperature, positron density, ion drift, Mach number and propagation angle, the numerical solutions exhibit waveforms that are ...

  6. Development of the GEM-MACH-FireWork System: An Air Quality Model with On-line Wildfire Emissions within the Canadian Operational Air Quality Forecast System

    Science.gov (United States)

    Pavlovic, Radenko; Chen, Jack; Beaulieu, Paul-Andre; Anselmp, David; Gravel, Sylvie; Moran, Mike; Menard, Sylvain; Davignon, Didier

    2014-05-01

    A wildfire emissions processing system has been developed to incorporate near-real-time emissions from wildfires and large prescribed burns into Environment Canada's real-time GEM-MACH air quality (AQ) forecast system. Since the GEM-MACH forecast domain covers Canada and most of the U.S.A., including Alaska, fire location information is needed for both of these large countries. During AQ model runs, emissions from individual fire sources are injected into elevated model layers based on plume-rise calculations and then transport and chemistry calculations are performed. This "on the fly" approach to the insertion of the fire emissions provides flexibility and efficiency since on-line meteorology is used and computational overhead in emissions pre-processing is reduced. GEM-MACH-FireWork, an experimental wildfire version of GEM-MACH, was run in real-time mode for the summers of 2012 and 2013 in parallel with the normal operational version. 48-hour forecasts were generated every 12 hours (at 00 and 12 UTC). Noticeable improvements in the AQ forecasts for PM2.5 were seen in numerous regions where fire activity was high. Case studies evaluating model performance for specific regions and computed objective scores will be included in this presentation. Using the lessons learned from the last two summers, Environment Canada will continue to work towards the goal of incorporating near-real-time intermittent wildfire emissions into the operational air quality forecast system.

  7. A class of multi-period semi-variance portfolio for petroleum exploration and development

    Science.gov (United States)

    Guo, Qiulin; Li, Jianzhong; Zou, Caineng; Guo, Yujuan; Yan, Wei

    2012-10-01

    Variance is substituted by semi-variance in Markowitz's portfolio selection model. For dynamic valuation of exploration and development projects, one-period portfolio selection is extended to multi-period. In this article, a class of multi-period semi-variance exploration and development portfolio models is formulated originally. Besides, a hybrid genetic algorithm, which makes use of the position displacement strategy of the particle swarm optimiser as a mutation operation, is applied to solve the multi-period semi-variance model. For this class of portfolio model, numerical results show that the model is effective and feasible.
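
    As a minimal illustration of the semi-variance criterion mentioned above (not the multi-period model or the hybrid genetic algorithm of the paper), the sketch below computes the downside semi-variance of a set of hypothetical project returns relative to their mean.

```python
import numpy as np

# Hypothetical single-period returns of candidate exploration projects.
returns = np.array([0.12, -0.05, 0.30, 0.08, -0.20, 0.15])

mean = returns.mean()
downside = np.minimum(returns - mean, 0.0)      # only shortfalls below the mean
semi_variance = np.mean(downside ** 2)

print(f"mean = {mean:.4f}, variance = {returns.var():.4f}, "
      f"semi-variance = {semi_variance:.4f}")
```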

  8. Bayesian evaluation of constrained hypotheses on variances of multiple independent groups

    NARCIS (Netherlands)

    Böing-Messing, F.; van Assen, M.A.L.M.; Hofman, A.D.; Hoijtink, H.; Mulder, J.

    2017-01-01

    Research has shown that independent groups often differ not only in their means, but also in their variances. Comparing and testing variances is therefore of crucial importance to understand the effect of a grouping variable on an outcome variable. Researchers may have specific expectations

  9. Mach-Zehnder atom interferometer inside an optical fiber

    Science.gov (United States)

    Xin, Mingjie; Leong, Wuiseng; Chen, Zilong; Lan, Shau-Yu

    2017-04-01

    Precision measurement with light-pulse grating atom interferometry in free space has been used in the study of fundamental physics and in applications in inertial sensing. Recent development of photonic band-gap fibers allows light to travel in the hollow region while preserving its fundamental Gaussian mode. The fibers could provide a very promising platform to transfer cold atoms. Optically guided matter waves inside a hollow-core photonic band-gap fiber can mitigate the diffraction-limit problem and have the potential to bring research in the field of atomic sensing and precision measurement to the next level of compactness and accuracy. Here, we will show our experimental progress towards an atom interferometer in optical fibers. We designed an atom trapping scheme inside a hollow-core photonic band-gap fiber to create an optically guided matter-wave system, and studied the coherence properties of rubidium atoms in this guided system. We also demonstrate a Mach-Zehnder atom interferometer in the optical waveguide. This interferometer is promising for precision measurements and designs of mobile atomic sensors.

  10. Analysis of conditional genetic effects and variance components in developmental genetics.

    Science.gov (United States)

    Zhu, J

    1995-12-01

    A genetic model with additive-dominance effects and genotype x environment interactions is presented for quantitative traits with time-dependent measures. The genetic model for phenotypic means at time t conditional on phenotypic means measured at previous time (t-1) is defined. Statistical methods are proposed for analyzing conditional genetic effects and conditional genetic variance components. Conditional variances can be estimated by minimum norm quadratic unbiased estimation (MINQUE) method. An adjusted unbiased prediction (AUP) procedure is suggested for predicting conditional genetic effects. A worked example from cotton fruiting data is given for comparison of unconditional and conditional genetic variances and additive effects.

  11. Development of a treatability variance guidance document for US DOE mixed-waste streams

    International Nuclear Information System (INIS)

    Scheuer, N.; Spikula, R.; Harms, T.

    1990-03-01

    In response to the US Department of Energy's (DOE's) anticipated need for variances from the Resource Conservation and Recovery Act (RCRA) Land Disposal Restrictions (LDRs), a treatability variance guidance document was prepared. The guidance manual is for use by DOE facilities and operations offices. The manual was prepared as a part of an ongoing effort by DOE-EH to provide guidance for the operations offices and facilities to comply with the RCRA (LDRs). A treatability variance is an alternative treatment standard granted by EPA for a restricted waste. Such a variance is not an exemption from the requirements of the LDRs, but rather is an alternative treatment standard that must be met before land disposal. The manual, Guidance For Obtaining Variance From the Treatment Standards of the RCRA Land Disposal Restrictions (1), leads the reader through the process of evaluating whether a variance from the treatment standard is a viable approach and through the data-gathering and data-evaluation processes required to develop a petition requesting a variance. The DOE review and coordination process is also described and model language for use in petitions for DOE radioactive mixed waste (RMW) is provided. The guidance manual focuses on RMW streams, however the manual also is applicable to nonmixed, hazardous waste streams. 4 refs

  12. Effects of the Mach number on the evolution of vortex-surface fields in compressible Taylor-Green flows

    Science.gov (United States)

    Peng, Naifu; Yang, Yue

    2018-01-01

    We investigate the evolution of vortex-surface fields (VSFs) in compressible Taylor-Green flows at Mach numbers (Ma) ranging from 0.5 to 2.0 using direct numerical simulation. The formulation of VSFs in incompressible flows is extended to compressible flows, and a mass-based renormalization of VSFs is used to facilitate characterizing the evolution of a particular vortex surface. The effects of the Mach number on the VSF evolution are different in three stages. In the early stage, the jumps of the compressive velocity component near shocklets generate sinks to contract surrounding vortex surfaces, which shrink vortex volume and distort vortex surfaces. The subsequent reconnection of vortex surfaces, quantified by the minimal distance between approaching vortex surfaces and the exchange of vorticity fluxes, occurs earlier and has a higher reconnection degree for larger Ma owing to the dilatational dissipation and shocklet-induced reconnection of vortex lines. In the late stage, the positive dissipation rate and negative pressure work accelerate the loss of kinetic energy and suppress vortex twisting with increasing Ma.

  13. Optical density measurements on the examination of colon cancer tissues

    International Nuclear Information System (INIS)

    Touati, E.; Ajaal, T.; Hamassi, A.

    2015-01-01

    Automated quantitative image analysis can aid in cancer diagnosis, help manage medical treatment, and improve routine medical diagnosis. Early diagnosis can make a big difference between life and death. Microscopic images from two tissue types, forty-four normal and fifty-eight cancerous, were evaluated on their ability to identify abnormalities in colon images. An optical density approach is applied to extract parameters that exhibit cancer behavior in colon tissue images. Using a statistical toolbox, significant results of (p<0.0001) for the mean and the variance of the optical density parameter were detected, and (p<0.001) for the skewness of the optical density. Based on a linear discrimination method, the obtained result shows 90% accuracy for both sensitivity and specificity, and an overall accuracy of 90% (author)
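
    As an illustration of the optical-density features discussed above, the sketch below converts a grayscale tissue image to optical density (OD = -log10(I/I0)) and computes its mean, variance and skewness; the image here is random data standing in for a real micrograph, and the linear discriminant step of the study is not reproduced.

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(6)

# Stand-in for an 8-bit grayscale micrograph (a real image would be loaded here).
image = rng.integers(60, 250, size=(512, 512)).astype(float)

i0 = 255.0                                    # incident (background) intensity
od = -np.log10(np.clip(image, 1.0, i0) / i0)  # optical density per pixel

features = {
    "mean_OD": od.mean(),
    "var_OD": od.var(),
    "skew_OD": skew(od.ravel()),
}
print(features)
```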

  14. Terminal-shock and restart control of a Mach 2.5, axisymmetric, mixed compression inlet with 40 percent internal contraction. [wind tunnel tests

    Science.gov (United States)

    Baumbick, R. J.

    1974-01-01

    Results of experimental tests conducted on a supersonic, mixed-compression, axisymmetric inlet are presented. The inlet is designed for operation at Mach 2.5 with a turbofan engine (TF-30). The inlet was coupled to either a choked orifice plate or a long duct which had a variable-area choked exit plug. Closed-loop frequency responses of selected diffuser static pressures used in the terminal-shock control system are presented. Results are shown for Mach 2.5 conditions with the inlet coupled to either the choked orifice plate or the long duct. Inlet unstart-restart traces are also presented. High-response inlet bypass doors were used to generate an internal disturbance and also to achieve terminal-shock control.

  15. On the noise variance of a digital mammography system

    International Nuclear Information System (INIS)

    Burgess, Arthur

    2004-01-01

    A recent paper by Cooper et al. [Med. Phys. 30, 2614-2621 (2003)] contains some apparently anomalous results concerning the relationship between pixel variance and x-ray exposure for a digital mammography system. They found an unexpected peak in a display domain pixel variance plot as a function of 1/mAs (their Fig. 5) with a decrease in the range corresponding to high display data values, corresponding to low x-ray exposures. As they pointed out, if the detector response is linear in exposure and the transformation from raw to display data scales is logarithmic, then pixel variance should be a monotonically increasing function in the figure. They concluded that the total system transfer curve, between input exposure and display image data values, is not logarithmic over the full exposure range. They separated data analysis into two regions and plotted the logarithm of display image pixel variance as a function of the logarithm of the mAs used to produce the phantom images. They found a slope of minus one for high mAs values and concluded that the transfer function is logarithmic in this region. They found a slope of 0.6 for the low mAs region and concluded that the transfer curve was neither linear nor logarithmic for low exposure values. It is known that the digital mammography system investigated by Cooper et al. has a linear relationship between exposure and raw data values [Vedantham et al., Med. Phys. 27, 558-567 (2000)]. The purpose of this paper is to show that the variance effect found by Cooper et al. (their Fig. 5) arises because the transformation from the raw data scale (14 bits) to the display scale (12 bits), for the digital mammography system they investigated, is not logarithmic for raw data values less than about 300 (display data values greater than about 3300). At low raw data values the transformation is linear and prevents over-ranging of the display data scale. Parametric models for the two transformations will be presented. Results of pixel
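
    The slope-of-minus-one behaviour described above follows from simple error propagation: if the raw signal is linear in exposure with roughly Poisson (quantum-limited) noise and the display transform is logarithmic, then Var(display) ≈ (k/raw)² · Var(raw) ∝ 1/exposure. The sketch below simulates this with an invented gain and log-scale constant, not the actual detector calibration discussed in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
gain, k = 40.0, 1000.0          # hypothetical raw counts per mAs, log-scale gain

for mas in (4, 16, 63, 250):
    raw = rng.poisson(gain * mas, size=200_000).astype(float)   # linear detector
    display = k * np.log(raw)                                   # log display transform
    print(f"mAs={mas:4d}  1/mAs={1 / mas:.4f}  "
          f"display pixel variance={display.var():8.3f}")
```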

  16. Convective and global stability analysis of a Mach 5.8 boundary layer grazing a compliant surface

    Science.gov (United States)

    Dettenrieder, Fabian; Bodony, Daniel

    2016-11-01

    Boundary layer transition on high-speed vehicles is expected to be affected by unsteady surface compliance. The stability properties of a Mach 5.8 zero-pressure-gradient laminar boundary layer grazing a nominally-flat thermo-mechanically compliant panel are considered. The linearized compressible Navier-Stokes equations describe small amplitude disturbances in the fluid while the panel deformations are described by the Kirchhoff-Love plate equation and its thermal state by the transient heat equation. Compatibility conditions that couple disturbances in the fluid to those in the solid yield simple algebraic and Robin boundary conditions for the velocity and thermal states, respectively. A local convective stability analysis shows that the panel can modify both the first and second Mack modes when, for metallic-like panels, the panel thickness exceeds the lengthscale δ99 Re_x^(-0.5). A global stability analysis, which permits finite panel lengths with clamped-clamped boundary conditions, shows a rich eigenvalue spectrum with several branches. Unstable modes are found with streamwise-growing panel deformations leading to Mach wave-type radiation. Stable global modes are also found and have distinctly different panel modes but similar radiation patterns. Air Force Office of Scientific Research.

  17. Variance of a product with application to uranium estimation

    International Nuclear Information System (INIS)

    Lowe, V.W.; Waterman, M.S.

    1976-01-01

    The U in a container can either be determined directly by NDA or by estimating the weight of material in the container and the concentration of U in this material. It is important to examine the statistical properties of estimating the amount of U by multiplying the estimates of weight and concentration. The variance of the product determines the accuracy of the estimate of the amount of uranium. This paper examines the properties of estimates of the variance of the product of two random variables
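
    For independent weight and concentration estimates W and C, the exact variance of the product is Var(WC) = σ_W²σ_C² + σ_W²μ_C² + σ_C²μ_W². The sketch below checks this identity against a Monte Carlo sample using invented means and uncertainties, not values from the report.

```python
import numpy as np

rng = np.random.default_rng(8)

mu_w, sd_w = 100.0, 2.0     # hypothetical container net weight (kg) and its SD
mu_c, sd_c = 0.05, 0.004    # hypothetical U concentration (fraction) and its SD

# Exact variance of the product of two independent random variables.
var_exact = sd_w ** 2 * sd_c ** 2 + sd_w ** 2 * mu_c ** 2 + sd_c ** 2 * mu_w ** 2

# Monte Carlo check.
w = rng.normal(mu_w, sd_w, 1_000_000)
c = rng.normal(mu_c, sd_c, 1_000_000)
print(f"exact Var(WC) = {var_exact:.4f}, Monte Carlo = {(w * c).var():.4f}")
```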

  18. Accounting for non-stationary variance in geostatistical mapping of soil properties

    NARCIS (Netherlands)

    Wadoux, Alexandre M.J.C.; Brus, Dick J.; Heuvelink, Gerard B.M.

    2018-01-01

    Simple and ordinary kriging assume a constant mean and variance of the soil variable of interest. This assumption is often implausible because the mean and/or variance are linked to terrain attributes, parent material or other soil forming factors. In kriging with external drift (KED)

  19. Ulnar variance: its relationship to ulnar foveal morphology and forearm kinematics.

    Science.gov (United States)

    Kataoka, Toshiyuki; Moritomo, Hisao; Omokawa, Shohei; Iida, Akio; Murase, Tsuyoshi; Sugamoto, Kazuomi

    2012-04-01

    It is unclear how individual differences in the anatomy of the distal ulna affect kinematics and pathology of the distal radioulnar joint. This study evaluated how ulnar variance relates to ulnar foveal morphology and the pronosupination axis of the forearm. We performed 3-dimensional computed tomography studies in vivo on 28 forearms in maximum supination and pronation to determine the anatomical center of the ulnar distal pole and the forearm pronosupination axis. We calculated the forearm pronosupination axis using a markerless bone registration technique, which determined the pronosupination center as the point where the axis emerges on the distal ulnar surface. We measured the depth of the anatomical center and classified it into 2 types: concave, with a depth of 0.8 mm or more, and flat, with a depth less than 0.8 mm. We examined whether ulnar variance correlated with foveal type and the distance between anatomical and pronosupination centers. A total of 18 cases had a concave-type fovea surrounded by the C-shaped articular facet of the distal pole, and 10 had a flat-type fovea with a flat surface without evident central depression. Ulnar variance of the flat type was 3.5 ± 1.2 mm, which was significantly greater than the 1.2 ± 1.1 mm of the concave type. Ulnar variance positively correlated with distance between the anatomical and pronosupination centers. Flat-type ulnar heads have a significantly greater ulnar variance than concave types. The pronosupination axis passes through the ulnar head more medially and farther from the anatomical center with increasing ulnar variance. This study suggests that ulnar variance is related in part to foveal morphology and pronosupination axis. This information provides a starting point for future studies investigating how foveal morphology relates to distal ulnar problems. Copyright © 2012 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.

  20. The efficiency of the crude oil markets: Evidence from variance ratio tests

    Energy Technology Data Exchange (ETDEWEB)

    Charles, Amelie, E-mail: acharles@audencia.co [Audencia Nantes, School of Management, 8 route de la Joneliere, 44312 Nantes (France); Darne, Olivier, E-mail: olivier.darne@univ-nantes.f [LEMNA, University of Nantes, IEMN-IAE, Chemin de la Censive du Tertre, 44322 Nantes (France)

    2009-11-15

    This study examines the random walk hypothesis for the crude oil markets, using daily data over the period 1982-2008. The weak-form efficient market hypothesis for two crude oil markets (UK Brent and US West Texas Intermediate) is tested with non-parametric variance ratio tests developed by [Wright J.H., 2000. Alternative variance-ratio tests using ranks and signs. Journal of Business and Economic Statistics, 18, 1-9] and [Belaire-Franch J. and Contreras D., 2004. Ranks and signs-based multiple variance ratio tests. Working paper, Department of Economic Analysis, University of Valencia] as well as the wild-bootstrap variance ratio tests suggested by [Kim, J.H., 2006. Wild bootstrapping variance ratio tests. Economics Letters, 92, 38-43]. We find that the Brent crude oil market is weak-form efficient while the WTI crude oil market seems to be inefficient over the 1994-2008 sub-period, suggesting that deregulation has not improved the efficiency of the WTI crude oil market in the sense of making returns less predictable.
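
    As a minimal illustration of the variance-ratio idea behind the tests cited above (not the rank/sign or wild-bootstrap versions used in the study), the sketch below computes VR(q) = Var(q-period returns) / (q · Var(1-period returns)) for a synthetic return series; values near 1 are consistent with a random walk.

```python
import numpy as np

rng = np.random.default_rng(9)
r = rng.normal(0.0, 0.01, size=5000)           # synthetic daily log returns
log_prices = np.cumsum(r)                      # log price follows a random walk

def variance_ratio(logp, q):
    """Basic (overlapping) variance ratio statistic."""
    r1 = np.diff(logp)                         # 1-period returns
    rq = logp[q:] - logp[:-q]                  # overlapping q-period returns
    return rq.var(ddof=1) / (q * r1.var(ddof=1))

for q in (2, 5, 10, 20):
    print(f"VR({q:2d}) = {variance_ratio(log_prices, q):.3f}")
```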

  1. The efficiency of the crude oil markets. Evidence from variance ratio tests

    International Nuclear Information System (INIS)

    Charles, Amelie; Darne, Olivier

    2009-01-01

    This study examines the random walk hypothesis for the crude oil markets, using daily data over the period 1982-2008. The weak-form efficient market hypothesis for two crude oil markets (UK Brent and US West Texas Intermediate) is tested with non-parametric variance ratio tests developed by [Wright J.H., 2000. Alternative variance-ratio tests using ranks and signs. Journal of Business and Economic Statistics, 18, 1-9] and [Belaire-Franch J. and Contreras D., 2004. Ranks and signs-based multiple variance ratio tests. Working paper, Department of Economic Analysis, University of Valencia] as well as the wild-bootstrap variance ratio tests suggested by [Kim, J.H., 2006. Wild bootstrapping variance ratio tests. Economics Letters, 92, 38-43]. We find that the Brent crude oil market is weak-form efficient while the WTI crude oil market seems to be inefficient over the 1994-2008 sub-period, suggesting that deregulation has not improved the efficiency of the WTI crude oil market in the sense of making returns less predictable. (author)

  2. The efficiency of the crude oil markets. Evidence from variance ratio tests

    Energy Technology Data Exchange (ETDEWEB)

    Charles, Amelie [Audencia Nantes, School of Management, 8 route de la Joneliere, 44312 Nantes (France); Darne, Olivier [LEMNA, University of Nantes, IEMN-IAE, Chemin de la Censive du Tertre, 44322 Nantes (France)

    2009-11-15

    This study examines the random walk hypothesis for the crude oil markets, using daily data over the period 1982-2008. The weak-form efficient market hypothesis for two crude oil markets (UK Brent and US West Texas Intermediate) is tested with non-parametric variance ratio tests developed by [Wright J.H., 2000. Alternative variance-ratio tests using ranks and signs. Journal of Business and Economic Statistics, 18, 1-9] and [Belaire-Franch J. and Contreras D., 2004. Ranks and signs-based multiple variance ratio tests. Working paper, Department of Economic Analysis, University of Valencia] as well as the wild-bootstrap variance ratio tests suggested by [Kim, J.H., 2006. Wild bootstrapping variance ratio tests. Economics Letters, 92, 38-43]. We find that the Brent crude oil market is weak-form efficient while the WTI crude oil market seems to be inefficient over the 1994-2008 sub-period, suggesting that deregulation has not improved the efficiency of the WTI crude oil market in the sense of making returns less predictable. (author)

  3. Hydrograph variances over different timescales in hydropower production networks

    Science.gov (United States)

    Zmijewski, Nicholas; Wörman, Anders

    2016-08-01

    The operation of water reservoirs involves a spectrum of timescales based on the distribution of stream flow travel times between reservoirs, as well as the technical, environmental, and social constraints imposed on the operation. In this research, a hydrodynamically based description of the flow between hydropower stations was implemented to study the relative importance of wave diffusion on the spectrum of hydrograph variance in a regulated watershed. Using spectral decomposition of the effluence hydrograph of a watershed, an exact expression of the variance in the outflow response was derived, as a function of the trends of hydraulic and geomorphologic dispersion and management of production and reservoirs. We show that the power spectra of the time series involved follow nearly fractal patterns, which facilitates examination of the relative importance of wave diffusion and possible changes in production demand on the outflow spectrum. The exact spectral solution can also identify statistical bounds of future demand patterns due to limitations in storage capacity. The impact of the hydraulic description of the stream flow on the reservoir discharge was examined for a given power demand in River Dalälven, Sweden, as a function of a stream flow Peclet number. The regulation of hydropower production on the River Dalälven generally increased the short-term variance in the effluence hydrograph, whereas wave diffusion decreased the short-term variance, as a result of current production objectives.

  4. Variance reduction methods applied to deep-penetration problems

    International Nuclear Information System (INIS)

    Cramer, S.N.

    1984-01-01

    All deep-penetration Monte Carlo calculations require variance reduction methods. Before beginning with a detailed approach to these methods, several general comments concerning deep-penetration calculations by Monte Carlo, the associated variance reduction, and the similarities and differences of these with regard to non-deep-penetration problems will be addressed. The experienced practitioner of Monte Carlo methods will easily find exceptions to any of these generalities, but it is felt that these comments will aid the novice in understanding some of the basic ideas and nomenclature. Also, from a practical point of view, the discussions and developments presented are oriented toward use of the computer codes which are presented in segments of this Monte Carlo course
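
    As one concrete example of the kind of variance reduction such deep-penetration calculations rely on, the sketch below compares analog Monte Carlo with implicit capture (survival biasing plus Russian roulette) for a toy slab-transmission problem. The one-speed cross sections, slab thickness, and weight cutoff are illustrative assumptions, not values taken from the course material.

        # Minimal sketch contrasting analog Monte Carlo with implicit capture
        # (survival biasing) for a deep-penetration slab-transmission estimate.
        # One-speed transport, isotropic scattering, purely illustrative data.
        import numpy as np

        rng = np.random.default_rng(1)
        SIG_T, SIG_S, THICKNESS = 1.0, 0.7, 10.0   # total / scattering cross sections, slab depth (mfp)

        def history(implicit_capture):
            x, mu, w = 0.0, 1.0, 1.0               # position, direction cosine, statistical weight
            while True:
                x += mu * (-np.log(rng.random()) / SIG_T)   # sample distance to next collision
                if x >= THICKNESS:
                    return w                        # transmitted: score the carried weight
                if x < 0.0:
                    return 0.0                      # leaked back out of the slab
                if implicit_capture:
                    w *= SIG_S / SIG_T              # absorb a weight fraction instead of killing
                    if w < 1e-3:                    # Russian roulette on low-weight particles
                        if rng.random() < 0.5:
                            return 0.0
                        w *= 2.0
                elif rng.random() > SIG_S / SIG_T:
                    return 0.0                      # analog absorption kills the history
                mu = 2.0 * rng.random() - 1.0       # isotropic scattering

        for mode in (False, True):
            scores = np.array([history(mode) for _ in range(20000)])
            print("implicit" if mode else "analog  ",
                  scores.mean(), scores.std(ddof=1) / np.sqrt(scores.size))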

  5. Cumulative prospect theory and mean variance analysis. A rigorous comparison

    OpenAIRE

    Hens, Thorsten; Mayer, Janos

    2012-01-01

    We compare asset allocations derived for cumulative prospect theory (CPT) based on two different methods: maximizing CPT along the mean-variance efficient frontier and maximizing it without that restriction. We find that with normally distributed returns the difference is negligible. However, using standard asset allocation data of pension funds the difference is considerable. Moreover, with derivatives like call options the restriction to the mean-variance efficient frontier results in a siza...

  6. Cusping, transport and variance of solutions to generalized Fokker-Planck equations

    Science.gov (United States)

    Carnaffan, Sean; Kawai, Reiichiro

    2017-06-01

    We study properties of solutions to generalized Fokker-Planck equations through the lens of the probability density functions of anomalous diffusion processes. In particular, we examine solutions in terms of their cusping, travelling wave behaviours, and variance, within the framework of stochastic representations of generalized Fokker-Planck equations. We give our analysis in the cases of anomalous diffusion driven by the inverses of the stable, tempered stable and gamma subordinators, demonstrating the impact of changing the distribution of waiting times in the underlying anomalous diffusion model. We also analyse the cases where the underlying anomalous diffusion contains a Lévy jump component in the parent process, and when a diffusion process is time changed by an uninverted Lévy subordinator. On the whole, we present a combination of four criteria which serve as a theoretical basis for model selection, statistical inference and predictions for physical experiments on anomalously diffusing systems. We discuss possible applications in physical experiments, including, with reference to specific examples, the potential for model misclassification and how combinations of our four criteria may be used to overcome this issue.
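
    As a minimal numerical illustration of the variance criterion for one of these cases, the sketch below simulates a continuous-time random walk with heavy-tailed (Pareto) waiting times, a standard proxy for Brownian motion time-changed by an inverse stable subordinator, and checks that Var[X(t)] grows roughly like t to the power alpha rather than linearly in t. The exponent, path count, and jump count are illustrative assumptions.

        # Variance (MSD) of a continuous-time random walk with Pareto waiting times:
        # for alpha < 1 the waits have infinite mean, giving subdiffusive scaling.
        import numpy as np

        rng = np.random.default_rng(2)
        alpha, n_paths, n_jumps = 0.7, 500, 4000

        waits = rng.pareto(alpha, size=(n_paths, n_jumps)) + 1.0   # heavy-tailed waiting times
        event_times = np.cumsum(waits, axis=1)
        steps = rng.normal(0.0, 1.0, size=(n_paths, n_jumps))
        positions = np.cumsum(steps, axis=1)

        for t in (1e2, 1e3, 1e4):
            n_t = (event_times <= t).sum(axis=1)                   # jumps completed by time t
            x_t = np.where(n_t > 0,
                           positions[np.arange(n_paths), np.maximum(n_t - 1, 0)],
                           0.0)
            print(f"t = {t:8.0f}   Var[X(t)] ~ {x_t.var():8.2f}")  # grows roughly like t**alpha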

  7. Variance in exposed perturbations impairs retention of visuomotor adaptation.

    Science.gov (United States)

    Canaveral, Cesar Augusto; Danion, Frédéric; Berrigan, Félix; Bernier, Pierre-Michel

    2017-11-01

    Sensorimotor control requires an accurate estimate of the state of the body. The brain optimizes state estimation by combining sensory signals with predictions of the sensory consequences of motor commands using a forward model. Given that both sensory signals and predictions are uncertain (i.e., noisy), the brain optimally weights the relative reliance on each source of information during adaptation. In support, it is known that uncertainty in the sensory predictions influences the rate and generalization of visuomotor adaptation. We investigated whether uncertainty in the sensory predictions affects the retention of a new visuomotor relationship. This was done by exposing three separate groups to a visuomotor rotation whose mean was common at 15° counterclockwise but whose variance around the mean differed (i.e., SD of 0°, 3.2°, or 4.5°). Retention was assessed by measuring the persistence of the adapted behavior in a no-vision phase. Results revealed that mean reach direction late in adaptation was similar across groups, suggesting it depended mainly on the mean of exposed rotations and was robust to differences in variance. However, retention differed across groups, with higher levels of variance being associated with a more rapid reversion toward nonadapted behavior. A control experiment ruled out the possibility that differences in retention were accounted for by differences in success rates. Exposure to variable rotations may have increased the uncertainty in sensory predictions, making the adapted forward model more labile and susceptible to change or decay. NEW & NOTEWORTHY The brain predicts the sensory consequences of motor commands through a forward model. These predictions are subject to uncertainty. We use visuomotor adaptation and modulate uncertainty in the sensory predictions by manipulating the variance in exposed rotations. Results reveal that variance does not influence the final extent of adaptation but selectively impairs the retention of

  8. Variance Reduction Techniques in Monte Carlo Methods

    NARCIS (Netherlands)

    Kleijnen, Jack P.C.; Ridder, A.A.N.; Rubinstein, R.Y.

    2010-01-01

    Monte Carlo methods are simulation algorithms to estimate a numerical quantity in a statistical model of a real system. These algorithms are executed by computer programs. Variance reduction techniques (VRT) are needed, even though computer speed has been increasing dramatically, ever since the
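
    A minimal sketch of one classic variance reduction technique of the kind surveyed, antithetic variates, applied to the toy integral E[exp(U)] with U uniform on (0, 1). The example is generic and not taken from the chapter; sample sizes are arbitrary.

        # Antithetic variates: pair each uniform draw U with 1 - U so that the
        # paired estimates are negatively correlated, reducing the variance.
        import numpy as np

        rng = np.random.default_rng(3)
        n = 100_000

        u = rng.random(n)
        crude = np.exp(u)                                              # plain Monte Carlo, n draws
        anti = 0.5 * (np.exp(u[: n // 2]) + np.exp(1.0 - u[: n // 2])) # n/2 antithetic pairs, n evaluations

        print("true value        ", np.e - 1.0)
        print("crude MC estimate ", crude.mean(), "var", crude.var(ddof=1))
        print("antithetic        ", anti.mean(),  "var", anti.var(ddof=1))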

  9. Decomposition of variance for spatial Cox processes

    DEFF Research Database (Denmark)

    Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus

    Spatial Cox point processes is a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models...

  10. Decomposition of variance for spatial Cox processes

    DEFF Research Database (Denmark)

    Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus

    2013-01-01

    Spatial Cox point processes is a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models...

  11. The density compression ratio of shock fronts associated with coronal mass ejections

    Directory of Open Access Journals (Sweden)

    Kwon Ryun-Young

    2018-01-01

    We present a new method to extract the three-dimensional electron density profile and density compression ratio of shock fronts associated with coronal mass ejections (CMEs) observed in white light coronagraph images. We demonstrate the method with two examples of fast halo CMEs (∼2000 km s^-1) observed on 2011 March 7 and 2014 February 25. Our method uses the ellipsoid model to derive the three-dimensional geometry and kinematics of the fronts. The density profiles of the sheaths are modeled with double-Gaussian functions with four free parameters, and the electrons are distributed within thin shells behind the front. The modeled densities are integrated along the lines of sight to be compared with the observed brightness in COR2-A, and a χ^2 approach is used to obtain the optimal parameters for the Gaussian profiles. The upstream densities are obtained from both the inversion of the brightness in a pre-event image and an empirical model. Then the density ratio and Alfvénic Mach number are derived. We find that the density compression peaks around the CME nose, and decreases at larger position angles. The behavior is consistent with a driven shock at the nose and a freely propagating shock wave at the CME flanks. Interestingly, we find that the supercritical region extends over a large area of the shock and lasts longer (several tens of minutes) than past reports. It follows that CME shocks are capable of accelerating energetic particles in the corona over extended spatial and temporal scales and are likely responsible for the wide longitudinal distribution of these particles in the inner heliosphere. Our results also demonstrate the power of multi-viewpoint coronagraphic observations and forward modeling in remotely deriving key shock properties in an otherwise inaccessible regime.
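
    The sketch below illustrates the forward-modeling idea in schematic form only: a double-Gaussian electron density enhancement confined to a thin shell behind the shock front, from which a density compression ratio follows as the peak downstream density divided by the upstream density. All profile parameters are invented placeholders, not the fitted values of the study.

        # Schematic double-Gaussian sheath density behind a shock front and the
        # resulting compression ratio. Numbers are illustrative, not fitted values.
        import numpy as np

        def sheath_density(r, r_shock, n_up, a1, s1, a2, s2):
            """Upstream density plus a double-Gaussian sheath behind the front at r_shock."""
            dr = r_shock - r                      # distance behind the front (> 0 downstream)
            sheath = np.where(dr >= 0.0,
                              a1 * np.exp(-0.5 * (dr / s1) ** 2) +
                              a2 * np.exp(-0.5 * (dr / s2) ** 2),
                              0.0)
            return n_up + sheath

        r = np.linspace(3.0, 6.0, 601)            # heliocentric distance (solar radii), illustrative
        n_up = 1.0e4                              # upstream density (cm^-3), illustrative
        n = sheath_density(r, r_shock=5.0, n_up=n_up, a1=2.5e4, s1=0.08, a2=0.8e4, s2=0.3)

        ratio = n.max() / n_up                    # density compression ratio X
        print("density compression ratio X =", round(ratio, 2))
        # The Rankine-Hugoniot relations then relate X and the upstream Alfven
        # speed to the Alfvenic Mach number of the shock.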

  12. Mach-Zehnder interferometric photonic crystal fiber for low acoustic frequency detections

    Energy Technology Data Exchange (ETDEWEB)

    Pawar, Dnyandeo; Rao, Ch. N.; Kale, S. N., E-mail: sangeetakale2004@gmail.com [Department of Applied Physics, Defence Institute of Advanced Technology (DU), Girinagar, Pune 411 025, Maharashtra (India); Choubey, Ravi Kant [Department of Applied Physics, Amity Institute of Applied Sciences, Amity University, Noida 201 313 (India)

    2016-01-25

    Detecting low-frequency underwater acoustic signals is challenging, especially for marine applications. A Mach-Zehnder interferometric hydrophone is demonstrated using polarization-maintaining photonic crystal fiber (PM-PCF) spliced between two single-mode fibers and operated with a 1550 nm source. The data are compared with those from a standard hydrophone and from single-mode and multimode fibers. The PM-PCF sensor shows the highest response, with a power shift of 2.32 dBm and a wavelength shift of 392.8 pm at 200 Hz. The high birefringence of the fiber and the imparted acoustic pressure, which alters the difference between the fast and slow axes and hence the phase of the propagating waves, demonstrate the strain-optic behavior of the sensor.

  13. Sub-shot-noise phase sensitivity with a Bose-Einstein condensate Mach-Zehnder interferometer

    International Nuclear Information System (INIS)

    Pezze, L.; Smerzi, A.; Collins, L.A.; Berman, G.P.; Bishop, A.R.

    2005-01-01

    Bose-Einstein condensates (BEC), with their coherence properties, have attracted wide interest for their possible application to ultraprecise interferometry and ultraweak force sensors. Since condensates, unlike photons, are interacting, they may permit the realization of specific quantum states needed as input of an interferometer to approach the Heisenberg limit, the supposed lower bound to precision phase measurements. To this end, we study the sensitivity to external weak perturbations of a representative matter-wave Mach-Zehnder interferometer whose input are two Bose-Einstein condensates created by splitting a single condensate in two parts. The interferometric phase sensitivity depends on the specific quantum state created with the two condensates, and, therefore, on the time scale of the splitting process. We identify three different regimes, characterized by a phase sensitivity Δθ scaling with the total number of condensate particles N as (i) the standard quantum limit Δθ ∼ 1/N^(1/2), (ii) the sub-shot-noise limit Δθ ∼ 1/N^(3/4), and (iii) the Heisenberg limit Δθ ∼ 1/N. However, in a realistic dynamical BEC splitting, the 1/N limit requires a long adiabaticity time scale, which is hardly reachable experimentally. On the other hand, the sub-shot-noise sensitivity Δθ ∼ 1/N^(3/4) can be reached in a realistic experimental setting. We also show that the 1/N^(3/4) scaling is a rigorous upper bound in the limit N→∞, while keeping constant all different parameters of the bosonic Mach-Zehnder interferometer
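
    For orientation, the short sketch below tabulates the three quoted sensitivity scalings at a few particle numbers, with all order-unity prefactors set to 1; this simplification is an assumption made only for illustration.

        # Phase sensitivity in the three regimes quoted above, prefactors set to 1:
        # standard quantum limit N^(-1/2), sub-shot-noise N^(-3/4), Heisenberg 1/N.
        for N in (1e2, 1e4, 1e6):
            sql, sub, heis = N ** -0.5, N ** -0.75, 1.0 / N
            print(f"N = {N:9.0f}   SQL {sql:.1e}   sub-shot-noise {sub:.1e}   Heisenberg {heis:.1e}")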

  14. High energy density Z-pinch plasmas using flow stabilization

    Energy Technology Data Exchange (ETDEWEB)

    Shumlak, U., E-mail: shumlak@uw.edu; Golingo, R. P.; Nelson, B. A.; Bowers, C. A.; Doty, S. A.; Forbes, E. G.; Hughes, M. C.; Kim, B.; Knecht, S. D.; Lambert, K. K.; Lowrie, W.; Ross, M. P.; Weed, J. R. [Aerospace and Energetics Research Program, University of Washington, Seattle, Washington, 98195-2250 (United States)]

    2014-12-15

    The ZaP Flow Z-Pinch research project[1] at the University of Washington investigates the effect of sheared flows on MHD instabilities. Axially flowing Z-pinch plasmas are produced that are 100 cm long with a 1 cm radius. The plasma remains quiescent for many radial Alfvén times and axial flow times. The quiescent periods are characterized by low magnetic mode activity measured at several locations along the plasma column and by stationary visible plasma emission. Plasma evolution is modeled with high-resolution simulation codes – Mach2, WARPX, NIMROD, and HiFi. Plasma flow profiles are experimentally measured with a multi-chord ion Doppler spectrometer. A sheared flow profile is observed to be coincident with the quiescent period, and is consistent with classical plasma viscosity. Equilibrium is determined by diagnostic measurements: interferometry for density; spectroscopy for ion temperature, plasma flow, and density[2]; Thomson scattering for electron temperature; Zeeman splitting for internal magnetic field measurements[3]; and fast framing photography for global structure. Wall stabilization has been investigated computationally and experimentally by removing 70% of the surrounding conducting wall to demonstrate no change in stability behavior.[4] Experimental evidence suggests that the plasma lifetime is only limited by plasma supply and current waveform. The flow Z-pinch concept provides an approach to achieve high energy density plasmas,[5] which are large, easy to diagnose, and persist for extended durations. A new experiment, ZaP-HD, has been built to investigate this approach by separating the flow Z-pinch formation from the radial compression using a triaxial-electrode configuration. This innovation allows more detailed investigations of the sheared flow stabilizing effect, and it allows compression to much higher densities than previously achieved on ZaP by reducing the linear density and increasing the pinch current. Experimental results and

  15. Variance risk premia in CO2 markets: A political perspective

    International Nuclear Information System (INIS)

    Reckling, Dennis

    2016-01-01

    The European Commission discusses the change of free allocation plans to guarantee a stable market equilibrium. Selling over-allocated contracts effectively depreciates prices and negates the effect intended by the regulator to establish a stable price mechanism for CO2 assets. Our paper investigates mispricing and allocation issues by quantitatively analyzing variance risk premia of CO2 markets over the course of changing regimes (Phase I-III) for three different assets (European Union Allowances, Certified Emissions Reductions and European Reduction Units). The research paper gives recommendations to regulatory bodies in order to most effectively cap the overall carbon dioxide emissions. The analysis of an enriched dataset, comprising not only additional CO2 assets but also data from the European Energy Exchange, shows that variance risk premia are equal to a sample average of 0.69 for European Union Allowances (EUA), 0.17 for Certified Emissions Reductions (CER) and 0.81 for European Reduction Units (ERU). We identify the existence of a common risk factor across different assets that justifies the presence of risk premia. Various policy implications with regard to gaining investors’ confidence in the market are reviewed. Consequently, we recommend the implementation of a price collar approach to support stable prices for emission allowances. - Highlights: •Enriched dataset covering all three political phases of the CO2 markets. •Clear policy implications for regulators to most effectively cap the overall CO2 emissions pool. •Applying a cross-asset benchmark index for variance beta estimation. •CER contracts have been analyzed with respect to variance risk premia for the first time. •Increased forecasting accuracy for CO2 asset returns by using variance risk premia.

  16. Gravity interpretation of dipping faults using the variance analysis method

    International Nuclear Information System (INIS)

    Essa, Khalid S

    2013-01-01

    A new algorithm is developed to estimate simultaneously the depth and the dip angle of a buried fault from the normalized gravity gradient data. This algorithm utilizes numerical first horizontal derivatives computed from the observed gravity anomaly, using filters of successive window lengths to estimate the depth and the dip angle of a buried dipping fault structure. For a fixed window length, the depth is estimated using a least-squares sense for each dip angle. The method is based on computing the variance of the depths determined from all horizontal gradient anomaly profiles using the least-squares method for each dip angle. The minimum variance is used as a criterion for determining the correct dip angle and depth of the buried structure. When the correct dip angle is used, the variance of the depths is always less than the variances computed using wrong dip angles. The technique can be applied not only to the true residuals, but also to the measured Bouguer gravity data. The method is applied to synthetic data with and without random errors and two field examples from Egypt and Scotland. In all cases examined, the estimated depths and other model parameters are found to be in good agreement with the actual values. (paper)
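
    The sketch below reproduces the selection logic in schematic form: for each trial dip angle, a depth is estimated by least squares from anomaly windows of several lengths, and the dip whose depth estimates scatter least (minimum variance) is selected. The forward model g(x; z, theta) is a simple two-parameter stand-in, not the fault-gradient expression of the paper, and all numbers are illustrative.

        # Minimum-variance selection of dip angle and depth from windowed least-squares fits.
        import numpy as np

        def g(x, z, theta):
            # illustrative two-parameter anomaly shape (a stand-in forward model)
            return (x * np.cos(theta) + z * np.sin(theta)) / (x ** 2 + z ** 2)

        x = np.linspace(-20.0, 20.0, 401)
        z_true, th_true = 4.0, np.deg2rad(50.0)
        rng = np.random.default_rng(4)
        data = g(x, z_true, th_true) + rng.normal(0.0, 2e-4, x.size)

        z_grid = np.linspace(0.5, 12.0, 231)
        windows = [80, 110, 140, 170, 200]           # window half-widths (samples) about the centre
        best = None
        for th_deg in range(10, 90, 5):              # trial dip angles
            th = np.deg2rad(th_deg)
            depths = []
            for w in windows:
                sl = slice(200 - w, 200 + w + 1)     # symmetric window about the profile centre
                sse = [np.sum((g(x[sl], z, th) - data[sl]) ** 2) for z in z_grid]
                depths.append(z_grid[int(np.argmin(sse))])
            v = np.var(depths)                       # scatter of depth estimates for this dip
            if best is None or v < best[0]:
                best = (v, th_deg, float(np.mean(depths)))

        print("selected dip:", best[1], "deg   mean depth:", round(best[2], 2))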

  17. Perspective projection for variance pose face recognition from camera calibration

    Science.gov (United States)

    Fakhir, M. M.; Woo, W. L.; Chambers, J. A.; Dlay, S. S.

    2016-04-01

    Variance pose is an important research topic in face recognition. The alteration of distance parameters across variance pose face features is challenging. We provide a solution for this problem using perspective projection for variance pose face recognition. Our method infers the intrinsic camera parameters of the image, which enable the projection of the image plane into 3D. After this, face box tracking and centre-of-eyes detection can be performed using our novel technique to verify the virtual face feature measurements. The coordinate system of the perspective projection for face tracking allows the holistic dimensions of the face to be fixed in different orientations. The training of frontal images and the remaining poses on the FERET database determines the distance from the centre of the eyes to the corner of the face box. The recognition system compares the gallery of images against different poses. The system initially utilises information on the position of both eyes and then focuses principally on the closest eye in order to gather data with greater reliability. Differentiation between the distances and positions of the right and left eyes is a unique feature of our work, with our algorithm outperforming other state-of-the-art algorithms and enabling stable measurement under pose variance for each individual.

  18. Modeling and design of a spiral-shaped Mach-Zehnder interferometric sensor for refractive index sensing of watery solutions

    NARCIS (Netherlands)

    Hoekman, M.; Dijkstra, Marcel; Dijkstra, Mindert; Hoekstra, Hugo

    2006-01-01

    The modeling and design of a spiral-shaped Mach-Zehnder Interferometric sensor (sMZI sensor) for refractive index sensing of watery solutions is presented. The goal of the running project is to realise a multi-sensing array by placing multiple sMZIs in series to form a sensing branch, and to place

  19. Variance-to-mean method generalized by linear difference filter technique

    International Nuclear Information System (INIS)

    Hashimoto, Kengo; Ohsaki, Hiroshi; Horiguchi, Tetsuo; Yamane, Yoshihiro; Shiroya, Seiji

    1998-01-01

    The conventional variance-to-mean method (Feynman-α method) seriously suffers from divergence of the variance under such a transient condition as a reactor power drift. Strictly speaking, then, the use of the Feynman-α method is restricted to a steady state. To apply the method to more practical uses, it is desirable to overcome this kind of difficulty. For this purpose, we propose the use of a higher-order difference filter technique to reduce the effect of the reactor power drift, and derive several new formulae taking account of the filtering. The capability of the formulae proposed was demonstrated through experiments in the Kyoto University Critical Assembly. The experimental results indicate that the divergence of the variance can be effectively suppressed by the filtering technique, and that the higher-order filter becomes necessary with increasing variation rate in power
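
    A minimal numerical illustration of why a difference filter helps: for drift-free Poisson counts the excess variance-to-mean ratio Y should be near zero, a slow power drift inflates the conventional estimate, and first-differencing the gate counts largely removes the drift. The simple filtered formula below ignores gate-to-gate correlations, which the exact formulae of the paper account for; the count rates and drift are invented.

        # Feynman-alpha style variance-to-mean with and without a first-order difference filter.
        import numpy as np

        rng = np.random.default_rng(5)
        gates = 20000
        rate = 100.0 * (1.0 + 5e-5 * np.arange(gates))    # slowly drifting mean counts per gate
        counts = rng.poisson(rate)                        # pure Poisson counts, so true Y ~ 0

        y_conventional = counts.var(ddof=1) / counts.mean() - 1.0   # inflated by the drift

        d = np.diff(counts)                               # first-order difference filter
        y_filtered = d.var(ddof=1) / (2.0 * counts.mean()) - 1.0    # drift largely removed

        print("Y (conventional, drift present):", round(y_conventional, 3))
        print("Y (first-difference filtered):  ", round(y_filtered, 3))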

  20. Estimation of (co)variances for genomic regions of flexible sizes

    DEFF Research Database (Denmark)

    Sørensen, Lars P; Janss, Luc; Madsen, Per

    2012-01-01

    BACKGROUND: Multi-trait genomic models in a Bayesian context can be used to estimate genomic (co)variances, either for a complete genome or for genomic regions (e.g. per chromosome), for the purpose of multi-trait genomic selection or to gain further insight into the genomic architecture of related... with a common prior distribution for the marker allele substitution effects and estimation of the hyperparameters in this prior distribution from the progeny means data. From the Markov chain Monte Carlo samples of the allele substitution effects, genomic (co)variances were calculated on a whole-genome level... was used. There was a clear difference in the region-wise patterns of genomic correlation among combinations of traits, with distinctive peaks indicating the presence of pleiotropic QTL. CONCLUSIONS: The results show that it is possible to estimate, genome-wide and region-wise, genomic (co)variances...
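
    In the spirit of the approach described, the sketch below computes region-wise genomic (co)variances from posterior samples of marker allele substitution effects: for each region, genomic values are formed from a centred genotype matrix and the region's sampled effects, and their covariance across individuals is averaged over samples. The genotype matrix, effect samples, and region definitions are simulated placeholders, not data from the study.

        # Region-wise genomic covariance between two traits from MCMC-style effect samples.
        import numpy as np

        rng = np.random.default_rng(6)
        n_ind, n_mrk, n_samples = 500, 1000, 200
        W = rng.binomial(2, 0.3, size=(n_ind, n_mrk)).astype(float)
        W -= W.mean(axis=0)                                   # centre genotype codes

        regions = np.array_split(np.arange(n_mrk), 10)        # e.g. 10 "chromosomes"
        a1 = rng.normal(0.0, 0.01, size=(n_samples, n_mrk))   # stand-ins for posterior samples, trait 1
        a2 = 0.5 * a1 + rng.normal(0.0, 0.01, size=(n_samples, n_mrk))   # correlated effects, trait 2

        for i, idx in enumerate(regions):
            covs = []
            for s in range(n_samples):
                g1 = W[:, idx] @ a1[s, idx]                   # genomic values, trait 1
                g2 = W[:, idx] @ a2[s, idx]                   # genomic values, trait 2
                covs.append(np.cov(g1, g2)[0, 1])             # covariance across individuals
            print(f"region {i:2d}: genomic covariance ~ {np.mean(covs):.4f}")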