WorldWideScience

Sample records for volume averaging requires

  1. The average free volume model for liquids

    CERN Document Server

    Yu, Yang

    2014-01-01

    In this work, the molar volume thermal expansion coefficients of 59 room-temperature ionic liquids are compared with their van der Waals volumes Vw, and a regular correlation can be discerned between the two quantities. An average free volume model, which treats the particles as hard cores with an attractive force, is proposed to explain this correlation. A combination of free volume theory and the Lennard-Jones potential is applied to explain the physical behavior of liquids. Some typical simple liquids (inorganic, organic, metallic, and salt) are introduced to verify this hypothesis. Good agreement between the theoretical predictions and experimental data is obtained.

  2. Volume calculation of the spur gear billet for cold precision forging with average circle method

    Institute of Scientific and Technical Information of China (English)

    Wangjun Cheng; Chengzhong Chi; Yongzhen Wang; Peng Lin; Wei Liang; Chen Li

    2014-01-01

    Forged spur gears are widely used in the driving systems of mining machinery and equipment due to their high strength and dimensional accuracy. For the purpose of precisely calculating the volume of a cylindrical spur gear billet in cold precision forging, a new theoretical method named the average circle method was put forward. With this method, a series of gear billet volumes were calculated. By comparison with an accurate three-dimensional modeling method, the accuracy of the average circle method was estimated: its maximum relative error was less than 1.5%, in good agreement with the experimental results. The relative errors between calculated and experimental gear billet volumes are larger for the reference circle method than for the average circle method. This shows that the average circle method possesses higher calculation accuracy than the traditional reference circle method, and it is worth popularizing widely for the calculation of spur gear billet volume.

  3. The relationship between limit of dysphagia and average volume per swallow in patients with Parkinson's disease.

    Science.gov (United States)

    Belo, Luciana Rodrigues; Gomes, Nathália Angelina Costa; Coriolano, Maria das Graças Wanderley de Sales; de Souza, Elizabete Santos; Moura, Danielle Albuquerque Alves; Asano, Amdore Guescel; Lins, Otávio Gomes

    2014-08-01

    The goal of this study was to obtain the limit of dysphagia and the average volume per swallow in patients with mild to moderate Parkinson's disease (PD) but without swallowing complaints and in normal subjects, and to investigate the relationship between them. We hypothesize there is a direct relationship between these two measurements. The study included 10 patients with idiopathic PD and 10 age-matched normal controls. Surface electromyography was recorded over the suprahyoid muscle group. The limit of dysphagia was obtained by offering increasing volumes of water until piecemeal deglutition occurred. The average volume per swallow was calculated by dividing the volume of water drunk (100 ml) by the number of swallows used to drink it. The PD group showed a significantly lower dysphagia limit and lower average volume per swallow. There was a significant, moderate direct correlation and association between the two measurements. About half of the PD patients had an abnormally low dysphagia limit and average volume per swallow, although none had spontaneously reported swallowing problems. Both measurements may be used as a quick objective screening test for the early identification of swallowing alterations that may lead to dysphagia in PD patients, but the determination of the average volume per swallow is much quicker and simpler.
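
    As a minimal illustration of the second measurement (the values below are hypothetical, not data from the study):

```python
# Average volume per swallow: fixed test volume divided by the number
# of swallows counted on the suprahyoid surface EMG trace.
water_volume_ml = 100        # volume of water drunk in the test
swallow_count = 8            # hypothetical swallow count from sEMG
avg_volume_per_swallow = water_volume_ml / swallow_count  # -> 12.5 ml
print(f"{avg_volume_per_swallow:.1f} ml per swallow")
```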

  4. A Derivation of the Nonlocal Volume-Averaged Equations for Two-Phase Flow Transport

    Directory of Open Access Journals (Sweden)

    Gilberto Espinosa-Paredes

    2012-01-01

    In this paper a detailed derivation of the general transport equations for two-phase systems, using a method based on nonlocal volume averaging, is presented. The local volume-averaged equations are commonly applied to nuclear reactor systems for optimal design and safe operation. Unfortunately, these equations are subject to length-scale restrictions and, according to the theory of the volume averaging method, they fail across transitions between flow patterns and at boundaries between a two-phase flow and a solid, where rapid changes in the physical properties and void fraction occur. The nonlocal volume-averaged equations derived in this work contain new terms related to nonlocal transport effects due to accumulation, convection, diffusion, and transport properties of two-phase flow; for instance, they can be applied at the boundary between a two-phase flow and a solid phase, or in the transition region between flow patterns where the local volume-averaged equations fail.
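
    For orientation, the local operators that the nonlocal formulation generalizes are the standard superficial and intrinsic phase averages of the volume averaging literature (textbook definitions, stated here for context, not equations quoted from this record):

```latex
% Superficial and intrinsic averages of a quantity \psi_k of phase k.
% V is the averaging volume centered at x; V_k(x,t) is the portion of V
% occupied by phase k.
\[
\langle \psi_k \rangle = \frac{1}{V}\int_{V_k(\mathbf{x},t)} \psi_k \, dV,
\qquad
\langle \psi_k \rangle^k = \frac{1}{V_k}\int_{V_k(\mathbf{x},t)} \psi_k \, dV .
\]
```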

  5. Average volume of the domain visited by randomly injected spherical Brownian particles in d dimensions

    Science.gov (United States)

    Berezhkovskii, Alexander M.; Weiss, George H.

    1996-07-01

    In order to extend the greatly simplified Smoluchowski model for chemical reaction rates it is necessary to incorporate many-body effects. A generalization with this feature is the so-called trapping model, in which random walkers move among a uniformly distributed set of traps. The solution of this model requires consideration of the number of distinct sites visited by a single n-step random walk. A recent analysis [H. Larralde et al., Phys. Rev. A 45, 1728 (1992)] has considered a generalized version of this problem by calculating the average number of distinct sites visited by N n-step random walks. A related continuum analysis is given in [A. M. Berezhkovskii, J. Stat. Phys. 76, 1089 (1994)]. We consider a slightly different version of the general problem by calculating the average volume of the Wiener sausage generated by Brownian particles injected randomly in time. The analysis shows that two types of behavior are possible: one in which there is strong overlap between the Wiener sausages of the particles, and a second in which the particles are mainly independent of one another. Either one or both of these regimes occur, depending on the dimension.

  6. Derivation of a volume-averaged neutron diffusion equation; Atomos para el desarrollo de Mexico

    Energy Technology Data Exchange (ETDEWEB)

    Vazquez R, R.; Espinosa P, G. [UAM-Iztapalapa, Av. San Rafael Atlixco 186, Col. Vicentina, Mexico D.F. 09340 (Mexico); Morales S, Jaime B. [UNAM, Laboratorio de Analisis en Ingenieria de Reactores Nucleares, Paseo Cuauhnahuac 8532, Jiutepec, Morelos 62550 (Mexico)]. e-mail: rvr@xanum.uam.mx

    2008-07-01

    This paper presents a general theoretical analysis of the problem of neutron motion in a nuclear reactor, where large variations in neutron cross sections normally preclude the use of the classical neutron diffusion equation. A volume-averaged neutron diffusion equation is derived which includes correction terms for diffusion and nuclear reaction effects. A method is presented to determine closure relationships for the volume-averaged neutron diffusion equation (e.g., an effective neutron diffusivity). In order to describe the distribution of neutrons in a highly heterogeneous configuration, it was necessary to extend the classical neutron diffusion equation. Thus, the volume-averaged diffusion equation includes two correction factors: the first is related to the neutron absorption process, and the second is a contribution to neutron diffusion; both are related to neutron effects at the interfaces of a heterogeneous configuration. (Author)

  7. Lattice Boltzmann Model for The Volume-Averaged Navier-Stokes Equations

    CERN Document Server

    Zhang, Jingfeng; Ouyang, Jie

    2014-01-01

    A numerical method, based on the discrete lattice Boltzmann equation, is presented for solving the volume-averaged Navier-Stokes equations. With a modified equilibrium distribution and an additional forcing term, the volume-averaged Navier-Stokes equations can be recovered from the lattice Boltzmann equation in the limit of small Mach number by Chapman-Enskog analysis and Taylor expansion. Owing to advantages such as an explicit solver and inherent parallelism, the method is competitive with traditional numerical techniques. Numerical simulations show that the proposed model can accurately reproduce both the linear and nonlinear drag effects of porosity in fluid flow through porous media.

  8. The average free volume model for the ionic and simple liquids

    CERN Document Server

    Yu, Yang

    2014-01-01

    In this work, the molar volume thermal expansion coefficients of 60 room-temperature ionic liquids are compared with their van der Waals volumes Vw, and a regular correlation can be discerned between the two quantities. An average free volume model, which treats the particles as hard cores with an attractive force, is proposed to explain this correlation. Some typical one-atom liquids (molten metals and liquid noble gases) are introduced to verify this hypothesis. Good agreement between the theoretical predictions and experimental data is obtained.

  9. Volume Averaging Theory (VAT) based modeling and closure evaluation for fin-and-tube heat exchangers

    Science.gov (United States)

    Zhou, Feng; Catton, Ivan

    2012-10-01

    A fin-and-tube heat exchanger was modeled based on Volume Averaging Theory (VAT) in such a way that the details of the original structure were replaced by their averaged counterparts, so that the VAT-based governing equations can be efficiently solved for a wide range of parameters. To complete the VAT-based model, proper closure is needed, which is related to a local friction factor and a heat transfer coefficient of a Representative Elementary Volume (REV). The terms in the closure expressions are complex, and relating experimental data to the closure terms is sometimes difficult. In this work we use CFD to evaluate the rigorously derived closure terms over one of the selected REVs. The objective is to show how heat exchangers can be modeled as porous media and how CFD can be used in place of a detailed, often formidable, experimental effort to obtain closure for the model.

  10. Homogenization via formal multiscale asymptotics and volume averaging: How do the two techniques compare?

    KAUST Repository

    Davit, Yohan

    2013-12-01

    A wide variety of techniques have been developed to homogenize transport equations in multiscale and multiphase systems. This has yielded a rich and diverse field, but has also resulted in the emergence of isolated scientific communities and disconnected bodies of literature. Here, our goal is to bridge the gap between formal multiscale asymptotics and the volume averaging theory. We illustrate the methodologies via a simple example application describing a parabolic transport problem and, in so doing, compare their respective advantages/disadvantages from a practical point of view. This paper is also intended as a pedagogical guide and may be viewed as a tutorial for graduate students as we provide historical context, detail subtle points with great care, and reference many fundamental works. © 2013 Elsevier Ltd.

  11. Measurement of average density and relative volumes in a dispersed two-phase fluid

    Science.gov (United States)

    Sreepada, Sastry R.; Rippel, Robert R.

    1992-01-01

    An apparatus and a method are disclosed for measuring the average density and relative volumes in an essentially transparent, dispersed two-phase fluid. A laser beam with a diameter no greater than 1% of the diameter of the bubbles, droplets, or particles of the dispersed phase is directed onto a diffraction grating. A single-order component of the diffracted beam is directed through the two-phase fluid and its refraction is measured. Preferably, the refracted beam exiting the fluid is incident upon an optical filter with linearly varying optical density, and the intensity of the filtered beam is measured. The invention can be combined with other laser-based measurement systems, e.g., laser Doppler anemometry.

  12. Performance and production requirements for the optical components in a high-average-power laser system

    Energy Technology Data Exchange (ETDEWEB)

    Chow, R.; Doss, F.W.; Taylor, J.R.; Wong, J.N.

    1999-07-02

    Optical components needed for high-average-power lasers, such as those developed for Atomic Vapor Laser Isotope Separation (AVLIS), require high levels of performance and reliability. Over the past two decades, optical component requirements for this purpose have been optimized and performance and reliability have been demonstrated. Many of the optical components that are exposed to the high power laser light affect the quality of the beam as it is transported through the system. The specifications for these optics are described including a few parameters not previously reported and some component manufacturing and testing experience. Key words: High-average-power laser, coating efficiency, absorption, optical components

  13. CO2 column-averaged volume mixing ratio derived over Tsukuba from measurements by commercial airlines

    Directory of Open Access Journals (Sweden)

    H. Matsueda

    2010-02-01

    Column-averaged volume mixing ratios of carbon dioxide (XCO2) during the period from January 2007 to May 2008 over Tsukuba, Japan, were derived using CO2 concentration data observed by Japan Airlines Corporation (JAL) commercial airliners, based on the assumption that CO2 profiles over Tsukuba and Narita were the same. CO2 profile data for 493 flights on clear-sky days were analysed in order to calculate XCO2 with an ancillary dataset: Tsukuba observational data (by rawinsonde and a meteorological tower) or global meteorological data (NCEP and CIRA-86). The amplitude of the seasonal variation of XCO2(Tsukuba observational), from the Tsukuba observational data, was determined by a least-squares fit of a harmonic function to roughly evaluate the seasonal variation over Tsukuba. The highest and lowest values of the fitted curve in 2007 for XCO2(Tsukuba observational) were 386.4 and 381.7 ppm, in May and September, respectively. The dependence of XCO2 on the type of ancillary dataset was evaluated. The average difference between XCO2(global), from global climatological data, and XCO2(Tsukuba observational), i.e., the bias of XCO2(global) relative to XCO2(Tsukuba observational), was found to be -0.621 ppm with a standard deviation of 0.682 ppm. The uncertainty of XCO2(global) relative to XCO2(Tsukuba observational) was estimated to be 0.922 ppm. This small uncertainty suggests that the present method of XCO2 calculation, using data from airliners and global climatological data, can be applied to the validation of GOSAT products for XCO2 over airports worldwide.
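
    A minimal sketch of the kind of least-squares harmonic fit the record describes, run on synthetic data (the series, noise level, and inclusion of a linear trend term are our assumptions, not the study's exact procedure):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(2007.0, 2008.4, 120))      # times, fractional years
xco2 = 384.0 + 2.3 * np.sin(2 * np.pi * t + 1.0) \
       + rng.normal(0.0, 0.7, t.size)              # synthetic XCO2, ppm

# Design matrix: mean, linear trend, and one annual harmonic.
A = np.column_stack([np.ones_like(t), t - t.mean(),
                     np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
coef, *_ = np.linalg.lstsq(A, xco2, rcond=None)
amplitude = np.hypot(coef[2], coef[3])             # seasonal peak amplitude
print(f"fitted seasonal amplitude ~ {amplitude:.2f} ppm")
```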

  14. Instantaneous equations for multiphase flow in porous media without length-scale restrictions using a non-local averaging volume

    Energy Technology Data Exchange (ETDEWEB)

    Espinosa-Paredes, Gilberto, E-mail: gepe@xanum.uam.m [Area de Ingenieria en Recursos Energeticos, Universidad Autonoma Metropolitana-Iztapalapa, Av. San Rafael Atlixco 186, Col. Vicentina, Apartado Postal 55-535, Mexico D.F. 09340 (Mexico)

    2010-05-15

    The aim of this paper is to propose a framework to obtain a new formulation for multiphase flow conservation equations without length-scale restrictions, based on the non-local form of the averaged volume conservation equations. The simplification of the local averaging volume of the conservation equations to obtain practical equations is subject to the following length-scale restrictions: d << l << L, where d is the characteristic length of the dispersed phases, l is the characteristic length of the averaging volume, and L is the characteristic length of the physical system. If the foregoing inequality does not hold, or if the scale of the problem of interest is of the order of l, the averaging technique, and therefore the macroscopic theories of multiphase flow, should be modified in order to include appropriate considerations and terms in the corresponding equations. In these cases the local form of the averaged volume conservation equations is not appropriate to describe the multiphase system. As an example of the conservation equations without length-scale restrictions, a natural-circulation boiling water reactor was considered in order to study the non-local effects on the thermal-hydraulic core performance during steady-state and transient behavior, and the results were compared with those of the classic local averaging volume conservation equations.

  15. Fatigue strength of Al7075 notched plates based on the local SED averaged over a control volume

    Science.gov (United States)

    Berto, Filippo; Lazzarin, Paolo

    2014-01-01

    When pointed V-notches weaken structural components, local stresses are singular and their intensities are expressed in terms of the notch stress intensity factors (NSIFs). These parameters have been widely used for fatigue assessments of welded structures under high cycle fatigue and of sharp notches in plates made of brittle materials subjected to static loading. Fine meshes are required to capture the asymptotic stress distributions ahead of the notch tip and evaluate the relevant NSIFs. On the other hand, when the aim is to determine the local Strain Energy Density (SED) averaged over a control volume embracing the point of stress singularity, refined meshes are not at all necessary. The SED can be evaluated from nodal displacements, and regular coarse meshes provide accurate values for the averaged local SED. In the present contribution, the link between the SED and the NSIFs is discussed by considering some typical welded joints and sharp V-notches. The procedure based on the SED has also been proven useful for determining theoretical stress concentration factors of blunt notches and holes. In the second part of this work, an application of the strain energy density to the fatigue assessment of Al7075 notched plates is presented. The experimental data are taken from the recent literature and refer to notched specimens subjected to different shot peening treatments aimed at increasing the notch fatigue strength with respect to the parent material.
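
    For context, the link between the averaged SED and the mode I NSIF for a sharp V-notch is usually written in the closed form below (standard formula from the SED literature, not quoted from this record; e_1 is a tabulated function of the notch opening angle, lambda_1 is Williams' mode I eigenvalue, R_0 the control radius, and E Young's modulus):

```latex
% Averaged strain energy density over a control volume of radius R_0
% at a sharp V-notch under mode I loading (standard SED-literature form).
\[
\overline{W} \;=\; e_1 \, \frac{K_1^2}{E \, R_0^{\,2(1-\lambda_1)}} .
\]
```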

  16. Flight program language requirements. Volume 2: Requirements and evaluations

    Science.gov (United States)

    1972-01-01

    The efforts and results are summarized for a study to establish requirements for a flight programming language for future onboard computer applications. Several different languages were available as potential candidates for future NASA flight programming efforts. The study centered around an evaluation of the four most pertinent existing aerospace languages. Evaluation criteria were established, and selected kernels from the current Saturn 5 and Skylab flight programs were used as benchmark problems for sample coding. An independent review of the language specifications incorporated anticipated future programming requirements into the evaluation. A set of detailed language requirements was synthesized from these activities. The details of program language requirements and of the language evaluations are described.

  17. Subsurface Contamination Focus Area technical requirements. Volume 1: Requirements summary

    Energy Technology Data Exchange (ETDEWEB)

    Nickelson, D.; Nonte, J.; Richardson, J.

    1996-10-01

    This document summarizes functions and requirements for remediation of source term and plume sites identified by the Subsurface Contamination Focus Area. Included are detailed requirements and supporting information for source term and plume containment, stabilization, retrieval, and selective retrieval remedial activities. This information will be useful both to the decision-makers within the Subsurface Contamination Focus Area (SCFA) and to the technology providers who are developing and demonstrating technologies and systems. Requirements are often expressed as graphs or charts, which reflect the site-specific nature of the functions that must be performed. Many of the tradeoff studies associated with cost savings are identified in the text.

  18. Analytical solutions for the coefficient of variation of the volume-averaged solute concentration in heterogeneous aquifers

    Science.gov (United States)

    Kabala, Z. J.

    1997-08-01

    Under the assumption that local solute dispersion is negligible, a new general formula (in the form of a convolution integral) is found for the arbitrary k-point ensemble moment of the local concentration of a solute convected in arbitrary m spatial dimensions with general deterministic (sure) initial conditions. From this general formula, new closed-form solutions in m=2 spatial dimensions are derived for 2-point ensemble moments of the local solute concentration for impulse (Dirac delta) and Gaussian initial conditions. When integrated over an averaging window, these solutions lead to new closed-form expressions for the first two ensemble moments of the volume-averaged solute concentration and to the corresponding concentration coefficients of variation (CV). Also, for the impulse (Dirac delta) initial condition, the second ensemble moment of the solute point concentration in two spatial dimensions, and the corresponding CV, are demonstrated to be unbounded. For impulse initial conditions the CVs for volume-averaged concentrations are compared with each other for a tracer from the Borden aquifer experiment. The point-concentration CV is unacceptably large in the whole domain, implying that the ensemble mean concentration is inappropriate for predicting the actual concentration values. The volume-averaged concentration CV decreases significantly with an increasing averaging volume. Since local dispersion is neglected, the new solutions should be interpreted as upper limits for the yet to be derived solutions that account for local dispersion, and so should the presented CVs for the Borden tracers. The new analytical solutions may be used to test the accuracy of Monte Carlo simulations or other numerical algorithms that deal with stochastic solute transport. They may also be used to determine the size of the averaging volume needed to make a quasi-sure statement about the solute mass contained in it.
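
    For reference, the coefficient of variation discussed above is built from the first two ensemble moments in the usual way (standard definition, stated here for readability):

```latex
% CV of the (volume-averaged) concentration C at location x and time t.
\[
\mathrm{CV}(\mathbf{x},t) =
\frac{\sqrt{\langle C^2(\mathbf{x},t)\rangle - \langle C(\mathbf{x},t)\rangle^2}}
     {\langle C(\mathbf{x},t)\rangle} .
\]
```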

  19. Long-term prediction of emergency department revenue and visitor volume using autoregressive integrated moving average model.

    Science.gov (United States)

    Chen, Chieh-Fan; Ho, Wen-Hsien; Chou, Huei-Yin; Yang, Shu-Mei; Chen, I-Te; Shi, Hon-Yi

    2011-01-01

    This study analyzed meteorological, clinical, and economic factors in terms of their effects on monthly emergency department (ED) revenue and visitor volume. Monthly data from January 1, 2005 to September 30, 2009 were analyzed. Spearman correlation and cross-correlation analyses were performed to identify the correlations between each independent variable, ED revenue, and visitor volume. An autoregressive integrated moving average (ARIMA) model was used to quantify the relationship between each independent variable, ED revenue, and visitor volume. The accuracies were evaluated by comparing model forecasts to actual values using the mean absolute percentage error. Sensitivity of prediction errors to model training time was also evaluated. The ARIMA models indicated that mean maximum temperature, relative humidity, rainfall, non-trauma visits, and trauma visits may correlate positively with ED revenue, but mean minimum temperature may correlate negatively with ED revenue. Moreover, mean minimum temperature and stock market index fluctuation may correlate positively with trauma visitor volume. Mean maximum temperature, relative humidity, and stock market index fluctuation may correlate positively with non-trauma visitor volume. Mean maximum temperature and relative humidity may correlate positively with pediatric visitor volume, but mean minimum temperature may correlate negatively with pediatric visitor volume. The model also performed well in forecasting revenue and visitor volume.

  20. Long-Term Prediction of Emergency Department Revenue and Visitor Volume Using Autoregressive Integrated Moving Average Model

    Directory of Open Access Journals (Sweden)

    Chieh-Fan Chen

    2011-01-01

    This study analyzed meteorological, clinical, and economic factors in terms of their effects on monthly ED revenue and visitor volume. Monthly data from January 1, 2005 to September 30, 2009 were analyzed. Spearman correlation and cross-correlation analyses were performed to identify the correlations between each independent variable, ED revenue, and visitor volume. An autoregressive integrated moving average (ARIMA) model was used to quantify the relationship between each independent variable, ED revenue, and visitor volume. The accuracies were evaluated by comparing model forecasts to actual values using the mean absolute percentage error. Sensitivity of prediction errors to model training time was also evaluated. The ARIMA models indicated that mean maximum temperature, relative humidity, rainfall, non-trauma visits, and trauma visits may correlate positively with ED revenue, but mean minimum temperature may correlate negatively with ED revenue. Moreover, mean minimum temperature and stock market index fluctuation may correlate positively with trauma visitor volume. Mean maximum temperature, relative humidity, and stock market index fluctuation may correlate positively with non-trauma visitor volume. Mean maximum temperature and relative humidity may correlate positively with pediatric visitor volume, but mean minimum temperature may correlate negatively with pediatric visitor volume. The model also performed well in forecasting revenue and visitor volume.
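
    A hedged sketch of the forecasting workflow both versions of this record describe, shown univariate for brevity (the study also included meteorological and economic covariates; the ARIMA order and the synthetic revenue series below are our assumptions, not values from the paper):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Synthetic monthly ED revenue, Jan 2005 - Sep 2009 (57 months).
idx = pd.date_range("2005-01-31", periods=57, freq="M")
rng = np.random.default_rng(1)
revenue = pd.Series(1e6 + np.cumsum(rng.normal(0.0, 2e4, 57)), index=idx)

fit = ARIMA(revenue, order=(1, 1, 1)).fit()   # illustrative (p,d,q)
forecast = fit.forecast(steps=6)              # six-month-ahead forecast
mape = (fit.resid[1:].abs() / revenue[1:]).mean() * 100
print(forecast.round(0))
print(f"in-sample MAPE ~ {mape:.2f}%")
```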

  1. Sub-volume averaging of repetitive structural features in angularly filtered electron tomographic reconstructions.

    Science.gov (United States)

    Kováčik, L; Kereïche, S; Matula, P; Raška, I

    2014-01-01

    Electron tomographic reconstructions suffer from a number of artefacts arising from effects accompanying the acquisition of a set of tilted projections of the specimen in a transmission electron microscope and from its subsequent computational handling. The most pronounced artefacts usually come from imprecise projection alignment, distortion of the specimen during tomogram acquisition, and the presence of a region of missing data in Fourier space, the "missing wedge". The ray artefacts caused by the missing wedge can be attenuated by an angular image filter, which smooths the transition between the data and the missing-wedge regions. In this work, we present an analysis of the influence of angular filtering on the resolution of averaged repetitive structural motifs extracted from three-dimensional reconstructions of tomograms acquired in the single-axis tilting geometry.

  2. Volume-Averaged Model of Inductively-Driven Multicusp Ion Source

    Science.gov (United States)

    Patel, Kedar K.; Lieberman, M. A.; Graf, M. A.

    1998-10-01

    A self-consistent, spatially averaged model of high-density oxygen and boron trifluoride discharges has been developed for a 13.56 MHz, inductively coupled multicusp ion source. We determine the positive ion, negative ion, and electron densities, the ground state and metastable densities, and the electron temperature as functions of the control parameters: gas pressure, gas flow rate, input power, and reactor geometry. Neutralization and fragmentation into atomic species are assumed for all ions hitting the wall. For neutrals, a wall recombination coefficient for oxygen atoms and a wall sticking coefficient for boron trifluoride (BF3) and its dissociation products are the only adjustable parameters used to model the surface chemistry. For the aluminum walls of the ion source used in the Eaton ULE2 ion implanter, complete wall recombination of O atoms is found to give the best match to the experimental data for oxygen, whereas a sticking coefficient of 0.62 for all neutral species in a BF3 discharge is found to best match the experimental data.

  3. A novel convolution-based approach to address ionization chamber volume averaging effect in model-based treatment planning systems

    Science.gov (United States)

    Barraclough, Brendan; Li, Jonathan G.; Lebron, Sharon; Fan, Qiyong; Liu, Chihray; Yan, Guanghua

    2015-08-01

    The ionization chamber volume averaging effect is a well-known issue without an elegant solution. The purpose of this study is to propose a novel convolution-based approach to address the volume averaging effect in model-based treatment planning systems (TPSs). Ionization chamber-measured beam profiles can be regarded as the convolution between the detector response function and the underlying real profiles. Existing approaches address the issue by trying to remove the volume averaging effect from the measurement. In contrast, our proposed method imports the measured profiles directly into the TPS and addresses the problem by reoptimizing pertinent parameters of the TPS beam model. In the iterative beam modeling process, the TPS-calculated beam profiles are convolved with the same detector response function. Beam model parameters responsible for the penumbra are optimized to drive the convolved profiles to match the measured profiles. Since the convolved and the measured profiles are subject to the identical volume averaging effect, the calculated profiles match the real profiles when the optimization converges. The method was applied to reoptimize a CC13 beam model commissioned with profiles measured with a standard ionization chamber (Scanditronix Wellhofer, Bartlett, TN). The reoptimized beam model was validated by comparing the TPS-calculated profiles with diode-measured profiles. Its performance in intensity-modulated radiation therapy (IMRT) quality assurance (QA) for ten head-and-neck patients was compared with the CC13 beam model and a clinical beam model (manually optimized, clinically proven) using standard Gamma comparisons. The beam profiles calculated with the reoptimized beam model showed excellent agreement with diode measurement at all measured geometries. Performance of the reoptimized beam model was comparable with that of the clinical beam model in IMRT QA. The average passing rates using the reoptimized beam model increased substantially from 92.1% to
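
    The core of the approach can be sketched in a few lines: blur the TPS-calculated profile with the detector response so that the measured and calculated profiles carry the same volume-averaging effect (the Gaussian response and its width below are our assumptions for illustration, not the CC13 kernel used in the study):

```python
import numpy as np

x = np.arange(-50.0, 50.0, 0.5)                    # off-axis position, mm
# Toy "TPS-calculated" profile: a 60 mm field with a soft penumbra.
calc = 0.5 * (np.tanh((x + 30.0) / 2.0) - np.tanh((x - 30.0) / 2.0))

sigma = 6.0 / 2.355                                # FWHM ~ chamber diameter
kernel = np.exp(-0.5 * (x / sigma) ** 2)
kernel /= kernel.sum()                             # unit-area response

convolved = np.convolve(calc, kernel, mode="same")
# In the iterative beam-modeling loop, the penumbra parameters would be
# adjusted until `convolved` matches the chamber-measured profile.
```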

  4. Positive outcome of average volume-assured pressure support mode of a Respironics V60 Ventilator in acute exacerbation of chronic obstructive pulmonary disease: a case report

    Directory of Open Access Journals (Sweden)

    Okuda Miyuki

    2012-09-01

    The patient's chronic obstructive pulmonary disease was complicated by obstructive sleep apnea syndrome. Conclusion: In cases such as this, in which patients with severe acute respiratory failure requiring full-time noninvasive positive pressure ventilation therapy also show sleep-disordered breathing, different ventilator settings must be used for waking and sleeping. On such occasions, the Respironics V60 Ventilator, which is equipped with an average volume-assured pressure support mode, may be useful in improving gas exchange and may achieve good patient compliance, because that mode allows ventilation to be maintained by automatically adjusting the inspiratory force to within an acceptable range whenever ventilation falls below target levels.

  5. Bistability: requirements on cell-volume, protein diffusion, and thermodynamics.

    Directory of Open Access Journals (Sweden)

    Robert G Endres

    Bistability is considered widespread among bacteria and eukaryotic cells, being useful, e.g., for enzyme induction, bet hedging, and epigenetic switching. However, this phenomenon has mostly been described with deterministic dynamic or well-mixed stochastic models. Here, we map known biological bistable systems onto the well-characterized biochemical Schlögl model, using analytical calculations and stochastic spatiotemporal simulations. In addition to network architecture and strong thermodynamic driving away from equilibrium, we show that bistability requires fine-tuning towards small cell volumes (or compartments) and fast protein diffusion (well mixing). Bistability is thus fragile and hence may be restricted to small bacteria and eukaryotic nuclei, with switching triggered by volume changes during the cell cycle. For large volumes, single cells generally lose their ability for bistable switching and instead undergo a first-order phase transition.
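
    The Schlögl model mentioned above is a one-species birth-death network, A + 2X <-> 3X and B <-> X, whose deterministic rate equation is a cubic in x and can therefore have two stable fixed points. A minimal sketch (the rate constants are chosen by us to place the fixed points at x = 100, 300, and 600; they are not the paper's parameters):

```python
import numpy as np

# Deterministic Schlögl rate equation: dx/dt = k1*x^2 - k2*x^3 + k3 - k4*x
k1, k2, k3, k4 = 0.25, 2.5e-4, 4500.0, 67.5

roots = np.sort(np.roots([-k2, k1, -k4, k3]).real)
print(roots)   # ~[100, 300, 600]: stable, unstable, stable fixed points

# In a stochastic, volume-resolved treatment, copy numbers scale with the
# compartment volume; the paper's point is that observable switching
# between the two stable states needs small volumes and fast diffusion.
```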

  6. A stereotaxic, population-averaged T1w ovine brain atlas including cerebral morphology and tissue volumes

    Directory of Open Access Journals (Sweden)

    Björn Nitzsche

    2015-06-01

    Standard stereotaxic reference systems play a key role in human brain studies. Stereotaxic coordinate systems have also been developed for experimental animals including non-human primates, dogs, and rodents. However, they are lacking for other species that are relevant in experimental neuroscience, including sheep. Here, we present a spatial, unbiased ovine brain template with tissue probability maps (TPM) that offer a detailed stereotaxic reference frame for anatomical features and localization of brain areas, thereby enabling inter-individual and cross-study comparability. Three-dimensional data sets from healthy adult Merino sheep (Ovis orientalis aries; 12 ewes and 26 neutered rams) were acquired on a 1.5 T Philips MRI using a T1w sequence. Data were averaged by linear and non-linear registration algorithms. Moreover, animals were subjected to detailed brain volume analysis including examinations with respect to body weight, age, and sex. The created T1w brain template provides an appropriate population-averaged ovine brain anatomy in a spatial standard coordinate system. Additionally, TPM for gray (GM) and white (WM) matter as well as cerebrospinal fluid (CSF) classification enabled automatic prior-based tissue segmentation using statistical parametric mapping (SPM). Overall, a positive correlation of GM volume and body weight explained about 15% of the variance of GM, while a positive correlation between WM and age was found. Absolute tissue volume differences were not detected; indeed, ewes showed significantly more GM per unit body weight than neutered rams. The created framework, including the spatial brain template and TPM, represents a useful tool for unbiased automatic image preprocessing and morphological characterization in sheep. Therefore, the reported results may serve as a starting point for further experimental and/or translational research aiming at in vivo analysis in this species.

  7. A stereotaxic, population-averaged T1w ovine brain atlas including cerebral morphology and tissue volumes.

    Science.gov (United States)

    Nitzsche, Björn; Frey, Stephen; Collins, Louis D; Seeger, Johannes; Lobsien, Donald; Dreyer, Antje; Kirsten, Holger; Stoffel, Michael H; Fonov, Vladimir S; Boltze, Johannes

    2015-01-01

    Standard stereotaxic reference systems play a key role in human brain studies. Stereotaxic coordinate systems have also been developed for experimental animals including non-human primates, dogs, and rodents. However, they are lacking for other species that are relevant in experimental neuroscience, including sheep. Here, we present a spatial, unbiased ovine brain template with tissue probability maps (TPM) that offer a detailed stereotaxic reference frame for anatomical features and localization of brain areas, thereby enabling inter-individual and cross-study comparability. Three-dimensional data sets from healthy adult Merino sheep (Ovis orientalis aries, 12 ewes and 26 neutered rams) were acquired on a 1.5 T Philips MRI using a T1w sequence. Data were averaged by linear and non-linear registration algorithms. Moreover, animals were subjected to detailed brain volume analysis including examinations with respect to body weight (BW), age, and sex. The created T1w brain template provides an appropriate population-averaged ovine brain anatomy in a spatial standard coordinate system. Additionally, TPM for gray (GM) and white (WM) matter as well as cerebrospinal fluid (CSF) classification enabled automatic prior-based tissue segmentation using statistical parametric mapping (SPM). Overall, a positive correlation of GM volume and BW explained about 15% of the variance of GM, while a positive correlation between WM and age was found. Absolute tissue volume differences were not detected; indeed, ewes showed significantly more GM per unit body weight than neutered rams. The created framework, including the spatial brain template and TPM, represents a useful tool for unbiased automatic image preprocessing and morphological characterization in sheep. Therefore, the reported results may serve as a starting point for further experimental and/or translational research aiming at in vivo analysis in this species.

  8. Human Computer Interface Design Criteria. Volume 1. User Interface Requirements

    Science.gov (United States)

    2010-03-19

    2 entitled Human Computer Interface (HCI) Design Criteria Volume 1: User Interface Requirements, which contains the following major changes from... MISSILE SYSTEMS CENTER, Air Force Space Command, 483 N. Aviation Blvd., El Segundo, CA 90245 4. This standard has been approved for use on all Space and... and efficient model of how the system works and can generalize this knowledge to other systems. According to Mayhew in Principles and Guidelines in...

  9. Simulation of cooling channel rheocasting process of A356 aluminum alloy using three-phase volume averaging model

    Institute of Scientific and Technical Information of China (English)

    T. Wang; B. Pustal; M. Abondano; T. Grimmig; A. Bührig-Polaczek; M. Wu; A. Ludwig

    2005-01-01

    The cooling channel process is a rheocasting method by which prematerial with a globular microstructure can be produced to feed the thixocasting process. A three-phase model based on the volume averaging approach is proposed to simulate the cooling channel process for A356 aluminum alloy. The three phases are liquid, solid, and air, treated as separate and interacting continua sharing a single pressure field. The mass, momentum, and enthalpy transport equations for each phase are solved. The developed model can predict the evolution of the liquid, solid, and air fractions as well as the distribution of grain density and grain size. The effect of pouring temperature on grain density, grain size, and solid fraction is analyzed in detail.

  10. The equivalence between volume averaging and method of planes definitions of the pressure tensor at a plane

    Science.gov (United States)

    Heyes, D. M.; Smith, E. R.; Dini, D.; Zaki, T. A.

    2011-07-01

    It is shown analytically that the method of planes (MOP) [Todd, Evans, and Daivis, Phys. Rev. E 52, 1627 (1995)] and volume averaging (VA) [Cormier, Rickman, and Delph, J. Appl. Phys. 89, 99 (2001), 10.1063/1.1328406] formulas for the local pressure tensor, Pαy(y), where α ≡ x, y, or z, are mathematically identical. In the case of VA, the sampling volume is taken to be an infinitely thin parallelepiped with an infinite lateral extent. This limit is shown to yield the MOP expression. The treatment is extended to include the condition of mechanical equilibrium resulting from an imposed force field. This analytical development is followed by numerical simulations. The equivalence of the two methods is demonstrated in the context of non-equilibrium molecular dynamics (NEMD) simulations of boundary-driven shear flow. A wall of tethered atoms is constrained to impose a normal load and a velocity profile on the entrained central layer. The VA formula can be used to compute all components of Pαβ(y), which offers an advantage in calculating, for example, Pxx(y) for nano-scale pressure-driven flows in the x-direction, where deviations from the classical Poiseuille flow solution can occur.

  11. An upscaled two-equation model of transport in porous media through unsteady-state closure of volume averaged formulations

    Science.gov (United States)

    Chaynikov, S.; Porta, G.; Riva, M.; Guadagnini, A.

    2012-04-01

    We focus on a theoretical analysis of nonreactive solute transport in porous media through the volume averaging technique. Darcy-scale transport models based on continuum formulations typically include large-scale dispersive processes which are embedded in a pore-scale advection-diffusion equation through a Fickian analogy. This formulation has been extensively questioned in the literature due to its inability to depict observed solute breakthrough curves in diverse settings, ranging from the laboratory to the field scale. The heterogeneity of the pore-scale velocity field is one of the key sources of uncertainty giving rise to anomalous (non-Fickian) dispersion in macro-scale porous systems. Some of the models which are employed to interpret observed non-Fickian solute behavior make use of a continuum formulation of the porous system which assumes a two-region description and includes a bimodal velocity distribution. A first class of these models comprises the so-called "mobile-immobile" conceptualization, where convective and dispersive transport mechanisms are considered to dominate within a high-velocity region (the mobile zone), while convective effects are neglected in a low-velocity region (the immobile zone). The mass exchange between these two regions is assumed to be controlled by a diffusive process and is macroscopically described by first-order kinetics. An extension of these ideas is the two-equation "mobile-mobile" model, where both transport mechanisms are taken into account in each region and a first-order mass exchange between regions is employed. Here, we provide an analytical derivation of two-region "mobile-mobile" meso-scale models through a rigorous upscaling of the pore-scale advection-diffusion equation. Among the available upscaling methodologies, we employ the volume averaging technique. In this approach, the heterogeneous porous medium is assumed to be pseudo-periodic and can be represented through a (spatially) periodic unit cell.
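
    For concreteness, the "mobile-immobile" end member described above is classically written as the pair of equations below (standard first-order formulation from the literature, not the authors' upscaled equations; the two-equation "mobile-mobile" model adds advective and dispersive terms to the second equation as well):

```latex
% One-dimensional mobile-immobile transport with first-order exchange.
% theta_m, theta_im: mobile/immobile water contents; q: Darcy flux;
% D_m: mobile-zone dispersion; alpha: mass-exchange coefficient.
\[
\theta_m \frac{\partial C_m}{\partial t}
  = \theta_m D_m \frac{\partial^2 C_m}{\partial x^2}
  - q \frac{\partial C_m}{\partial x}
  - \alpha \,(C_m - C_{im}),
\qquad
\theta_{im} \frac{\partial C_{im}}{\partial t} = \alpha \,(C_m - C_{im}) .
\]
```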

  12. Investigation of the Solidification Behavior of NH4Cl Aqueous Solution Based on a Volume-Averaged Method

    Science.gov (United States)

    Li, Ri; Zhou, Liming; Wang, Jian; Li, Yan

    2017-02-01

    Based on solidification theory and a volume-averaged multiphase solidification model, the solidification process of NH4Cl-70 pct H2O was numerically simulated and experimentally verified. Although researchers have investigated the solidification of NH4Cl-70 pct H2O before, most existing studies have focused on a single phenomenon, such as the formation of channel segregation, the convection types, or the formation of grains. Building on prior studies, and by combining numerical simulation with experimental investigation, all phenomena across the entire computational domain of the solidification process of an NH4Cl aqueous solution were comprehensively investigated for the first time in this study. In particular, the sedimentation of equiaxed grains in the ingot and the convection it induces were reproduced. In addition, the formation mechanism of segregation was studied in depth. The calculation demonstrated that the equiaxed grains settled from the wall of the mold and gradually aggregated at its bottom; when their volume fraction reached a critical value, the columnar grains stopped growing, completing the columnar-to-equiaxed transition (CET). Because of solute partitioning, negative segregation occurred in the grain-rich bottom region of the ingot, whereas a wide range of positive segregation occurred in the unsolidified upper part of the ingot. Experimental investigation indicated that the predicted sedimentation of the equiaxed grains and the predicted convection types agreed well with the experimental results, revealing that sedimentation of the solid phase and convection during solidification are the key factors responsible for macrosegregation.

  13. Factors Impacting Habitable Volume Requirements: Results from the 2011 Habitable Volume Workshop

    Science.gov (United States)

    Simon, M.; Whitmire, A.; Otto, C.; Neubek, D. (Editor)

    2011-01-01

    This report documents the results of the Habitable Volume Workshop held April 18-21, 2011 in Houston, TX at the Center for Advanced Space Studies-Universities Space Research Association. The workshop was convened by NASA to examine the factors that feed into understanding minimum habitable volume requirements for long duration space missions. While there have been confinement studies and analogs that have provided the basis for the guidance found in current habitability standards, determining the adequacy of the volume for future long duration exploration missions is a more complicated endeavor. It was determined that an improved understanding of the relationship between behavioral and psychosocial stressors, available habitable and net habitable volume, and interior layouts was needed to judge the adequacy of long duration habitat designs. The workshop brought together a multi-disciplinary group of experts from the medical and behavioral sciences, spaceflight, human habitability disciplines, and design professions. These subject matter experts identified the most salient design-related stressors anticipated for a long duration exploration mission. The selected stressors were based on scientific evidence as well as personal experience from spaceflight and analogs. They were organized into eight major categories: allocation of space; workspace; general and individual control of the environment; sensory deprivation; social monotony; crew composition; physical and medical issues; and contingency readiness. Mitigation strategies for the identified stressors and their subsequent impact on habitat design were identified. Recommendations for future research to address the stressors and mitigating design impacts are presented.

  14. Mask data volume: historical perspective and future requirements

    Science.gov (United States)

    Spence, Chris; Goad, Scott; Buck, Peter; Gladhill, Richard; Cinque, Russell; Preuninger, Jürgen; Griesinger, Uwe; Blöcker, Martin

    2006-06-01

    Mask data file sizes increase as we move from one technology generation to the next. The historical 30% linear shrink every 2-3 years, known as Moore's Law, has driven a doubling of the transistor budget and hence of the feature count. The transition from steppers to step-and-scan tools has increased the area of the mask that needs to be patterned. At the 130 nm node and below, Optical Proximity Correction (OPC) has become prevalent, and the edge fragmentation required to implement OPC increases the number of polygons required to define the layout. Furthermore, Resolution Enhancement Techniques (RETs) such as Sub-Resolution Assist Features (SRAFs) or tri-tone Phase Shift Masks (PSM) require additional features to be defined on the mask which do not resolve on the wafer, further increasing mask data volumes. In this paper we review historical data on mask file sizes for microprocessor, DRAM, and Flash memory designs. We consider the consequences of this increase in file size on Mask Data Prep (MDP) activities, both within the Integrated Device Manufacturer (IDM) and the Mask Shop, namely: computer resources, storage, and networks (for file transfer). The impact of larger file sizes on mask writing times is also reviewed. Finally we consider, based on the trends observed over the last five technology nodes, what will be required to maintain reasonable MDP and mask manufacturing cycle times.

  15. MODIS. Volume 1: MODIS level 1A software baseline requirements

    Science.gov (United States)

    Masuoka, Edward; Fleig, Albert; Ardanuy, Philip; Goff, Thomas; Carpenter, Lloyd; Solomon, Carl; Storey, James

    1994-01-01

    This document describes the level 1A software requirements for the moderate resolution imaging spectroradiometer (MODIS) instrument. This includes internal and external requirements. Internal requirements include functional, operational, and data processing as well as performance, quality, safety, and security engineering requirements. External requirements include those imposed by data archive and distribution systems (DADS); scheduling, control, monitoring, and accounting (SCMA); product management (PM) system; MODIS log; and product generation system (PGS). Implementation constraints and requirements for adapting the software to the physical environment are also included.

  16. Is the Surface Potential Integral of a Dipole in a Volume Conductor Always Zero? A Cloud Over the Average Reference of EEG and ERP.

    Science.gov (United States)

    Yao, Dezhong

    2017-02-14

    Currently, the average reference is one of the most widely adopted references in EEG and ERP studies. The theoretical assumption is that the surface potential integral over a volume conductor is zero, so that the average of the scalp potential recordings may approximate the theoretically desired zero reference. However, this zero-integral assumption has been proved only for a spherical surface. In this short communication, three counter-examples are given to show that the surface potential integral of a dipole in a volume conductor may not be zero; it depends on the shape of the conductor and the orientation of the dipole. On the one hand, this fact means that the average reference is not a theoretical 'gold standard' reference; on the other hand, it reminds us that the practical accuracy of the average reference is determined not only by the well-known electrode array density and coverage but also, intrinsically, by the head shape. It means that reference selection remains a fundamental problem to be fixed in various EEG and ERP studies.
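
    In practice, the average reference is simply the subtraction of the instantaneous channel mean, which is why its accuracy hinges on the zero-integral assumption examined here. A minimal sketch (array shapes are our assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
eeg = rng.normal(size=(64, 1000))   # channels x samples, arbitrary reference

# Re-reference each sample to the mean over all channels.
eeg_ar = eeg - eeg.mean(axis=0, keepdims=True)
# The channel mean only approximates the ideal zero reference insofar as
# the electrodes densely sample a surface whose potential integral is
# zero -- the property the three counter-examples show can fail.
```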

  17. SU-D-213-04: Accounting for Volume Averaging and Material Composition Effects in an Ionization Chamber Array for Patient-Specific QA

    Energy Technology Data Exchange (ETDEWEB)

    Fugal, M; McDonald, D; Jacqmin, D; Koch, N; Ellis, A; Peng, J; Ashenafi, M; Vanek, K [Medical University of South Carolina, Charleston, SC (United States)

    2015-06-15

    Purpose: This study explores novel methods to address two significant challenges affecting measurement of patient-specific quality assurance (QA) with IBA’s Matrixx Evolution™ ionization chamber array. First, dose calculation algorithms often struggle to accurately determine dose to the chamber array due to CT artifact and algorithm limitations. Second, finite chamber size and volume averaging effects cause additional deviation from the calculated dose. Methods: QA measurements were taken with the Matrixx positioned on the treatment table in a solid-water Multi-Cube™ phantom. To reduce the effect of CT artifact, the Matrixx CT image set was masked with appropriate materials and densities. Individual ionization chambers were masked as air, while the high-Z electronic backplane and the remaining solid-water material were masked as aluminum and water, respectively. Dose calculation was done using Varian’s Acuros XB™ (V11) algorithm, which is capable of predicting dose more accurately in non-biologic materials due to its consideration of each material’s atomic properties. Finally, the exported TPS dose was processed using an in-house algorithm (MATLAB) to assign the volume-averaged TPS dose to each element of a corresponding 2-D matrix. This matrix was used for comparison with the measured dose. Square fields at regularly spaced gantry angles, as well as selected patient plans, were analyzed. Results: Analyzed plans showed improved agreement, with the average gamma passing rate increasing from 94 to 98%. Correction factors necessary for chamber angular dependence were reduced by 67% compared to factors measured previously, indicating that the previously measured factors corrected for dose calculation errors in addition to true chamber angular dependence. Conclusion: By comparing volume-averaged dose, calculated with a capable dose engine, on a phantom masked with correct materials and densities, QA results obtained with the Matrixx Evolution™ can be significantly

  18. Identification of myocardial diffuse fibrosis by 11 heartbeat MOLLI T1 mapping: averaging to improve precision and correlation with collagen volume fraction.

    Science.gov (United States)

    Vassiliou, Vassilios S; Wassilew, Katharina; Cameron, Donnie; Heng, Ee Ling; Nyktari, Evangelia; Asimakopoulos, George; de Souza, Anthony; Giri, Shivraman; Pierce, Iain; Jabbour, Andrew; Firmin, David; Frenneaux, Michael; Gatehouse, Peter; Pennell, Dudley J; Prasad, Sanjay K

    2017-06-12

    Our objectives were to identify whether repeated averaging at the basal and mid left ventricular myocardial levels improves the precision of 11 heartbeat MOLLI T1 mapping and its correlation with collagen volume fraction, versus assessment at a single ventricular level. For assessment of T1 mapping precision, a cohort of 15 healthy volunteers underwent two CMR scans on separate days using an 11 heartbeat MOLLI with a 5(3)3 beat scheme to measure native T1 and a 4(1)3(1)2 beat post-contrast scheme to measure post-contrast T1, allowing calculation of the partition coefficient and ECV. To assess the correlation of T1 mapping with collagen volume fraction, a separate cohort of ten aortic stenosis patients scheduled for surgery underwent one CMR scan with this 11 heartbeat MOLLI scheme, followed by intraoperative tru-cut myocardial biopsy. Six models of myocardial diffuse fibrosis assessment were established with incremental inclusion of imaging by averaging of the basal and mid-myocardial left ventricular levels, and each model was assessed for precision and correlation with collagen volume fraction. A model using 11 heartbeat MOLLI imaging of two basal and two mid-ventricular level averaged T1 maps provided improved precision (intraclass correlation 0.93 vs 0.84) and correlation with histology (R² = 0.83 vs 0.36) for diffuse fibrosis compared to a single mid-ventricular level alone. ECV was more precise and correlated better than native T1 mapping. T1 mapping sequences with repeated averaging could be considered for applications of 11 heartbeat MOLLI, especially when small changes in native T1/ECV might affect clinical management.
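
    Since the record compares native T1, the partition coefficient, and ECV, the conventional relations used to derive the latter two from pre- and post-contrast T1 are worth recalling (standard relations, not equations quoted from this record; Hct is the blood hematocrit):

```latex
% Partition coefficient and extracellular volume fraction from T1 values.
\[
\lambda =
\frac{1/T_{1,\mathrm{myo}}^{\mathrm{post}} - 1/T_{1,\mathrm{myo}}^{\mathrm{pre}}}
     {1/T_{1,\mathrm{blood}}^{\mathrm{post}} - 1/T_{1,\mathrm{blood}}^{\mathrm{pre}}},
\qquad
\mathrm{ECV} = (1-\mathrm{Hct})\,\lambda .
\]
```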

  19. Space transfer vehicle concepts and requirements, volume 2, book 1

    Science.gov (United States)

    1991-04-01

    The objective of the systems engineering task was to develop and implement an approach that would generate the required study products as defined by program directives. This product list included a set of system and subsystem requirements, a complete set of optimized trade studies and analyses resulting in a recommended system configuration, and the definition of an integrated system/technology and advanced development growth path. A primary ingredient in the approach was the TQM philosophy, stressing job quality from the inception. Included throughout the Systems Engineering, Programmatics, Concepts, Flight Design, and Technology sections are data supporting the original objectives as well as supplemental information resulting from program activities. The primary result of the analyses and studies was the recommendation of a single propulsion stage Lunar Transportation System (LTS) configuration that supports several different operations scenarios with minor element changes. This concept has the potential to support two additional scenarios with complex element changes. The space-based LTS concept consists of three primary configurations--Piloted, Reusable Cargo, and Expendable Cargo.

  20. Space transfer vehicle concepts and requirements. Volume 1: Executive summary

    Science.gov (United States)

    1991-04-01

    The objectives of the Space Transfer Vehicle (STV) Concepts and Requirements studies were to provide sensitivity data on usage, economics, and technology associated with new space transportation systems. The study was structured to utilize data on the emerging launch vehicles, the latest mission scenarios, and Space Exploration Initiative (SEI) payload manifesting and schedules to define a flexible, high-performance, cost-effective, evolutionary space transportation system for NASA. Initial activities were to support the MSFC effort in the preparation of inputs to the 90 Day Report to the National Space Council (NSC). With the results of this study establishing a point of departure for continuing the STV studies in 1990, additional options and mission architectures were defined. The continuing studies will update and expand the parametrics, assess new cargo and manned ETO vehicles, determine impacts on the redefined Phase 0 Space Station Freedom, and develop a design that encompasses adequate configuration flexibility to ensure compliance with ongoing NASA study recommendations without major system disconnects. In terms of general requirements, the objectives of the STV system and its mission profiles address crew safety and mission success through a failure-tolerant and forgiving design approach. These objectives were addressed through the following: engine-out capability for all mission phases; built-in test for vehicle health monitoring to allow testing of all critical functions, such as verification of lunar landing and ascent engines before initiating the landing sequence; multiple strings for redundancy in critical subsystems, plus adequate supplies of onboard spares for removal and replacement of failed items; crew radiation protection; and trajectories that optimize lunar and Mars performance and flyby abort capabilities.

  1. A highly detailed FEM volume conductor model based on the ICBM152 average head template for EEG source imaging and TCS targeting.

    Science.gov (United States)

    Haufe, Stefan; Huang, Yu; Parra, Lucas C

    2015-08-01

    In electroencephalographic (EEG) source imaging as well as in transcranial current stimulation (TCS), it is common to model the head using either three-shell boundary element (BEM) or more accurate finite element (FEM) volume conductor models. Since building FEMs is computationally demanding and labor intensive, they are often extensively reused as templates even for subjects with mismatching anatomies. BEMs can in principle be used to efficiently build individual volume conductor models; however, the limiting factor for such individualization is the high acquisition cost of structural magnetic resonance images. Here, we build a highly detailed (0.5 mm³ resolution, six-tissue-type segmentation, 231 electrodes) FEM based on the ICBM152 template, a nonlinear average of 152 adult human heads, which we call ICBM-NY. We show that, through more realistic electrical modeling, our model is similarly accurate as individual BEMs. Moreover, through using an unbiased population average, our model is also more accurate than FEMs built from mismatching individual anatomies. Our model is made available in Matlab format.

  2. Vascular refilling is independent of volume overload in hemodialysis with moderate ultrafiltration requirements.

    Science.gov (United States)

    Kron, Susanne; Schneditz, Daniel; Leimbach, Til; Aign, Sabine; Kron, Joachim

    2016-07-01

    Introduction: Blood volume changes and vascular refilling during hemodialysis (HD) and ultrafiltration (UF) have been assumed to depend on volume overload (Vo). The aim was to study the magnitude of vascular refilling in stable HD patients with moderate volume expansion in everyday dialysis using novel technical approaches. Methods: Patients were studied during routine dialysis and UF based on clinical dry weight assessment. Pre-dialysis Vo was independently measured by bioimpedance spectroscopy. Vascular refilling volume (Vref) was calculated as Vref = Vuf - ΔV, where ΔV is the absolute blood volume change determined by on-line dialysate dilution using a commercial on-line hemodiafiltration machine incorporating a relative blood volume monitor, and where Vuf is the prescribed UF volume. Findings: Thirty patients (dry weight: 81.0 ± 17.8 kg) were studied. Pre-dialysis Vo was 2.46 ± 1.45 L. Vuf was 2.27 ± 0.71 L, the specific UF rate was 6.45 ± 2.43 mL/kg/h, and since ΔV was 0.66 ± 0.31 L, Vref was determined as 1.61 ± 0.58 L, corresponding to a constant refilling fraction (Fref) of 70.6 ± 10.6%. Vref strongly correlated with Vuf (r² = 0.82) but was independent of Vo and other volumes. Fref was also independent of Vo and other volumes normalized for various measures of body size. Discussion: While vascular refilling and Fref are independent of Vo in treatments with moderate UF requirements, intravascular volume depletion increases with increasing UF requirements. The relationship between blood volume and Vo needs to be more closely examined in further studies to optimize volume control in everyday dialysis.
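
    The refilling arithmetic in this abstract is simple enough to restate as a sketch; the function below just encodes Vref = Vuf - ΔV and the refilling fraction, using the study's reported mean values as a check.

```python
def vascular_refilling(vuf_l, delta_v_l):
    """Refilling volume Vref = Vuf - dV and refilling fraction Fref (%)."""
    vref = vuf_l - delta_v_l
    fref = 100.0 * vref / vuf_l
    return vref, fref

vref, fref = vascular_refilling(vuf_l=2.27, delta_v_l=0.66)  # study means
# ~1.61 L and ~70.9%; close to the reported 70.6% (averaging per-patient
# fractions differs slightly from taking the fraction of the means).
print(f"Vref = {vref:.2f} L, Fref = {fref:.1f}%")
```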

  3. Improvement of internal tumor volumes of non-small cell lung cancer patients for radiation treatment planning using interpolated average CT in PET/CT.

    Directory of Open Access Journals (Sweden)

    Yao-Ching Wang

    Respiratory motion causes uncertainties in tumor edges on either computed tomography (CT) or positron emission tomography (PET) images and causes misalignment when registering PET and CT images. This phenomenon may cause radiation oncologists to delineate tumor volume inaccurately in radiotherapy treatment planning. The purpose of this study was to analyze radiology applications using interpolated average CT (IACT) as attenuation correction (AC) to diminish the occurrence of this scenario. Thirteen non-small cell lung cancer patients were recruited for the present comparison study. Each patient had full-inspiration and full-expiration CT images and free-breathing PET images from an integrated PET/CT scan. IACT for AC in PET(IACT) was used to reduce the PET/CT misalignment. The standardized uptake value (SUV) correction with a low radiation dose was applied, and its tumor volume delineation was compared to those from HCT/PET(HCT). The misalignment between the PET(IACT) and IACT was reduced when compared to the difference between PET(HCT) and HCT. The range of tumor motion was from 4 to 17 mm in the patient cohort. For HCT and PET(HCT), correction was from 72% to 91%, while for IACT and PET(IACT), correction was from 73% to 93% (p<0.0001). The maximum and minimum differences in SUVmax were 0.18% and 27.27% for PET(HCT) and PET(IACT), respectively. The largest percentage differences in the tumor volumes between HCT/PET and IACT/PET were observed in tumors located in the lowest lobe of the lung. Internal tumor volume defined by functional information using IACT/PET(IACT) fusion images for lung cancer would reduce the inaccuracy of tumor delineation in radiation therapy planning.

  4. State Averages

    Data.gov (United States)

    U.S. Department of Health & Human Services — A list of a variety of averages for each state or territory as well as the national average, including each quality measure, staffing, fine amount and number of...

  5. Detailed requirements document for the Interactive Financial Management System (IFMS), volume 1

    Science.gov (United States)

    Dodson, D. B.

    1975-01-01

    The detailed requirements for phase 1 (online fund control, subauthorization accounting, and accounts receivable functional capabilities) of the Interactive Financial Management System (IFMS) are described. This includes information on the following: systems requirements, performance requirements, test requirements, and production implementation. Most of the work is centered on systems requirements, and includes discussions on the following processes: resources authority, allotment, primary work authorization, reimbursable order acceptance, purchase request, obligation, cost accrual, cost distribution, disbursement, subauthorization performance, travel, accounts receivable, payroll, property, edit table maintenance, end-of-year, backup input. Other subjects covered include: external systems interfaces, general inquiries, general report requirements, communication requirements, and miscellaneous. Subjects covered under performance requirements include: response time, processing volumes, system reliability, and accuracy. Under test requirements come test data sources, general test approach, and acceptance criteria. Under production implementation come data base establishment, operational stages, and operational requirements.

  6. Design requirements for SRB production control system. Volume 2: System requirements and conceptual description

    Science.gov (United States)

    1981-01-01

    In the development of the business system for the SRB automated production control system, special attention had to be paid to the unique environment posed by the space shuttle. The issues posed by this environment, and the means by which they were addressed, are reviewed. The change in management philosophy which will be required as NASA switches from one-of-a-kind launches to multiple launches is discussed. The implications of the assembly process on the business system are described. These issues include multiple missions, multiple locations and facilities, maintenance and refurbishment, multiple sources, and multiple contractors. The implications of these aspects on the automated production control system are reviewed, including an assessment of the six major subsystems, as well as four other subsystems. Some general system requirements which flow through the entire business system are described.

  7. PET imaging of thin objects: measuring the effects of positron range and partial-volume averaging in the leaf of Nicotiana tabacum

    Energy Technology Data Exchange (ETDEWEB)

    Alexoff, David L., E-mail: alexoff@bnl.gov; Dewey, Stephen L.; Vaska, Paul; Krishnamoorthy, Srilalan; Ferrieri, Richard; Schueller, Michael; Schlyer, David J.; Fowler, Joanna S.

    2011-02-15

    Introduction: PET imaging in plants is receiving increased interest as a new strategy to measure plant responses to environmental stimuli and as a tool for phenotyping genetically engineered plants. PET imaging in plants, however, poses new challenges. In particular, the leaves of most plants are so thin that a large fraction of positrons emitted from PET isotopes (¹⁸F, ¹¹C, ¹³N) escape while even state-of-the-art PET cameras have significant partial-volume errors for such thin objects. Although these limitations are acknowledged by researchers, little data have been published on them. Methods: Here we measured the magnitude and distribution of escaping positrons from the leaf of Nicotiana tabacum for the radionuclides ¹⁸F, ¹¹C and ¹³N using a commercial small-animal PET scanner. Imaging results were compared to radionuclide concentrations measured from dissection and counting and to a Monte Carlo simulation using GATE (Geant4 Application for Tomographic Emission). Results: Simulated and experimentally determined escape fractions were consistent. The fractions of positrons (mean ± S.D.) escaping the leaf parenchyma were measured to be 59 ± 1.1%, 64 ± 4.4% and 67 ± 1.9% for ¹⁸F, ¹¹C and ¹³N, respectively. Escape fractions were lower in thicker leaf areas like the midrib. Partial-volume averaging underestimated activity concentrations in the leaf blade by a factor of 10 to 15. Conclusions: The foregoing effects combine to yield PET images whose contrast does not reflect the actual activity concentrations. These errors can be largely corrected by integrating activity along the PET axis perpendicular to the leaf surface, including detection of escaped positrons, and calculating concentration using a measured leaf thickness.
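
    The correction described in the conclusions lends itself to a short sketch; the axis convention and array layout below are assumptions for illustration, not from the paper.

```python
import numpy as np

def leaf_concentration(pet_image, voxel_mm, leaf_thickness_mm):
    """Estimate activity concentration in a thin leaf by integrating the
    reconstructed activity along the axis perpendicular to the leaf surface
    (assumed here to be axis 2). Summing the whole column recovers counts
    smeared out of the leaf by positron range and partial-volume effects;
    dividing by the measured leaf thickness yields a concentration."""
    column_activity = pet_image.sum(axis=2) * voxel_mm  # activity per leaf area
    return column_activity / leaf_thickness_mm          # activity per leaf volume

# Usage (hypothetical reconstructed volume, 1 mm voxels, 0.3 mm thick leaf):
# conc_map = leaf_concentration(pet_volume, voxel_mm=1.0, leaf_thickness_mm=0.3)
```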

  8. PET imaging of thin objects: measuring the effects of positron range and partial-volume averaging in the leaf of Nicotiana tabacum

    Energy Technology Data Exchange (ETDEWEB)

    Alexoff, D.L.; Dewey, S.L.; Vaska, P.; Krishnamoorthy, S.; Ferrieri, R.; Schueller, M.; Schlyer, D.; Fowler, J.S.

    2011-03-01

    PET imaging in plants is receiving increased interest as a new strategy to measure plant responses to environmental stimuli and as a tool for phenotyping genetically engineered plants. PET imaging in plants, however, poses new challenges. In particular, the leaves of most plants are so thin that a large fraction of positrons emitted from PET isotopes (¹⁸F, ¹¹C, ¹³N) escape while even state-of-the-art PET cameras have significant partial-volume errors for such thin objects. Although these limitations are acknowledged by researchers, little data have been published on them. Here we measured the magnitude and distribution of escaping positrons from the leaf of Nicotiana tabacum for the radionuclides ¹⁸F, ¹¹C and ¹³N using a commercial small-animal PET scanner. Imaging results were compared to radionuclide concentrations measured from dissection and counting and to a Monte Carlo simulation using GATE (Geant4 Application for Tomographic Emission). Simulated and experimentally determined escape fractions were consistent. The fractions of positrons (mean ± S.D.) escaping the leaf parenchyma were measured to be 59 ± 1.1%, 64 ± 4.4% and 67 ± 1.9% for ¹⁸F, ¹¹C and ¹³N, respectively. Escape fractions were lower in thicker leaf areas like the midrib. Partial-volume averaging underestimated activity concentrations in the leaf blade by a factor of 10 to 15. The foregoing effects combine to yield PET images whose contrast does not reflect the actual activity concentrations. These errors can be largely corrected by integrating activity along the PET axis perpendicular to the leaf surface, including detection of escaped positrons, and calculating concentration using a measured leaf thickness.

  9. SU-C-304-01: Investigation of Various Detector Response Functions and Their Geometry Dependence in a Novel Method to Address Ion Chamber Volume Averaging Effect

    Energy Technology Data Exchange (ETDEWEB)

    Barraclough, B; Lebron, S [J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, FL (United States); Li, J; Fan, Qiyong; Liu, C; Yan, G [Department of Radiation Oncology, University of Florida, Gainesville, FL (United States)

    2015-06-15

    Purpose: A novel convolution-based approach has been proposed to address the ion chamber (IC) volume averaging effect (VAE) for the commissioning of commercial treatment planning systems (TPS). We investigate the use of various convolution kernels and their impact on the accuracy of beam models. Methods: Our approach simulates the VAE by iteratively convolving the calculated beam profiles with a detector response function (DRF) while optimizing the beam model. At convergence, the convolved profiles match the measured profiles, indicating the calculated profiles match the “true” beam profiles. To validate the approach, beam profiles of an Elekta LINAC were repeatedly collected with ICs of various volumes (CC04, CC13 and SNC 125) to obtain clinically acceptable beam models. The TPS-calculated profiles were convolved externally with the DRF of the respective IC. The beam model parameters were reoptimized using the Nelder-Mead method by forcing the convolved profiles to match the measured profiles. We evaluated three types of DRFs (Gaussian, Lorentzian, and parabolic) and the impact of kernel dependence on field geometry (depth and field size). The profiles calculated with beam models were compared with SNC EDGE diode-measured profiles. Results: The method was successfully implemented with Pinnacle Scripting and Matlab. The reoptimization converged in ∼10 minutes. For all tested ICs and DRFs, penumbra widths of the TPS-calculated profiles and diode-measured profiles were within 1.0 mm. The Gaussian function had the best performance, with mean penumbra width difference within 0.5 mm. The use of geometry-dependent DRFs showed marginal improvement, reducing the penumbra width differences to less than 0.3 mm. A significant increase in IMRT QA passing rates was achieved with the optimized beam model. Conclusion: The proposed approach significantly improved the accuracy of the TPS beam model. Gaussian functions as the convolution kernel performed consistently better than Lorentzian and parabolic kernels.
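
    A minimal sketch of the core operation, convolving a calculated profile with a Gaussian DRF, may be useful; the sigma and grid spacing are placeholders, and the full method would wrap this step inside the Nelder-Mead reoptimization loop described above.

```python
import numpy as np

def gaussian_drf(sigma_mm, dx_mm, half_width_sigmas=4.0):
    """Discrete, normalized Gaussian detector response function."""
    x = np.arange(-half_width_sigmas * sigma_mm,
                  half_width_sigmas * sigma_mm + dx_mm, dx_mm)
    k = np.exp(-0.5 * (x / sigma_mm) ** 2)
    return k / k.sum()

def simulate_volume_averaging(calculated_profile, sigma_mm, dx_mm):
    """Convolve a TPS-calculated profile with the DRF so it can be compared
    against an ion-chamber-measured profile during beam model reoptimization."""
    return np.convolve(calculated_profile, gaussian_drf(sigma_mm, dx_mm),
                       mode="same")
```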

  10. Volume-averaged SAR in adult and child head models when using mobile phones: a computational study with detailed CAD-based models of commercial mobile phones.

    Science.gov (United States)

    Keshvari, Jafar; Heikkilä, Teemu

    2011-12-01

    Previous studies comparing SAR differences in the heads of children and adults used highly simplified generic models or half-wave dipole antennas. The objective of this study was to investigate the SAR difference in the heads of children and adults using realistic EMF sources based on CAD models of commercial mobile phones. Four MRI-based head phantoms were used in the study. CAD models of Nokia 8310 and 6630 mobile phones were used as exposure sources. Commercially available FDTD software was used for the SAR calculations. SAR values were simulated at frequencies of 900 MHz and 1747 MHz for the Nokia 8310, and 900 MHz, 1747 MHz and 1950 MHz for the Nokia 6630. The main finding of this study was that the SAR distribution/variation in the head models depends highly on the structure of the antenna and the phone model, which suggests that the type of exposure source is the main parameter to focus on in EMF exposure studies. Although the previous findings regarding the significant roles of head anatomy, phone position, frequency, local tissue inhomogeneity and tissue composition (specifically in the exposed area) in SAR differences were confirmed, the SAR values and SAR distributions caused by generic source models cannot be extrapolated to real device exposures. The general conclusion is that, from a volume-averaged SAR point of view, no systematic differences between child and adult heads were found.

  11. Preliminary validation of column-averaged volume mixing ratios of carbon dioxide and methane retrieved from GOSAT short-wavelength infrared spectra

    Directory of Open Access Journals (Sweden)

    I. Morino

    2010-12-01

    Column-averaged volume mixing ratios of carbon dioxide and methane retrieved from the Greenhouse gases Observing SATellite (GOSAT) Short-Wavelength InfraRed observation (GOSAT SWIR XCO2 and XCH4) were compared with reference data obtained by ground-based high-resolution Fourier Transform Spectrometers (g-b FTSs) participating in the Total Carbon Column Observing Network (TCCON).

    Through calibrations of g-b FTSs with airborne in-situ measurements, the uncertainty of XCO2 and XCH4 associated with the g-b FTS was determined to be 0.8 ppm (~0.2%) and 4 ppb (~0.2%), respectively. The GOSAT products are validated with these calibrated g-b FTS data. Preliminary results are as follows: The GOSAT SWIR XCO2 and XCH4 (Version 01.xx) are biased low by 8.85 ± 4.75 ppm (2.3 ± 1.2%) and 20.4 ± 18.9 ppb (1.2 ± 1.1%), respectively. The precision of the GOSAT SWIR XCO2 and XCH4 is considered to be about 1%. The latitudinal distributions of zonal means of the GOSAT SWIR XCO2 and XCH4 show features similar to those of the g-b FTS data.

  12. Synthesis from Design Requirements of a Hybrid System for Transport Aircraft Longitudinal Control. Volume 2

    Science.gov (United States)

    Hynes, Charles S.; Hardy, Gordon H.; Sherry, Lance

    2007-01-01

    Volume I of this report presents a new method for synthesizing hybrid systems directly from design requirements, and applies the method to the design of a hybrid system for longitudinal control of transport aircraft. The resulting system satisfies general requirements for safety and effectiveness specified a priori, enabling formal validation to be achieved. Volume II contains seven appendices intended to make the report accessible to readers with backgrounds in human factors, flight dynamics and control, and formal logic. Major design goals are (1) system design integrity based on proof of correctness at the design level, (2) significant simplification and cost reduction in system development and certification, and (3) improved operational efficiency, with significant alleviation of human-factors problems encountered by pilots in current transport aircraft. This report provides for the first time a firm technical basis for criteria governing the design and certification of avionic systems for transport aircraft. It should be of primary interest to designers of next-generation avionic systems.

  13. MARA (Multimode Airborne Radar Altimeter) system documentation. Volume 1: MARA system requirements document

    Science.gov (United States)

    Parsons, C. L. (Editor)

    1989-01-01

    The Multimode Airborne Radar Altimeter (MARA), a flexible airborne radar remote sensing facility developed by NASA's Goddard Space Flight Center, is discussed. This volume describes the scientific justification for the development of the instrument and the translation of these scientific requirements into instrument design goals. Values for key instrument parameters are derived to accommodate these goals, and simulations and analytical models are used to estimate the developed system's performance.

  14. High-level waste storage tank farms/242-A evaporator Standards/Requirements Identification Document (S/RID), Volume 2

    Energy Technology Data Exchange (ETDEWEB)

    1994-04-01

    The High-Level Waste Storage Tank Farms/242-A Evaporator Standards/Requirements Document (S/RID) is contained in multiple volumes. This document (Volume 2) presents the standards and requirements for the following sections: Quality Assurance, Training and Qualification, Emergency Planning and Preparedness, and Construction.

  15. High-level waste storage tank farms/242-A evaporator Standards/Requirements Identification Document (S/RID), Volume 5

    Energy Technology Data Exchange (ETDEWEB)

    1994-04-01

    The High-Level Waste Storage Tank Farms/242-A Evaporator Standards/Requirements Identification Document (S/RID) is contained in multiple volumes. This document (Volume 5) outlines the standards and requirements for the Fire Protection and Packaging and Transportation sections.

  16. High level waste storage tank farms/242-A evaporator Standards/Requirements Identification Document (S/RID), Volume 6

    Energy Technology Data Exchange (ETDEWEB)

    1994-04-01

    The High-Level Waste Storage Tank Farms/242-A Evaporator Standards/Requirements Identification Document (S/RID) is contained in multiple volumes. This document (Volume 6) outlines the standards and requirements for the sections on: Environmental Restoration and Waste Management, Research and Development and Experimental Activities, and Nuclear Safety.

  17. High-level waste storage tank farms/242-A evaporator Standards/Requirements Identification Document (S/RID), Volume 4

    Energy Technology Data Exchange (ETDEWEB)

    1994-04-01

    The High-Level Waste Storage Tank Farms/242-A Evaporator Standards/Requirements Identification Document (S/RID) is contained in multiple volumes. This document (Volume 4) presents the standards and requirements for the following sections: Radiation Protection and Operations.

  18. Synthesis from Design Requirements of a Hybrid System for Transport Aircraft Longitudinal Control. Volume 1

    Science.gov (United States)

    Hynes, Charles S.; Hardy, Gordon H.; Sherry, Lance

    2007-01-01

    Volume I of this report presents a new method for synthesizing hybrid systems directly from design requirements, and applies the method to the design of a hybrid system for longitudinal control of transport aircraft. The resulting system satisfies general requirements for safety and effectiveness specified a priori, enabling formal validation to be achieved. Volume II contains seven appendices intended to make the report accessible to readers with backgrounds in human factors, flight dynamics and control, and formal logic. Major design goals are (1) system design integrity based on proof of correctness at the design level, (2) significant simplification and cost reduction in system development and certification, and (3) improved operational efficiency, with significant alleviation of human-factors problems encountered by pilots in current transport aircraft. This report provides for the first time a firm technical basis for criteria governing the design and certification of avionic systems for transport aircraft. It should be of primary interest to designers of next-generation avionic systems.

  19. Validation of a novel protocol for calculating estimated energy requirements and average daily physical activity ratio for the US population: 2005-2006.

    Science.gov (United States)

    Archer, Edward; Hand, Gregory A; Hébert, James R; Lau, Erica Y; Wang, Xuewen; Shook, Robin P; Fayad, Raja; Lavie, Carl J; Blair, Steven N

    2013-12-01

    To validate the PAR protocol, a novel method for calculating population-level estimated energy requirements (EERs) and average physical activity ratio (APAR), in a nationally representative sample of US adults. Estimates of EER and APAR values were calculated via a factorial equation from a nationally representative sample of 2597 adults aged 20 to 74 years (US National Health and Nutrition Examination Survey; data collected between January 1, 2005, and December 31, 2006). Validation of the PAR protocol-derived EER (EER(PAR)) values was performed via comparison with values from the Institute of Medicine EER equations (EER(IOM)). The correlation between EER(PAR) and EER(IOM) was high (0.98), with differences ranging up to 148 kcal/d (5.7% higher) in obese women. The 2005-2006 EERs for the US population were 2940 kcal/d for men and 2275 kcal/d for women, and ranged from 3230 kcal/d in obese (body mass index [BMI] ≥30) men to 2026 kcal/d in normal-weight (BMI <25) women. For men and women, the APAR values were 1.53 and 1.52, respectively. Obese men and women had lower APAR values than normal-weight individuals (P=.023 and P=.015, respectively) [corrected], and younger individuals had higher APAR values than older individuals. These findings have implications for research into physical activity and health. Copyright © 2013 Mayo Foundation for Medical Education and Research. Published by Elsevier Inc. All rights reserved.
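
    Since the abstract hinges on a factorial EER and the ratio APAR = EER/RMR, a hedged sketch may help; the RMR value and the PAR time budget are hypothetical, and the exact factorial equation of the PAR protocol is not reproduced in this record.

```python
def apar(eer_kcal_per_day, rmr_kcal_per_day):
    """Average physical activity ratio: total energy expenditure / resting rate."""
    return eer_kcal_per_day / rmr_kcal_per_day

def eer_factorial(rmr_kcal_per_day, par_hours):
    """Generic factorial EER: RMR weighted by PAR values over a 24 h budget,
    e.g. [(1.0, 8), (1.5, 12), (3.0, 4)] for sleep, light and moderate activity.
    Illustrative only; not necessarily the study's exact equation."""
    assert sum(h for _, h in par_hours) == 24
    return rmr_kcal_per_day * sum(par * h for par, h in par_hours) / 24.0

# With the study's mean EER for men (2940 kcal/d) and a hypothetical RMR:
print(apar(2940.0, 1920.0))  # ~1.53, matching the reported mean APAR for men
```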

  20. Southwest Project: resource/institutional requirements analysis. Volume III. Systems integration studies

    Energy Technology Data Exchange (ETDEWEB)

    Ormsby, L. S.; Sawyer, T. G.; Brown, Dr., M. L.; Daviet, II, L. L; Weber, E. R.; Brown, J. E.; Arlidge, J. W.; Novak, H. R.; Sanesi, Norman; Klaiman, H. C.; Spangenberg, Jr., D. T.; Groves, D. J.; Maddox, J. D.; Hayslip, R. M.; Ijams, G.; Lacy, R. G.; Montgomery, J.; Carito, J. A.; Ballance, J. W.; Bluemle, C. F.; Smith, D. N.; Wehrey, M. C.; Ladd, K. L.; Evans, Dr., S. K.; Guild, D. H.; Brodfeld, B.; Cleveland, J. A.; Hicks, K. L.; Noga, M. W.; Ross, A. M.

    1979-12-01

    The purpose of this project is to provide information to DOE which can be used to establish its plans for accelerated commercialization and market penetration of solar electric generating plants in the southwestern region of the United States. The area of interest includes Arizona, California, Colorado, Nevada, New Mexico, Utah, and sections of Oklahoma and Texas. The system integration study establishes the investment that utilities could afford to make in solar thermal, photovoltaic, and wind energy systems, and assesses the sensitivity of the break-even cost to critical variables including fuel escalation rates, fixed charge rates, load growth rates, cloud cover, number of sites, load shape, and energy storage. This information will be used as input to Volume IV, Institutional Studies, one objective of which will be to determine the incentives required to close the gap between the break-even investment for the utilities of the Southwest and the estimated cost of solar generation.

  1. Excluded volume effect of counterions and water dipoles near a highly charged surface due to a rotationally averaged Boltzmann factor for water dipoles.

    Science.gov (United States)

    Gongadze, Ekaterina; Iglič, Aleš

    2013-03-01

    Water ordering near a negatively charged electrode is one of the decisive factors determining the interactions of an electrode with the surrounding electrolyte solution or tissue. In this work, the generalized Langevin-Bikerman model (Gongadze-Iglič model), taking into account the cavity field and the excluded volume principle, is used to calculate the space dependency of ion and water number densities in the vicinity of a highly charged surface. It is shown that for high enough surface charge densities the usual trend of increasing counterion number density towards the charged surface may be completely reversed, i.e. a drop in the counterion number density near the charged surface is predicted.

  2. Manned remote work station development article. Volume 1, book 1: Flight article requirements. Appendix A: Mission requirements

    Science.gov (United States)

    1979-01-01

    The requirements for several configurations of flight articles are presented. These requirements provide the basis to design manned remote work station development test articles and establish tests and simulation objectives for the resolution of development issues. Mission system and subsystem requirements for four MRWS configurations included: open cherry picker; closed cherry picker; crane turret; and free flyer.

  3. 26 CFR 5c.1305-1 - Special income averaging rules for taxpayers otherwise required to compute tax in accordance with...

    Science.gov (United States)

    2010-04-01

    ... Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) TEMPORARY INCOME TAX REGULATIONS UNDER THE ECONOMIC RECOVERY TAX ACT OF 1981 § 5c.1305-1 Special income averaging... this section, the individual may also opt to pay part or all of the deferrable tax under income...

  4. Influence of Hospital Volume Effects and Minimum Caseload Requirements on Quality of Care in Pancreatic Surgery in Germany.

    Science.gov (United States)

    Krautz, Christian; Denz, Axel; Weber, Georg F; Grützmann, Robert

    2017-05-01

    Numerous international studies have identified hospital volume as significant independent variable of death following pancreatic surgery. Most of these studies were limited to regions of countries or portions of a national population and did not include data on volume-outcome effects in Germany. The Medline database was systematically searched to identify studies that analyzed volume-outcome relationships and effects of minimum caseload requirements on outcomes of pancreatic surgery in Germany. Recent observational studies utilizing German hospital discharge data confirmed that patients undergoing pancreatic surgery in Germany also have better outcomes when treated in facilities with high annual caseloads. Besides a decreased risk of in-hospital mortality, there is also a reduced risk of 1-year mortality in high-volume hospitals. In addition, there is evidence that adherence to already existing minimum caseload requirements reduces morbidity and mortality of pancreatic surgery in Germany. As a result of an insufficient centralization in the recent past, however, a large proportion of hospitals that perform pancreatic surgery still do not meet minimum caseload requirements. Specific measures (i.e. sanctions for failure to achieve minimum volumes) that initiate a sufficient centralization process without threatening patient access to surgical care are needed.

  5. AN INVESTIGATION OF THE TRAINING AND SKILL REQUIREMENTS OF INDUSTRIAL MACHINERY MAINTENANCE WORKERS. VOLUME II. FINAL REPORT.

    Science.gov (United States)

    LYNN, FRANK

    THE APPENDIXES FOR "AN INVESTIGATION OF THE TRAINING AND SKILL REQUIREMENTS OF INDUSTRIAL MACHINERY MAINTENANCE WORKERS, FINAL REPORT, VOLUME I" (VT 004 006) INCLUDE (1) TWO LETTERS FROM PLANT ENGINEERS STRESSING THE IMPORTANCE OF TRAINING MACHINERY MAINTENANCE WORKERS, (2) A DESCRIPTION OF THE MAINTENANCE TRAINING SURVEY, A SAMPLE QUESTIONNAIRE,…

  6. Caltrans Average Annual Daily Traffic Volumes (2004)

    Data.gov (United States)

    California Environmental Health Tracking Program — [ from http://www.ehib.org/cma/topic.jsp?topic_key=79 ] Traffic exhaust pollutants include compounds such as carbon monoxide, nitrogen oxides, particulates (fine...

  7. Application analysis of solar total energy systems to the residential sector. Volume II, energy requirements. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1979-07-01

    This project analyzed the application of solar total energy systems to appropriate segments of the residential sector and determined their market penetration potential. This volume covers the work done on energy requirements definition and includes the following: (1) identification of the single-family and multi-family market segments; (2) regionalization of the United States; (3) electrical and thermal load requirements, including time-dependent profiles; (4) effect of conservation measures on energy requirements; and (5) verification of simulated load data with real data.

  8. Evaluation of DVH parameters (V20 and mean dose) in adaptive radiotherapy of lung cancer with composite lung volume (ITV) design; Evaluacion de parametros del HDV (V20 Y Dmed) en radioterapia adaptada de cancer de pulmon con diseno de volumenes pulmonares compuestos (ITV)

    Energy Technology Data Exchange (ETDEWEB)

    Monroy Anton, J. L.; Solar Tortosa, M.; Lopez Munoz, M.; Navarro Bergada, A.; Estornell Gualde, M. A.; Melchor Iniguez, M.

    2013-07-01

    Our objective was to evaluate the V20 and mean dose parameters for a single lung volume delineated on a CT study acquired during the patient's normal breathing, compared with those for a composite lung volume (ITV) designed from three CT studies acquired at different phases of the respiratory cycle, and to check whether there are differences in these cases large enough to make the creation of a composite lung volume necessary for dose-volume histogram evaluation. (Author)
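
    The two DVH parameters being compared are straightforward to compute from a dose grid and a structure mask; this sketch uses the standard definitions (V20 as the percentage of the structure receiving at least 20 Gy), with the array layout assumed for illustration.

```python
import numpy as np

def v20_and_mean_dose(dose_gy, structure_mask):
    """DVH metrics for a structure: V20 = percentage of the structure's volume
    receiving >= 20 Gy; Dmean = mean dose over the structure (voxels assumed
    to have uniform volume)."""
    doses = dose_gy[structure_mask]
    v20 = 100.0 * np.count_nonzero(doses >= 20.0) / doses.size
    return v20, doses.mean()

# Compare the single-CT lung contour against the composite (ITV-based) contour:
# v20_a, dmean_a = v20_and_mean_dose(dose, lung_mask_single_ct)
# v20_b, dmean_b = v20_and_mean_dose(dose, lung_mask_composite)
```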

  9. 40 CFR 80.1429 - Requirements for separating RINs from volumes of renewable fuel.

    Science.gov (United States)

    2010-07-01

    ... or biogas for which RINs have been generated in accordance with § 80.1426(f) must separate any RINs that have been assigned to that volume of renewable electricity or biogas if: (i) The party designates the electricity or biogas as transportation fuel; and (ii) The electricity or biogas is used...

  10. New Approach to Purging Monitoring Wells: Lower Flow Rates Reduce Required Purging Volumes and Sample Turbidity

    Science.gov (United States)

    It is generally accepted that monitoring wells must be purged to access formation water to obtain “representative” ground water quality samples. Historically anywhere from 3 to 5 well casing volumes have been removed prior to sample collection to evacuate the standing well water...

  11. Geothermal power development in Hawaii. Volume II. Infrastructure and community-services requirements, Island of Hawaii

    Energy Technology Data Exchange (ETDEWEB)

    Chapman, G.A.; Buevens, W.R.

    1982-06-01

    The requirements of infrastructure and community services necessary to accommodate the development of geothermal energy on the Island of Hawaii for electricity production are identified. The following aspects are covered: Puna District-1981, labor resources, geothermal development scenarios, geothermal land use, the impact of geothermal development on Puna, labor resource requirements, and the requirements for government activity.

  12. Applicability of SREM to the Verification of Management Information System Software Requirements. Volume I.

    Science.gov (United States)

    1981-04-30

    approach during R-net development is required during the verification effort. The approach used for verifying the MOM 3FER to prepare for TD.X was to... the value resident in TRACK NR is equal to the value resident in TRACK NR 'N. The portion of the VMH requirement described above requires that the

  13. Gas content of a two-phase layer containing gas and a melt of K₂O-V₂O₅ averaged by volume and its foam formation in a reaction-regeneration cycle

    Energy Technology Data Exchange (ETDEWEB)

    Fazleev, M.P.; Chekhov, O.S.; Ermakov, E.A.

    1985-06-20

    This paper discusses the results of an investigation of the gas content averaged over the volume, hydrodynamic regimes, and foaming in the K₂O-V₂O₅ melt plus gas system, which is used as a catalyst in several thermocatalytic processes. The experimental setup is described and a comparison of literature data on the gas content of different gas-liquid systems under comparable conditions is presented. The authors were able to determine the boundaries of the hydrodynamic modes in a bubbling reactor and derive equations for the calculation of the gas content. It was found that the gas content of the melt increased when V₂O₅ was reduced to V₂O₄ in the reaction portion of the reaction-regeneration cycle. Regeneration of the melt restores the value of gas content to its original level.

  14. Hypertonic/Hyperoncotic Resuscitation from Shock: Reduced Volume Requirement and Lower Intracranial Pressure

    Science.gov (United States)

    1989-10-01

    INTRACRANIAL PRESSURE FOLLOWING RESUSCITATION FROM HEMORRHAGIC SHOCK. John H. Whitley, Donald S. Prough, Michael ... SHOCK: COMPARISON OF FLUIDS. John M. Whitley, PhD, Michael A. Olympio, MD, Donald S. Prough, MD, Department of Anesthesia, Bowman Gray School of Medicine ... fluid infused within the range of sodium and colloid concentrations examined in this study. In contrast, Gunnar et al. and Ducey et al. ...

  15. The normal increase in insulin after a meal may be required to prevent postprandial renal sodium and volume losses.

    Science.gov (United States)

    Irsik, Debra L; Blazer-Yost, Bonnie L; Staruschenko, Alexander; Brands, Michael W

    2017-06-01

    Despite the effects of insulinopenia in type 1 diabetes and evidence that insulin stimulates multiple renal sodium transporters, it is not known whether normal variation in plasma insulin regulates sodium homeostasis physiologically. This study tested whether the normal postprandial increase in plasma insulin significantly attenuates renal sodium and volume losses. Rats were instrumented with chronic artery and vein catheters, housed in metabolic cages, and connected to hydraulic swivels. Measurements of urine volume and sodium excretion (UNaV) over 24 h and the 4-h postprandial period were made in control (C) rats and insulin-clamped (IC) rats in which the postprandial increase in insulin was prevented. Twenty-four-hour urine volume (36 ± 3 vs. 15 ± 2 ml/day) and UNaV (3.0 ± 0.2 vs. 2.5 ± 0.2 mmol/day) were greater in the IC compared with C rats, respectively. Four hours after rats were given a gel meal, blood glucose and urine volume were greater in IC rats, but UNaV decreased. To simulate a meal while controlling blood glucose, C and IC rats received a glucose bolus that yielded peak increases in blood glucose that were not different between groups. Urine volume (9.7 ± 0.7 vs. 6.0 ± 0.8 ml/4 h) and UNaV (0.50 ± 0.08 vs. 0.20 ± 0.06 mmol/4 h) were greater in the IC vs. C rats, respectively, over the 4-h test. These data demonstrate that the normal increase in circulating insulin in response to hyperglycemia may be required to prevent excessive renal sodium and volume losses and suggest that insulin may be a physiological regulator of sodium balance. Copyright © 2017 the American Physiological Society.

  16. Design requirements for SRB production control system. Volume 1: Study background and overview

    Science.gov (United States)

    1981-01-01

    The solid rocket booster assembly environment is described in terms of the constraints it places upon an automated production control system. The business system generated for the SRB assembly and the computer system which meets the business system requirements are described. The software selection process and the modifications required to the recommended software are addressed, as well as the hardware and configuration requirements necessary to support the system.

  17. High-level waste storage tank farms/242-A evaporator Standards/Requirements Identification Document (S/RID), Volume 7. Revision 1

    Energy Technology Data Exchange (ETDEWEB)

    Burt, D.L.

    1994-04-01

    The High-Level Waste Storage Tank Farms/242-A Evaporator Standards/Requirements Identification Document (S/RID) is contained in multiple volumes. This document (Volume 7) presents the standards and requirements for the following sections: Occupational Safety and Health, and Environmental Protection.

  18. Transaction-based building controls framework, Volume 2: Platform descriptive model and requirements

    Energy Technology Data Exchange (ETDEWEB)

    Akyol, Bora A. [Pacific Northwest National Laboratory (PNNL), Richland, WA (United States); Haack, Jereme N. [Pacific Northwest National Laboratory (PNNL), Richland, WA (United States); Carpenter, Brandon J. [Pacific Northwest National Laboratory (PNNL), Richland, WA (United States); Katipamula, Srinivas [Pacific Northwest National Laboratory (PNNL), Richland, WA (United States); Lutes, Robert G. [Pacific Northwest National Laboratory (PNNL), Richland, WA (United States); Hernandez, George [Pacific Northwest National Laboratory (PNNL), Richland, WA (United States)

    2015-07-31

    Transaction-based Building Controls (TBC) offer a control systems platform that provides an agent execution environment that meets the growing requirements for security, resource utilization, and reliability. This report outlines the requirements for a platform to meet these needs and describes an illustrative/exemplary implementation.

  19. RELAP5/MOD3 code manual: User's guide and input requirements. Volume 2

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-08-01

    The RELAP5 code has been developed for best estimate transient simulation of light water reactor coolant systems during postulated accidents. The code models the coupled behavior of the reactor coolant system and the core for loss-of-coolant accidents, and operational transients, such as anticipated transient without scram, loss of offsite power, loss of feedwater, and loss of flow. A generic modeling approach is used that permits simulating a variety of thermal hydraulic systems. Control system and secondary system components are included to permit modeling of plant controls, turbines, condensers, and secondary feedwater systems. Volume II contains detailed instructions for code application and input data preparation.

  20. Design requirements for SRB production control system. Volume 3: Package evaluation, modification and hardware

    Science.gov (United States)

    1981-01-01

    The software package evaluation was designed to analyze commercially available, field-proven production control or manufacturing resource planning management technology and software packages. The analysis was conducted by comparing SRB production control software requirements and the conceptual system design to software package capabilities. The methodology of evaluation and the findings at each stage of evaluation are described. Topics covered include: vendor listing; request for information (RFI) document; RFI response rate and quality; RFI evaluation process; and capabilities versus requirements.

  1. Southwest Project: resource/institutional requirements analysis. Volume I. Executive summary

    Energy Technology Data Exchange (ETDEWEB)

    1979-12-01

    This project provides information which could be used by DOE in formulating their plans for commercialization and market penetration of central station solar electric generating plants in the southwestern region of the United States. The area of interest includes Arizona, California, Colorado, Nevada, New Mexico, Utah, and sections of Oklahoma and Texas. The project evaluated the potential integration of central station solar electric generating facilities into the existing electric grids of the region through the year 2000 by making use of system planning methodology which is commonly used throughout the electric utility industry. The technologies included: wind energy conversion, solar thermal electric, solar photovoltaic conversion, and hybrid (solar thermal repowering) solar electric systems. The participants in this project included 12 electric utility companies and a state power authority in the southwestern United States as well as a major consulting engineering firm. A broad synopsis of information found in Volumes II, III, and IV is presented. (MCW)

  2. Study of power management technology for orbital multi-100KWe applications. Volume 3: Requirements

    Science.gov (United States)

    Mildice, J. W.

    1980-01-01

    Mid-to-late 1980s power management technology needs, to support development of a general-purpose space platform capable of supplying 100 to 250 kWe to a variety of users in low Earth orbit, are examined. A typical shuttle-assembled and -supplied space platform is illustrated, along with a group of payloads which might reasonably be expected to use such a facility. Examination of platform and user power needs yields a set of power requirements used to evaluate power management options for life-cycle cost-effectiveness. The most cost-effective ac/dc and dc systems are evaluated, specifically to develop system details which lead to technology goals, including: array and transmission voltages, the best frequency for ac power transmission, and the advantages and disadvantages of ac and dc systems for this application. System and component requirements are compared with the state of the art to identify areas where technological development is required.

  3. Space station automation study: Automation requirements derived from space manufacturing concepts. Volume 1: Executive summary

    Science.gov (United States)

    1984-01-01

    The electroepitaxial process and the Very Large Scale Integration (VLSI) circuits (chips) facilities were chosen because each requires a very high degree of automation, and therefore involves extensive use of teleoperators, robotics, process mechanization, and artificial intelligence. Both cover a raw materials process and a sophisticated multi-step process and are therefore highly representative of the kinds of difficult operation, maintenance, and repair challenges which can be expected for any type of space manufacturing facility. Generic areas were identified which will require significant further study. The initial design will be based on terrestrial state-of-the-art hard automation. One hundred candidate missions were evaluated on the basis of automation potential and availability of meaningful knowledge. The design requirements and unconstrained design concepts developed for the two missions are presented.

  4. 40 CFR 799.5085 - Chemical testing requirements for certain high production volume chemicals.

    Science.gov (United States)

    2010-07-01

    ... (preferred species), rat, or Chinese hamster): 40 CFR 799.9538 OR Mammalian Erythrocyte Micronucleus Test (in... CHEMICAL SUBSTANCE AND MIXTURE TESTING REQUIREMENTS Multichemical Test Rules § 799.5085 Chemical testing... paragraph (j) of this section at any time from April 17, 2006 to the end of the test data...

  5. DDC 10 Year Requirements and Planning Study. Volume II. Technical Discussion, Bibliography, and Glossary

    Science.gov (United States)

    1976-06-12

    Technology of Information Processing (1978-1988); 2.2.3 Organizational Interface Between DDC and Other Information... Requirements and Planning Study: Expert Panel Review Report, December 31, 1975 (AUER-2325/2326-T-N5, AD-A022 303); TABLE 14, Evaluation of Technological

  6. Space station automation study. Automation requirements derived from space manufacturing concepts. Volume 1: Executive summary

    Science.gov (United States)

    1984-01-01

    The two manufacturing concepts developed represent innovative, technologically advanced manufacturing schemes. The concepts were selected to facilitate an in-depth analysis of manufacturing automation requirements in the form of process mechanization, teleoperation and robotics, and artificial intelligence. While the cost effectiveness of these facilities has not been analyzed as part of this study, both appear entirely feasible for the year 2000 timeframe. The growing demand for high-quality gallium arsenide microelectronics may warrant the ventures.

  7. Marksmanship Requirements From the Perspective of Combat Veterans - Volume I: Main Report

    Science.gov (United States)

    2016-02-01

    by Armor, Military Police, Quartermaster, Engineer and Field Artillery leaders. Three other skill areas, each accounting for 6% or less of the... Clusters of marksmanship skills were identified and linked to three groups of branches. Skills common to all branches were identified as well as... Skills identified reflected the leaders' combat experience. Training of some high-priority, common skills will require additional training time

  8. Probabilistic Requirements (Partial) Verification Methods Best Practices Improvement. Variables Acceptance Sampling Calculators: Empirical Testing. Volume 2

    Science.gov (United States)

    Johnson, Kenneth L.; White, K. Preston, Jr.

    2012-01-01

    The NASA Engineering and Safety Center was requested to improve on the Best Practices document produced for the NESC assessment, Verification of Probabilistic Requirements for the Constellation Program, by giving a recommended procedure for using acceptance sampling by variables techniques as an alternative to the potentially resource-intensive acceptance sampling by attributes method given in the document. In this paper, the results of empirical tests intended to assess the accuracy of acceptance sampling plan calculators implemented for six variable distributions are presented.

  9. Averaged Lemaître-Tolman-Bondi dynamics

    CERN Document Server

    Isidro, Eddy G Chirinos; Piattella, Oliver F; Zimdahl, Winfried

    2016-01-01

    We consider cosmological backreaction effects in Buchert's averaging formalism on the basis of an explicit solution of the Lemaître-Tolman-Bondi (LTB) dynamics which is linear in the LTB curvature parameter and has an inhomogeneous bang time. The volume Hubble rate is found in terms of the volume scale factor, which represents a derivation of the simplest phenomenological solution of Buchert's equations, in which the fractional densities corresponding to average curvature and kinematic backreaction are explicitly determined by the parameters of the underlying LTB solution at the boundary of the averaging volume. This configuration represents an exactly solvable toy model but it does not adequately describe our "real" Universe.
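
    For context, the "volume Hubble rate", "average curvature" and "kinematic backreaction" refer to the quantities in Buchert's averaged equations, quoted below in their standard form for irrotational dust (a textbook form, not an excerpt from this paper):

```latex
% Volume scale factor a_D \propto V_D^{1/3}; volume Hubble rate H_D = \dot a_D / a_D.
% Q_D is the kinematic backreaction, <R>_D the averaged spatial curvature:
\[
3\,\frac{\ddot a_{\mathcal D}}{a_{\mathcal D}}
  = -4\pi G\,\langle\rho\rangle_{\mathcal D} + \mathcal{Q}_{\mathcal D},
\qquad
3 H_{\mathcal D}^{2}
  = 8\pi G\,\langle\rho\rangle_{\mathcal D}
  - \tfrac12\,\langle\mathcal R\rangle_{\mathcal D}
  - \tfrac12\,\mathcal{Q}_{\mathcal D}.
\]
```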

  10. SOLERAS - Solar Controlled Environment Agriculture Project. Final report, Volume 5. Science Applications, Incorporated system requirements definition

    Energy Technology Data Exchange (ETDEWEB)

    1985-01-01

    This report sets forth the system requirements for a Solar Controlled-Environment Agriculture System (SCEAS) Project. In the report a conceptual baseline system description for an engineering test facility is given. This baseline system employs a fluid roof/roof filter in combination with a large storage tank and a ground water heat exchanger in order to provide cooling and heating as needed. Desalination is accomplished by pretreatment followed by reverse osmosis. Energy is provided by means of photovoltaics and wind machines in conjunction with storage batteries. Site and climatic data needed in the design process are given. System performance specifications and integrated system design criteria are set forth. Detailed subsystem design criteria are presented and appropriate references documented.

  11. Ocean thermal energy conversion (OTEC) platform configuration and integration. Final report. Volume I. Systems requirements and evaluation

    Energy Technology Data Exchange (ETDEWEB)

    None

    1978-06-01

    Studies leading to the development of two 400 MW Ocean Thermal Energy Conversion Commercial Plants are presented. This volume includes a summary of three tasks: task IIA--systems evaluation and requirements; task IIB--evaluation plan; task III--technology review; and task IV--systems integration evaluation. Task IIA includes the definition of top-level requirements and an assessment of factors critical to the selection of hull configuration and size, quantification of payload requirements and characteristics, and sensitivity of system characteristics to site selection. Task IIB includes development of a methodology for systematically evaluating the candidate hullforms, based on interrelationships and priorities developed during task IIA. Task III includes the assessment of current technology and identification of deficiencies in relation to OTEC requirements and the development of plans to correct such deficiencies. Task IV involves the formal evaluation of the six candidate hullforms in relation to site and plant capacity to quantify cost/size/capability relationships, leading to selection of an optimum commercial plant. (WHK)

  12. A procedure to average 3D anatomical structures.

    Science.gov (United States)

    Subramanya, K; Dean, D

    2000-12-01

    Creating a feature-preserving average of three-dimensional anatomical surfaces extracted from volume image data is a complex task. Unlike individual images, averages present right-left symmetry and smooth surfaces which give insight into typical proportions. Averaging multiple biological surface images requires careful superimposition and sampling of homologous regions. Our approach to biological surface image averaging grows out of a wireframe surface tessellation approach by Cutting et al. (1993). The surface-delineating wires represent high-curvature crestlines. By adding tile boundaries in flatter areas, the 3D image surface is parametrized into anatomically labeled (homology-mapped) grids. We extend the Cutting et al. wireframe approach by encoding the entire surface as a series of B-spline space curves. The crestline averaging algorithm developed by Cutting et al. may then be used for the entire surface. Shape-preserving averaging of multiple surfaces requires careful positioning of homologous surface regions such as these B-spline space curves. We test the precision of this new procedure and its ability to appropriately position groups of surfaces in order to produce a shape-preserving average. Our result provides an average that well represents the source images and may be useful clinically as a deformable model or for animation.
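
    The pointwise step at the heart of such a procedure can be sketched as follows, assuming the B-spline space curves have already been superimposed and resampled at homologous parameter values; the registration itself, which the abstract stresses, is the hard part and is not shown.

```python
import numpy as np

def average_homologous_curves(curves):
    """Average a set of superimposed space curves sampled at homologous
    parameter values. `curves` has shape (n_subjects, n_points, 3); because
    corresponding points are homologous, pointwise averaging preserves the
    shared shape features (e.g., crestlines)."""
    return np.mean(np.asarray(curves, dtype=float), axis=0)

# Usage: mean_curve = average_homologous_curves([curve_a, curve_b, curve_c])
```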

  13. Aggregation and Averaging.

    Science.gov (United States)

    Siegel, Irving H.

    The arithmetic processes of aggregation and averaging are basic to quantitative investigations of employment, unemployment, and related concepts. In explaining these concepts, this report stresses need for accuracy and consistency in measurements, and describes tools for analyzing alternative measures. (BH)

  14. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    1999-01-01

    In this article two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong to a non-linear manifold; it is shown that these common methods can be derived from natural approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation. Keywords: averaging rotations, Riemannian metric, matrix, quaternion...
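
    As an illustration of the gap between the naive barycenter and a proper rotation average, the sketch below projects the Euclidean mean of rotation matrices back onto SO(3); this "chordal mean" is one standard correction and is not necessarily the exact estimator analyzed in the article.

```python
import numpy as np

def chordal_mean_rotation(rotations):
    """Average 3x3 rotation matrices by taking their Euclidean barycenter
    (which generally leaves SO(3)) and projecting back via SVD. For tightly
    clustered rotations this closely approximates the Riemannian mean."""
    m = np.mean(np.asarray(rotations, dtype=float), axis=0)
    u, _, vt = np.linalg.svd(m)
    d = np.diag([1.0, 1.0, np.linalg.det(u @ vt)])  # enforce det = +1
    return u @ d @ vt
```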

  15. Your Average Nigga

    Science.gov (United States)

    Young, Vershawn Ashanti

    2004-01-01

    "Your Average Nigga" contends that just as exaggerating the differences between black and white language leaves some black speakers, especially those from the ghetto, at an impasse, so exaggerating and reifying the differences between the races leaves blacks in the impossible position of either having to try to be white or forever struggling to…

  16. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    2001-01-01

    In this paper two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations are used as an estimate of the mean. These methods neglect that rotations belong...

  17. 77 FR 28281 - Withdrawal of Revocation of TSCA Section 4 Testing Requirements for One High Production Volume...

    Science.gov (United States)

    2012-05-14

    ... issue of March 16, 2012 (77 FR 15609) (FRL-9335-6). If you have questions regarding the applicability of... One High Production Volume Chemical Substance AGENCY: Environmental Protection Agency (EPA). ACTION... production volume chemicals (HPV1). * * * * * (j) * * * Table 2--Chemical Substances and Testing...

  18. Cálculo do volume de sangue necessário para a correção da anemia fetal em gestantes isoimunizadas Blood volume calculation required for the correction of fetal anemia in pregnant women with alloimmunization

    Directory of Open Access Journals (Sweden)

    Mônica Deolindo Santiago

    2008-04-01

    OBJECTIVE: to obtain an equation capable of estimating the volume of packed red blood cells to be infused to correct anemia in fetuses of pregnant women with Rh-factor isoimmunization, based on parameters obtained during the cordocentesis preceding intrauterine transfusion. METHODS: a cross-sectional study analyzing 89 intrauterine transfusions performed to correct anemia in 48 fetuses followed up at the Centro de Medicina Fetal do Hospital das Clínicas da Universidade Federal de Minas Gerais. The median gestational age at cordocentesis was 29 weeks and the average number of procedures per fetus was 2.1. Fetal hemoglobin was assayed before and after cordocentesis, and the volume of packed red blood cells transfused was recorded. To determine a formula for estimating the blood volume required to correct fetal anemia, a multiple regression analysis was based on the volume needed to raise fetal hemoglobin by 1 g% (the difference between the final and initial hemoglobin concentrations divided by the transfused volume) and on the volume needed to reach 14 g%. RESULTS: the pre-transfusional hemoglobin concentration ranged from 2.3 to 15.7 g%. The prevalence of fetal anemia (Hb...
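
    The quantity the regression is built on can be restated in a few lines; the numbers in the example are hypothetical, and the study's fitted equation itself is not reproduced in this record.

```python
def volume_per_g(hb_pre_g, hb_post_g, transfused_ml):
    """Volume of packed red cells per 1 g% rise in fetal hemoglobin."""
    return transfused_ml / (hb_post_g - hb_pre_g)

def volume_to_target(hb_pre_g, ml_per_g, target_hb_g=14.0):
    """Volume needed to raise fetal hemoglobin from hb_pre_g to the 14 g% target."""
    return (target_hb_g - hb_pre_g) * ml_per_g

per_g = volume_per_g(hb_pre_g=6.0, hb_post_g=12.0, transfused_ml=60.0)  # 10 mL/g%
print(volume_to_target(6.0, per_g))  # 80.0 mL to reach 14 g%
```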

  19. Covariant approximation averaging

    CERN Document Server

    Shintani, Eigo; Blum, Thomas; Izubuchi, Taku; Jung, Chulwoo; Lehner, Christoph

    2014-01-01

    We present a new class of statistical error reduction techniques for Monte-Carlo simulations. Using covariant symmetries, we show that correlation functions can be constructed from inexpensive approximations without introducing any systematic bias in the final result. We introduce a new class of covariant approximation averaging techniques, known as all-mode averaging (AMA), in which the approximation takes account of contributions of all eigenmodes through the inverse of the Dirac operator computed from the conjugate gradient method with a relaxed stopping condition. In this paper we compare the performance and computational cost of our new method with traditional methods using correlation functions and masses of the pion, nucleon, and vector meson in $N_f=2+1$ lattice QCD using domain-wall fermions. This comparison indicates that AMA significantly reduces statistical errors in Monte-Carlo calculations over conventional methods for the same cost.
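
    Stripped of the lattice-QCD machinery, the estimator has a simple shape: average a cheap approximation over many source points and remove its bias with a few expensive exact solves. A toy sketch follows (the interface and names are mine, not the paper's; in AMA proper the unbiasedness comes from the approximation transforming covariantly under the lattice symmetries, for which the random subset below is only a stand-in):

        import numpy as np

        def ama_style_estimate(exact, approx, sources, n_exact=4, seed=0):
            # Cheap approximation averaged over every source point ...
            appx_avg = np.mean([approx(s) for s in sources])
            # ... plus a bias correction <exact - approx> measured on a
            # small subset of the same source points.
            rng = np.random.default_rng(seed)
            subset = rng.choice(len(sources), size=n_exact, replace=False)
            corr = np.mean([exact(sources[i]) - approx(sources[i]) for i in subset])
            return appx_avg + corr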

  20. Negative Average Preference Utilitarianism

    Directory of Open Access Journals (Sweden)

    Roger Chao

    2012-03-01

    For many philosophers working in the area of Population Ethics, it seems that either they have to confront the Repugnant Conclusion (where they are forced to the conclusion of creating massive amounts of lives barely worth living), or they have to confront the Non-Identity Problem (where no one is seemingly harmed, as their existence is dependent on the "harmful" event that took place). To them it seems there is no escape: they either have to face one problem or the other. However, there is a way around this, allowing us to escape the Repugnant Conclusion, by using what I will call Negative Average Preference Utilitarianism (NAPU), which, though similar to anti-frustrationism, has some important differences in practice. Current "positive" forms of utilitarianism have struggled to deal with the Repugnant Conclusion, as their theory actually entails this conclusion; however, a form of Negative Average Preference Utilitarianism (NAPU) easily escapes this dilemma (it never even arises within it).

  1. Dynamic Multiscale Averaging (DMA) of Turbulent Flow

    Energy Technology Data Exchange (ETDEWEB)

    Richard W. Johnson

    2012-09-01

    A new approach called dynamic multiscale averaging (DMA) for computing the effects of turbulent flow is described. The new method encompasses multiple applications of temporal and spatial averaging, that is, multiscale operations. Initially, a direct numerical simulation (DNS) is performed for a relatively short time; it is envisioned that this short time should be long enough to capture several fluctuating time periods of the smallest scales. The flow field variables are subject to running time averaging during the DNS. After the relatively short time, the time-averaged variables are volume averaged onto a coarser grid. Both time and volume averaging of the describing equations generate correlations in the averaged equations. These correlations are computed from the flow field and added as source terms to the computation on the next coarser mesh. They represent coupling between the two adjacent scales. Since they are computed directly from first principles, there is no modeling involved. However, there is approximation involved in the coupling correlations as the flow field has been computed for only a relatively short time. After the time and spatial averaging operations are applied at a given stage, new computations are performed on the next coarser mesh using a larger time step. The process continues until the coarsest scale needed is reached. New correlations are created for each averaging procedure. The number of averaging operations needed is expected to be problem dependent. The new DMA approach is applied to a relatively low Reynolds number flow in a square duct segment. Time-averaged stream-wise velocity and vorticity contours from the DMA approach appear to be very similar to a full DNS for a similar flow reported in the literature. Expected symmetry for the final results is produced for the DMA method. The results obtained indicate that DMA holds significant potential in being able to accurately compute turbulent flow without modeling for practical applications.
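
    As a reading aid, the cascade described above can be condensed into a short skeleton (purely illustrative: the one-dimensional fields and the solver interface are assumptions, whereas the actual method operates on three-dimensional Navier-Stokes fields):

        import numpy as np

        def volume_average(field, factor):
            # Block-average a 1-D field onto a mesh coarsened by `factor`.
            return field.reshape(-1, factor).mean(axis=1)

        def dma_cascade(u0, level_solvers, factor=2):
            # Finest level first. Each solver runs a short simulation with
            # a running time average and returns (averaged field,
            # correlation term) on its own mesh; the correlation feeds the
            # next coarser level as a source term, coupling adjacent scales.
            u, source = u0, np.zeros_like(u0)
            for solve in level_solvers:
                u_avg, corr = solve(u, source)
                u = volume_average(u_avg, factor)
                source = volume_average(corr, factor)
            return u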

  2. Regulatory volume increase in astrocytes exposed to hypertonic medium requires β1 -adrenergic Na(+) /K(+) -ATPase stimulation and glycogenolysis.

    Science.gov (United States)

    Song, Dan; Xu, Junnan; Hertz, Leif; Peng, Liang

    2015-01-01

    The cotransporter of Na(+), K(+), 2Cl(-), and water, NKCC1, is activated under two conditions in the brain: exposure to highly elevated extracellular K(+) concentrations, causing astrocytic swelling, and regulatory volume increase in cells shrunk in response to exposure to hypertonic medium. NKCC1-mediated transport occurs as secondary active transport driven by Na(+)/K(+)-ATPase activity, which establishes a favorable ratio for NKCC1 operation between the extracellular and intracellular products of the concentrations of Na(+), K(+), and Cl(-) × Cl(-). In the adult brain, astrocytes are the main target for NKCC1 stimulation, and their Na(+)/K(+)-ATPase activity is stimulated by elevated K(+) or the β-adrenergic agonist isoproterenol. Extracellular K(+) concentration is normal during regulatory volume increase, so this study investigated whether the volume increase occurred faster in the presence of isoproterenol. Measurement of cell volume by live-cell microscopic imaging of calcein fluorescence intensity showed that this was the case at isoproterenol concentrations of ≥1 µM in well-differentiated mouse astrocyte cultures incubated in isotonic medium with 100 mM sucrose added. This stimulation was abolished by the β1-adrenergic antagonist betaxolol, but not by ICI118551, a β2-adrenergic antagonist. A large part of the β1-adrenergic signaling pathway in astrocytes is known. Inhibitors of this pathway as well as the glycogenolysis inhibitor 1,4-dideoxy-1,4-imino-D-arabinitol hydrochloride and the NKCC1 inhibitors bumetanide and furosemide abolished stimulation by isoproterenol, and it was weakened by the Na(+)/K(+)-ATPase inhibitor ouabain. These observations are of physiological relevance because extracellular hypertonicity occurs during intense neuronal activity. This might trigger a regulatory volume increase, associated with the post-excitatory undershoot.

  3. Project Columbiad: Mission to the Moon. Book 1: Executive Summary. Volume 1: Mission trade studies and requirements. Volume 2: Subsystem trade studies and selection

    Science.gov (United States)

    Clarke, Michael; Denecke, Johan; Garber, Suzanne; Kader, Beth; Liu, Celia; Weintraub, Ben; Cazeau, Patrick; Goetz, John; Haughwout, James; Larson, Erik

    1992-01-01

    In response to the Report of the Advisory Committee on the future of the U.S. Space Program and a request from NASA's Exploration Office, the MIT Hunsaker Aerospace Corporation (HAC) conducted a feasibility study, known as Project Columbiad, on reestablishing human presence on the Moon before the year 2000. The mission criteria established were to transport a four person crew to the lunar surface at any latitude and back to Earth with a 14-28 day stay on the lunar surface. Safety followed by cost of the Columbiad Mission were the top level priorities of HAC. The resulting design has a precursor mission that emplaces the required surface payloads before the piloted mission arrives. Both the precursor and piloted missions require two National Launch System (NLS) launches. Both the precursor and piloted mission have an Earth orbit rendezvous (EOR) with a direct transit to the Moon post-EOR. The piloted mission returns to Earth via a direct transit. Included among the surface payloads preemplaced are a habitat, solar power plant (including fuel cells for the lunar night), lunar rover, and mechanisms used to cover the habitat with regolith (lunar soil) in order to protect the crew members from severe solar flare radiation.

  5. Spreading of oil and the concept of average oil thickness

    Energy Technology Data Exchange (ETDEWEB)

    Goodman, R. [Innovative Ventures Ltd., Cochrane, AB (Canada); Quintero-Marmol, A.M. [Pemex E and P, Campeche (Mexico); Bannerman, K. [Radarsat International, Vancouver, BC (Canada); Stevenson, G. [Calgary Univ., AB (Canada)

    2004-07-01

    The area of an oil slick on water can be readily measured using simple techniques ranging from visual observations to satellite-based radar systems. However, the volume of spilled oil must be known in order to determine the environmental impacts and the best response strategy, as well as the spill quantity, response effectiveness and weathering rates. The relationship between volume and area is the average thickness of the oil over the spill area. This paper presents the results of several experiments conducted in the Gulf of Mexico that determined whether the average thickness of the oil is a characteristic of a specific crude oil, independent of spill size. Calculating the amount of oil on water from the area of the slick requires information on the oil thickness, the inhomogeneity of the oil thickness, and the oil-to-water ratio in the slick if it is emulsified. Experimental data revealed that an oil slick stops spreading very quickly after the application of oil. After the equilibrium thickness has been established, the slick is very sensitive to disturbances on the water surface, such as wave action, which causes the oil circle to dissipate into several small irregular shapes. It was noted that the spill source and oceanographic conditions are both critical to the final shape of the spill. 31 refs., 2 tabs., 8 figs.
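
    The volume-area relationship the experiments probe is a single multiplication, with a correction for entrained water when the slick is emulsified (a sketch; the numbers in the example are illustrative, not from the experiments):

        def spill_volume(area_m2, avg_thickness_m, water_fraction=0.0):
            # Oil volume = slick area x average thickness, discounting any
            # water entrained in an emulsified slick.
            return area_m2 * avg_thickness_m * (1.0 - water_fraction)

        # A 2 km^2 slick with a 10-micron average thickness, emulsified
        # at 60% water, still contains about 8 m^3 of oil:
        print(spill_volume(2.0e6, 10.0e-6, water_fraction=0.6))  # ~8.0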

  6. Averaging Einstein's equations : The linearized case

    NARCIS (Netherlands)

    Stoeger, William R.; Helmi, Amina; Torres, Diego F.

    2007-01-01

    We introduce a simple and straightforward averaging procedure, which is a generalization of one which is commonly used in electrodynamics, and show that it possesses all the characteristics we require for linearized averaging in general relativity and cosmology for weak-field and perturbed FLRW situations.

  8. Prediction of Long-term Post-operative Testosterone Replacement Requirement Based on the Pre-operative Tumor Volume and Testosterone Level in Pituitary Macroadenoma.

    Science.gov (United States)

    Lee, Cheng-Chi; Chen, Chung-Ming; Lee, Shih-Tseng; Wei, Kuo-Chen; Pai, Ping-Ching; Toh, Cheng-Hong; Chuang, Chi-Cheng

    2015-11-05

    Non-functioning pituitary macroadenomas (NFPAs) are the most prevalent pituitary macroadenomas. One common symptom of NFPA is hypogonadism, which may require long-term hormone replacement. This study was designed to clarify the association between the pre-operative tumor volume, pre-operative testosterone level, intraoperative resection status and the need for long-term post-operative testosterone replacement. Between 2004 and 2012, 45 male patients with NFPAs were enrolled in this prospective study. All patients underwent transsphenoidal surgery. Hypogonadism was defined as a total serum testosterone level below the reference threshold; testosterone was given to patients with defined hypogonadism or clinical symptoms of hypogonadism. Hormone replacement for longer than 1 year was considered long-term therapy. The need for long-term post-operative testosterone replacement was significantly associated with larger pre-operative tumor volume (p = 0.0067) and lower pre-operative testosterone level (p = 0.0101). There was no significant difference between the gross total tumor resection and subtotal resection groups (p = 0.1059). The pre-operative tumor volume and testosterone level thus affect post-operative hypogonadism. By measuring the tumor volume and the testosterone level and by performing adequate tumor resection, surgeons will be able to predict post-operative hypogonadism and the need for long-term hormone replacement.

  9. C-Band Airport Surface Communications System Standards Development. Phase II Final Report. Volume 1: Concepts of Use, Initial System Requirements, Architecture, and AeroMACS Design Considerations

    Science.gov (United States)

    Hall, Edward; Isaacs, James; Henriksen, Steve; Zelkin, Natalie

    2011-01-01

    This report is provided as part of ITT's NASA Glenn Research Center Aerospace Communication Systems Technical Support (ACSTS) contract NNC05CA85C, Task 7: New ATM Requirements-Future Communications, C-Band and L-Band Communications Standard Development, and was based on direction provided by FAA project-level agreements for New ATM Requirements-Future Communications. Task 7 included two subtasks. Subtask 7-1 addressed C-band (5091- to 5150-MHz) airport surface data communications standards development, systems engineering, test bed and prototype development, and tests and demonstrations to establish operational capability for the Aeronautical Mobile Airport Communications System (AeroMACS). Subtask 7-2 focused on systems engineering and development support of the L-band digital aeronautical communications system (L-DACS). Subtask 7-1 consisted of two phases. Phase I included development of AeroMACS concepts of use, requirements, architecture, and initial high-level safety risk assessment. Phase II builds on Phase I results and is presented in two volumes. Volume I (this document) is devoted to concepts of use, system requirements, and architecture, including AeroMACS design considerations. Volume II describes an AeroMACS prototype evaluation and presents final AeroMACS recommendations. This report also describes airport categorization and channelization methodologies. The purposes of the airport categorization task were (1) to facilitate initial AeroMACS architecture designs and enable budgetary projections by creating a set of airport categories based on common airport characteristics and design objectives, and (2) to offer high-level guidance to potential AeroMACS technology and policy development sponsors and service providers. A channelization plan methodology was developed because a common global methodology is needed to assure seamless interoperability among diverse AeroMACS services potentially supplied by multiple service providers.

  10. Geosynchronous platform definition study. Volume 4, Part 2: Traffic analysis and system requirements for the new traffic model

    Science.gov (United States)

    1973-01-01

    A condensed summary of the traffic analyses and systems requirements for the new traffic model is presented. The results of each study activity are explained, key analyses are described, and important results are highlighted.

  11. Monitoring and control requirement definition study for dispersed storage and generation (DSG). Volume II. Final report, Appendix A: selected DSG technologies and their general control requirements

    Energy Technology Data Exchange (ETDEWEB)

    1980-10-01

    A major aim of the US National Energy Policy, as well as that of the New York State Energy Research and Development Authority, is to conserve energy and to shift from oil to more abundant domestic fuels and renewable energy sources. Dispersed Storage and Generation (DSG) is the term that characterizes the present and future dispersed, relatively small (<30 MW) energy systems, such as solar thermal electric, photovoltaic, wind, fuel cell, storage battery, hydro, and cogeneration, which can help achieve these national energy goals and can be dispersed throughout the distribution portion of an electric utility system. The purpose of this survey and identification of DSG technologies is to present an understanding of the special characteristics of each of these technologies in sufficient detail so that the physical principles of their operation and the internal control of each technology are evident. In this way, a better appreciation can be obtained of the monitoring and control requirements for these DSGs from a remote distribution dispatch center. A consistent approach is being sought for both hardware and software which will handle the monitoring and control necessary to integrate a number of different DSG technologies into a common distribution dispatch network. From this study it appears that the control of each of the DSG technologies is compatible with a supervisory control method of operation that lends itself to remote control from a distribution dispatch center.

  12. 77 FR 28340 - Revocation of TSCA Section 4 Testing Requirements for One High Production Volume Chemical Substance

    Science.gov (United States)

    2012-05-14

    ... acute toxicity, bacterial reverse mutation, and chromosomal damage for C.I. Pigment Blue 61 by removing...]phenyl]amino]- (CAS No. 1324-76-1), also known as C.I. Pigment Blue 61. EPA is basing its decision to... amendment revokes some of the testing requirements for C.I. Pigment Blue 61. EPA is basing its decision...

  13. 78 FR 27860 - Revocation of TSCA Section 4 Testing Requirements for One High Production Volume Chemical Substance

    Science.gov (United States)

    2013-05-13

    ... toxicity, mammalian acute toxicity, bacterial reverse mutation, and chromosomal damage for C.I. Pigment...), also known as C.I. Pigment Blue 61. After publication in the Federal Register of a final rule requiring testing for C.I. Pigment Blue 61, EPA received adequate, existing studies which eliminated the need...

  14. Development of flight experiment task requirements. Volume 2: Technical Report. Part 2: Appendix H: Tasks-skills data series

    Science.gov (United States)

    Hatterick, G. R.

    1972-01-01

    The data sheets presented contain the results of the task analysis portion of the study to identify skill requirements of space shuttle crew personnel. A comprehensive data base is provided of crew functions, operating environments, task dependencies, and task-skills applicable to a representative cross section of earth orbital research experiments.

  15. Geosynchronous platform definition study. Volume 4, Part 1: Traffic analysis and system requirements for the baseline traffic model

    Science.gov (United States)

    1973-01-01

    The traffic analyses and system requirements data generated in the study resulted in the development of two traffic models; the baseline traffic model and the new traffic model. The baseline traffic model provides traceability between the numbers and types of geosynchronous missions considered in the study and the entire spectrum of missions foreseen in the total national space program. The information presented pertaining to the baseline traffic model includes: (1) definition of the baseline traffic model, including identification of specific geosynchronous missions and their payload delivery schedules through 1990; (2) Satellite location criteria, including the resulting distribution of the satellite population; (3) Geosynchronous orbit saturation analyses, including the effects of satellite physical proximity and potential electromagnetic interference; and (4) Platform system requirements analyses, including satellite and mission equipment descriptions, the options and limitations in grouping satellites, and on-orbit servicing criteria (both remotely controlled and man-attended).

  16. Probabilistic Requirements (Partial) Verification Methods Best Practices Improvement. Variables Acceptance Sampling Calculators: Derivations and Verification of Plans. Volume 1

    Science.gov (United States)

    Johnson, Kenneth L.; White, K, Preston, Jr.

    2012-01-01

    The NASA Engineering and Safety Center was requested to improve on the Best Practices document produced for the NESC assessment, Verification of Probabilistic Requirements for the Constellation Program, by giving a recommended procedure for using acceptance sampling by variables techniques. This recommended procedure would be used as an alternative to the potentially resource-intensive acceptance sampling by attributes method given in the document. This document contains the outcome of the assessment.
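
    The "variables" idea itself fits in a few lines: the lot is judged on the sample mean and standard deviation against a specification limit via a k-factor, rather than by counting defectives. A minimal single-sided sketch (in the NESC calculators k is derived from the chosen producer/consumer risk points; here it is simply an input):

        import numpy as np

        def accept_lot(sample, upper_spec, k):
            # Variables acceptance sampling, single-sided: accept when
            # xbar + k*s falls at or below the upper specification limit.
            xbar = np.mean(sample)
            s = np.std(sample, ddof=1)
            return xbar + k * s <= upper_spec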

  17. Predictors and outcomes of lead extraction requiring a bailout femoral approach: Data from 2 high-volume centers.

    Science.gov (United States)

    El-Chami, Mikhael F; Merchant, Faisal M; Waheed, Anam; Khattak, Furqan; El-Khalil, Jad; Patel, Adarsh; Sayegh, Michael N; Desai, Yaanik; Leon, Angel R; Saba, Samir

    2017-04-01

    Lead extraction (LE) infrequently requires the use of the "bailout" femoral approach. Predictors and outcomes of femoral extraction are not well characterized. The aim of this study was to determine the predictors of need for femoral LE and its outcomes. Consecutive patients who underwent LE at our centers were identified. Baseline demographic characteristics, procedural outcomes, and clinical outcomes were ascertained by medical record review. Patients were stratified into 2 groups on the basis of the need for femoral extraction. A total of 1080 patients underwent LE, of whom 50 (4.63%) required femoral extraction. Patients requiring femoral extraction were more likely to have leads with longer dwell time (9.5 ± 6.0 years vs 5.7 ± 4.3 years; P < …), more leads extracted per procedure (2.0 ± 1.0 vs 1.7 ± 0.9; P = .003), and infection as an indication for extraction (72% vs 37.2%; P < …). Procedural and clinical success rates were lower in the femoral extraction group than in the nonfemoral group (58% and 76% vs 94.7% and 97.9%, respectively; P < …). Bailout femoral extraction was needed in ~5% of LEs. Longer lead dwell time, higher number of leads extracted per procedure, and the presence of infection predicted the need for femoral extraction. Procedural success of femoral extraction was low, highlighting the fact that this approach is mostly used as a bailout strategy and thus selects for more challenging cases. Copyright © 2017 Heart Rhythm Society. Published by Elsevier Inc. All rights reserved.

  18. The effects of ultrasound guidance and neurostimulation on the minimum effective anesthetic volume of mepivacaine 1.5% required to block the sciatic nerve using the subgluteal approach.

    Science.gov (United States)

    Danelli, Giorgio; Ghisi, Daniela; Fanelli, Andrea; Ortu, Andrea; Moschini, Elisa; Berti, Marco; Ziegler, Stefanie; Fanelli, Guido

    2009-11-01

    We tested the hypothesis that ultrasound (US) guidance may reduce the minimum effective anesthetic volume (MEAV(50)) of 1.5% mepivacaine required to block the sciatic nerve with a subgluteal approach compared with neurostimulation (NS). After premedication and single-injection femoral nerve block, 60 patients undergoing knee arthroscopy were randomly allocated to receive a sciatic nerve block with either NS (n = 30) or US (n = 30). In the US group, the sciatic nerve was localized between the ischial tuberosity and the greater trochanter. In the NS group, the appropriate muscular response (foot plantar flexion or inversion) was elicited (1.5 mA, 2 Hz, 0.1 ms) and maintained below a threshold current (< …). US guidance reduced the MEAV(50) of 1.5% mepivacaine required to block the sciatic nerve compared with NS.

  19. Physical Theories with Average Symmetry

    OpenAIRE

    Alamino, Roberto C.

    2013-01-01

    This Letter probes the existence of physical laws invariant only in average when subjected to some transformation. The concept of a symmetry transformation is broadened to include corruption by random noise and average symmetry is introduced by considering functions which are invariant only in average under these transformations. It is then shown that actions with average symmetry obey a modified version of Noether's Theorem with dissipative currents. The relation of this with possible violations of physical symmetries, as for instance Lorentz invariance in some quantum gravity theories, is briefly commented.

  20. Average Convexity in Communication Situations

    NARCIS (Netherlands)

    Slikker, M.

    1998-01-01

    In this paper we study inheritance properties of average convexity in communication situations. We show that the underlying graph ensures that the graph-restricted game originating from an average convex game is average convex if and only if every subgraph associated with a component of the underlying...

  1. Rotational averaging of multiphoton absorption cross sections

    Energy Technology Data Exchange (ETDEWEB)

    Friese, Daniel H., E-mail: daniel.h.friese@uit.no; Beerepoot, Maarten T. P.; Ruud, Kenneth [Centre for Theoretical and Computational Chemistry, University of Tromsø — The Arctic University of Norway, N-9037 Tromsø (Norway)

    2014-11-28

    Rotational averaging of tensors is a crucial step in the calculation of molecular properties in isotropic media. We present a scheme for the rotational averaging of multiphoton absorption cross sections. We extend existing literature on rotational averaging to even-rank tensors of arbitrary order and derive equations that require only the number of photons as input. In particular, we derive the first explicit expressions for the rotational average of five-, six-, and seven-photon absorption cross sections. This work is one of the required steps in making the calculation of these higher-order absorption properties possible. The results can be applied to any even-rank tensor provided linearly polarized light is used.

  3. Goal-Directed Fluid Therapy Using Stroke Volume Variation Does Not Result in Pulmonary Fluid Overload in Thoracic Surgery Requiring One-Lung Ventilation

    Directory of Open Access Journals (Sweden)

    Sebastian Haas

    2012-01-01

    Background. Goal-directed fluid therapy (GDT) guided by functional parameters of preload, such as stroke volume variation (SVV), seems to optimize hemodynamics and possibly improves clinical outcome. However, this strategy is believed to be rather fluid aggressive, and, furthermore, during surgery requiring thoracotomy, the ability of SVV to predict volume responsiveness has raised some controversy. So far it is not known whether GDT is associated with pulmonary fluid overload and a deleterious reduction in pulmonary function in thoracic surgery requiring one-lung ventilation (OLV). Therefore, we assessed the perioperative course of the extravascular lung water index (EVLWI) and the paO2/FiO2 ratio during and after thoracic surgery requiring lateral thoracotomy and OLV to evaluate the hypothesis that fluid therapy guided by SVV results in pulmonary fluid overload. Methods. A total of 27 patients (group T) were enrolled in this prospective study, with 11 patients undergoing lung surgery (group L) and 16 patients undergoing esophagectomy (group E). Goal-directed fluid management was guided by SVV (target SVV < …). Results. There was no significant change (P > 0.05) in EVLWI during the observation period (BL: 7.8 ± 2.5; 24postop: 8.1 ± 2.4 mL/kg). A subgroup analysis for group L and group E also did not reveal significant changes of EVLWI. The paO2/FiO2 ratio decreased significantly during the observation period (group L: BL: 462 ± 140, OLVterm15: 338 ± 112 mmHg; group E: BL: 389 ± 101, 24postop: 303 ± 74 mmHg) but remained >300 mmHg except during OLV. Conclusions. SVV-guided fluid management in thoracic surgery requiring lateral thoracotomy and one-lung ventilation does not result in pulmonary fluid overload. Although oxygenation was reduced, pulmonary function remained within a clinically acceptable range.

  4. Wartime Requirements for Ammunition, Materiel and Personnel (WARRAMP). Volume III. Ammunition Post Processor User’s Manual (APP-UM).

    Science.gov (United States)

    1981-12-01

    preparation of the Distribution Requirement Report and the 3-Day Report; these are produced and copied into the output file REPORTI by the program... each day in the study, the number of samples or postures that are being played, and the AMMO (or DATA) expenditure file. The main output file REPORTI is... is to be used instead of the Program File name for the remainder of the processing; o The old output file REPORTI (56REPORT) is deleted; o The input...

  5. Sampling Based Average Classifier Fusion

    Directory of Open Access Journals (Sweden)

    Jian Hou

    2014-01-01

    While many classifier fusion algorithms have been proposed in the literature, average fusion is almost always selected as the baseline for comparison. Little is done on exploring the potential of average fusion and proposing a better baseline. In this paper we empirically investigate the behavior of soft labels and classifiers in average fusion. As a result, we find that, by proper sampling of soft labels and classifiers, the average fusion performance can be evidently improved. This result presents sampling-based average fusion as a better baseline; that is, a newly proposed classifier fusion algorithm should at least perform better than this baseline in order to demonstrate its effectiveness.
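
    Average fusion, and the sampled variant the abstract describes, can be sketched as follows (the uniform sampling here is a stand-in for the paper's scheme):

        import numpy as np

        def average_fusion(prob_stack):
            # prob_stack: (n_classifiers, n_samples, n_classes) soft labels.
            # Baseline: average the probabilities, then take the arg max.
            return np.mean(prob_stack, axis=0).argmax(axis=1)

        def sampled_average_fusion(prob_stack, keep, seed=0):
            # Average over a sampled subset of classifiers instead of all.
            rng = np.random.default_rng(seed)
            idx = rng.choice(len(prob_stack), size=keep, replace=False)
            return average_fusion(prob_stack[idx])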

  6. Averaging hydraulic head, pressure head, and gravitational head in subsurface hydrology, and implications for averaged fluxes, and hydraulic conductivity

    Directory of Open Access Journals (Sweden)

    G. H. de Rooij

    2009-07-01

    Current theories for water flow in porous media are valid for scales much smaller than those at which problems of public interest manifest themselves. This provides a drive for upscaled flow equations with their associated upscaled parameters. Upscaling is often achieved through volume averaging, but the solution to the resulting closure problem imposes severe restrictions on the flow conditions, which limits the practical applicability. Here, the derivation of a closed expression for the effective hydraulic conductivity is forfeited to circumvent the closure problem. Thus, more limited but practical results can be derived. At the Representative Elementary Volume scale and larger scales, the gravitational potential and fluid pressure are treated as additive potentials. The necessary requirement that the superposition be maintained across scales is combined with conservation of energy during volume integration to establish consistent upscaling equations for the various heads. The power of these upscaling equations is demonstrated by the derivation of upscaled water content-matric head relationships and the resolution of an apparent paradox reported in the literature that is shown to have arisen from a violation of the superposition principle. Applying the upscaling procedure to Darcy's Law leads to the general definition of an upscaled hydraulic conductivity. By examining this definition in detail for porous media with different degrees of heterogeneity, a series of criteria is derived that must be satisfied for Darcy's Law to remain valid at a larger scale.
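
    The superposition requirement can be written compactly. At a point, the hydraulic head is the sum of pressure and gravitational heads, and one upscaling consistent with conservation of energy during volume integration is a water-content-weighted volume average (a sketch; the weighting is stated here as an assumption, not as the paper's exact choice):

        h = \frac{p}{\rho g} + z,
        \qquad
        \langle h \rangle = \frac{\int_V \theta \, h \, \mathrm{d}V}{\int_V \theta \, \mathrm{d}V},

    where \theta is the volumetric water content; applying the same weighting to \psi = p/(\rho g) and to z preserves \langle h \rangle = \langle \psi \rangle + \langle z \rangle across scales.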

  7. SOURCE TERMS FOR AVERAGE DOE SNF CANISTERS

    Energy Technology Data Exchange (ETDEWEB)

    K. L. Goluoglu

    2000-06-09

    The objective of this calculation is to generate source terms for each type of Department of Energy (DOE) spent nuclear fuel (SNF) canister that may be disposed of at the potential repository at Yucca Mountain. The scope of this calculation is limited to generating source terms for average DOE SNF canisters, and is not intended to be used for subsequent calculations requiring bounding source terms. This calculation is to be used in future Performance Assessment calculations, or other shielding or thermal calculations requiring average source terms.

  8. Cargo Logistics Airlift Systems Study (CLASS). Volume 4: Future requirements of dedicated freighter aircraft to year 2008

    Science.gov (United States)

    Burby, R. J.

    1979-01-01

    The 1978 fleet operations are extended to the year 1992, thus providing an evaluation of current aircraft types in meeting the ensuing increased market demand. Possible changes in the fleet mix and the resulting economic situation are defined in terms of the number of units of each type aircraft and the resulting growth in operational frequency. Among the economic parameters considered are the associated investment required by the airline, the return on investment to the airline, and the accompanying levels of cash flow and operating income. Against this background the potential for a derivative aircraft to enter fleet operations in 1985 is defined as a function of payload size and as affected by 1980 technology. In a similar manner, the size and potential for a new dedicated 1990 technology, freighter aircraft to become operational in 1995 is established. The resulting aircraft and fleet operational and economic characteristics are evaluated over the period 1994 to 2008. The impacts of restricted growth in operational frequency, reduced market demand, variations in aircraft configurations, and military participation, are assessed.

  10. Quantized average consensus with delay

    NARCIS (Netherlands)

    Jafarian, Matin; De Persis, Claudio

    2012-01-01

    The average consensus problem is a special case of cooperative control in which the agents of the network asymptotically converge to the average state (i.e., position) of the network by transferring information via a communication topology. One of the issues in large-scale networks is the cost of communication...
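
    The underlying delay-free, unquantized averaging iteration is simple; the paper's contribution concerns what survives once quantization and delays are added. A baseline sketch:

        import numpy as np

        def consensus_step(x, neighbors, eps):
            # One synchronous iteration: each agent moves toward its
            # neighbors; for 0 < eps < 1/max_degree on a connected graph,
            # all states converge to the average of the initial states.
            return np.array([
                xi + eps * sum(x[j] - xi for j in neighbors[i])
                for i, xi in enumerate(x)
            ])

        # Path graph 0-1-2: states converge to mean(1, 5, 9) = 5.
        x = np.array([1.0, 5.0, 9.0])
        nbrs = {0: [1], 1: [0, 2], 2: [1]}
        for _ in range(200):
            x = consensus_step(x, nbrs, eps=0.3)
        print(x)  # approximately [5. 5. 5.]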

  11. Materials for high average power lasers

    Energy Technology Data Exchange (ETDEWEB)

    Marion, J.E.; Pertica, A.J.

    1989-01-01

    Unique materials properties requirements for solid state high average power (HAP) lasers dictate a materials development research program. A review of the desirable laser, optical and thermo-mechanical properties for HAP lasers precedes an assessment of the development status for crystalline and glass hosts optimized for HAP lasers. 24 refs., 7 figs., 1 tab.

  12. Gaussian moving averages and semimartingales

    DEFF Research Database (Denmark)

    Basse-O'Connor, Andreas

    2008-01-01

    In the present paper we study moving averages (also known as stochastic convolutions) driven by a Wiener process and with a deterministic kernel. Necessary and sufficient conditions on the kernel are provided for the moving average to be a semimartingale in its natural filtration. Our results are constructive, meaning that they provide a simple method to obtain kernels for which the moving average is a semimartingale or a Wiener process. Several examples are considered. In the last part of the paper we study general Gaussian processes with stationary increments. We provide necessary and sufficient...

  13. Grassmann Averages for Scalable Robust PCA

    DEFF Research Database (Denmark)

    Hauberg, Søren; Feragen, Aasa; Black, Michael J.

    2014-01-01

    As the collection of large datasets becomes increasingly automated, the occurrence of outliers will increase—“big data” implies “big outliers”. While principal component analysis (PCA) is often used to reduce the size of data, and scalable solutions exist, it is well-known that outliers can arbitrarily corrupt the results … robustness can be with respect to vectors (subspaces) or elements of vectors; we focus on the latter and use a trimmed average. The resulting Trimmed Grassmann Average (TGA) is particularly appropriate for computer vision because it is robust to pixel outliers. The algorithm has low computational complexity and minimal memory requirements...
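
    In miniature, the element-wise trimmed average can be placed inside a sign-alignment iteration to obtain one robust direction (a drastic simplification of the paper's subspace algorithm, written only to show where the trimming enters):

        import numpy as np
        from scipy.stats import trim_mean

        def trimmed_average_direction(X, cut=0.1, iters=20):
            # X: (n_samples, n_features). Alternate between aligning each
            # sample's sign with the current direction and taking an
            # element-wise trimmed mean, which suppresses pixel outliers.
            q = X[0] / np.linalg.norm(X[0])
            for _ in range(iters):
                signs = np.where(X @ q >= 0.0, 1.0, -1.0)
                q = trim_mean(signs[:, None] * X, cut, axis=0)
                q /= np.linalg.norm(q)
            return q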

  14. Programmatic implications of implementing the relational algebraic capacitated location (RACL) algorithm outcomes on the allocation of laboratory sites, test volumes, platform distribution and space requirements.

    Science.gov (United States)

    Cassim, Naseem; Smith, Honora; Coetzee, Lindi M; Glencross, Deborah K

    2017-01-01

    CD4 testing in South Africa is based on an integrated tiered service delivery model that matches testing demand with capacity. The National Health Laboratory Service has predominantly implemented laboratory-based CD4 testing. Coverage gaps, over-/under-capacitation and optimal placement of point-of-care (POC) testing sites need investigation. We assessed the impact of relational algebraic capacitated location (RACL) algorithm outcomes on the allocation of laboratory and POC testing sites. The RACL algorithm was developed to allocate laboratories and POC sites to ensure coverage using a set coverage approach for a defined travel time (T). The algorithm was repeated for three scenarios (A: T = 4; B: T = 3; C: T = 2 hours). Drive times for a representative sample of health facility clusters were used to approximate T. Outcomes included allocation of testing sites, Euclidian distances and test volumes. Additional analysis included platform distribution and space requirement assessment. Scenarios were reported as fusion table maps. Scenario A would offer a fully-centralised approach with 15 CD4 laboratories without any POC testing. A significant increase in volumes would result in a four-fold increase at busier laboratories. CD4 laboratories would increase to 41 in scenario B and 61 in scenario C. POC testing would be offered at two sites in scenario B and 20 sites in scenario C. The RACL algorithm provides an objective methodology to address coverage gaps through the allocation of CD4 laboratories and POC sites for a given T. The algorithm outcomes need to be assessed in the context of local conditions.
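
    The set-coverage core of the allocation can be illustrated with a greedy sketch (illustrative only: RACL is a capacitated optimisation, and this toy ignores capacity, test volumes and platforms):

        def greedy_cover(clusters, reachable):
            # `reachable` maps a candidate site to the set of facility
            # clusters within the travel time T. Open sites until every
            # cluster is covered or no site adds coverage.
            uncovered, opened = set(clusters), []
            while uncovered:
                site = max(reachable, key=lambda s: len(reachable[s] & uncovered))
                gain = reachable[site] & uncovered
                if not gain:
                    break  # remaining clusters are unreachable within T
                opened.append(site)
                uncovered -= gain
            return opened, uncovered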

  16. Vocal attractiveness increases by averaging.

    Science.gov (United States)

    Bruckert, Laetitia; Bestelmeyer, Patricia; Latinus, Marianne; Rouger, Julien; Charest, Ian; Rousselet, Guillaume A; Kawahara, Hideki; Belin, Pascal

    2010-01-26

    Vocal attractiveness has a profound influence on listeners-a bias known as the "what sounds beautiful is good" vocal attractiveness stereotype [1]-with tangible impact on a voice owner's success at mating, job applications, and/or elections. The prevailing view holds that attractive voices are those that signal desirable attributes in a potential mate [2-4]-e.g., lower pitch in male voices. However, this account does not explain our preferences in more general social contexts in which voices of both genders are evaluated. Here we show that averaging voices via auditory morphing [5] results in more attractive voices, irrespective of the speaker's or listener's gender. Moreover, we show that this phenomenon is largely explained by two independent by-products of averaging: a smoother voice texture (reduced aperiodicities) and a greater similarity in pitch and timbre with the average of all voices (reduced "distance to mean"). These results provide the first evidence for a phenomenon of vocal attractiveness increases by averaging, analogous to a well-established effect of facial averaging [6, 7]. They highlight prototype-based coding [8] as a central feature of voice perception, emphasizing the similarity in the mechanisms of face and voice perception.

  17. High Average Power Yb:YAG Laser

    Energy Technology Data Exchange (ETDEWEB)

    Zapata, L E; Beach, R J; Payne, S A

    2001-05-23

    We are working on a composite thin-disk laser design that can be scaled as a source of high brightness laser power for tactical engagement and other high average power applications. The key component is a diffusion-bonded composite comprising a thin gain-medium and thicker cladding that is strikingly robust and resolves prior difficulties with high average power pumping/cooling and the rejection of amplified spontaneous emission (ASE). In contrast to high power rods or slabs, the one-dimensional nature of the cooling geometry and the edge-pump geometry scale gracefully to very high average power. The crucial design ideas have been verified experimentally. Progress this last year included: extraction with high beam quality using a telescopic resonator, a heterogeneous thin film coating prescription that meets the unusual requirements demanded by this laser architecture, thermal management with our first generation cooler. Progress was also made in design of a second-generation laser.

  18. Averaged Electroencephalic Audiometry in Infants

    Science.gov (United States)

    Lentz, William E.; McCandless, Geary A.

    1971-01-01

    Normal, preterm, and high-risk infants were tested at 1, 3, 6, and 12 months of age using averaged electroencephalic audiometry (AEA) to determine the usefulness of AEA as a measurement technique for assessing auditory acuity in infants, and to delineate some of the procedural and technical problems often encountered. (KW)

  19. Ergodic averages via dominating processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Mengersen, Kerrie

    2006-01-01

    We show how the mean of a monotone function (defined on a state space equipped with a partial ordering) can be estimated, using ergodic averages calculated from upper and lower dominating processes of a stationary irreducible Markov chain. In particular, we do not need to simulate the stationary ...

  20. Endogenous average cost based access pricing

    OpenAIRE

    Fjell, Kenneth; Foros, Øystein; Pal, Debashis

    2006-01-01

    We consider an industry where a downstream competitor requires access to an upstream facility controlled by a vertically integrated and regulated incumbent. The literature on access pricing assumes the access price to be exogenously fixed ex-ante. We analyze an endogenous average cost based access pricing rule, where both firms realize the interdependence among their quantities and the regulated access price. Endogenous access pricing neutralizes the artificial cost advantag...

  1. Average Light Intensity Inside a Photobioreactor

    Directory of Open Access Journals (Sweden)

    Herby Jean

    2011-01-01

    For energy production, microalgae are one of the few alternatives with high potential. Similar to plants, algae require energy acquired from light sources to grow. This project uses calculus to determine the light intensity inside of a photobioreactor filled with algae. Under preset conditions, along with estimated values, we applied the Lambert-Beer law to formulate an equation to calculate how much light intensity escapes the photobioreactor and to determine the average light intensity present inside the reactor.
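
    The calculus involved is one integral of Lambert-Beer attenuation over the reactor depth; in closed form (generic symbols, not the project's notation):

        import numpy as np

        def average_intensity(I0, k, depth):
            # Mean of I0*exp(-k*x) over 0 <= x <= depth, i.e.
            # (1/L) * integral = I0 * (1 - exp(-k*L)) / (k*L).
            return I0 * (1.0 - np.exp(-k * depth)) / (k * depth)

        # A dense culture (k = 50 per metre) in a 0.1 m deep reactor keeps
        # only about 20% of the surface intensity on average:
        print(average_intensity(1000.0, 50.0, 0.1))  # ~198.7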

  2. 7 CFR 51.2561 - Average moisture content.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Average moisture content. 51.2561 Section 51.2561... STANDARDS) United States Standards for Grades of Shelled Pistachio Nuts § 51.2561 Average moisture content. (a) Determining average moisture content of the lot is not a requirement of the grades, except...

  3. 7 CFR 51.2548 - Average moisture content determination.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Average moisture content determination. 51.2548..., AND STANDARDS) United States Standards for Grades of Pistachio Nuts in the Shell § 51.2548 Average moisture content determination. (a) Determining average moisture content of the lot is not a requirement...

  4. High average power supercontinuum sources

    Indian Academy of Sciences (India)

    J C Travers

    2010-11-01

    The physical mechanisms and basic experimental techniques for the creation of high average spectral power supercontinuum sources is briefly reviewed. We focus on the use of high-power ytterbium-doped fibre lasers as pump sources, and the use of highly nonlinear photonic crystal fibres as the nonlinear medium. The most common experimental arrangements are described, including both continuous wave fibre laser systems with over 100 W pump power, and picosecond mode-locked, master oscillator power fibre amplifier systems, with over 10 kW peak pump power. These systems can produce broadband supercontinua with over 50 and 1 mW/nm average spectral power, respectively. Techniques for numerical modelling of the supercontinuum sources are presented and used to illustrate some supercontinuum dynamics. Some recent experimental results are presented.

  5. Dependability in Aggregation by Averaging

    CERN Document Server

    Jesus, Paulo; Almeida, Paulo Sérgio

    2010-01-01

    Aggregation is an important building block of modern distributed applications, allowing the determination of meaningful properties (e.g. network size, total storage capacity, average load, majorities, etc.) that are used to direct the execution of the system. However, the majority of the existing aggregation algorithms exhibit relevant dependability issues, when prospecting their use in real application environments. In this paper, we reveal some dependability issues of aggregation algorithms based on iterative averaging techniques, giving some directions to solve them. This class of algorithms is considered robust (when compared to common tree-based approaches), being independent from the used routing topology and providing an aggregation result at all nodes. However, their robustness is strongly challenged and their correctness often compromised, when changing the assumptions of their working environment to more realistic ones. The correctness of this class of algorithms relies on the maintenance of a funda...

  6. Measuring Complexity through Average Symmetry

    OpenAIRE

    Alamino, Roberto C.

    2015-01-01

    This work introduces a complexity measure which addresses some conflicting issues between existing ones by using a new principle - measuring the average amount of symmetry broken by an object. It attributes low (although different) complexity to either deterministic or random homogeneous densities and higher complexity to the intermediate cases. This new measure is easily computable, breaks the coarse graining paradigm and can be straightforwardly generalised, including to continuous cases an...

  7. Mirror averaging with sparsity priors

    CERN Document Server

    Dalalyan, Arnak

    2010-01-01

    We consider the problem of aggregating the elements of a (possibly infinite) dictionary for building a decision procedure, that aims at minimizing a given criterion. Along with the dictionary, an independent identically distributed training sample is available, on which the performance of a given procedure can be tested. In a fairly general set-up, we establish an oracle inequality for the Mirror Averaging aggregate based on any prior distribution. This oracle inequality is applied in the context of sparse coding for different problems of statistics and machine learning such as regression, density estimation and binary classification.

  8. Averaged Extended Tree Augmented Naive Classifier

    Directory of Open Access Journals (Sweden)

    Aaron Meehan

    2015-07-01

    This work presents a new general purpose classifier named Averaged Extended Tree Augmented Naive Bayes (AETAN), which is based on combining the advantageous characteristics of the Extended Tree Augmented Naive Bayes (ETAN) and Averaged One-Dependence Estimator (AODE) classifiers. We describe the main properties of the approach and algorithms for learning it, along with an analysis of its computational time complexity. Empirical results with numerous data sets indicate that the new approach is superior to ETAN and AODE in terms of both zero-one classification accuracy and log loss. It also compares favourably against weighted AODE and hidden Naive Bayes. The learning phase of the new approach is slower than that of its competitors, while the time complexity for the testing phase is similar. Such characteristics suggest that the new classifier is ideal in scenarios where online learning is not required.

  9. 40 CFR 80.1238 - How is a refinery's or importer's average benzene concentration determined?

    Science.gov (United States)

    2010-07-01

    ... average benzene concentration determined? 80.1238 Section 80.1238 Protection of Environment ENVIRONMENTAL... concentration determined? (a) The average benzene concentration of gasoline produced at a refinery or imported...: [equation given in the Federal Register as graphic ER26FE07.012] Where: Bavg = Average benzene concentration for the applicable averaging period (volume...
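
    The graphic reference stands in for the averaging equation itself, which the record truncates. A volume-weighted form consistent with the surrounding definitions would be (a reconstruction for readability, not a quotation of the rule):

        B_{avg} = \frac{\sum_i V_i \, B_i}{\sum_i V_i},

    where V_i is the volume of batch i of gasoline produced or imported during the averaging period and B_i is its benzene concentration (volume percent).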

  10. The reliability analysis of using the volume averaging method to simulate the solidification process in an ingot

    Institute of Scientific and Technical Information of China (English)

    李日; 王健; 周黎明; 潘红

    2014-01-01

    Adopting the Eulerian approach and the idea of volume averaging, a three-phase model is developed that treats the parent melt as the primary phase and columnar dendrites and equiaxed grains as two distinct secondary phases, coupling the macroscopic mass, momentum, energy and species conservation equations of the solidification process with grain transport equations. Taking an Al-4.7 wt.% Cu binary alloy ingot as an example, the two-dimensional flow field, temperature field, solute field, columnar-to-equiaxed transition and equiaxed grain sedimentation are simulated, and the simulated structure and macrosegregation are compared with experimental results. The simulated temperature field, flow field and structure agree well with theory, but because the model accounts for neither shrinkage nor the forced convection induced by pouring, the simulated segregation is lower than the measured values in the outer layer of the ingot and higher than the measured values in the interior. Shrinkage and inverse segregation therefore cannot be neglected in the simulation, and incorporating them is the direction in which this model should be improved. On the basis of the simulation results, the strengths and weaknesses of the volume averaging method for computing ingot solidification are also analyzed.

  11. A database of age-appropriate average MRI templates.

    Science.gov (United States)

    Richards, John E; Sanchez, Carmen; Phillips-Meek, Michelle; Xie, Wanze

    2016-01-01

    This article summarizes a life-span neurodevelopmental MRI database. The study of neurostructural development or neurofunctional development has been hampered by the lack of age-appropriate MRI reference volumes. This causes misspecification of segmented data, irregular registrations, and the absence of appropriate stereotaxic volumes. We have created the "Neurodevelopmental MRI Database" that provides age-specific reference data from 2 weeks through 89 years of age. The data are presented in fine-grained ages (e.g., 3 months intervals through 1 year; 6 months intervals through 19.5 years; 5 year intervals from 20 through 89 years). The base component of the database at each age is an age-specific average MRI template. The average MRI templates are accompanied by segmented partial volume estimates for segmenting priors, and a common stereotaxic atlas for infant, pediatric, and adult participants. The database is available online (http://jerlab.psych.sc.edu/NeurodevelopmentalMRIDatabase/).

  12. Changing mortality and average cohort life expectancy

    Directory of Open Access Journals (Sweden)

    Robert Schoen

    2005-10-01

    Period life expectancy varies with changes in mortality, and should not be confused with the life expectancy of those alive during that period. Given past and likely future mortality changes, a recent debate has arisen on the usefulness of the period life expectancy as the leading measure of survivorship. An alternative aggregate measure of period mortality which has been seen as less sensitive to period changes, the cross-sectional average length of life (CAL), has been proposed as an alternative, but has received only limited empirical or analytical examination. Here, we introduce a new measure, the average cohort life expectancy (ACLE), to provide a precise measure of the average length of life of cohorts alive at a given time. To compare the performance of ACLE with CAL and with period and cohort life expectancy, we first use population models with changing mortality. Then the four aggregate measures of mortality are calculated for England and Wales, Norway, and Switzerland for the years 1880 to 2000. CAL is found to be sensitive to past and present changes in death rates. ACLE requires the most data, but gives the best representation of the survivorship of cohorts present at a given time.

  13. Study of Manpower Requirements by Occupation for Alternative Technologies in the Energy-Related Industries, 1970-1990. Volumes I, IIA, and III.

    Science.gov (United States)

    Gutmanis, Ivars; And Others

    The report presents the methodology used by the National Planning Association (NPA), under contract to the Federal Energy Administration (FEA), to estimate direct labor usage coefficients in some sixty different occupational categories involved in construction, operation, and maintenance of energy facilities. Volume 1 presents direct labor usage…

  14. SAM: A Simple Averaging Model of Impression Formation

    Science.gov (United States)

    Lewis, Robert A.

    1976-01-01

    Describes the Simple Averaging Model (SAM) which was developed to demonstrate impression-formation computer modeling with less complex and less expensive procedures than are required by most established programs. (RC)

  15. 7 CFR 1209.12 - On average.

    Science.gov (United States)

    2010-01-01

    7 CFR 1209.12 — On average (Agriculture Regulations of the Department of Agriculture, Agricultural Marketing Service, 2010). On average means a rolling average of production or imports during the last two...

  16. Light shift averaging in paraffin-coated alkali vapor cells

    CERN Document Server

    Zhivun, Elena; Sudyka, Julia; Pustelny, Szymon; Patton, Brian; Budker, Dmitry

    2015-01-01

    Light shifts are an important source of noise and systematics in optically pumped magnetometers. We demonstrate that the long spin coherence time in paraffin-coated cells leads to spatial averaging of the light shifts over the entire cell volume. This renders the averaged light shift independent, under certain approximations, of the light-intensity distribution within the sensor cell. These results and the underlying mechanism can be extended to other spatially varying phenomena in anti-relaxation-coated cells with long coherence times.

  17. Modification of averaging process in GR: Case study flat LTB

    CERN Document Server

    Khosravi, Shahram; Mansouri, Reza

    2007-01-01

    We study the volume averaging of inhomogeneous metrics within GR and discuss its shortcomings, such as gauge dependence, singular behavior as a result of caustics, and causality violations. To remedy these shortcomings, we suggest some modifications to this method. As a case study we focus on the inhomogeneous model of structured FRW based on a flat LTB metric. The effect of averaging is then studied in terms of an effective backreaction fluid. This backreaction fluid turns out to behave like a dark matter component, instead of dark energy as claimed in the literature.

  18. State policies and requirements for management of uranium mining and milling in New Mexico. Volume V. State policy needs for community impact assistance

    Energy Technology Data Exchange (ETDEWEB)

    Vandevender, S.G.

    1980-04-01

    The report contained in this volume describes a program for management of the community impacts resulting from the growth of uranium mining and milling in New Mexico. The report, submitted to Sandia Laboratories by the New Mexico Department of Energy and Minerals, is reproduced without modification. The state recommends that federal funding and assistance be provided to implement a growth management program comprised of these seven components: (1) an early warning system, (2) a community planning and technical assistance capability, (3) flexible financing, (4) a growth monitoring system, (5) manpower training, (6) economic diversification planning, and (7) new technology testing.

  19. A new approach for Bayesian model averaging

    Institute of Scientific and Technical Information of China (English)

    TIAN XiangJun; XIE ZhengHui; WANG AiHui; YANG XiaoChun

    2012-01-01

    Bayesian model averaging (BMA) is a recently proposed statistical method for calibrating forecast ensembles from numerical weather models. However, successful implementation of BMA requires accurate estimates of the weights and variances of the individual competing models in the ensemble. Two methods, namely the Expectation-Maximization (EM) and the Markov Chain Monte Carlo (MCMC) algorithms, are widely used for BMA model training. Both methods have their own respective strengths and weaknesses. In this paper, we first modify the BMA log-likelihood function with the aim of removing the additional limitation that requires that the BMA weights add to one, and then use a limited-memory quasi-Newton algorithm for solving the nonlinear optimization problem, thereby formulating a new approach for BMA (referred to as BMA-BFGS). Several groups of multi-model soil moisture simulation experiments from three land surface models show that the performance of BMA-BFGS is similar to the MCMC method in terms of simulation accuracy, and that both are superior to the EM algorithm. On the other hand, the computational cost of the BMA-BFGS algorithm is substantially less than for MCMC and is almost equivalent to that for EM.
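    The essence of the BMA-BFGS construction can be sketched in a few lines: reparameterize the weights with a softmax so that the add-to-one constraint disappears, then hand the unconstrained negative log-likelihood to a limited-memory quasi-Newton optimizer. The Python sketch below uses a Gaussian predictive density, one variance per ensemble member, and synthetic data; the function names and this simple setup are our illustrative assumptions, not the authors' code.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import norm

        def neg_log_lik(params, f, y):
            # Softmax reparameterization: weights are positive and sum to one
            # without imposing an explicit constraint on the optimizer.
            K = f.shape[1]
            w = np.exp(params[:K] - params[:K].max())
            w /= w.sum()
            sigma = np.exp(params[K:])          # one std. dev. per member
            # BMA predictive density: mixture of Gaussians centred on forecasts.
            dens = norm.pdf(y[:, None], loc=f, scale=sigma) @ w
            return -np.sum(np.log(dens + 1e-300))

        def fit_bma(f, y):
            K = f.shape[1]
            res = minimize(neg_log_lik, np.zeros(2 * K), args=(f, y),
                           method="L-BFGS-B")   # limited-memory quasi-Newton
            w = np.exp(res.x[:K]); w /= w.sum()
            return w, np.exp(res.x[K:])

        rng = np.random.default_rng(0)
        y = rng.normal(size=200)                              # "observations"
        f = y[:, None] + rng.normal(scale=[0.5, 1.0, 2.0], size=(200, 3))
        weights, sigmas = fit_bma(f, y)
        print(weights, sigmas)   # the sharpest member should get most weight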

  20. Volume Entropy

    CERN Document Server

    Astuti, Valerio; Rovelli, Carlo

    2016-01-01

    Building on a technical result by Brunnemann and Rideout on the spectrum of the Volume operator in Loop Quantum Gravity, we show that the space of quadrivalent states -- with finite-volume individual nodes -- describing a region with total volume smaller than $V$ has finite dimension, bounded by $V \log V$. This allows us to introduce the notion of "volume entropy": the von Neumann entropy associated to the measurement of volume.

  1. Level sets of multiple ergodic averages

    CERN Document Server

    Ai-Hua, Fan; Ma, Ji-Hua

    2011-01-01

    We propose to study multiple ergodic averages from the multifractal analysis point of view. In some special cases in symbolic dynamics, the Hausdorff dimensions of the level sets of the limit of multiple ergodic averages are determined by using Riesz products.

  2. Report of the US Nuclear Regulatory Commission Piping Review Committee. Volume 2. Evaluation of seismic designs: a review of seismic design requirements for Nuclear Power Plant Piping

    Energy Technology Data Exchange (ETDEWEB)

    1985-04-01

    This document reports the position and recommendations of the NRC Piping Review Committee, Task Group on Seismic Design. The Task Group considered overlapping conservatisms in the various steps of seismic design, the effects of using two levels of earthquake as a design criterion, and current industry practices. Issues such as damping values, spectra modification, multiple response spectra methods, nozzle and support design, design margins, inelastic piping response, and the use of snubbers are addressed. Effects of current regulatory requirements for piping design are evaluated, and recommendations for immediate licensing action, changes in existing requirements, and research programs are presented. Additional background information and suggestions given by consultants are also presented.

  4. Accurate Switched-Voltage voltage averaging circuit

    OpenAIRE

    金光, 一幸; 松本, 寛樹

    2006-01-01

    This paper proposes an accurate Switched-Voltage (SV) voltage averaging circuit, presented to compensate for the NMOS mismatch error of the MOS differential-type voltage averaging circuit. The proposed circuit consists of a voltage averaging circuit and an SV sample/hold (S/H) circuit, and can operate using non-overlapping three-phase clocks. Performance of this circuit is verified by PSpice simulations.

  5. Spectral averaging techniques for Jacobi matrices

    CERN Document Server

    del Rio, Rafael; Schulz-Baldes, Hermann

    2008-01-01

    Spectral averaging techniques for one-dimensional discrete Schroedinger operators are revisited and extended. In particular, simultaneous averaging over several parameters is discussed. Special focus is put on proving lower bounds on the density of the averaged spectral measures. These Wegner type estimates are used to analyze stability properties for the spectral types of Jacobi matrices under local perturbations.

  6. Average-Time Games on Timed Automata

    OpenAIRE

    Jurdzinski, Marcin; Trivedi, Ashutosh

    2009-01-01

    An average-time game is played on the infinite graph of configurations of a finite timed automaton. The two players, Min and Max, construct an infinite run of the automaton by taking turns to perform a timed transition. Player Min wants to minimise the average time per transition and player Max wants to maximise it. A solution of average-time games is presented using a reduction to average-price games on a finite graph. A direct consequence is an elementary proof of determinacy for average-time games.

  7. Average-Case Analysis of Algorithms Using Kolmogorov Complexity

    Institute of Scientific and Technical Information of China (English)

    姜涛; 李明

    2000-01-01

    Analyzing the average-case complexity of algorithms is a very practical but very difficult problem in computer science. In the past few years, we have demonstrated that Kolmogorov complexity is an important tool for analyzing the average-case complexity of algorithms. We have developed the incompressibility method. In this paper, several simple examples are used to further demonstrate the power and simplicity of this method. We prove bounds on the average-case number of stacks (queues) required for sorting sequential or parallel Queuesort or Stacksort.

  8. Industrial Applications of High Average Power FELS

    CERN Document Server

    Shinn, Michelle D

    2005-01-01

    The use of lasers for material processing continues to expand, and the annual sales of such lasers exceed $1B (US). Large-scale (many m²) processing of materials requires the economical production of laser powers of tens of kilowatts, and is therefore not yet a commercial process, although it has been demonstrated. The development of FELs based on superconducting RF (SRF) linac technology provides a scaleable path to laser outputs above 50 kW in the IR, rendering these applications economically viable, since the cost/photon drops as the output power increases. This approach also enables high average power (~1 kW) output in the UV spectrum. Such FELs will provide quasi-cw (PRFs in the tens of MHz), ultrafast (pulsewidth ~1 ps) output with very high beam quality. This talk will provide an overview of applications tests by our facility's users, such as pulsed laser deposition, laser ablation, and laser surface modification, as well as present plans that will be tested with our upgraded FELs. These upg...

  9. Calculating Free Energies Using Average Force

    Science.gov (United States)

    Darve, Eric; Pohorille, Andrew; DeVincenzi, Donald L. (Technical Monitor)

    2001-01-01

    A new, general formula that connects the derivatives of the free energy along the selected, generalized coordinates of the system with the instantaneous force acting on these coordinates is derived. The instantaneous force is defined as the force acting on the coordinate of interest so that when it is subtracted from the equations of motion the acceleration along this coordinate is zero. The formula applies to simulations in which the selected coordinates are either unconstrained or constrained to fixed values. It is shown that in the latter case the formula reduces to the expression previously derived by den Otter and Briels. If simulations are carried out without constraining the coordinates of interest, the formula leads to a new method for calculating the free energy changes along these coordinates. This method is tested in two examples - rotation around the C-C bond of 1,2-dichloroethane immersed in water and transfer of fluoromethane across the water-hexane interface. The calculated free energies are compared with those obtained by two commonly used methods. One of them relies on determining the probability density function of finding the system at different values of the selected coordinate and the other requires calculating the average force at discrete locations along this coordinate in a series of constrained simulations. The free energies calculated by these three methods are in excellent agreement. The relative advantages of each method are discussed.
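    The central relation can be stated compactly (generic notation; the paper derives it in greater generality): along a generalized coordinate \xi, the derivative of the free energy A equals minus the conditional ensemble average of the instantaneous force F_\xi acting on that coordinate,

        \frac{dA}{d\xi} = -\,\langle F_\xi \rangle_\xi .

    Integrating a running estimate of \langle F_\xi \rangle_\xi along \xi then yields the free-energy profile without constraining the coordinate.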

  10. Power Extension Package (PEP) system definition extension, orbital service module systems analysis study. Volume 7: PEP logistics and training plan requirements

    Science.gov (United States)

    1979-01-01

    Recommendations for logistics activities and logistics planning are presented based on the assumption that a system prime contractor will perform logistics functions to support all program hardware and will implement a logistics system to include the planning and provision of products and services to assure cost effective coverage of the following: maintainability; maintenance; spares and supply support; fuels; pressurants and fluids; operations and maintenance documentation training; preservation, packaging and packing; transportation and handling; storage; and logistics management information reporting. The training courses, manpower, materials, and training aids required will be identified and implemented in a training program.

  11. WIDTHS AND AVERAGE WIDTHS OF SOBOLEV CLASSES

    Institute of Scientific and Technical Information of China (English)

    刘永平; 许贵桥

    2003-01-01

    This paper concerns the problem of the Kolmogorov n-width, the linear n-width, the Gel'fand n-width and the Bernstein n-width of Sobolev classes of the periodic multivariate functions in the space Lp(Td), and the average Bernstein σ-width, average Kolmogorov σ-width, and average linear σ-width of Sobolev classes of the multivariate quantities.

  12. Stochastic averaging of quasi-Hamiltonian systems

    Institute of Scientific and Technical Information of China (English)

    朱位秋

    1996-01-01

    A stochastic averaging method is proposed for quasi-Hamiltonian systems (Hamiltonian systems with light dampings subject to weakly stochastic excitations). Various versions of the method, depending on whether the associated Hamiltonian systems are integrable or nonintegrable, resonant or nonresonant, are discussed. It is pointed out that the standard stochastic averaging method and the stochastic averaging method of energy envelope are special cases of the stochastic averaging method of quasi-Hamiltonian systems and that the results obtained by this method for several examples prove its effectiveness.

  13. NOAA Average Annual Salinity (3-Zone)

    Data.gov (United States)

    California Department of Resources — The 3-Zone Average Annual Salinity Digital Geography is a digital spatial framework developed using geographic information system (GIS) technology. These salinity...

  14. Average Transmission Probability of a Random Stack

    Science.gov (United States)

    Lu, Yin; Miniatura, Christian; Englert, Berthold-Georg

    2010-01-01

    The transmission through a stack of identical slabs that are separated by gaps with random widths is usually treated by calculating the average of the logarithm of the transmission probability. We show how to calculate the average of the transmission probability itself with the aid of a recurrence relation and derive analytical upper and lower…

  15. Average sampling theorems for shift invariant subspaces

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    The sampling theorem is one of the most powerful results in signal analysis. In this paper, we study the average sampling on shift invariant subspaces, e.g. wavelet subspaces. We show that if a subspace satisfies certain conditions, then every function in the subspace is uniquely determined and can be reconstructed by its local averages near certain sampling points. Examples are given.

  16. Testing linearity against nonlinear moving average models

    NARCIS (Netherlands)

    de Gooijer, J.G.; Brännäs, K.; Teräsvirta, T.

    1998-01-01

    Lagrange multiplier (LM) test statistics are derived for testing a linear moving average model against an additive smooth transition moving average model. The latter model is introduced in the paper. The small sample performance of the proposed tests is evaluated in a Monte Carlo study and compared

  17. Average excitation potentials of air and aluminium

    NARCIS (Netherlands)

    Bogaardt, M.; Koudijs, B.

    1951-01-01

    By means of a graphical method the average excitation potential I may be derived from experimental data. Average values for Iair and IAl have been obtained. It is shown that in representing range/energy relations by means of Bethe's well-known formula, I has to be taken as a continuously changing function.

  19. Model Averaging Software for Dichotomous Dose Response Risk Estimation

    Directory of Open Access Journals (Sweden)

    Matthew W. Wheeler

    2008-02-01

    Model averaging has been shown to be a useful method for incorporating model uncertainty in quantitative risk estimation. In certain circumstances this technique is computationally complex, requiring sophisticated software to carry out the computation. We introduce software that implements model averaging for risk assessment based upon dichotomous dose-response data. This software, which we call Model Averaging for Dichotomous Response Benchmark Dose (MADr-BMD), fits the quantal response models which are also used in the US Environmental Protection Agency benchmark dose software suite, and generates a model-averaged dose-response model to generate benchmark dose and benchmark dose lower bound estimates. The software fulfills a need for risk assessors, allowing them to go beyond a single model in their risk assessments based on quantal data by focusing on a set of models that describes the experimental data.

  20. New results on averaging theory and applications

    Science.gov (United States)

    Cândido, Murilo R.; Llibre, Jaume

    2016-08-01

    The usual averaging theory reduces the computation of some periodic solutions of a system of ordinary differential equations, to find the simple zeros of an associated averaged function. When one of these zeros is not simple, i.e., the Jacobian of the averaged function in it is zero, the classical averaging theory does not provide information about the periodic solution associated to a non-simple zero. Here we provide sufficient conditions in order that the averaging theory can be applied also to non-simple zeros for studying their associated periodic solutions. Additionally, we do two applications of this new result for studying the zero-Hopf bifurcation in the Lorenz system and in the Fitzhugh-Nagumo system.
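    In the standard first-order setting (generic notation, ours): for a system x' = \varepsilon F(t, x) + \varepsilon^2 R(t, x, \varepsilon) with F being T-periodic in t, one forms the averaged function

        f(z) = \frac{1}{T} \int_0^T F(s, z)\, ds,

    and each simple zero z^* of f (one where the Jacobian \det Df(z^*) \neq 0) yields, for \varepsilon small, a T-periodic solution near z^*. The result summarized above supplies sufficient conditions under which this conclusion persists when \det Df(z^*) = 0.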

  1. Analogue Divider by Averaging a Triangular Wave

    Science.gov (United States)

    Selvam, Krishnagiri Chinnathambi

    2017-08-01

    A new analogue divider circuit that works by averaging a triangular wave using operational amplifiers is explained in this paper. The reference triangular waveform is shifted from the zero-voltage level up towards the positive power-supply voltage level; its positive portion is obtained by a positive rectifier and its average value by a low-pass filter. The same triangular waveform is shifted from the zero-voltage level down towards the negative power-supply voltage level; its negative portion is obtained by a negative rectifier and its average value by another low-pass filter. Both averaged voltages are combined in a summing amplifier, and the summed voltage is given to an op-amp as the negative input. This op-amp is configured to operate in a closed negative-feedback loop, and its output is the divider output.

  2. Survey mirrors and lenses and their required surface accuracy. Volume 1. Technical report. Final report for September 15, 1978-December 1, 1979

    Energy Technology Data Exchange (ETDEWEB)

    Beesing, M. E.; Buchholz, R. L.; Evans, R. A.; Jaminski, R. W.; Mathur, A. K.; Rausch, R. A.; Scarborough, S.; Smith, G. A.; Waldhauer, D. J.

    1980-01-01

    An investigation of the optical performance of a variety of concentrating solar collectors is reported. The study addresses two important issues: the accuracy of reflective or refractive surfaces required to achieve specified performance goals, and the effect of environmental exposure on the performance of concentrators. To assess the importance of surface accuracy on optical performance, 11 tracking and nontracking concentrator designs were selected for detailed evaluation. Mathematical models were developed for each design and incorporated into a Monte Carlo ray-trace computer program to carry out detailed calculations. Results for the 11 concentrators are presented in graphic form. The models and computer program are provided along with a user's manual. A survey database was established on the effect of environmental exposure on the optical degradation of mirrors and lenses. Information on environmental and maintenance effects was found to be insufficient to permit specific recommendations for operating and maintenance procedures, but the available information is compiled and reported and does contain procedures that other workers have found useful.

  3. Ensemble Bayesian model averaging using Markov Chain Monte Carlo sampling

    NARCIS (Netherlands)

    Vrugt, J.A.; Diks, C.G.H.; Clark, M.

    2008-01-01

    Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA, however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble.

  4. Average local values and local variances in quantum mechanics

    CERN Document Server

    Muga, J G; Sala, P R

    1998-01-01

    Several definitions for the average local value and local variance of a quantum observable are examined and compared with their classical counterparts. An explicit way to construct an infinite number of these quantities is provided. It is found that different classical conditions may be satisfied by different definitions, but none of the quantum definitions examined is entirely consistent with all classical requirements.

  5. Small Bandwidth Asymptotics for Density-Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    This paper proposes (apparently) novel standard error formulas for the density-weighted average derivative estimator of Powell, Stock, and Stoker (1989). Asymptotic validity of the standard errors developed in this paper does not require the use of higher-order kernels and the standard errors...

  6. 40 CFR 80.105 - Reporting requirements.

    Science.gov (United States)

    2010-07-01

    ... applicable olefin content standard under § 80.101(b)(1)(iii) in volume percent; (G) The average olefin content under § 80.101(g) in volume percent; (H) The difference between the applicable olefin content standard under § 80.101(b)(1)(iii) in volume percent and the average olefin content under paragraph...

  7. Cosmological Measures without Volume Weighting

    CERN Document Server

    Page, Don N

    2008-01-01

    Many cosmologists (myself included) have advocated volume weighting for the cosmological measure problem, weighting spatial hypersurfaces by their volume. However, this often leads to the Boltzmann brain problem, that almost all observations would be by momentary Boltzmann brains that arise very briefly as quantum fluctuations in the late universe when it has expanded to a huge size, so that our observations (too ordered for Boltzmann brains) would be highly atypical and unlikely. Here it is suggested that volume weighting may be a mistake. Volume averaging is advocated as an alternative. One consequence would be a loss of the argument for eternal inflation.

  8. Average-passage flow model development

    Science.gov (United States)

    Adamczyk, John J.; Celestina, Mark L.; Beach, Tim A.; Kirtley, Kevin; Barnett, Mark

    1989-01-01

    A 3-D model was developed for simulating multistage turbomachinery flows using supercomputers. This average-passage flow model describes the time-averaged flow field within a typical passage of a bladed wheel within a multistage configuration. To date, a number of inviscid simulations were executed to assess the resolution capabilities of the model. Recently, the viscous terms associated with the average-passage model were incorporated into the inviscid computer code along with an algebraic turbulence model. A simulation of a stage-and-one-half, low-speed turbine was executed. The results of this simulation, including a comparison with experimental data, are discussed.

  9. FREQUENTIST MODEL AVERAGING ESTIMATION: A REVIEW

    Institute of Scientific and Technical Information of China (English)

    Haiying WANG; Xinyu ZHANG; Guohua ZOU

    2009-01-01

    In applications, the traditional estimation procedure generally begins with model selection. Once a specific model is selected, subsequent estimation is conducted under the selected model without consideration of the uncertainty from the selection process. This often leads to underreporting of variability and overly optimistic confidence sets. Model averaging estimation is an alternative to this procedure, which incorporates model uncertainty into the estimation process. In recent years, there has been rising interest in model averaging from the frequentist perspective, and important progress has been made. In this paper, the theory and methods of frequentist model averaging estimation are surveyed. Some future research topics are also discussed.

  10. Averaging of Backscatter Intensities in Compounds

    Science.gov (United States)

    Donovan, John J.; Pingitore, Nicholas E.; Westphal, Andrew J.

    2002-01-01

    Low uncertainty measurements on pure element stable isotope pairs demonstrate that mass has no influence on the backscattering of electrons at typical electron microprobe energies. The traditional prediction of average backscatter intensities in compounds using elemental mass fractions is improperly grounded in mass and thus has no physical basis. We propose an alternative model to mass fraction averaging, based of the number of electrons or protons, termed “electron fraction,” which predicts backscatter yield better than mass fraction averaging. PMID:27446752
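    The proposed weighting is easy to state in code. The sketch below contrasts mass-fraction averaging with electron-fraction averaging, where each element is weighted by the electrons it contributes (proportional to c_i Z_i / A_i); the composition and the per-element backscatter coefficients are made-up values for illustration only:

        import numpy as np

        # Hypothetical binary compound (e.g. Al-Cu): mass fractions c,
        # atomic numbers Z, atomic masses A, pure-element backscatter yields eta.
        c   = np.array([0.4, 0.6])
        Z   = np.array([13.0, 29.0])
        A   = np.array([26.98, 63.55])
        eta = np.array([0.15, 0.30])     # illustrative values

        # Traditional mass-fraction averaging:
        eta_mass = np.sum(c * eta)

        # Electron-fraction averaging: weight by electrons contributed,
        # proportional to (atoms per gram) x Z = c * Z / A, normalized.
        e = (c * Z / A) / np.sum(c * Z / A)
        eta_electron = np.sum(e * eta)

        print(eta_mass, eta_electron)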

  11. Experimental Demonstration of Squeezed State Quantum Averaging

    CERN Document Server

    Lassen, Mikael; Sabuncu, Metin; Filip, Radim; Andersen, Ulrik L

    2010-01-01

    We propose and experimentally demonstrate a universal quantum averaging process implementing the harmonic mean of quadrature variances. The harmonic mean protocol can be used to efficiently stabilize a set of fragile squeezed light sources with statistically fluctuating noise levels. The averaged variances are prepared probabilistically by means of linear optical interference and measurement induced conditioning. We verify that the implemented harmonic mean outperforms the standard arithmetic mean strategy. The effect of quantum averaging is experimentally tested both for uncorrelated and partially correlated noise sources with sub-Poissonian shot noise or super-Poissonian shot noise characteristics.

  12. The Average Lower Connectivity of Graphs

    Directory of Open Access Journals (Sweden)

    Ersin Aslan

    2014-01-01

    For a vertex v of a graph G, the lower connectivity, denoted by s_v(G), is the smallest number of vertices in a set that contains v and whose deletion from G produces a disconnected or a trivial graph. The average lower connectivity, denoted by \kappa_{av}(G), is the value

        \kappa_{av}(G) = \frac{\sum_{v \in V(G)} s_v(G)}{|V(G)|}.

    It is shown that this parameter can be used to measure the vulnerability of networks. This paper contains results on bounds for the average lower connectivity and obtains the average lower connectivity of some graphs.
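    These definitions can be checked by brute force on small graphs. The sketch below is our illustration (exponential time, tiny graphs only; the function names are ours, not the paper's), using networkx:

        from itertools import combinations
        import networkx as nx

        def lower_connectivity(G, v):
            # s_v(G): size of a smallest vertex set containing v whose removal
            # leaves G disconnected or trivial (at most one vertex).
            nodes = set(G.nodes)
            for k in range(1, len(nodes) + 1):
                for rest in combinations(nodes - {v}, k - 1):
                    S = set(rest) | {v}
                    H = G.subgraph(nodes - S)
                    if H.number_of_nodes() <= 1 or not nx.is_connected(H):
                        return k
            return len(nodes)

        def average_lower_connectivity(G):
            return sum(lower_connectivity(G, v) for v in G) / G.number_of_nodes()

        print(average_lower_connectivity(nx.path_graph(5)))  # 1.4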

  13. Cosmic inhomogeneities and averaged cosmological dynamics.

    Science.gov (United States)

    Paranjape, Aseem; Singh, T P

    2008-10-31

    If general relativity (GR) describes the expansion of the Universe, the observed cosmic acceleration implies the existence of a "dark energy." However, while the Universe is on average homogeneous on large scales, it is inhomogeneous on smaller scales. While GR governs the dynamics of the inhomogeneous Universe, the averaged homogeneous Universe obeys modified Einstein equations. Can such modifications alone explain the acceleration? For a simple generic model with realistic initial conditions, we show the answer to be "no." Averaging effects negligibly influence the cosmological dynamics.

  14. Changing mortality and average cohort life expectancy

    DEFF Research Database (Denmark)

    Schoen, Robert; Canudas-Romo, Vladimir

    2005-01-01

    of survivorship. An alternative aggregate measure of period mortality which has been seen as less sensitive to period changes, the cross-sectional average length of life (CAL) has been proposed as an alternative, but has received only limited empirical or analytical examination. Here, we introduce a new measure..., the average cohort life expectancy (ACLE), to provide a precise measure of the average length of life of cohorts alive at a given time. To compare the performance of ACLE with CAL and with period and cohort life expectancy, we first use population models with changing mortality. Then the four aggregate...

  15. Discrete Averaging Relations for Micro to Macro Transition

    Science.gov (United States)

    Liu, Chenchen; Reina, Celia

    2016-05-01

    The well-known Hill's averaging theorems for stresses and strains as well as the so-called Hill-Mandel principle of macrohomogeneity are essential ingredients for the coupling and the consistency between the micro and macro scales in multiscale finite element procedures (FE$^2$). We show in this paper that these averaging relations hold exactly under standard finite element discretizations, even if the stress field is discontinuous across elements and the standard proofs based on the divergence theorem are no longer suitable. The discrete averaging results are derived for the three classical types of boundary conditions (affine displacement, periodic and uniform traction boundary conditions) using the properties of the shape functions and the weak form of the microscopic equilibrium equations. The analytical proofs are further verified numerically through a simple finite element simulation of an irregular representative volume element undergoing large deformations. Furthermore, the proofs are extended to include the effects of body forces and inertia, and the results are consistent with those in the smooth continuum setting. This work provides a solid foundation to apply Hill's averaging relations in multiscale finite element methods without introducing an additional error in the scale transition due to the discretization.
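    In generic notation (ours, not the paper's): with volume averages over a representative volume element \Omega defined by \langle \cdot \rangle = \frac{1}{|\Omega|} \int_\Omega (\cdot)\, dV, Hill's averaging theorems identify the macroscopic stress and strain with \langle \boldsymbol{\sigma} \rangle and \langle \boldsymbol{\varepsilon} \rangle, and the Hill-Mandel macrohomogeneity condition requires the averaged virtual work to factorize:

        \langle \boldsymbol{\sigma} : \delta\boldsymbol{\varepsilon} \rangle = \langle \boldsymbol{\sigma} \rangle : \langle \delta\boldsymbol{\varepsilon} \rangle .

    The paper's point is that these identities continue to hold exactly for the discretized fields of standard finite elements under the three classical boundary conditions.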

  16. Sea Surface Temperature Average_SST_Master

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Sea surface temperature collected via satellite imagery from http://www.esrl.noaa.gov/psd/data/gridded/data.noaa.ersst.html and averaged for each region using ArcGIS...

  17. Appeals Council Requests - Average Processing Time

    Data.gov (United States)

    Social Security Administration — This dataset provides annual data from 1989 through 2015 for the average processing time (elapsed time in days) for dispositions by the Appeals Council (AC) (both...

  18. Average Vegetation Growth 1990 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1990 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  19. Average Vegetation Growth 1997 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1997 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  20. Average Vegetation Growth 1992 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1992 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  1. Average Vegetation Growth 2001 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 2001 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  2. Average Vegetation Growth 1995 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1995 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  3. Average Vegetation Growth 2000 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 2000 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  4. Average Vegetation Growth 1998 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1998 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  5. Average Vegetation Growth 1994 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1994 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  6. MN Temperature Average (1961-1990) - Line

    Data.gov (United States)

    Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...

  7. Average Vegetation Growth 1996 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1996 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  8. Average Vegetation Growth 2005 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 2005 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  9. Average Vegetation Growth 1993 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1993 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  10. MN Temperature Average (1961-1990) - Polygon

    Data.gov (United States)

    Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...

  11. Spacetime Average Density (SAD) Cosmological Measures

    CERN Document Server

    Page, Don N

    2014-01-01

    The measure problem of cosmology is how to obtain normalized probabilities of observations from the quantum state of the universe. This is particularly a problem when eternal inflation leads to a universe of unbounded size so that there are apparently infinitely many realizations or occurrences of observations of each of many different kinds or types, making the ratios ambiguous. There is also the danger of domination by Boltzmann Brains. Here two new Spacetime Average Density (SAD) measures are proposed, Maximal Average Density (MAD) and Biased Average Density (BAD), for getting a finite number of observation occurrences by using properties of the Spacetime Average Density (SAD) of observation occurrences to restrict to finite regions of spacetimes that have a preferred beginning or bounce hypersurface. These measures avoid Boltzmann brain domination and appear to give results consistent with other observations that are problematic for other widely used measures, such as the observation of a positive cosmolo...

  12. A practical guide to averaging functions

    CERN Document Server

    Beliakov, Gleb; Calvo Sánchez, Tomasa

    2016-01-01

    This book offers an easy-to-use and practice-oriented reference guide to mathematical averages. It presents different ways of aggregating input values given on a numerical scale, and of choosing and/or constructing aggregating functions for specific applications. Building on a previous monograph by Beliakov et al. published by Springer in 2007, it outlines new aggregation methods developed in the interim, with a special focus on the topic of averaging aggregation functions. It examines recent advances in the field, such as aggregation on lattices, penalty-based aggregation and weakly monotone averaging, and extends many of the already existing methods, such as: ordered weighted averaging (OWA), fuzzy integrals and mixture functions. A substantial mathematical background is not called for, as all the relevant mathematical notions are explained here and reported on together with a wealth of graphical illustrations of distinct families of aggregation functions. The authors mainly focus on practical applications ...
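    As a taste of the families covered, the ordered weighted average (OWA) applies its weight vector to the sorted inputs rather than to fixed argument positions, so a single formula interpolates between min, arithmetic mean and max. A minimal sketch of the standard definition (the code itself is our illustration):

        import numpy as np

        def owa(x, w):
            # Weights act on the inputs sorted in decreasing order,
            # not on particular arguments.
            x_sorted = np.sort(np.asarray(x, dtype=float))[::-1]
            return float(np.dot(np.asarray(w, dtype=float), x_sorted))

        x = [0.3, 0.9, 0.5]
        print(owa(x, [1, 0, 0]))          # max
        print(owa(x, [0, 0, 1]))          # min
        print(owa(x, [1/3, 1/3, 1/3]))    # arithmetic mean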

  13. Monthly snow/ice averages (ISCCP)

    Data.gov (United States)

    National Aeronautics and Space Administration — September Arctic sea ice is now declining at a rate of 11.5 percent per decade, relative to the 1979 to 2000 average. Data from NASA show that the land ice sheets in...

  14. Average Annual Precipitation (PRISM model) 1961 - 1990

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer shows polygons of average annual precipitation in the contiguous United States, for the climatological period 1961-1990. Parameter-elevation...

  15. Symmetric Euler orientation representations for orientational averaging.

    Science.gov (United States)

    Mayerhöfer, Thomas G

    2005-09-01

    A new kind of orientation representation called symmetric Euler orientation representation (SEOR) is presented. It is based on a combination of the conventional Euler orientation representations (Euler angles) and Hamilton's quaternions. The properties of the SEORs concerning orientational averaging are explored and compared to those of averaging schemes that are based on conventional Euler orientation representations. To that aim, the reflectance of a hypothetical polycrystalline material with orthorhombic crystal symmetry was calculated. The calculation was carried out according to the average refractive index theory (ARIT [T.G. Mayerhöfer, Appl. Spectrosc. 56 (2002) 1194]). It is shown that the use of averaging schemes based on conventional Euler orientation representations leads to a dependence of the result from the specific Euler orientation representation that was utilized and from the initial position of the crystal. The latter problem can be overcome partly by the introduction of a weighing factor, but only for two-axes-type Euler orientation representations. In case of a numerical evaluation of the average, a residual difference remains also if a two-axes type Euler orientation representation is used despite of the utilization of a weighing factor. In contrast, this problem does not occur if a symmetric Euler orientation representation is used as a matter of principle, while the result of the averaging for both types of orientation representations converges with increasing number of orientations considered in the numerical evaluation. Additionally, the use of a weighing factor and/or non-equally spaced steps in the numerical evaluation of the average is not necessary. The symmetrical Euler orientation representations are therefore ideally suited for the use in orientational averaging procedures.

  16. Cosmic Inhomogeneities and the Average Cosmological Dynamics

    OpenAIRE

    Paranjape, Aseem; Singh, T. P.

    2008-01-01

    If general relativity (GR) describes the expansion of the Universe, the observed cosmic acceleration implies the existence of a `dark energy'. However, while the Universe is on average homogeneous on large scales, it is inhomogeneous on smaller scales. While GR governs the dynamics of the inhomogeneous Universe, the averaged homogeneous Universe obeys modified Einstein equations. Can such modifications alone explain the acceleration? For a simple generic model with realistic ini...

  17. Average Bandwidth Allocation Model of WFQ

    Directory of Open Access Journals (Sweden)

    Tomáš Balogh

    2012-01-01

    We present a new iterative method for the calculation of the average bandwidth assigned to traffic flows using a WFQ scheduler in IP-based NGN networks. The bandwidth assignment calculation is based on the link speed, assigned weights, arrival rate, and average packet length or input rate of the traffic flows. We verify the model outcome with examples and simulation results obtained with the NS2 simulator.
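    One common way to compute such an allocation -- a generic weighted fair-share iteration in the spirit of the abstract, not necessarily the authors' exact model -- is to split the link capacity in proportion to the WFQ weights, cap each flow at its input rate, and redistribute the leftover capacity among the still-unsatisfied flows:

        def wfq_average_bandwidth(capacity, weights, input_rates):
            # Water-filling style weighted fair share: a flow never receives
            # more than it offers; spare capacity is redistributed by weight.
            n = len(weights)
            alloc = [0.0] * n
            active = set(range(n))
            remaining = capacity
            while active and remaining > 1e-12:
                total_w = sum(weights[i] for i in active)
                satisfied = set()
                for i in active:
                    share = remaining * weights[i] / total_w
                    if input_rates[i] - alloc[i] <= share:
                        alloc[i] = input_rates[i]      # flow fully served
                        satisfied.add(i)
                if not satisfied:                      # all flows rate-limited by link
                    for i in active:
                        alloc[i] += remaining * weights[i] / total_w
                    break
                remaining = capacity - sum(alloc)
                active -= satisfied
            return alloc

        # Link of 100 units, weights 1:2:3, offered loads 50, 20, 80:
        print(wfq_average_bandwidth(100.0, [1, 2, 3], [50.0, 20.0, 80.0]))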

  18. Interpreting Sky-Averaged 21-cm Measurements

    Science.gov (United States)

    Mirocha, Jordan

    2015-01-01

    Within the first ~billion years after the Big Bang, the intergalactic medium (IGM) underwent a remarkable transformation, from a uniform sea of cold neutral hydrogen gas to a fully ionized, metal-enriched plasma. Three milestones during this epoch of reionization -- the emergence of the first stars, black holes (BHs), and full-fledged galaxies -- are expected to manifest themselves as extrema in sky-averaged ("global") measurements of the redshifted 21-cm background. However, interpreting these measurements will be complicated by the presence of strong foregrounds and non-trivialities in the radiative transfer (RT) modeling required to make robust predictions. I have developed numerical models that efficiently solve the frequency-dependent radiative transfer equation, which has led to two advances in studies of the global 21-cm signal. First, frequency-dependent solutions facilitate studies of how the global 21-cm signal may be used to constrain the detailed spectral properties of the first stars, BHs, and galaxies, rather than just the timing of their formation. And second, the speed of these calculations allows one to search vast expanses of a currently unconstrained parameter space, while simultaneously characterizing the degeneracies between parameters of interest. I find principally that (1) physical properties of the IGM, such as its temperature and ionization state, can be constrained robustly from observations of the global 21-cm signal without invoking models for the astrophysical sources themselves, (2) translating IGM properties to galaxy properties is challenging, in large part due to frequency-dependent effects. For instance, evolution in the characteristic spectrum of accreting BHs can modify the 21-cm absorption signal at levels accessible to first generation instruments, but could easily be confused with evolution in the X-ray luminosity star-formation rate relation. Finally, (3) the independent constraints most likely to aid in the interpretation

  19. Averaging processes in granular flows driven by gravity

    Science.gov (United States)

    Rossi, Giulia; Armanini, Aronne

    2016-04-01

    One of the more promising theoretical frameworks for analysing two-phase granular flows is offered by the similarity of their rheology with the kinetic theory of gases [1]. Granular flows can be considered a macroscopic equivalent of the molecular case: the collisions among molecules are compared to the collisions among grains at a macroscopic scale [2,3]. However, there are important statistical differences between the two applications. In two-phase fluid mechanics, there are two main types of average: the phasic average and the mass-weighted average [4]. The kinetic theories assume that the size of atoms is so small that the number of molecules in a control volume is infinite. With this assumption, the concentration (number of particles n) doesn't change during the averaging process and the two definitions of average coincide. This hypothesis is no longer true in granular flows: contrary to gases, the dimension of a single particle becomes comparable to that of the control volume. For this reason, in a single realization the number of grains is constant and the two averages coincide; on the contrary, for more than one realization, n is no longer constant and the two types of average lead to different results. Therefore, the ensemble average used in the standard kinetic theory (which usually is the phasic average) is suitable for a single realization, but not for several realizations, as already pointed out in [5,6]. In the literature, three main length scales have been identified [7]: the smallest is the particle size, the intermediate consists in the local averaging (in order to describe some instability phenomena or secondary circulation), and the largest arises from phenomena such as large eddies in turbulence. Our aim is to resolve the intermediate scale by applying the mass-weighted average when dealing with more than one realization. This statistical approach leads to additional diffusive terms in the continuity equation: starting from experimental
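    In symbols (our notation): for a field u and particle number n, with overbars denoting the ensemble average over realizations, the two averages are

        \langle u \rangle_{phasic} = \overline{u}, \qquad \langle u \rangle_{mass} = \frac{\overline{n\,u}}{\overline{n}},

    which coincide in a single realization (n fixed) but differ as soon as n fluctuates between realizations, which is the situation discussed above.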

  20. HAT AVERAGE MULTIRESOLUTION WITH ERROR CONTROL IN 2-D

    Institute of Scientific and Technical Information of China (English)

    Sergio Amat

    2004-01-01

    Multiresolution representations of data are a powerful tool in data compression. For a proper adaptation to singularities, it is crucial to develop nonlinear methods which are not based on tensor products. The hat average framework permits the development of adapted schemes for all types of singularities. In contrast with the wavelet framework, these representations cannot be considered as a change of basis, and the stability theory requires different considerations. In this paper, non-separable two-dimensional hat average multiresolution processing algorithms that ensure stability are introduced. Explicit error bounds are presented.

  1. Robust numerical methods for conservation laws using a biased averaging procedure

    Science.gov (United States)

    Choi, Hwajeong

    In this thesis, we introduce a new biased averaging procedure (BAP) and use it in developing high-resolution schemes for conservation laws. Systems of conservation laws arise in a variety of physical problems, such as the Euler equations of compressible flows, magnetohydrodynamics, multicomponent flows, blast waves and the flow of glaciers. Many modern shock-capturing schemes are based on solution reconstructions by high-order polynomial interpolations, and time evolution by the solutions of Riemann problems. Due to the existence of discontinuities in the solution, the interpolating polynomial has to be carefully constructed to avoid possible oscillations near discontinuities. The BAP is a more general and simpler way to approximate higher-order derivatives of given data without introducing oscillations, compared to limiters and the essentially non-oscillatory interpolations. For the solution of a system of conservation laws, we present a finite volume method which employs a flux splitting and uses componentwise reconstruction of the upwind fluxes. A high-order piecewise polynomial constructed by using the BAP is used to approximate the component of upwind fluxes. This scheme does not require characteristic decomposition nor a Riemann solver, offering easy implementation and a relatively small computational cost. More importantly, the BAP is naturally extended to unstructured grids, as will be demonstrated through a cell-centered finite volume method, along with adaptive mesh refinement. A number of numerical experiments from various applications demonstrate the robustness and the accuracy of this approach, and show the potential of this approach for other practical applications.

  2. Averaged controllability of parameter dependent conservative semigroups

    Science.gov (United States)

    Lohéac, Jérôme; Zuazua, Enrique

    2017-02-01

    We consider the problem of averaged controllability for parameter-dependent (either discretely or continuously parametrized) control systems, the aim being to find a control, independent of the unknown parameters, so that the average of the states is controlled. We do it in the context of conservative models, both in an abstract setting and also analysing the specific examples of the wave and Schrödinger equations. Our first result is of perturbative nature. Assuming the averaging probability measure to be a small parameter-dependent perturbation (in a sense that we make precise) of an atomic measure given by a Dirac mass corresponding to a specific realisation of the system, we show that the averaged controllability property is achieved whenever the system corresponding to the support of the Dirac is controllable. Similar tools can be employed to obtain averaged versions of the so-called Ingham inequalities. Particular attention is devoted to the 1d wave equation in which the time-periodicity of solutions can be exploited to obtain more precise results, provided the parameters involved satisfy Diophantine conditions ensuring the lack of resonances.
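    In symbols (generic notation, ours): for parameter-dependent dynamics x'(t; \nu) = A(\nu) x(t; \nu) + B(\nu) u(t), with the unknown parameter \nu distributed according to a probability measure \mu, averaged controllability asks for a single, \nu-independent control u steering the expectation of the state:

        \int x(T; \nu)\, d\mu(\nu) = x_1 .

    The perturbative result described above treats \mu as a small perturbation of a Dirac mass \delta_{\nu_0}, so that controllability of the single realization \nu_0 carries over to the average.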

  3. Average Temperatures in the Southwestern United States, 2000-2015 Versus Long-Term Average

    Data.gov (United States)

    U.S. Environmental Protection Agency — This indicator shows how the average air temperature from 2000 to 2015 has differed from the long-term average (1895–2015). To provide more detailed information,...

  4. Cosmic structure, averaging and dark energy

    CERN Document Server

    Wiltshire, David L

    2013-01-01

    These lecture notes review the theoretical problems associated with coarse-graining the observed inhomogeneous structure of the universe at late epochs, of describing average cosmic evolution in the presence of growing inhomogeneity, and of relating average quantities to physical observables. In particular, a detailed discussion of the timescape scenario is presented. In this scenario, dark energy is realized as a misidentification of gravitational energy gradients which result from gradients in the kinetic energy of expansion of space, in the presence of density and spatial curvature gradients that grow large with the growth of structure. The phenomenology and observational tests of the timescape model are discussed in detail, with updated constraints from Planck satellite data. In addition, recent results on the variation of the Hubble expansion on < 100/h Mpc scales are discussed. The spherically averaged Hubble law is significantly more uniform in the rest frame of the Local Group of galaxies than in t...

  5. Books average previous decade of economic misery.

    Directory of Open Access Journals (Sweden)

    R Alexander Bentley

    For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.

  6. Benchmarking statistical averaging of spectra with HULLAC

    Science.gov (United States)

    Klapisch, Marcel; Busquet, Michel

    2008-11-01

    Knowledge of the radiative properties of hot plasmas is important for ICF, astrophysics, etc. When mid-Z or high-Z elements are present, the spectra are so complex that one commonly uses a statistically averaged description of atomic systems [1]. In a recent experiment on Fe [2], performed under controlled conditions, high-resolution transmission spectra were obtained. The new version of HULLAC [3] allows the use of the same model with different levels of detail/averaging. We will take advantage of this feature to check the effect of averaging by comparison with experiment. [1] A Bar-Shalom, J Oreg, and M Klapisch, J. Quant. Spectros. Rad. Transf. 65, 43 (2000). [2] J. E. Bailey, G. A. Rochau, C. A. Iglesias et al., Phys. Rev. Lett. 99, 265002-4 (2007). [3] M. Klapisch, M. Busquet, and A. Bar-Shalom, AIP Conference Proceedings 926, 206-15 (2007).

  7. Stochastic Averaging and Stochastic Extremum Seeking

    CERN Document Server

    Liu, Shu-Jun

    2012-01-01

    Stochastic Averaging and Stochastic Extremum Seeking develops methods of mathematical analysis inspired by the interest in reverse engineering and analysis of bacterial convergence by chemotaxis, and applies similar stochastic optimization techniques in other environments. The first half of the text presents significant advances in stochastic averaging theory, necessitated by the fact that existing theorems are restricted to systems with linear growth, globally exponentially stable average models, and vanishing stochastic perturbations, and prevent analysis over an infinite time horizon. The second half of the text introduces stochastic extremum seeking algorithms for model-free optimization of systems in real time using stochastic perturbations for estimation of their gradients. Both gradient- and Newton-based algorithms are presented, offering the user the choice between the simplicity of implementation (gradient) and the ability to achieve a known, arbitrary convergence rate (Newton). The design of algorithms...

  8. Books average previous decade of economic misery.

    Science.gov (United States)

    Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.
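    The core computation behind the reported correlation is easy to sketch: smooth the annual economic misery index with a decade-long trailing moving average and correlate it with the literary index. The data below are synthetic stand-ins (the real study used indices derived from digitized books and official statistics):

        import numpy as np

        def trailing_moving_average(x, window):
            # Mean of the current value and the preceding window-1 values.
            return np.convolve(x, np.ones(window) / window, mode="valid")

        rng = np.random.default_rng(1)
        years = np.arange(1929, 2001)
        misery = 8 + np.cumsum(rng.normal(scale=0.5, size=years.size))  # synthetic
        smoothed = trailing_moving_average(misery, window=11)

        # A literary index that, by construction here, echoes the smoothed series:
        literary = smoothed + rng.normal(scale=0.3, size=smoothed.size)

        print(np.corrcoef(smoothed, literary)[0, 1])  # fit at the 11-year window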

  9. Aircraft Simulator Data Requirements Study. Volume II

    Science.gov (United States)

    1977-01-01

    23143 (Wep), "Data, Technical Aircraft; for the Design of Aviation Training Devices," was to be used as a guide for the preparation of the new standard. ... made, displays, etc., utilizing the "hot mockup." The really useful data can only result from flight tests and can be obtained at any time after the ... mockup" and the preliminary tactical tape used in the tests. It will represent the best system data that will generally be obtained. ... The last data

  10. The modulated average structure of mullite.

    Science.gov (United States)

    Birkenstock, Johannes; Petříček, Václav; Pedersen, Bjoern; Schneider, Hartmut; Fischer, Reinhard X

    2015-06-01

    Homogeneous and inclusion-free single crystals of 2:1 mullite (Al(4.8)Si(1.2)O(9.6)) grown by the Czochralski technique were examined by X-ray and neutron diffraction methods. The observed diffuse scattering together with the pattern of satellite reflections confirm previously published data and are thus inherent features of the mullite structure. The ideal composition was closely met as confirmed by microprobe analysis (Al(4.82 (3))Si(1.18 (1))O(9.59 (5))) and by average structure refinements. 8 (5) to 20 (13)% of the available Si was found in the T* position of the tetrahedra triclusters. The strong tendency for disorder in mullite may be understood from considerations of hypothetical superstructures which would have to be n-fivefold with respect to the three-dimensional average unit cell of 2:1 mullite and n-fourfold in the case of 3:2 mullite. In any of these, the possible arrangements of the vacancies and of the tetrahedral units would inevitably be unfavorable. Three directions of incommensurate modulations were determined: q1 = [0.3137 (2) 0 ½], q2 = [0 0.4021 (5) 0.1834 (2)] and q3 = [0 0.4009 (5) -0.1834 (2)]. The one-dimensional incommensurately modulated crystal structure associated with q1 was refined for the first time using the superspace approach. The modulation is dominated by harmonic occupational modulations of the atoms in the di- and the triclusters of the tetrahedral units in mullite. The modulation amplitudes are small and the harmonic character implies that the modulated structure still represents an average structure in the overall disordered arrangement of the vacancies and of the tetrahedral structural units. In other words, when projecting the local assemblies at the scale of a few tens of average mullite cells into cells determined by either one of the modulation vectors q1, q2 or q3, a weak average modulation results with slightly varying average occupation factors for the tetrahedral units. As a result, the real

  11. A singularity theorem based on spatial averages

    Indian Academy of Sciences (India)

    J M M Senovilla

    2007-07-01

    Inspired by Raychaudhuri's work, and using the equation named after him as a basic ingredient, a new singularity theorem is proved. Open non-rotating Universes, expanding everywhere with a non-vanishing spatial average of the matter variables, show severe geodesic incompleteness in the past. Another way of stating the result is that, under the same conditions, any singularity-free model must have a vanishing spatial average of the energy density (and other physical variables). This is very satisfactory and provides a clear, decisive difference between singular and non-singular cosmologies.

  12. Average: the juxtaposition of procedure and context

    Science.gov (United States)

    Watson, Jane; Chick, Helen; Callingham, Rosemary

    2014-09-01

    This paper presents recent data on the performance of 247 middle school students on questions concerning average in three contexts. Analysis includes considering levels of understanding linking definition and context, performance across contexts, the relative difficulty of tasks, and difference in performance for male and female students. The outcomes lead to a discussion of the expectations of the curriculum and its implementation, as well as assessment, in relation to students' skills in carrying out procedures and their understanding about the meaning of average in context.

  13. An approximate analytical approach to resampling averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, M.

    2004-01-01

    Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach...

  14. Model averaging and muddled multimodel inferences.

    Science.gov (United States)

    Cade, Brian S

    2015-09-01

    Three flawed practices associated with model averaging coefficients for predictor variables in regression models commonly occur when making multimodel inferences in analyses of ecological data. Model-averaged regression coefficients based on Akaike information criterion (AIC) weights have been recommended for addressing model uncertainty, but they are not valid, interpretable estimates of partial effects for individual predictors when there is multicollinearity among the predictor variables. Multicollinearity implies that the scaling of units in the denominators of the regression coefficients may change across models such that neither the parameters nor their estimates have common scales, therefore averaging them makes no sense. The associated sums of AIC model weights recommended to assess relative importance of individual predictors are really a measure of relative importance of models, with little information about contributions by individual predictors compared to other measures of relative importance based on effect sizes or variance reduction. Sometimes the model-averaged regression coefficients for predictor variables are incorrectly used to make model-averaged predictions of the response variable when the models are not linear in the parameters. I demonstrate the issues with the first two practices using the college grade point average example extensively analyzed by Burnham and Anderson. I show how partial standard deviations of the predictor variables can be used to detect changing scales of their estimates with multicollinearity. Standardizing estimates based on partial standard deviations for their variables can be used to make the scaling of the estimates commensurate across models, a necessary but not sufficient condition for model averaging of the estimates to be sensible. A unimodal distribution of estimates and valid interpretation of individual parameters are additional requisite conditions. The standardized estimates or equivalently the t
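
    Because the abstract turns on how AIC weights are built, a minimal sketch may help. The formula w_i = exp(-Δ_i/2) / Σ_j exp(-Δ_j/2), with Δ_i = AIC_i − min AIC, is standard; the function below is illustrative, not the author's code.

    import numpy as np

    def akaike_weights(aic):
        """Akaike weights from a collection of AIC values."""
        delta = np.asarray(aic, dtype=float) - min(aic)
        w = np.exp(-0.5 * delta)
        return w / w.sum()

    # Cade's point: summing these weights over all models that contain a
    # given predictor measures the relative importance of *models*, not of
    # the predictor itself.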

  15. Model averaging and muddled multimodel inferences

    Science.gov (United States)

    Cade, Brian S.

    2015-01-01

    Three flawed practices associated with model averaging coefficients for predictor variables in regression models commonly occur when making multimodel inferences in analyses of ecological data. Model-averaged regression coefficients based on Akaike information criterion (AIC) weights have been recommended for addressing model uncertainty, but they are not valid, interpretable estimates of partial effects for individual predictors when there is multicollinearity among the predictor variables. Multicollinearity implies that the scaling of units in the denominators of the regression coefficients may change across models such that neither the parameters nor their estimates have common scales, therefore averaging them makes no sense. The associated sums of AIC model weights recommended to assess relative importance of individual predictors are really a measure of relative importance of models, with little information about contributions by individual predictors compared to other measures of relative importance based on effect sizes or variance reduction. Sometimes the model-averaged regression coefficients for predictor variables are incorrectly used to make model-averaged predictions of the response variable when the models are not linear in the parameters. I demonstrate the issues with the first two practices using the college grade point average example extensively analyzed by Burnham and Anderson. I show how partial standard deviations of the predictor variables can be used to detect changing scales of their estimates with multicollinearity. Standardizing estimates based on partial standard deviations for their variables can be used to make the scaling of the estimates commensurate across models, a necessary but not sufficient condition for model averaging of the estimates to be sensible. A unimodal distribution of estimates and valid interpretation of individual parameters are additional requisite conditions. The standardized estimates or equivalently the

  16. ANTINOMY OF THE MODERN AVERAGE PROFESSIONAL EDUCATION

    Directory of Open Access Journals (Sweden)

    A. A. Listvin

    2017-01-01

    of ways to resolve them, and options for a genuine upgrade of the SPE system that answers the requirements of the economy. The inefficiency of the concept of single-level SPE and its lack of competitiveness against the background of the development of applied bachelor degrees in higher education is shown. It is proposed to differentiate basic-level programs for training skilled workers from advanced-level programs, built on the basic level, for training mid-level specialists (technicians, technologists), so as to form a single system of continuous professional training and to ensure the effective functioning of regional systems of professional education. Such a system would help to eliminate disproportions in the triad «a worker – a technician – an engineer» and would increase the quality of professional education. Furthermore, the need for polyprofessional education is indicated, which requires integrated educational structures, differing in the degree of consolidation of split-level educational institutions, based on network interaction, convergence and integration. According to the author, two types of SPE organizations should be developed in the regions: territorial multi-profile colleges with flexible variable programs, and organizations implementing educational programs of applied qualifications in specific industries (metallurgical, chemical, construction, etc.) according to the specifics of the economy of the territorial subjects. Practical significance. The results of the research can be useful to education management specialists, heads and pedagogical staff of SPE institutions, and also to representatives of regional administrations and employers in organizing a multilevel network system for training skilled workers and mid-level specialists.

  17. Parameterized Traveling Salesman Problem: Beating the Average

    NARCIS (Netherlands)

    Gutin, G.; Patel, V.

    2016-01-01

    In the traveling salesman problem (TSP), we are given a complete graph Kn together with an integer weighting w on the edges of Kn, and we are asked to find a Hamilton cycle of Kn of minimum weight. Let h(w) denote the average weight of a Hamilton cycle of Kn for the weighting w. Vizing in 1973 asked
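
    By a standard symmetry argument, every edge of Kn lies in the same fraction of Hamilton cycles, so h(w) = 2W/(n-1), where W is the total edge weight. The sketch below (illustrative, not from the paper) checks this against brute-force enumeration on a small instance.

    import itertools
    import numpy as np

    def average_hamilton_weight(w):
        """h(w) = 2 * (total edge weight) / (n - 1) for symmetric weights w."""
        n = w.shape[0]
        return 2.0 * np.triu(w, 1).sum() / (n - 1)

    rng = np.random.default_rng(0)
    n = 6
    w = rng.integers(1, 10, size=(n, n))
    w = np.triu(w, 1) + np.triu(w, 1).T          # symmetric, zero diagonal
    # enumerate Hamilton cycles (each appears twice; the average is unaffected)
    cycles = [(0,) + p for p in itertools.permutations(range(1, n))]
    mean_w = np.mean([sum(w[c[i], c[(i + 1) % n]] for i in range(n))
                      for c in cycles])
    assert np.isclose(mean_w, average_hamilton_weight(w))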

  18. On averaging methods for partial differential equations

    NARCIS (Netherlands)

    Verhulst, F.

    2001-01-01

    The analysis of weakly nonlinear partial differential equations, both qualitatively and quantitatively, is emerging as an exciting field of investigation. In this report we consider specific results related to averaging, but we do not aim at completeness. The sections ... contain important material which

  19. Discontinuities and hysteresis in quantized average consensus

    NARCIS (Netherlands)

    Ceragioli, Francesca; Persis, Claudio De; Frasca, Paolo

    2011-01-01

    We consider continuous-time average consensus dynamics in which the agents’ states are communicated through uniform quantizers. Solutions to the resulting system are defined in the Krasowskii sense and are proven to converge to conditions of "practical consensus". To cope with undesired chattering

  20. Bayesian Averaging is Well-Temperated

    DEFF Research Database (Denmark)

    Hansen, Lars Kai

    2000-01-01

    Bayesian predictions are stochastic just like predictions of any other inference scheme that generalizes from a finite sample. While a simple variational argument shows that Bayes averaging is generalization optimal given that the prior matches the teacher parameter distribution, the situation...

  1. A Functional Measurement Study on Averaging Numerosity

    Science.gov (United States)

    Tira, Michael D.; Tagliabue, Mariaelena; Vidotto, Giulio

    2014-01-01

    In two experiments, participants judged the average numerosity between two sequentially presented dot patterns to perform an approximate arithmetic task. In Experiment 1, the response was given on a 0-20 numerical scale (categorical scaling), and in Experiment 2, the response was given by the production of a dot pattern of the desired numerosity…

  2. Generalized Jackknife Estimators of Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    With the aim of improving the quality of asymptotic distributional approximations for nonlinear functionals of nonparametric estimators, this paper revisits the large-sample properties of an important member of that class, namely a kernel-based weighted average derivative estimator. Asymptotic li...

  3. Bootstrapping Density-Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    Employing the "small bandwidth" asymptotic framework of Cattaneo, Crump, and Jansson (2009), this paper studies the properties of a variety of bootstrap-based inference procedures associated with the kernel-based density-weighted averaged derivative estimator proposed by Powell, Stock, and Stoker...

  4. Quantum Averaging of Squeezed States of Light

    DEFF Research Database (Denmark)

    Squeezing has been recognized as the main resource for quantum information processing and an important resource for beating classical detection strategies. It is therefore of high importance to reliably generate stable squeezing over longer periods of time. The averaging procedure for a single qu...

  5. Bayesian Model Averaging for Propensity Score Analysis

    Science.gov (United States)

    Kaplan, David; Chen, Jianshen

    2013-01-01

    The purpose of this study is to explore Bayesian model averaging in the propensity score context. Previous research on Bayesian propensity score analysis does not take into account model uncertainty. In this regard, an internally consistent Bayesian framework for model building and estimation must also account for model uncertainty. The…

  6. A dynamic analysis of moving average rules

    NARCIS (Netherlands)

    Chiarella, C.; He, X.Z.; Hommes, C.H.

    2006-01-01

    The use of various moving average (MA) rules remains popular with financial market practitioners. These rules have recently become the focus of a number of empirical studies, but there have been very few studies of financial market models where some agents employ technical trading rules of the type

  7. Average utility maximization: A preference foundation

    NARCIS (Netherlands)

    A.V. Kothiyal (Amit); V. Spinu (Vitalie); P.P. Wakker (Peter)

    2014-01-01

    This paper provides necessary and sufficient preference conditions for average utility maximization over sequences of variable length. We obtain full generality by using a new algebraic technique that exploits the rich structure naturally provided by the variable length of the sequences.

  8. High average-power induction linacs

    Energy Technology Data Exchange (ETDEWEB)

    Prono, D.S.; Barrett, D.; Bowles, E.; Caporaso, G.J.; Chen, Yu-Jiuan; Clark, J.C.; Coffield, F.; Newton, M.A.; Nexsen, W.; Ravenscroft, D.

    1989-03-15

    Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of approximately 50-ns duration pulses to > 100 MeV. In this paper we report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs.

  9. High Average Power Optical FEL Amplifiers

    CERN Document Server

    Ben-Zvi, I; Litvinenko, V

    2005-01-01

    Historically, the first demonstration of the FEL was in an amplifier configuration at Stanford University. There were other notable instances of amplifying a seed laser, such as the LLNL amplifier and the BNL ATF High-Gain Harmonic Generation FEL. However, for the most part FELs are operated as oscillators or self-amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is for very high average power generation, for instance a 100 kW average power FEL. The high electron beam power, high brightness and high efficiency that can be achieved with photoinjectors and superconducting energy recovery linacs combine well with the high-gain FEL amplifier to produce unprecedented average power FELs with some advantages. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier that is being designed to operate on the 0.5 ampere Energy Recovery Li...

  10. Independence, Odd Girth, and Average Degree

    DEFF Research Database (Denmark)

    Löwenstein, Christian; Pedersen, Anders Sune; Rautenbach, Dieter;

    2011-01-01

      We prove several tight lower bounds in terms of the order and the average degree for the independence number of graphs that are connected and/or satisfy some odd girth condition. Our main result is the extension of a lower bound for the independence number of triangle-free graphs of maximum...

  11. Full averaging of fuzzy impulsive differential inclusions

    Directory of Open Access Journals (Sweden)

    Natalia V. Skripnik

    2010-09-01

    In this paper the substantiation of the method of full averaging for fuzzy impulsive differential inclusions is studied. We extend similar results for impulsive differential inclusions with Hukuhara derivative (Skripnik, 2007), for fuzzy impulsive differential equations (Plotnikov and Skripnik, 2009), and for fuzzy differential inclusions (Skripnik, 2009).

  12. A dynamic analysis of moving average rules

    NARCIS (Netherlands)

    C. Chiarella; X.Z. He; C.H. Hommes

    2006-01-01

    The use of various moving average (MA) rules remains popular with financial market practitioners. These rules have recently become the focus of a number of empirical studies, but there have been very few studies of financial market models where some agents employ technical trading rules of the type use

  13. Distribution of population averaged observables in stochastic gene expression

    Science.gov (United States)

    Bhattacharyya, Bhaswati; Kalay, Ziya

    2014-03-01

    Observation of phenotypic diversity in a population of genetically identical cells is often linked to the stochastic nature of chemical reactions involved in gene regulatory networks. We investigate the distribution of population-averaged gene expression levels as a function of population, or sample, size for several stochastic gene expression models to find out to what extent population-averaged quantities reflect the underlying mechanism of gene expression. We consider three basic gene regulation networks corresponding to transcription with and without gene state switching and translation. Using analytical expressions for the probability generating function (pgf) of observables and Large Deviation Theory, we calculate the distribution of population-averaged mRNA and protein levels as a function of model parameters and population size. We validate our results using stochastic simulations and also report exact results on the asymptotic properties of population averages, which show qualitative differences for different models. We calculate the skewness and coefficient of variation for pgfs to estimate the sample size required for a population average that contains information about gene expression models. This is relevant to experiments where a large number of data points are unavailable.
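
    As a toy illustration (not the authors' models or code) of how the spread of a population average depends on sample size, the sketch below draws mRNA counts from the constitutive birth-death model, whose steady state is Poisson with mean k/γ; the standard deviation of the average shrinks like 1/sqrt(n_cells).

    import numpy as np

    rng = np.random.default_rng(5)
    mean_mrna = 10.0                      # k / gamma, illustrative value
    for n_cells in (10, 100, 1000):
        averages = rng.poisson(mean_mrna, size=(5000, n_cells)).mean(axis=1)
        print(n_cells, averages.std())    # decreases roughly as 1/sqrt(n_cells)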

  14. Averaging cross section data so we can fit it

    Energy Technology Data Exchange (ETDEWEB)

    Brown, D. [Brookhaven National Lab. (BNL), Upton, NY (United States). NNDC

    2014-10-23

    The 56Fe cross sections we are interested in have a lot of fluctuations. We would like to fit the average of the cross section with cross sections calculated within EMPIRE. EMPIRE is a Hauser-Feshbach theory based nuclear reaction code and requires cross sections to be smoothed using a Lorentzian profile. The plan is to fit EMPIRE to these cross sections in the fast region (say above 500 keV).
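
    A minimal sketch of the Lorentzian smoothing step (illustrative; EMPIRE's own smoothing may differ in detail): each smoothed value is a Lorentzian-weighted average of the pointwise cross section, renormalized so that the truncated profile still integrates to one on the finite energy grid.

    import numpy as np

    def lorentzian_smooth(E, sigma, gamma):
        """Smooth cross sections sigma(E) with a Lorentzian of half-width
        gamma (same units as the energy grid E)."""
        E, sigma = np.asarray(E, float), np.asarray(sigma, float)
        out = np.empty_like(sigma)
        for i, e0 in enumerate(E):
            profile = (gamma / np.pi) / ((E - e0) ** 2 + gamma ** 2)
            out[i] = np.trapz(profile * sigma, E) / np.trapz(profile, E)
        return out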

  15. Reachable volume RRT

    KAUST Repository

    McMahon, Troy

    2015-05-01

    Reachable volumes are a new technique that allows one to efficiently restrict sampling to feasible/reachable regions of the planning space even for high degree of freedom and highly constrained problems. However, they have so far only been applied to graph-based sampling-based planners. In this paper we develop the methodology to apply reachable volumes to tree-based planners such as Rapidly-Exploring Random Trees (RRTs). In particular, we propose a reachable volume RRT called RVRRT that can solve high degree of freedom problems and problems with constraints. To do so, we develop a reachable volume stepping function, a reachable volume expand function, and a distance metric based on these operations. We also present a reachable volume local planner to ensure that local paths satisfy constraints for methods such as PRMs. We show experimentally that RVRRTs can solve constrained problems with as many as 64 degrees of freedom and unconstrained problems with as many as 134 degrees of freedom. RVRRTs can solve problems more efficiently than existing methods, requiring fewer nodes and collision detection calls. We also show that it is capable of solving difficult problems that existing methods cannot.

  16. Trajectory averaging for stochastic approximation MCMC algorithms

    CERN Document Server

    Liang, Faming

    2010-01-01

    The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400--407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimizations. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305--320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE al...
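
    Trajectory averaging is the stochastic-approximation analogue of Polyak-Ruppert averaging: alongside the usual Robbins-Monro update one keeps a running mean of the iterates, which is typically a more efficient estimator. A minimal sketch on a toy root-finding problem (not the SAMCMC setting of the paper):

    import numpy as np

    def robbins_monro_averaged(field, theta0, n_iter=10000, a=1.0, alpha=0.6):
        """Iterate theta_{k+1} = theta_k - a_k * field(theta_k), with noisy
        field evaluations and step a_k = a / k**alpha; return the last
        iterate and the trajectory average."""
        rng = np.random.default_rng(0)
        theta = float(theta0)
        avg = 0.0
        for k in range(1, n_iter + 1):
            theta -= (a / k ** alpha) * field(theta, rng)
            avg += (theta - avg) / k      # running mean of the trajectory
        return theta, avg

    # toy example: find the root of h(theta) = theta from noisy evaluations
    last, traj_avg = robbins_monro_averaged(
        lambda th, rng: th + rng.normal(), theta0=5.0)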

  17. ANALYSIS OF THE FACTORS AFFECTING THE AVERAGE

    Directory of Open Access Journals (Sweden)

    Carmen BOGHEAN

    2013-12-01

    Productivity in agriculture most relevantly and concisely expresses the economic efficiency of using the factors of production. Labour productivity is affected by a considerable number of variables (including the relationship system and interdependence between factors), which differ in each economic sector and influence it, giving rise to a series of technical, economic and organizational idiosyncrasies. The purpose of this paper is to analyse the underlying factors of average work productivity in agriculture, forestry and fishing. The analysis will take into account the data concerning the economically active population and the gross added value in agriculture, forestry and fishing in Romania during 2008-2011. The distribution of average work productivity across the factors affecting it is analysed by means of the u-substitution method.

  18. Time-average dynamic speckle interferometry

    Science.gov (United States)

    Vladimirov, A. P.

    2014-05-01

    For the study of microscopic processes occurring at the structural level in solids and thin biological objects, the method of dynamic speckle interferometry has been successfully applied. However, the method has disadvantages. The purpose of this report is to acquaint colleagues with the method of time averaging in dynamic speckle interferometry of microscopic processes, which eliminates these shortcomings. The main idea of the method is to choose an averaging time that exceeds the characteristic correlation (relaxation) time of the most rapid process. The theory of the method for a thin phase object and a reflecting object is given. Results are presented for an experiment on the high-cycle fatigue of steel and for an experiment estimating the biological activity of a monolayer of cells cultivated on a transparent substrate. It is shown that the method allows one to visualize, in real time, the accumulation of fatigue damage and to reliably estimate the activity of cells with and without viruses.

  19. Average Annual Rainfall over the Globe

    Science.gov (United States)

    Agrawal, D. C.

    2013-01-01

    The atmospheric recycling of water is a very important phenomenon on the globe because it not only refreshes the water but it also redistributes it over land and oceans/rivers/lakes throughout the globe. This is made possible by the solar energy intercepted by the Earth. The half of the globe facing the Sun, on the average, intercepts 1.74 ×…

  20. The Ghirlanda-Guerra identities without averaging

    CERN Document Server

    Chatterjee, Sourav

    2009-01-01

    The Ghirlanda-Guerra identities are one of the most mysterious features of spin glasses. We prove the GG identities in a large class of models that includes the Edwards-Anderson model, the random field Ising model, and the Sherrington-Kirkpatrick model in the presence of a random external field. Previously, the GG identities were rigorously proved only 'on average' over a range of temperatures or under small perturbations.

  1. Average Annual Rainfall over the Globe

    Science.gov (United States)

    Agrawal, D. C.

    2013-01-01

    The atmospheric recycling of water is a very important phenomenon on the globe because it not only refreshes the water but it also redistributes it over land and oceans/rivers/lakes throughout the globe. This is made possible by the solar energy intercepted by the Earth. The half of the globe facing the Sun, on the average, intercepts 1.74 ×…

  2. Condition-dependent cell volume and concentration of Escherichia coli to facilitate data conversion for systems biology modeling

    NARCIS (Netherlands)

    Volkmer, Benjamin; Heinemann, Matthias

    2011-01-01

    Systems biology modeling typically requires quantitative experimental data such as intracellular concentrations or copy numbers per cell. In order to convert population-averaged omics measurement data to intracellular concentrations or cellular copy numbers, the total cell volume and number of cells
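
    The conversion the abstract refers to is arithmetically simple once a condition-dependent cell volume is known; the sketch below uses an illustrative volume (1 fL is a typical order of magnitude for an E. coli cell).

    AVOGADRO = 6.022e23  # molecules per mole

    def copies_to_concentration(copies_per_cell, cell_volume_fl):
        """Molar concentration from molecules per cell and cell volume in
        femtoliters (1 fL = 1e-15 L)."""
        return copies_per_cell / (AVOGADRO * cell_volume_fl * 1e-15)

    # e.g. 1000 copies in a 1 fL cell is about 1.7 micromolar
    c = copies_to_concentration(1000, 1.0)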

  3. Geomagnetic effects on the average surface temperature

    Science.gov (United States)

    Ballatore, P.

    Several results have previously shown that solar activity can be related to cloudiness and the surface solar radiation intensity (Svensmark and Friis-Christensen, J. Atmos. Sol. Terr. Phys., 59, 1225, 1997; Veretenenko and Pudovkin, J. Atmos. Sol. Terr. Phys., 61, 521, 1999). Here, the possible relationships between the averaged surface temperature and the solar wind parameters or geomagnetic activity indices are investigated. The temperature data used are the monthly SST maps (generated at RAL and available from the related ESRIN/ESA database), which represent the averaged surface temperature with a spatial resolution of 0.5°x0.5° and cover the entire globe. The interplanetary data and the geomagnetic data are from the USA National Space Science Data Center. The time interval considered is 1995-2000. Specifically, possible associations and/or correlations of the average temperature with the interplanetary magnetic field Bz component and with the Kp index are considered and differentiated taking into account separate geographic and geomagnetic planetary regions.

  4. Unscrambling The "Average User" Of Habbo Hotel

    Directory of Open Access Journals (Sweden)

    Mikael Johnson

    2007-01-01

    The “user” is an ambiguous concept in human-computer interaction and information systems. Analyses of users as social actors, participants, or configured users delineate approaches to studying design-use relationships. Here, a developer’s reference to a figure of speech, termed the “average user,” is contrasted with design guidelines. The aim is to create an understanding about categorization practices in design through a case study about the virtual community, Habbo Hotel. A qualitative analysis highlighted not only the meaning of the “average user,” but also the work that both the developer and the category contribute to this meaning. The average user (a) represents the unknown, (b) influences the boundaries of the target user groups, (c) legitimizes the designer to disregard marginal user feedback, and (d) keeps the design space open, thus allowing for creativity. The analysis shows how design and use are intertwined and highlights the developers’ role in governing different users’ interests.

  5. On Backus average for generally anisotropic layers

    CERN Document Server

    Bos, Len; Slawinski, Michael A; Stanoev, Theodore

    2016-01-01

    In this paper, following the Backus (1962) approach, we examine expressions for elasticity parameters of a homogeneous generally anisotropic medium that is long-wave-equivalent to a stack of thin generally anisotropic layers. These expressions reduce to the results of Backus (1962) for the case of isotropic and transversely isotropic layers. In the over half-a-century since the publication of Backus (1962) there have been numerous publications applying and extending that formulation. However, neither George Backus nor the authors of the present paper are aware of further examinations of the mathematical underpinnings of the original formulation; hence, this paper. We prove that---within the long-wave approximation---if the thin layers obey stability conditions then so does the equivalent medium. We examine---within the Backus-average context---the approximation of the average of a product as the product of averages, and express it as a proposition in terms of an upper bound. In the presented examination we use the e...
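
    The approximation in question replaces the average of a product by the product of averages; the gap is exactly the covariance of the layer properties, <fg> - <f><g> = cov(f, g), and is therefore small when layer-to-layer variability is small. A numerical illustration with synthetic layer properties (not from the paper):

    import numpy as np

    rng = np.random.default_rng(2)
    f = 1.0 + 0.05 * rng.standard_normal(1000)   # one layer property
    g = 1.0 + 0.05 * rng.standard_normal(1000)   # another layer property
    gap = abs(np.mean(f * g) - np.mean(f) * np.mean(g))
    # gap equals |cov(f, g)| and vanishes as the layer variability shrinks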

  6. A simple algorithm for averaging spike trains.

    Science.gov (United States)

    Julienne, Hannah; Houghton, Conor

    2013-02-25

    Although spike trains are the principal channel of communication between neurons, a single stimulus will elicit different spike trains from trial to trial. This variability, in both spike timings and spike number, can obscure the temporal structure of spike trains and often means that computations need to be run on numerous spike trains in order to extract features common across all the responses to a particular stimulus. This can increase the computational burden and obscure analytical results. As a consequence, it is useful to consider how to calculate a central spike train that summarizes a set of trials. Indeed, averaging responses over trials is routine for other signal types. Here, a simple method for finding a central spike train is described. The spike trains are first mapped to functions, these functions are averaged, and a greedy algorithm is then used to map the average function back to a spike train. The central spike trains are tested for a large data set. Their performance on a classification-based test is considerably better than the performance of the medoid spike trains.
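
    A sketch of the three steps described above — smooth each train into a function, average the functions, greedily map the average back to spikes — with an exponential kernel and a simple greedy criterion standing in for the paper's exact choices:

    import numpy as np

    def central_spike_train(trains, t_max, dt=1e-3, tau=0.01):
        """Central spike train for a set of spike-time lists (seconds)."""
        t = np.arange(0.0, t_max, dt)
        kernel = np.exp(-np.arange(0.0, 5 * tau, dt) / tau)

        def smooth(spikes):
            f = np.zeros_like(t)
            for s in spikes:
                i = int(s / dt)
                if i < len(t):
                    f[i] += 1.0
            return np.convolve(f, kernel)[: len(t)]

        target = np.mean([smooth(s) for s in trains], axis=0)
        approx = np.zeros_like(t)
        spikes = []
        n_avg = int(round(np.mean([len(s) for s in trains])))
        for _ in range(n_avg):                   # greedy spike placement
            i = int(np.argmax(target - approx))  # largest remaining deficit
            spikes.append(t[i])
            unit = np.zeros_like(t)
            unit[i] = 1.0
            approx += np.convolve(unit, kernel)[: len(t)]
        return sorted(spikes)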

  7. Disk-averaged synthetic spectra of Mars

    Science.gov (United States)

    Tinetti, Giovanna; Meadows, Victoria S.; Crisp, David; Fong, William; Velusamy, Thangasamy; Snively, Heather

    2005-01-01

    The principal goal of the NASA Terrestrial Planet Finder (TPF) and European Space Agency's Darwin mission concepts is to directly detect and characterize extrasolar terrestrial (Earth-sized) planets. This first generation of instruments is expected to provide disk-averaged spectra with modest spectral resolution and signal-to-noise. Here we use a spatially and spectrally resolved model of a Mars-like planet to study the detectability of a planet's surface and atmospheric properties from disk-averaged spectra. We explore the detectability as a function of spectral resolution and wavelength range, for both the proposed visible coronagraph (TPF-C) and mid-infrared interferometer (TPF-I/Darwin) architectures. At the core of our model is a spectrum-resolving (line-by-line) atmospheric/surface radiative transfer model. This model uses observational data as input to generate a database of spatially resolved synthetic spectra for a range of illumination conditions and viewing geometries. The model was validated against spectra recorded by the Mars Global Surveyor-Thermal Emission Spectrometer and the Mariner 9-Infrared Interferometer Spectrometer. Results presented here include disk-averaged synthetic spectra, light curves, and the spectral variability at visible and mid-infrared wavelengths for Mars as a function of viewing angle, illumination, and season. We also considered the differences in the spectral appearance of an increasingly ice-covered Mars, as a function of spectral resolution, signal-to-noise and integration time for both TPF-C and TPF-I/Darwin.

  8. Spatial averaging infiltration model for layered soil

    Institute of Scientific and Technical Information of China (English)

    HU HePing; YANG ZhiYong; TIAN FuQiang

    2009-01-01

    To quantify the influences of soil heterogeneity on infiltration, a spatial averaging infiltration model for layered soil (SAI model) is developed by coupling the spatial averaging approach proposed by Chen et al. and the Generalized Green-Ampt model proposed by Jia et al. In the SAI model, the spatial heterogeneity along the horizontal direction is described by a probability distribution function, while that along the vertical direction is represented by the layered soils. The SAI model is tested on a typical soil using Monte Carlo simulations as the base model. The results show that the SAI model can directly incorporate the influence of spatial heterogeneity on infiltration on the macro scale. It is also found that the homogeneous assumption of soil hydraulic conductivity along the horizontal direction will overestimate the infiltration rate, while that along the vertical direction will underestimate the infiltration rate significantly during rainstorm periods. The SAI model is adopted in the spatial averaging hydrological model developed by the authors, and the results prove that it can be applied in macro-scale hydrological and land surface process modeling in a promising way.
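
    The horizontal averaging can be illustrated by taking the expectation of a point-scale Green-Ampt infiltration capacity, f = Ks * (1 + psi * dtheta / F), over a distribution of saturated conductivity Ks; the lognormal choice and parameter values below are illustrative, not the paper's.

    import numpy as np

    def green_ampt_rate(Ks, psi, dtheta, F):
        """Point-scale Green-Ampt infiltration capacity for cumulative
        infiltration F, suction head psi and moisture deficit dtheta."""
        return Ks * (1.0 + psi * dtheta / F)

    rng = np.random.default_rng(4)
    Ks = rng.lognormal(mean=np.log(1e-6), sigma=0.5, size=10000)   # m/s
    macro_rate = green_ampt_rate(Ks, psi=0.2, dtheta=0.3, F=0.05).mean()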

  9. Spatial averaging infiltration model for layered soil

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    To quantify the influences of soil heterogeneity on infiltration, a spatial averaging infiltration model for layered soil (SAI model) is developed by coupling the spatial averaging approach proposed by Chen et al. and the Generalized Green-Ampt model proposed by Jia et al. In the SAI model, the spatial heterogeneity along the horizontal direction is described by a probability distribution function, while that along the vertical direction is represented by the layered soils. The SAI model is tested on a typical soil using Monte Carlo simulations as the base model. The results show that the SAI model can directly incorporate the influence of spatial heterogeneity on infiltration on the macro scale. It is also found that the homogeneous assumption of soil hydraulic conductivity along the horizontal direction will overestimate the infiltration rate, while that along the vertical direction will underestimate the infiltration rate significantly during rainstorm periods. The SAI model is adopted in the spatial averaging hydrological model developed by the authors, and the results prove that it can be applied in macro-scale hydrological and land surface process modeling in a promising way.

  10. Disk-averaged synthetic spectra of Mars

    CERN Document Server

    Tinetti, G; Fong, W; Meadows, V S; Snively, H; Velusamy, T; Crisp, David; Fong, William; Meadows, Victoria S.; Snively, Heather; Tinetti, Giovanna; Velusamy, Thangasamy

    2004-01-01

    The principal goal of the NASA Terrestrial Planet Finder (TPF) and ESA Darwin mission concepts is to directly detect and characterize extrasolar terrestrial (Earth-sized) planets. This first generation of instruments is expected to provide disk-averaged spectra with modest spectral resolution and signal-to-noise. Here we use a spatially and spectrally resolved model of the planet Mars to study the detectability of a planet's surface and atmospheric properties from disk-averaged spectra as a function of spectral resolution and wavelength range, for both the proposed visible coronagraph (TPF-C) and mid-infrared interferometer (TPF-I/Darwin) architectures. At the core of our model is a spectrum-resolving (line-by-line) atmospheric/surface radiative transfer model which uses observational data as input to generate a database of spatially-resolved synthetic spectra for a range of illumination conditions (phase angles) and viewing geometries. Results presented here include disk-averaged synthetic spectra, light-cur...

  11. Disk-averaged synthetic spectra of Mars

    Science.gov (United States)

    Tinetti, Giovanna; Meadows, Victoria S.; Crisp, David; Fong, William; Velusamy, Thangasamy; Snively, Heather

    2005-01-01

    The principal goal of the NASA Terrestrial Planet Finder (TPF) and European Space Agency's Darwin mission concepts is to directly detect and characterize extrasolar terrestrial (Earth-sized) planets. This first generation of instruments is expected to provide disk-averaged spectra with modest spectral resolution and signal-to-noise. Here we use a spatially and spectrally resolved model of a Mars-like planet to study the detectability of a planet's surface and atmospheric properties from disk-averaged spectra. We explore the detectability as a function of spectral resolution and wavelength range, for both the proposed visible coronagraph (TPF-C) and mid-infrared interferometer (TPF-I/Darwin) architectures. At the core of our model is a spectrum-resolving (line-by-line) atmospheric/surface radiative transfer model. This model uses observational data as input to generate a database of spatially resolved synthetic spectra for a range of illumination conditions and viewing geometries. The model was validated against spectra recorded by the Mars Global Surveyor-Thermal Emission Spectrometer and the Mariner 9-Infrared Interferometer Spectrometer. Results presented here include disk-averaged synthetic spectra, light curves, and the spectral variability at visible and mid-infrared wavelengths for Mars as a function of viewing angle, illumination, and season. We also considered the differences in the spectral appearance of an increasingly ice-covered Mars, as a function of spectral resolution, signal-to-noise and integration time for both TPF-C and TPF-I/Darwin.

  12. Exponential reduction of finite volume effects with twisted boundary conditions

    CERN Document Server

    Cherman, Aleksey; Wagman, Michael L; Yaffe, Laurence G

    2016-01-01

    Flavor-twisted boundary conditions can be used for exponential reduction of finite volume artifacts in flavor-averaged observables in lattice QCD calculations with $SU(N_f)$ light quark flavor symmetry. Finite volume artifact reduction arises from destructive interference effects in a manner closely related to the phase averaging which leads to large $N_c$ volume independence. With a particular choice of flavor-twisted boundary conditions, finite volume artifacts for flavor-singlet observables in a hypercubic spacetime volume are reduced to the size of finite volume artifacts in a spacetime volume with periodic boundary conditions that is four times larger.

  13. 3D MRI volume sizing of knee meniscus cartilage.

    Science.gov (United States)

    Stone, K R; Stoller, D W; Irving, S G; Elmquist, C; Gildengorin, G

    1994-12-01

    Meniscal replacement by allograft and meniscal regeneration through collagen meniscal scaffolds have been recently reported. To evaluate the effectiveness of a replaced or regrown meniscal cartilage, a method for measuring the size and function of the regenerated tissue in vivo is required. To solve this problem, we developed and evaluated a magnetic resonance imaging (MRI) technique to measure the volume of meniscal tissues. Twenty-one intact fresh cadaver knees were evaluated and scanned with MRI for meniscal volume sizing. The sizing sequence was repeated six times for each of 21 lateral and 12 medial menisci. The menisci were then excised and measured by water volume displacement. Each volume displacement measurement was repeated six times. The MRI technique employed to measure the volume of the menisci was shown to correspond to that of the standard measure of volume and was just as precise. However, the MRI technique consistently underestimated the actual volume. The average of the coefficient of variation for lateral volumes was 0.04 and 0.05 for the water and the MRI measurements, respectively. For medial measurements it was 0.04 and 0.06. The correlation for the lateral menisci was r = 0.45 (p = 0.04) and for the medial menisci it was r = 0.57 (p = 0.05). We conclude that 3D MRI is precise and repeatable but not accurate when used to measure meniscal volume in vivo and therefore may only be useful for evaluating changes in meniscal allografts and meniscal regeneration templates over time.

  14. Renormalized Volume

    Science.gov (United States)

    Gover, A. Rod; Waldron, Andrew

    2017-09-01

    We develop a universal distributional calculus for regulated volumes of metrics that are suitably singular along hypersurfaces. When the hypersurface is a conformal infinity we give simple integrated distribution expressions for the divergences and anomaly of the regulated volume functional valid for any choice of regulator. For closed hypersurfaces or conformally compact geometries, methods from a previously developed boundary calculus for conformally compact manifolds can be applied to give explicit holographic formulæ for the divergences and anomaly expressed as hypersurface integrals over local quantities (the method also extends to non-closed hypersurfaces). The resulting anomaly does not depend on any particular choice of regulator, while the regulator dependence of the divergences is precisely captured by these formulæ. Conformal hypersurface invariants can be studied by demanding that the singular metric obey, smoothly and formally to a suitable order, a Yamabe type problem with boundary data along the conformal infinity. We prove that the volume anomaly for these singular Yamabe solutions is a conformally invariant integral of a local Q-curvature that generalizes the Branson Q-curvature by including data of the embedding. In each dimension this canonically defines a higher dimensional generalization of the Willmore energy/rigid string action. Recently, Graham proved that the first variation of the volume anomaly recovers the density obstructing smooth solutions to this singular Yamabe problem; we give a new proof of this result employing our boundary calculus. Physical applications of our results include studies of quantum corrections to entanglement entropies.

  15. Bayesian Model Averaging and Weighted Average Least Squares : Equivariance, Stability, and Numerical Issues

    NARCIS (Netherlands)

    De Luca, G.; Magnus, J.R.

    2011-01-01

    This article is concerned with the estimation of linear regression models with uncertainty about the choice of the explanatory variables. We introduce the Stata commands bma and wals which implement, respectively, the exact Bayesian Model Averaging (BMA) estimator and the Weighted Average Least Squa

  16. A sixth order averaged vector field method

    OpenAIRE

    Li, Haochen; Wang, Yushun; Qin, Mengzhao

    2014-01-01

    In this paper, based on the theory of rooted trees and B-series, we propose concrete formulas of the substitution law for trees of order ≤ 5. With the help of the new substitution law, we derive a B-series integrator extending the averaged vector field (AVF) method to high order. The new integrator turns out to be of order six and exactly preserves energy for Hamiltonian systems. Numerical experiments are presented to demonstrate the accuracy and the energy-preserving property of the s...
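
    The underlying (second-order) AVF method that the paper extends reads y1 = y + h * \int_0^1 f((1-s) y + s y1) ds; below is a sketch using Gauss-Legendre quadrature for the integral and fixed-point iteration for the implicit equation (the sixth-order scheme involves further B-series corrections).

    import numpy as np

    def avf_step(f, y, h, nodes=4, iters=50):
        """One averaged-vector-field step for y' = f(y)."""
        s, w = np.polynomial.legendre.leggauss(nodes)
        s, w = 0.5 * (s + 1.0), 0.5 * w          # map quadrature to [0, 1]
        y1 = y + h * f(y)                        # explicit Euler predictor
        for _ in range(iters):
            integral = sum(wi * f((1 - si) * y + si * y1)
                           for si, wi in zip(s, w))
            y1 = y + h * integral
        return y1

    # harmonic oscillator H = (q**2 + p**2) / 2: energy stays at 0.5
    f = lambda y: np.array([y[1], -y[0]])
    y = np.array([1.0, 0.0])
    for _ in range(1000):
        y = avf_step(f, y, 0.1)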

  17. Phase-averaged transport for quasiperiodic Hamiltonians

    CERN Document Server

    Bellissard, J; Schulz-Baldes, H

    2002-01-01

    For a class of discrete quasi-periodic Schroedinger operators defined by covariant representations of the rotation algebra, a lower bound on phase-averaged transport in terms of the multifractal dimensions of the density of states is proven. This result is established under a Diophantine condition on the incommensuration parameter. The relevant class of operators is distinguished by invariance with respect to symmetry automorphisms of the rotation algebra. It includes the critical Harper (almost-Mathieu) operator. As a by-product, a new solution of the frame problem associated with Weyl-Heisenberg-Gabor lattices of coherent states is given.

  18. Sparsity averaging for radio-interferometric imaging

    CERN Document Server

    Carrillo, Rafael E; Wiaux, Yves

    2014-01-01

    We propose a novel regularization method for compressive imaging in the context of the compressed sensing (CS) theory with coherent and redundant dictionaries. Natural images are often complicated and several types of structures can be present at once. It is well known that piecewise smooth images exhibit gradient sparsity, and that images with extended structures are better encapsulated in wavelet frames. Therefore, we here conjecture that promoting average sparsity or compressibility over multiple frames rather than single frames is an extremely powerful regularization prior.

  19. Fluctuations of wavefunctions about their classical average

    Energy Technology Data Exchange (ETDEWEB)

    Benet, L [Centro Internacional de Ciencias, Ciudad Universitaria, Chamilpa, Cuernavaca (Mexico); Flores, J [Centro Internacional de Ciencias, Ciudad Universitaria, Chamilpa, Cuernavaca (Mexico); Hernandez-Saldana, H [Centro Internacional de Ciencias, Ciudad Universitaria, Chamilpa, Cuernavaca (Mexico); Izrailev, F M [Centro Internacional de Ciencias, Ciudad Universitaria, Chamilpa, Cuernavaca (Mexico); Leyvraz, F [Centro Internacional de Ciencias, Ciudad Universitaria, Chamilpa, Cuernavaca (Mexico); Seligman, T H [Centro Internacional de Ciencias, Ciudad Universitaria, Chamilpa, Cuernavaca (Mexico)

    2003-02-07

    Quantum-classical correspondence for the average shape of eigenfunctions and the local spectral density of states are well-known facts. In this paper, the fluctuations of the quantum wavefunctions around the classical value are discussed. A simple random matrix model leads to a Gaussian distribution of the amplitudes whose width is determined by the classical shape of the eigenfunction. To compare this prediction with numerical calculations in chaotic models of coupled quartic oscillators, we develop a rescaling method for the components. The expectations are broadly confirmed, but deviations due to scars are observed. This effect is much reduced when both Hamiltonians have chaotic dynamics.

  20. Fluctuations of wavefunctions about their classical average

    CERN Document Server

    Bénet, L; Hernandez-Saldana, H; Izrailev, F M; Leyvraz, F; Seligman, T H

    2003-01-01

    Quantum-classical correspondence for the average shape of eigenfunctions and the local spectral density of states are well-known facts. In this paper, the fluctuations of the quantum wavefunctions around the classical value are discussed. A simple random matrix model leads to a Gaussian distribution of the amplitudes whose width is determined by the classical shape of the eigenfunction. To compare this prediction with numerical calculations in chaotic models of coupled quartic oscillators, we develop a rescaling method for the components. The expectations are broadly confirmed, but deviations due to scars are observed. This effect is much reduced when both Hamiltonians have chaotic dynamics.

  1. Grassmann Averages for Scalable Robust PCA

    OpenAIRE

    2014-01-01

    As the collection of large datasets becomes increasingly automated, the occurrence of outliers will increase—“big data” implies “big outliers”. While principal component analysis (PCA) is often used to reduce the size of data, and scalable solutions exist, it is well-known that outliers can arbitrarily corrupt the results. Unfortunately, state-of-the-art approaches for robust PCA do not scale beyond small-to-medium sized datasets. To address this, we introduce the Grassmann Average (GA), whic...
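
    At its core, the Grassmann average treats each observation as a one-dimensional subspace and averages with sign alignment; the iteration sketched below is schematic (the robust variant in the paper replaces the mean by a trimmed mean).

    import numpy as np

    def grassmann_average(X, n_iter=20, seed=0):
        """Leading subspace of the rows of X via sign-aligned averaging."""
        rng = np.random.default_rng(seed)
        q = rng.standard_normal(X.shape[1])
        q /= np.linalg.norm(q)
        for _ in range(n_iter):
            signs = np.sign(X @ q)
            signs[signs == 0] = 1.0
            v = (signs[:, None] * X).mean(axis=0)   # align, then average
            q = v / np.linalg.norm(v)
        return q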

  2. Source of non-arrhenius average relaxation time in glass-forming liquids

    DEFF Research Database (Denmark)

    Dyre, Jeppe

    1998-01-01

    A major mystery of glass-forming liquids is the non-Arrhenius temperature-dependence of the average relaxation time. This paper briefly reviews the classical phenomenological models for non-Arrhenius behavior – the free volume model and the entropy model – and critiques of these models. … Since the vibrations are anharmonic, the non-Arrhenius temperature-dependence of the average relaxation time is a consequence of the fact that the instantaneous shear modulus increases upon cooling.

  3. Detrending moving average algorithm for multifractals

    Science.gov (United States)

    Gu, Gao-Feng; Zhou, Wei-Xing

    2010-07-01

    The detrending moving average (DMA) algorithm is a widely used technique to quantify the long-term correlations of nonstationary time series and the long-range correlations of fractal surfaces, which contains a parameter θ determining the position of the detrending window. We develop multifractal detrending moving average (MFDMA) algorithms for the analysis of one-dimensional multifractal measures and higher-dimensional multifractals, which is a generalization of the DMA method. The performance of the one-dimensional and two-dimensional MFDMA methods is investigated using synthetic multifractal measures with analytical solutions for backward (θ=0), centered (θ=0.5), and forward (θ=1) detrending windows. We find that the estimated multifractal scaling exponent τ(q) and the singularity spectrum f(α) are in good agreement with the theoretical values. In addition, the backward MFDMA method has the best performance, which provides the most accurate estimates of the scaling exponents with the lowest error bars, while the centered MFDMA method has the worst performance. It is found that the backward MFDMA algorithm also outperforms the multifractal detrended fluctuation analysis. The one-dimensional backward MFDMA method is applied to analyzing the time series of the Shanghai Stock Exchange Composite Index, and its multifractal nature is confirmed.
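
    A sketch of the plain DMA fluctuation function with the position parameter theta (theta = 0, 0.5, 1 for backward, centered, forward); the multifractal MFDMA statistics are built from the same moving-average residuals. Details here are schematic, not the authors' code.

    import numpy as np

    def dma_fluctuation(x, n, theta=0.0):
        """Root-mean-square deviation of the profile of x from its moving
        average over windows of size n; F(n) ~ n**H for fractal series."""
        y = np.cumsum(np.asarray(x, dtype=float))          # the profile
        m = np.convolve(y, np.ones(n) / n, mode="valid")   # window means
        offset = int(round((n - 1) * (1 - theta)))         # window position
        resid = y[offset:offset + len(m)] - m
        return np.sqrt(np.mean(resid ** 2))

    # white noise: the fitted slope H should be close to 0.5
    rng = np.random.default_rng(1)
    x = rng.normal(size=10000)
    ns = np.array([10, 20, 40, 80, 160])
    F = np.array([dma_fluctuation(x, n) for n in ns])
    H = np.polyfit(np.log(ns), np.log(F), 1)[0]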

  4. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-10-01

    The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimizations. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll, J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper.

  5. Averaged null energy condition from causality

    Science.gov (United States)

    Hartman, Thomas; Kundu, Sandipan; Tajdini, Amirhossein

    2017-07-01

    Unitary, Lorentz-invariant quantum field theories in flat spacetime obey microcausality: commutators vanish at spacelike separation. For interacting theories in more than two dimensions, we show that this implies that the averaged null energy, ∫ du T_{uu}, must be non-negative. This non-local operator appears in the operator product expansion of local operators in the lightcone limit, and therefore contributes to n-point functions. We derive a sum rule that isolates this contribution and is manifestly positive. The argument also applies to certain higher spin operators other than the stress tensor, generating an infinite family of new constraints of the form ∫ du X_{uuu···u} ≥ 0. These lead to new inequalities for the coupling constants of spinning operators in conformal field theory, which include as special cases (but are generally stronger than) the existing constraints from the lightcone bootstrap, deep inelastic scattering, conformal collider methods, and relative entropy. We also comment on the relation to the recent derivation of the averaged null energy condition from relative entropy, and suggest a more general connection between causality and information-theoretic inequalities in QFT.

  6. MACHINE PROTECTION FOR HIGH AVERAGE CURRENT LINACS

    Energy Technology Data Exchange (ETDEWEB)

    Jordan, Kevin; Allison, Trent; Evans, Richard; Coleman, James; Grippo, Albert

    2003-05-01

    A fully integrated Machine Protection System (MPS) is critical to efficient commissioning and safe operation of all high current accelerators. The Jefferson Lab FEL [1,2] has multiple electron beam paths and many different types of diagnostic insertion devices. The MPS [3] needs to monitor both the status of these devices and the magnet settings which define the beam path. The matrix of these devices and beam paths is programmed into gate arrays; the output of the matrix is an allowable maximum average power limit. This power limit is enforced by the drive laser for the photocathode gun. The Beam Loss Monitors (BLMs), RF status, and laser safety system status are also inputs to the control matrix. There are 8 Machine Modes (electron path) and 8 Beam Modes (average power limits) that define the safe operating limits for the FEL. Combinations outside of this matrix are unsafe and the beam is inhibited. The power limits range from no beam to 2 megawatts of electron beam power.

  7. Intensity contrast of the average supergranule

    CERN Document Server

    Langfellner, J; Gizon, L

    2016-01-01

    While the velocity fluctuations of supergranulation dominate the spectrum of solar convection at the solar surface, very little is known about the fluctuations in other physical quantities like temperature or density at supergranulation scale. Using SDO/HMI observations, we characterize the intensity contrast of solar supergranulation at the solar surface. We identify the positions of ~10^4 outflow and inflow regions at supergranulation scales, from which we construct average flow maps and co-aligned intensity and magnetic field maps. In the average outflow center, the maximum intensity contrast is (7.8 ± 0.6) × 10^-4 (there is no corresponding feature in the line-of-sight magnetic field). This corresponds to a temperature perturbation of about 1.1 ± 0.1 K, in agreement with previous studies. We discover an east-west anisotropy, with a slightly deeper intensity minimum east of the outflow center. The evolution is asymmetric in time: the intensity excess is larger 8 hours before the reference t...

  8. Local average height distribution of fluctuating interfaces

    Science.gov (United States)

    Smith, Naftali R.; Meerson, Baruch; Sasorov, Pavel V.

    2017-01-01

    Height fluctuations of growing surfaces can be characterized by the probability distribution of height in a spatial point at a finite time. Recently there has been spectacular progress in the studies of this quantity for the Kardar-Parisi-Zhang (KPZ) equation in 1+1 dimensions. Here we notice that, at or above a critical dimension, the finite-time one-point height distribution is ill defined in a broad class of linear surface growth models unless the model is regularized at small scales. The regularization via a system-dependent small-scale cutoff leads to a partial loss of universality. As a possible alternative, we introduce a local average height. For the linear models, the probability density of this quantity is well defined in any dimension. The weak-noise theory for these models yields the "optimal path" of the interface conditioned on a nonequilibrium fluctuation of the local average height. As an illustration, we consider the conserved Edwards-Wilkinson (EW) equation, where, without regularization, the finite-time one-point height distribution is ill defined in all physical dimensions. We also determine the optimal path of the interface in a closely related problem of the finite-time height-difference distribution for the nonconserved EW equation in 1+1 dimensions. Finally, we discuss a UV catastrophe in the finite-time one-point distribution of height in the (nonregularized) KPZ equation in 2+1 dimensions.

  9. Asymptotic Time Averages and Frequency Distributions

    Directory of Open Access Journals (Sweden)

    Muhammad El-Taha

    2016-01-01

    Consider an arbitrary nonnegative deterministic process (in a stochastic setting {X(t), t≥0} is a fixed realization, i.e., a sample path of the underlying stochastic process) with state space S=(-∞,∞). Using a sample-path approach, we give necessary and sufficient conditions for the long-run time average of a measurable function of the process to be equal to the expectation taken with respect to the same measurable function of its long-run frequency distribution. The results are further extended to allow an unrestricted parameter (time) space. Examples are provided to show that our condition is not superfluous and that it is weaker than uniform integrability. The case of discrete-time processes is also considered. The relationship to previously known sufficient conditions, usually given in stochastic settings, is also discussed. Our approach is applied to regenerative processes, and an extension of a well-known result is given. For researchers interested in sample-path analysis, our results give them the choice to work with the time average of a process or its frequency distribution function and go back and forth between the two under a mild condition.
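
    In symbols (notation assumed here for illustration, not drawn from the paper): for a fixed sample path {X(t)} and a measurable function f, the equality in question is

        \lim_{T\to\infty} \frac{1}{T}\int_0^T f\bigl(X(t)\bigr)\,dt
            \;=\; \int_S f(x)\,dF(x),

    where F denotes the long-run frequency distribution of the process.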

  10. Homodyne measurement of average photon number

    CERN Document Server

    Webb, J G; Huntington, E H

    2005-01-01

    We describe a new scheme for the measurement of mean photon flux at an arbitrary optical sideband frequency using homodyne detection. Experimental implementation of the technique requires an acousto-optic modulator (AOM) in addition to the homodyne detector, and does not require phase locking. The technique exhibits polarisation, frequency and spatial mode selectivity, as well as much improved speed, resolution and dynamic range compared to linear photodetectors and avalanche photodiodes (APDs), with potential application to quantum state tomography and information encoding using an optical frequency basis. Experimental data also directly confirm the quantum mechanical description of vacuum noise.

  11. Scaling registration of multiview range scans via motion averaging

    Science.gov (United States)

    Zhu, Jihua; Zhu, Li; Jiang, Zutao; Li, Zhongyu; Li, Chen; Zhang, Fan

    2016-07-01

    Three-dimensional modeling of a scene or object requires the registration of multiple range scans, which are obtained by a range sensor from different viewpoints. An approach is proposed for scaling registration of multiview range scans via motion averaging. First, a method is presented to estimate the overlap percentages of all scan pairs involved in the multiview registration. Then, a variant of the iterative closest point algorithm is presented to calculate relative motions (scaling transformations) for the scan pairs with high overlap percentages. Subsequently, the proposed motion averaging algorithm transforms these relative motions into the global motions of the multiview registration. Parallel computation is also introduced to increase the efficiency of multiview registration, and an error criterion is presented for accuracy evaluation of multiview registration results, which makes it easy to compare the results of different multiview registration approaches. Experimental results carried out with publicly available datasets demonstrate its superiority over related approaches.
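
    As a rough sketch of the motion-averaging step restricted to the scale component (the function name and gauge choice are illustrative assumptions, not the paper's implementation), noisy relative scale estimates between overlapping scan pairs can be made globally consistent by least squares in the log domain:

        # Hypothetical sketch: recover global scales from redundant pairwise
        # estimates s_ij (scale of scan j relative to scan i).
        import numpy as np

        def average_scales(n_scans, pairs):
            A = np.zeros((len(pairs) + 1, n_scans))
            b = np.zeros(len(pairs) + 1)
            for row, (i, j, s_ij) in enumerate(pairs):
                A[row, j], A[row, i] = 1.0, -1.0  # log s_j - log s_i = log s_ij
                b[row] = np.log(s_ij)
            A[-1, 0] = 1.0                        # gauge fixing: scan 0 has unit scale
            x, *_ = np.linalg.lstsq(A, b, rcond=None)
            return np.exp(x)

        print(average_scales(3, [(0, 1, 2.05), (1, 2, 1.48), (0, 2, 2.96)]))

    Rotations and translations would be averaged analogously, and down-weighting scan pairs with low overlap percentages fits naturally into the same least-squares formulation.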

  12. Asymmetric network connectivity using weighted harmonic averages

    Science.gov (United States)

    Morrison, Greg; Mahadevan, L.

    2011-02-01

    We propose a non-metric measure of the "closeness" felt between two nodes in an undirected, weighted graph using a simple weighted harmonic average of connectivity, that is, a real-valued Generalized Erdös Number (GEN). While our measure is developed with a collaborative network in mind, the approach can be of use in a variety of artificial and real-world networks. We are able to distinguish between network topologies that standard distance metrics view as identical, and use our measure to study some simple analytically tractable networks. We show how this might be used to look at asymmetry in authorship networks such as those that inspired the integer Erdös numbers in mathematical coauthorship. We also show the utility of our approach in devising a ratings scheme, which we apply to the data from the Netflix Prize, and find a significant improvement using our method over a baseline.
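
    A toy illustration of the building block (not the paper's exact recursion): a weighted harmonic average is dominated by its smallest terms, which is what lets a closeness measure built on it value one strong tie differently from many weak ones.

        def weighted_harmonic_mean(values, weights):
            # Small values (close connections) dominate the result.
            return sum(weights) / sum(w / v for v, w in zip(values, weights))

        print(weighted_harmonic_mean([1.0], [3.0]))                # 1.0
        print(weighted_harmonic_mean([3.0, 3.0, 3.0], [1.0] * 3))  # 3.0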

  13. Averaged Null Energy Condition from Causality

    CERN Document Server

    Hartman, Thomas; Tajdini, Amirhossein

    2016-01-01

    Unitary, Lorentz-invariant quantum field theories in flat spacetime obey microcausality: commutators vanish at spacelike separation. For interacting theories in more than two dimensions, we show that this implies that the averaged null energy, $\int du\, T_{uu}$, must be positive. This non-local operator appears in the operator product expansion of local operators in the lightcone limit, and therefore contributes to $n$-point functions. We derive a sum rule that isolates this contribution and is manifestly positive. The argument also applies to certain higher spin operators other than the stress tensor, generating an infinite family of new constraints of the form $\int du\, X_{uuu\cdots u} \geq 0$. These lead to new inequalities for the coupling constants of spinning operators in conformal field theory, which include as special cases (but are generally stronger than) the existing constraints from the lightcone bootstrap, deep inelastic scattering, conformal collider methods, and relative entropy. We also comment ...

  14. Average Gait Differential Image Based Human Recognition

    Directory of Open Access Journals (Sweden)

    Jinyan Chen

    2014-01-01

    The difference between adjacent frames of human walking contains useful information for human gait identification. Based on this idea, a silhouette-difference-based human gait recognition method, named the average gait differential image (AGDI), is proposed in this paper. The AGDI is generated by the accumulation of the silhouette differences between adjacent frames. The advantage of this method lies in that, as a feature image, it can preserve both the kinetic and static information of walking. Compared to the gait energy image (GEI), the AGDI is better suited to representing the variation of silhouettes during walking. Two-dimensional principal component analysis (2DPCA) is used to extract features from the AGDI. Experiments on the CASIA dataset show that the AGDI has better identification and verification performance than the GEI. Compared to PCA, 2DPCA is a more efficient feature extraction method with lower memory consumption in gait-based recognition.
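
    A minimal numpy sketch of the AGDI construction as described above (assuming pre-aligned, equally sized binary silhouette frames; the function name is illustrative):

        import numpy as np

        def average_gait_differential_image(silhouettes):
            frames = np.asarray(silhouettes, dtype=float)
            diffs = np.abs(np.diff(frames, axis=0))  # |frame[t+1] - frame[t]|
            return diffs.mean(axis=0)                # accumulate and average

    Feature extraction (e.g., 2DPCA) would then operate on the resulting AGDI image.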

  15. Geographic Gossip: Efficient Averaging for Sensor Networks

    CERN Document Server

    Dimakis, Alexandros G; Wainwright, Martin J

    2007-01-01

    Gossip algorithms for distributed computation are attractive due to their simplicity, distributed nature, and robustness in noisy and uncertain environments. However, using standard gossip algorithms can lead to a significant waste in energy by repeatedly recirculating redundant information. For realistic sensor network model topologies like grids and random geometric graphs, the inefficiency of gossip schemes is related to the slow mixing times of random walks on the communication graph. We propose and analyze an alternative gossiping scheme that exploits geographic information. By utilizing geographic routing combined with a simple resampling method, we demonstrate substantial gains over previously proposed gossip protocols. For regular graphs such as the ring or grid, our algorithm improves standard gossip by factors of $n$ and $\sqrt{n}$ respectively. For the more challenging case of random geometric graphs, our algorithm computes the true average to accuracy $\epsilon$ using $O(\frac{n^{1.5}}{\sqrt{\log ...
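
    For contrast with the geographic scheme, a minimal sketch of the standard randomized pairwise gossip that the paper improves upon (graph and parameters are illustrative):

        import random

        def gossip_average(values, edges, rounds=10000, seed=0):
            rng = random.Random(seed)
            x = list(values)
            for _ in range(rounds):
                i, j = rng.choice(edges)           # activate a random edge
                x[i] = x[j] = 0.5 * (x[i] + x[j])  # endpoints exchange and average
            return x

        ring = [(i, (i + 1) % 8) for i in range(8)]
        print(gossip_average(range(8), ring))  # every entry approaches 3.5

    The slow mixing criticized above shows up as the large number of rounds such schemes need on ring- and grid-like topologies.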

  16. Bivariate phase-rectified signal averaging

    CERN Document Server

    Schumann, Aicko Y; Bauer, Axel; Schmidt, Georg

    2008-01-01

    Phase-Rectified Signal Averaging (PRSA) was shown to be a powerful tool for the study of quasi-periodic oscillations and nonlinear effects in non-stationary signals. Here we present a bivariate PRSA technique for the study of the inter-relationship between two simultaneous data recordings. Its performance is compared with traditional cross-correlation analysis, which, however, does not work well for non-stationary data and cannot distinguish the coupling directions in complex nonlinear situations. We show that bivariate PRSA allows the analysis of events in one signal at times where the other signal is in a certain phase or state; it is stable in the presence of noise and robust against non-stationarities.
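
    A bare-bones sketch of the bivariate idea (the anchor criterion and names are simplifying assumptions; the two signals are assumed to be simultaneous recordings of equal length):

        import numpy as np

        def bivariate_prsa(a, b, half_window):
            a, b = np.asarray(a), np.asarray(b)
            # Anchor points defined in signal a (here: simple increase events).
            anchors = [t for t in range(half_window, len(a) - half_window)
                       if a[t] > a[t - 1]]
            # Average windows of signal b centred on the anchors found in a.
            windows = [b[t - half_window:t + half_window] for t in anchors]
            return np.mean(windows, axis=0)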

  17. Recent advances in phase shifted time averaging and stroboscopic interferometry

    Science.gov (United States)

    Styk, Adam; Józwik, Michał

    2016-08-01

    Classical Time Averaging and Stroboscopic Interferometry are widely used for MEMS/MOEMS dynamic behavior investigations. Unfortunately, both methods require extensive measurement and data-processing strategies in order to evaluate the information on the maximum amplitude of a vibrating object at a given load. In this paper, modified data-processing strategies for both techniques are introduced. These modifications allow for fast and reliable calculation of the sought value, without additional complication of the measurement systems. In the paper, both approaches are discussed and experimentally verified.

  18. Compositional dependences of average positron lifetime in binary As-S/Se glasses

    Energy Technology Data Exchange (ETDEWEB)

    Ingram, A. [Department of Physics of Opole University of Technology, 75 Ozimska str., Opole, PL-45370 (Poland); Golovchak, R., E-mail: roman_ya@yahoo.com [Department of Materials Science and Engineering, Lehigh University, 5 East Packer Avenue, Bethlehem, PA 18015-3195 (United States); Kostrzewa, M.; Wacke, S. [Department of Physics of Opole University of Technology, 75 Ozimska str., Opole, PL-45370 (Poland); Shpotyuk, M. [Lviv Polytechnic National University, 12, Bandery str., Lviv, UA-79013 (Ukraine); Shpotyuk, O. [Institute of Physics of Jan Dlugosz University, 13/15al. Armii Krajowej, Czestochowa, PL-42201 (Poland)

    2012-02-15

    The compositional dependence of the average positron lifetime is studied systematically in typical representatives of binary As-S and As-Se glasses. This dependence is shown to be opposite to the evolution of the molar volume. The origin of this anomaly is discussed in terms of the bond free solid angle concept applied to different types of structurally-intrinsic nanovoids in a glass.

  19. Spatial averaging-effects on turbulence measured by a continuous-wave coherent lidar

    DEFF Research Database (Denmark)

    Sjöholm, Mikael; Mikkelsen, Torben; Mann, Jakob;

    2009-01-01

    The influence of spatial volume averaging of a focused continuous-wave coherent Doppler lidar on observed wind turbulence in the atmospheric surface layer is described and analysed. For the first time, comparisons of lidar-measured turbulent spectra with spectra simultaneously obtained from a mast...

  20. Actuator disk model of wind farms based on the rotor average wind speed

    DEFF Research Database (Denmark)

    Han, Xing Xing; Xu, Chang; Liu, De You;

    2016-01-01

    Due to the difficulty of estimating the reference wind speed for wake modeling in a wind farm, this paper proposes a new method to calculate the momentum source based on the rotor average wind speed. The proposed model applies a volume correction factor to reduce the influence of the mesh recognition...

  1. Comparison of Statistically Modeled Contaminated Soil Volume Estimates and Actual Excavation Volumes at the Maywood FUSRAP Site - 13555

    Energy Technology Data Exchange (ETDEWEB)

    Moore, James [U.S. Army Corps of Engineers - New York District 26 Federal Plaza, New York, New York 10278 (United States); Hays, David [U.S. Army Corps of Engineers - Kansas City District 601 E. 12th Street, Kansas City, Missouri 64106 (United States); Quinn, John; Johnson, Robert; Durham, Lisa [Argonne National Laboratory, Environmental Science Division 9700 S. Cass Ave., Argonne, Illinois 60439 (United States)

    2013-07-01

    As part of the ongoing remediation process at the Maywood Formerly Utilized Sites Remedial Action Program (FUSRAP) properties, Argonne National Laboratory (Argonne) assisted the U.S. Army Corps of Engineers (USACE) New York District by providing contaminated soil volume estimates for the main site area, much of which is fully or partially remediated. As part of the volume estimation process, an initial conceptual site model (ICSM) was prepared for the entire site that captured existing information (with the exception of soil sampling results) pertinent to the possible location of surface and subsurface contamination above cleanup requirements. This ICSM was based on historical anecdotal information, aerial photographs, and the logs from several hundred soil cores that identified the depth of fill material and the depth to bedrock under the site. Specialized geostatistical software developed by Argonne was used to update the ICSM with historical sampling results and down-hole gamma survey information for hundreds of soil core locations. The updating process yielded both a best guess estimate of contamination volumes and a conservative upper bound on the volume estimate that reflected the estimate's uncertainty. Comparison of model results to actual removed soil volumes was conducted on a parcel-by-parcel basis. Where sampling data density was adequate, the actual volume matched the model's average or best guess results. Where contamination was un-characterized and unknown to the model, the actual volume exceeded the model's conservative estimate. Factors affecting volume estimation were identified to assist in planning further excavations. (authors)

  2. Technical support for the Ohio Clean Coal Technology Program. Volume 2, Baseline of knowledge concerning process modification opportunities, research needs, by-product market potential, and regulatory requirements: Final report

    Energy Technology Data Exchange (ETDEWEB)

    Olfenbuttel, R.; Clark, S.; Helper, E.; Hinchee, R.; Kuntz, C.; Means, J.; Oxley, J.; Paisley, M.; Rogers, C.; Sheppard, W.; Smolak, L. [Battelle, Columbus, OH (United States)

    1989-08-28

    This report was prepared for the Ohio Coal Development Office (OCDO) under Grant Agreement No. CDO/R-88-LR1 and comprises two volumes. Volume 1 presents data on the chemical, physical, and leaching characteristics of by-products from a wide variety of clean coal combustion processes. Volume 2 consists of a discussion of (a) process modification waste minimization opportunities and stabilization considerations; (b) research and development needs and issues relating to clean coal combustion technologies and by-products; (c) the market potential for reusing or recycling by-product materials; and (d) regulatory considerations relating to by-product disposal or reuse.

  3. Design of a micro-irrigation system based on the control volume method

    Directory of Open Access Journals (Sweden)

    Chasseriaux G.

    2006-01-01

    A micro-irrigation system design based on the control volume method using the back-step procedure is presented in this study. The proposed numerical method is simple and consists of delimiting an elementary volume of the lateral equipped with an emitter, called a "control volume", to which the conservation equations of fluid hydrodynamics are applied. The control volume method is an iterative method to calculate velocity and pressure step by step throughout the micro-irrigation network, based on an assumed pressure at the end of the line. A simple microcomputer program was used for the calculation, and the convergence was very fast. Once the average water requirement of the plants has been estimated, it is easy to choose the sum of the average emitter discharges as the total average flow rate of the network. The design consists of exploring an economical and efficient network to deliver the input flow rate uniformly to all emitters. This program permitted the design of a large complex network of thousands of emitters very quickly. Three subroutine programs calculate velocity and pressure in a lateral pipe and a submain pipe. The control volume method has already been tested for lateral design, and the results were validated by other methods such as the finite element method, so it permits the determination of the optimal design for such a micro-irrigation network.
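
    A schematic back-step iteration in the spirit described above (the emitter law q(h), the friction constant and the exponent are placeholder assumptions, not the paper's calibrated values):

        def back_step(h_end, n_emitters, spacing, k_friction,
                      q=lambda h: 1e-6 * h ** 0.5):
            h, flow = h_end, 0.0          # start from the assumed end pressure
            profile = [h]
            for _ in range(n_emitters):
                flow += q(h)              # emitter discharge joins the line flow
                h += k_friction * spacing * flow ** 1.75  # friction head loss
                profile.append(h)
            return profile                # pressures from line end back to inlet

    In the full design loop, the assumed end pressure would be adjusted until the computed inlet condition matches the supply, which is the iteration the abstract refers to.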

  4. Calculating ensemble averaged descriptions of protein rigidity without sampling.

    Directory of Open Access Journals (Sweden)

    Luis C González

    Previous works have demonstrated that protein rigidity is related to thermodynamic stability, especially under conditions that favor formation of native structure. Mechanical network rigidity properties of a single conformation are efficiently calculated using the integer body-bar Pebble Game (PG) algorithm. However, thermodynamic properties require averaging over many samples from the ensemble of accessible conformations to accurately account for fluctuations in network topology. We have developed a mean field Virtual Pebble Game (VPG) that represents the ensemble of networks by a single effective network. That is, the number of distance constraints (or bars) that can form between a pair of rigid bodies is replaced by the average number. The resulting effective network is viewed as having weighted edges, where the weight of an edge quantifies its capacity to absorb degrees of freedom. The VPG is interpreted as a flow problem on this effective network, which eliminates the need to sample. Across a nonredundant dataset of 272 protein structures, we apply the VPG to proteins for the first time. Our results show numerically and visually that the rigidity characterizations of the VPG accurately reflect the ensemble averaged [Formula: see text] properties. This result positions the VPG as an efficient alternative for understanding the mechanical role that chemical interactions play in maintaining protein stability.

  5. High average power CO2 laser MOPA system for Tin target LPP EUV light source

    Science.gov (United States)

    Ariga, Tatsuya; Hoshino, Hideo; Endo, Akira

    2007-02-01

    Extreme ultraviolet lithography (EUVL) is the candidate next-generation lithography to be introduced by the semiconductor industry for high volume manufacturing (HVM) in 2013. The power of the EUVL light source has to be at least 115 W at a wavelength of 13.5 nm. A laser-produced plasma (LPP) is the main candidate for this light source, but a cost-effective laser driver is the key requirement for the realization of this concept. We are currently developing a high-power, high-repetition-rate CO2 laser system to achieve 50 W intermediate-focus EUV power with a tin droplet target. We have achieved a conversion efficiency (CE) of 2.8% with a solid tin wire target using a transversely excited atmospheric (TEA) CO2 laser MOPA system with a pulse width, pulse energy and pulse repetition rate of 10-15 ns, 30 mJ and 10 Hz, respectively. A CO2 laser system with a short pulse length of less than 15 ns, a nominal average power of a few kW, and a repetition rate of 100 kHz, based on RF-excited, fast axial flow CO2 laser amplifiers, is under development. An output power of about 3 kW has been achieved with a pulse length of 15 ns at a 130 kHz repetition rate under small-signal amplification conditions on the P(20) single line. The phase distortion of the laser beam after amplification is negligible, and the beam can be focused to a diameter of about 150 μm at 1/e^2. We report the short-pulse amplification performance of the CO2 laser system using RF-excited, fast axial flow lasers as amplifiers, and show the scaling of the CO2 laser average output power towards 5-10 kW with a pulse width of 15 ns from a MOPA system.

  6. Ultra-low noise miniaturized neural amplifier with hardware averaging

    Science.gov (United States)

    Dweiri, Yazan M.; Eggers, Thomas; McCallum, Grant; Durand, Dominique M.

    2015-08-01

    Objective. Peripheral nerves carry neural signals that could be used to control hybrid bionic systems. Cuff electrodes provide a robust and stable interface, but the recorded signal amplitude is small, so the amplifier should add as little noise as possible. Since mainstream neural amplifiers have a baseline noise of 2 μVrms or higher, novel designs are required. Approach. Here we apply the concept of hardware averaging to nerve recordings obtained with cuff electrodes. An optimization procedure is developed to minimize noise and power simultaneously. The novel design was based on existing neural amplifiers (Intan Technologies, LLC) and is validated with signals obtained from the flat interface nerve electrode (FINE) in chronic dog experiments. Main results. We showed that hardware averaging leads to a reduction in the total recording noise by a factor of 1/√N or less, depending on the source resistance. Chronic recording of physiological activity with the FINE using the presented design showed significant improvement in the recorded baseline noise with at least two parallel operational transconductance amplifiers, leading to a 46.1% reduction at N = 8. The functionality of these recordings was quantified by the SNR improvement and shown to be significant for N = 3 or more. The present design was shown to be capable of ultra-low-noise amplification. Significance. These results demonstrate the efficacy of hardware averaging on noise improvement for neural recording with cuff electrodes, and can accommodate the presence of high source impedances that are associated with the miniaturized contacts and the high channel count in electrode arrays. This technique can be adopted for other applications where miniaturized and implantable multichannel acquisition systems with ultra-low noise and low power are required.
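
    The quoted 1/√N scaling is easy to check numerically (an idealized sketch that ignores the source-resistance effects which make the real reduction smaller):

        import numpy as np

        rng = np.random.default_rng(0)
        signal = np.sin(np.linspace(0, 2 * np.pi, 1000))
        for n in (1, 2, 4, 8):
            # n parallel amplifier channels see the same input plus independent noise.
            channels = signal + rng.normal(0.0, 2.0, size=(n, 1000))
            residual = channels.mean(axis=0) - signal
            print(n, residual.std())  # falls roughly as 2.0 / sqrt(n)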

  7. Renormalized Volume

    CERN Document Server

    Gover, A Rod

    2016-01-01

    For any conformally compact manifold with hypersurface boundary we define a canonical renormalized volume functional and compute an explicit, holographic formula for the corresponding anomaly. For the special case of asymptotically Einstein manifolds, our method recovers the known results. The anomaly does not depend on any particular choice of regulator, but the coefficients of divergences do. We give explicit formulae for these divergences valid for any choice of regulating hypersurface; these should be relevant to recent studies of quantum corrections to entanglement entropies. The anomaly is expressed as a conformally invariant integral of a local Q-curvature that generalizes the Branson Q-curvature by including data of the embedding. In each dimension this canonically defines a higher dimensional generalization of the Willmore energy/rigid string action. We show that the variation of these energy functionals is exactly the obstruction to solving a singular Yamabe type problem with boundary data along the...

  8. 42 CFR 414.904 - Average sales price as the basis for payment.

    Science.gov (United States)

    2010-10-01

    ... acquisition cost as determined by the Inspector General report as required by section 623(c) of the Medicare... the Act. (3) Widely available market price and average manufacturer price. If the Inspector General... influenza vaccine and are calculated using 95 percent of the average wholesale price. (2) Infusion...

  9. Low Average Sidelobe Slot Array Antennas for Radiometer Applications

    Science.gov (United States)

    Rengarajan, Sembiam; Zawardzki, Mark S.; Hodges, Richard E.

    2012-01-01

    In radiometer applications, it is required to design antennas that meet low average sidelobe levels and low average return loss over a specified frequency bandwidth. It is a challenge to meet such specifications over a frequency range when one uses resonant elements such as waveguide feed slots. In addition to their inherently narrow frequency band performance, the problem is exacerbated by modeling errors and manufacturing tolerances. There was a need to develop a design methodology to solve the problem. An iterative design procedure was developed, starting with an array architecture, lattice spacing, aperture distribution, waveguide dimensions, etc. The array was designed using Elliott's technique with appropriate values of the total slot conductance in each radiating waveguide and the total resistance in each feed waveguide. Subsequently, the array performance was analyzed by the full-wave method of moments solution to the pertinent integral equations. Monte Carlo simulations were also carried out to account for amplitude and phase errors introduced into the aperture distribution by modeling errors as well as manufacturing tolerances. If the design margins for the average sidelobe level and the average return loss were not adequate, the array architecture, lattice spacing, aperture distribution, and waveguide dimensions were varied in subsequent iterations. Once the design margins were found to be adequate, the iteration was stopped and a good design was achieved. A symmetric array architecture was found to meet the design specification with adequate margin. The specifications were near 40 dB for angular regions beyond 30 degrees from broadside. A separable Taylor distribution with nbar = 4 and a 35 dB sidelobe specification was chosen for each principal plane. A non-separable distribution obtained by the genetic algorithm was found to have similar characteristics. The element spacing was obtained to provide the required beamwidth and close to a null in the E

  10. Potential of high-average-power solid state lasers

    Energy Technology Data Exchange (ETDEWEB)

    Emmett, J.L.; Krupke, W.F.; Sooy, W.R.

    1984-09-25

    We discuss the possibility of extending solid state laser technology to high average power and of improving the efficiency of such lasers sufficiently to make them reasonable candidates for a number of demanding applications. A variety of new design concepts, materials, and techniques have emerged over the past decade that, collectively, suggest that the traditional technical limitations on power (a few hundred watts or less) and efficiency (less than 1%) can be removed. The core idea is configuring the laser medium in relatively thin, large-area plates, rather than using the traditional low-aspect-ratio rods or blocks. This presents a large surface area for cooling, and assures that deposited heat is relatively close to a cooled surface. It also minimizes the laser volume distorted by edge effects. The feasibility of such configurations is supported by recent developments in materials, fabrication processes, and optical pumps. Two types of lasers can, in principle, utilize this sheet-like gain configuration in such a way that phase and gain profiles are uniformly sampled and, to first order, yield high-quality (undistorted) beams. The zig-zag laser does this with a single plate, and should be capable of power levels up to several kilowatts. The disk laser is designed around a large number of plates, and should be capable of scaling to arbitrarily high power levels.

  11. Hearing Office Average Processing Time Ranking Report, February 2016

    Data.gov (United States)

    Social Security Administration — A ranking of ODAR hearing offices by the average number of hearings dispositions per ALJ per day. The average shown will be a combined average for all ALJs working...

  12. 40 CFR 80.205 - How is the annual refinery or importer average and corporate pool average sulfur level determined?

    Science.gov (United States)

    2010-07-01

    ... average and corporate pool average sulfur level determined? 80.205 Section 80.205 Protection of... ADDITIVES Gasoline Sulfur Gasoline Sulfur Standards § 80.205 How is the annual refinery or importer average and corporate pool average sulfur level determined? (a) The annual refinery or importer average...

  13. The monthly-averaged and yearly-averaged cosine effect factor of a heliostat field

    Energy Technology Data Exchange (ETDEWEB)

    Al-Rabghi, O.M.; Elsayed, M.M. (King Abdulaziz Univ., Jeddah (Saudi Arabia). Dept. of Thermal Engineering)

    1992-01-01

    Calculations are carried out to determine the dependence of the monthly-averaged and the yearly-averaged daily cosine effect factor on the pertinent parameters. The results are plotted on charts for each month and for the full year. These results cover latitude angles between 0 and 45°N, for fields with radii up to 50 tower heights. In addition, the results are expressed as mathematical correlations to facilitate their use in computer applications. A procedure is outlined for using the present results to lay out the heliostat field in a preliminary manner, and to predict the rated MW(th) reflected by the heliostat field during a period of a month, several months, or a year. (author)

  14. Lagrangian averages, averaged Lagrangians, and the mean effects of fluctuations in fluid dynamics.

    Science.gov (United States)

    Holm, Darryl D.

    2002-06-01

    We begin by placing the generalized Lagrangian mean (GLM) equations for a compressible adiabatic fluid into the Euler-Poincare (EP) variational framework of fluid dynamics, for an averaged Lagrangian. This is the Lagrangian averaged Euler-Poincare (LAEP) theorem. Next, we derive a set of approximate small-amplitude GLM equations (glm equations) at second order in the fluctuating displacement of a Lagrangian trajectory from its mean position. These equations express the linear and nonlinear back-reaction effects on the Eulerian mean fluid quantities by the fluctuating displacements of the Lagrangian trajectories in terms of their Eulerian second moments. The derivation of the glm equations uses the linearized relations between Eulerian and Lagrangian fluctuations, in the tradition of Lagrangian stability analysis for fluids. The glm derivation also uses the method of averaged Lagrangians, in the tradition of wave-mean flow interaction. Next, the new glm EP motion equations for incompressible ideal fluids are compared with the Euler-alpha turbulence closure equations. An alpha model is a GLM (or glm) fluid theory with a Taylor hypothesis closure. Such closures are based on the linearized fluctuation relations that determine the dynamics of the Lagrangian statistical quantities in the Euler-alpha equations. Thus, by using the LAEP theorem, we bridge between the GLM equations and the Euler-alpha closure equations, through the small-amplitude glm approximation in the EP variational framework. We conclude by highlighting a new application of the GLM, glm, and alpha-model results for Lagrangian averaged ideal magnetohydrodynamics. (c) 2002 American Institute of Physics.

  15. Seeing and feeling volumes: The influence of shape on volume perception.

    Science.gov (United States)

    Kahrimanovic, Mirela; Bergmann Tiest, Wouter M; Kappers, Astrid M L

    2010-07-01

    The volume of common objects can be perceived visually, haptically or by a combination of both senses. The present study shows large effects of the object's shape on volume perception within all these modalities, with an average bias of 36%. In all conditions, the volume of a tetrahedron was overestimated compared to that of a cube or a sphere, and the volume of a cube was overestimated compared to that of a sphere. Additional analyses revealed that the biases could be explained by the dependence of the volume judgment on different geometric properties. During visual volume perception, the strategies depended on the objects that were compared and they were also subject-dependent. However, analysis of the haptic and bimodal data showed more consistent results and revealed that surface area of the stimuli influenced haptic as well as bimodal volume perception. This suggests that bimodal volume perception is more influenced by haptic input than by visual information.

  16. 28W average power hydrocarbon-free rubidium diode pumped alkali laser.

    Science.gov (United States)

    Zweiback, Jason; Krupke, William F

    2010-01-18

    We present experimental results for a high-power diode-pumped hydrocarbon-free rubidium laser with a scalable architecture. The laser consists of a liquid-cooled copper waveguide which serves both to guide the pump light and to provide a thermally conductive surface near the gain volume to remove heat. A laser diode stack, with a linewidth narrowed to approximately 0.35 nm with volume Bragg gratings, is used to pump the cell. We have achieved 24 W average power output using 4 atmospheres of naturally occurring helium (⁴He) as the buffer gas and 28 W using 2.8 atmospheres of ³He.

  17. Average radiation widths and the giant dipole resonance width

    Energy Technology Data Exchange (ETDEWEB)

    Arnould, M.; Thielemann, F.K.

    1982-11-01

    The average E1 radiation width can be calculated in terms of the energy $E_G$ and width $\Gamma_G$ of the Giant Dipole Resonance (GDR). While various models can predict $E_G$ quite reliably, the theoretical situation regarding $\Gamma_G$ is much less satisfactory. We propose a simple phenomenological model which is able to provide $\Gamma_G$ values in good agreement with experimental data for spherical or deformed intermediate and heavy nuclei. In particular, this model can account for shell effects in $\Gamma_G$, and can be used in conjunction with the droplet model. The $\Gamma_G$ values derived in such a way are used to compute average E1 radiation widths which are quite close to the experimental values. The method proposed for the calculation of $\Gamma_G$ also appears to be well suited when the GDR characteristics of extended sets of nuclei are required, as is notably the case in nuclear astrophysics.

  18. Sparse PDF Volumes for Consistent Multi-Resolution Volume Rendering

    KAUST Repository

    Sicat, Ronell Barrera

    2014-12-31

    This paper presents a new multi-resolution volume representation called sparse pdf volumes, which enables consistent multi-resolution volume rendering based on probability density functions (pdfs) of voxel neighborhoods. These pdfs are defined in the 4D domain jointly comprising the 3D volume and its 1D intensity range. Crucially, the computation of sparse pdf volumes exploits data coherence in 4D, resulting in a sparse representation with surprisingly low storage requirements. At run time, we dynamically apply transfer functions to the pdfs using simple and fast convolutions. Whereas standard low-pass filtering and down-sampling incur visible differences between resolution levels, the use of pdfs facilitates consistent results independent of the resolution level used. We describe the efficient out-of-core computation of large-scale sparse pdf volumes, using a novel iterative simplification procedure of a mixture of 4D Gaussians. Finally, our data structure is optimized to facilitate interactive multi-resolution volume rendering on GPUs.

  19. Sparse PDF Volumes for Consistent Multi-Resolution Volume Rendering.

    Science.gov (United States)

    Sicat, Ronell; Krüger, Jens; Möller, Torsten; Hadwiger, Markus

    2014-12-01

    This paper presents a new multi-resolution volume representation called sparse pdf volumes, which enables consistent multi-resolution volume rendering based on probability density functions (pdfs) of voxel neighborhoods. These pdfs are defined in the 4D domain jointly comprising the 3D volume and its 1D intensity range. Crucially, the computation of sparse pdf volumes exploits data coherence in 4D, resulting in a sparse representation with surprisingly low storage requirements. At run time, we dynamically apply transfer functions to the pdfs using simple and fast convolutions. Whereas standard low-pass filtering and down-sampling incur visible differences between resolution levels, the use of pdfs facilitates consistent results independent of the resolution level used. We describe the efficient out-of-core computation of large-scale sparse pdf volumes, using a novel iterative simplification procedure of a mixture of 4D Gaussians. Finally, our data structure is optimized to facilitate interactive multi-resolution volume rendering on GPUs.

  20. Quantified moving average strategy of crude oil futures market based on fuzzy logic rules and genetic algorithms

    Science.gov (United States)

    Liu, Xiaojia; An, Haizhong; Wang, Lijun; Guan, Qing

    2017-09-01

    The moving average strategy is a technical indicator that can generate trading signals to assist investment. While the trading signals tell traders the timing to buy or sell, the moving average cannot tell the trading volume, which is a crucial factor for investment. This paper proposes a fuzzy moving average strategy, in which fuzzy logic rules are used to determine the strength of trading signals, i.e., the trading volume. To compose one fuzzy logic rule, we use four types of moving averages, the length of the moving average period, the fuzzy extent, and the recommended value. Ten fuzzy logic rules form a fuzzy set, which generates a rating level that decides the trading volume. In this process, we apply genetic algorithms to identify an optimal fuzzy logic rule set and use crude oil futures prices from the New York Mercantile Exchange (NYMEX) as the experimental data. Each experiment is repeated 20 times. The results show that, first, the fuzzy moving average strategy can obtain a more stable rate of return than the plain moving average strategies. Second, the holding-amount series is highly sensitive to the price series. Third, simple moving average methods are more efficient. Last, the fuzzy extents of extremely low, high, and very high are more popular. These results are helpful in investment decisions.
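
    A bare-bones version of the idea, with one hand-written rule in place of the GA-optimized rule set (the thresholds and names are illustrative assumptions):

        import numpy as np

        def fuzzy_ma_signal(prices, fast=5, slow=20):
            p = np.asarray(prices, dtype=float)
            ma_fast = np.convolve(p, np.ones(fast) / fast, mode="valid")[-1]
            ma_slow = np.convolve(p, np.ones(slow) / slow, mode="valid")[-1]
            gap = (ma_fast - ma_slow) / ma_slow
            # Fuzzy-style membership: signal strength saturates at a 2% gap.
            strength = min(1.0, abs(gap) / 0.02)
            side = "buy" if gap > 0 else "sell"
            return side, strength  # strength sizes the trading volume

        prices = list(range(100, 130)) + [128, 127]
        print(fuzzy_ma_signal(prices))  # ('buy', 1.0) for this uptrend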

  1. 40 CFR 1033.710 - Averaging emission credits.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Averaging emission credits. 1033.710... Averaging emission credits. (a) Averaging is the exchange of emission credits among your engine families. You may average emission credits only as allowed by § 1033.740. (b) You may certify one or more engine...

  2. 7 CFR 51.577 - Average midrib length.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Average midrib length. 51.577 Section 51.577... STANDARDS) United States Standards for Celery Definitions § 51.577 Average midrib length. Average midrib length means the average length of all the branches in the outer whorl measured from the point...

  3. 7 CFR 760.640 - National average market price.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 7 2010-01-01 2010-01-01 false National average market price. 760.640 Section 760.640....640 National average market price. (a) The Deputy Administrator will establish the National Average... average quality loss factors that are reflected in the market by county or part of a county. (c)...

  4. A model for light distribution and average solar irradiance inside outdoor tubular photobioreactors for the microalgal mass culture

    Energy Technology Data Exchange (ETDEWEB)

    Fernandez, F.G.A.; Camacho, F.G.; Perez, J.A.S.; Sevilla, J.M.F.; Grima, E.M. [Univ. of Almeria (Spain). Dept. of Chemical Engineering

    1997-09-05

    A mathematical model to estimate the solar irradiance profile and average light intensity inside a tubular photobioreactor under outdoor conditions is proposed, requiring only geographic, geometric, and solar position parameters. First, the length of the path traveled through the culture by any direct or disperse ray of light was calculated as a function of three variables: day of year, solar hour, and geographic latitude. Then, the phenomenon of light attenuation by biomass was studied considering Lambert-Beer's law (only considering absorption) and the monodimensional model of Cornet et al. (1900) (considering absorption and scattering phenomena). Due to the existence of differential wavelength absorption, none of the literature models were able to explain light attenuation by the biomass. Therefore, an empirical hyperbolic expression is proposed. The equations to calculate the light path length were substituted into the proposed hyperbolic expression, reproducing light intensity data obtained in the center of the loop tubes. The proposed model was also able to estimate the irradiance accurately at any point inside the culture. Calculation of the local intensity was thus extended to the full culture volume in order to obtain the average irradiance, showing how the higher biomass productivities in a Phaeodactylum tricornutum UTEX 640 outdoor chemostat culture could be maintained by delaying light limitation.
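
    A sketch of the volume-averaging step under a plain Lambert-Beer law, in a one-dimensional slab approximation of the tube (the paper replaces this law with its empirical hyperbolic expression; all symbols here are assumptions):

        import numpy as np

        def average_irradiance(I0, ka, Cb, diameter, n=1000):
            depth = np.linspace(0.0, diameter, n)  # distance from the lit wall
            local = I0 * np.exp(-ka * Cb * depth)  # Lambert-Beer attenuation
            return local.mean()                    # culture-averaged irradiance

        # Raising the biomass concentration Cb lowers the average irradiance,
        # which is the light-limitation effect discussed in the abstract.
        print(average_irradiance(1000.0, 0.05, 0.5, 6.0))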

  5. Dictionary Based Segmentation in Volumes

    DEFF Research Database (Denmark)

    Emerson, Monica Jane; Jespersen, Kristine Munk; Jørgensen, Peter Stanley

    2015-01-01

    We present a method for supervised volumetric segmentation based on a dictionary of small cubes composed of pairs of intensity and label cubes. Intensity cubes are small image volumes where each voxel contains an image intensity. Label cubes are volumes with voxel-wise probabilities for a given label. The segmentation process is done by matching a cube from the volume, of the same size as the dictionary intensity cubes, to the most similar intensity dictionary cube, and from the associated label cube we get voxel-wise label probabilities. Probabilities from overlapping cubes are averaged, and hereby we obtain a robust label probability encoding. The dictionary is computed from labeled volumetric image data based on weighted clustering. We experimentally demonstrate our method using two data sets from material science – a phantom data set of a solid oxide fuel cell simulation for detecting...

  6. On the relation between uncertainties of weighted frequency averages and the various types of Allan deviations

    CERN Document Server

    Benkler, Erik; Sterr, Uwe

    2015-01-01

    The power spectral density in the Fourier frequency domain, and the different variants of the Allan deviation (ADEV) in dependence on the averaging time, are well established tools to analyse the fluctuation properties and frequency instability of an oscillatory signal. It is often supposed that the statistical uncertainty of a measured average frequency is given by the ADEV at a well considered averaging time. However, this approach requires further mathematical justification and refinement, which has already been done regarding the original ADEV for certain noise types. Here we provide the necessary background to use the modified Allan deviation (modADEV) and other two-sample deviations to determine the uncertainty of weighted frequency averages. The type of two-sample deviation used to determine the uncertainty depends on the method used for determination of the average. We find that the modADEV, which is connected with $\Lambda$-weighted averaging, and the two sample deviation associated to a linear phase regr...
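
    For orientation, the standard (non-overlapping) Allan deviation computed from fractional-frequency samples; the paper's point is that the modified ADEV, not this quantity, matches the uncertainty of $\Lambda$-weighted averages:

        import numpy as np

        def adev(y, m=1):
            """Allan deviation of fractional-frequency samples y at averaging factor m."""
            y = np.asarray(y, dtype=float)
            groups = y[:len(y) // m * m].reshape(-1, m).mean(axis=1)
            return np.sqrt(0.5 * np.mean(np.diff(groups) ** 2))

        white = np.random.default_rng(1).normal(size=100_000)
        print(adev(white, 1), adev(white, 100))  # white FM noise: falls as 1/sqrt(m)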

  7. Kinetic energy equations for the average-passage equation system

    Science.gov (United States)

    Johnson, Richard W.; Adamczyk, John J.

    1989-01-01

    Important kinetic energy equations derived from the average-passage equation sets are documented, with attention to their interrelationships. These kinetic energy equations may be used for closing the average-passage equations. The turbulent kinetic energy transport equation is formed by subtracting the mean kinetic energy equation from the averaged total instantaneous kinetic energy equation. The aperiodic kinetic energy equation, averaged steady kinetic energy equation, averaged unsteady kinetic energy equation, and periodic kinetic energy equation are also treated.
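
    Schematically, in standard Reynolds-averaging notation (assumed here, not quoted from the report), the subtraction works because averaging splits the kinetic energy as

        \overline{\tfrac{1}{2} u_i u_i}
            = \tfrac{1}{2}\,\bar{u}_i \bar{u}_i
            + \underbrace{\tfrac{1}{2}\,\overline{u_i' u_i'}}_{k},

    so removing the mean-flow equation from the averaged total leaves a transport equation for the turbulent kinetic energy k.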

  9. 42 CFR 495.306 - Establishing patient volume.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 5 2010-10-01 2010-10-01 false Establishing patient volume. 495.306 Section 495... PROGRAM Requirements Specific to the Medicaid Program § 495.306 Establishing patient volume. (a) General rule. A Medicaid provider must annually meet patient volume requirements of § 495.304, as...

  10. Genetic and Environmental Contributions to the Relationships between Brain Structure and Average Lifetime Cigarette Use

    Science.gov (United States)

    Prom-Wormley, Elizabeth; Maes, Hermine H.M.; Schmitt, J. Eric; Panizzon, Matthew S.; Xian, Hong; Eyler, Lisa T.; Franz, Carol E.; Lyons, Michael J.; Tsuang, Ming T.; Dale, Anders M.; Fennema-Notestine, Christine; Kremen, William S.; Neale, Michael C.

    2015-01-01

    Chronic cigarette use has been consistently associated with differences in the neuroanatomy of smokers relative to nonsmokers in case-control studies. However, the etiology underlying the relationships between brain structure and cigarette use is unclear. A community-based sample of male twin pairs ages 51-59 (110 monozygotic pairs, 92 dizygotic pairs) was used to determine the extent to which there are common genetic and environmental influences between brain structure and average lifetime cigarette use. Brain structure was measured by high-resolution structural magnetic resonance imaging, from which subcortical volume and cortical volume, thickness and surface area were derived. Bivariate genetic models were fitted between these measures and average lifetime cigarette use measured as cigarette pack-years. Widespread, negative phenotypic correlations were detected between cigarette pack-years and several cortical as well as subcortical structures. Shared genetic and unique environmental factors contributed to the phenotypic correlations shared between cigarette pack-years and subcortical volume as well as cortical volume and surface area. Brain structures involved in many of the correlations were previously reported to play a role in specific aspects of networks of smoking-related behaviors. These results provide evidence for conducting future research on the etiology of smoking-related behaviors using measures of brain morphology. PMID:25690561

  11. The average crossing number of equilateral random polygons

    Science.gov (United States)

    Diao, Y.; Dobay, A.; Kusner, R. B.; Millett, K.; Stasiak, A.

    2003-11-01

    In this paper, we study the average crossing number of equilateral random walks and polygons. We show that the mean average crossing number ACN of all equilateral random walks of length $n$ is of the form $\frac{3}{16} n \ln n + O(n)$. A similar result holds for equilateral random polygons. These results are confirmed by our numerical studies. Furthermore, our numerical studies indicate that when random polygons of length $n$ are divided into individual knot types, the $\langle ACN({\cal K})\rangle$ for each knot type ${\cal K}$ can be described by a function of the form $\langle ACN({\cal K})\rangle = a(n-n_0)\ln(n-n_0) + b(n-n_0) + c$, where $a$, $b$ and $c$ are constants depending on ${\cal K}$ and $n_0$ is the minimal number of segments required to form ${\cal K}$. The $\langle ACN({\cal K})\rangle$ profiles diverge from each other, with more complex knots showing higher $\langle ACN({\cal K})\rangle$ than less complex knots. Moreover, the $\langle ACN({\cal K})\rangle$ profiles intersect with the $\langle ACN\rangle$ profile of all closed walks. These points of intersection define the equilibrium length of ${\cal K}$, i.e., the chain length $n_e({\cal K})$ at which a statistical ensemble of configurations with given knot type ${\cal K}$, upon cutting, equilibration and reclosure to a new knot type ${\cal K}'$, does not show a tendency to increase or decrease $\langle ACN({\cal K}')\rangle$. This concept of equilibrium length seems to be universal, and applies also to other length-dependent observables for random knots, such as the mean radius of gyration $\langle R_g\rangle$.

  12. Average annual runoff in the United States, 1951-80

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This is a line coverage of average annual runoff in the conterminous United States, 1951-1980.

  13. Seasonal Sea Surface Temperature Averages, 1985-2001 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This data set consists of four images showing seasonal sea surface temperature (SST) averages for the entire earth. Data for the years 1985-2001 are averaged to...

  14. Average American 15 Pounds Heavier Than 20 Years Ago

    Science.gov (United States)

    (MedlinePlus news item: https://medlineplus.gov/news/fullstory_160233.html) Since the late 1980s and early 1990s, the average American has put on 15 or more additional pounds...

  15. Trait valence and the better-than-average effect.

    Science.gov (United States)

    Gold, Ron S; Brown, Mark G

    2011-12-01

    People tend to regard themselves as having superior personality traits compared to their average peer. To test whether this "better-than-average effect" varies with trait valence, participants (N = 154 students) rated both themselves and the average student on traits constituting either positive or negative poles of five trait dimensions. In each case, the better-than-average effect was found, but trait valence had no effect. Results were discussed in terms of Kahneman and Tversky's prospect theory.

  16. Investigating Averaging Effect by Using Three Dimension Spectrum

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    The eddy current displacement sensor's averaging effect has been investigated in this paper, and the frequency spectrum property of the averaging effect was also deduced. It indicates that the averaging effect has no influence on measuring a rotor's rotating error, but it has a visible influence on measuring the rotor's profile error. According to the frequency spectrum of the averaging effect, the actual sampling data can be adjusted reasonably, and thus measuring precision is improved.

  17. Average of Distribution and Remarks on Box-Splines

    Institute of Scientific and Technical Information of China (English)

    LI Yue-sheng

    2001-01-01

    A class of generalized moving average operators is introduced, and integral representations of an average function are provided. It is shown that the average of the Dirac δ-distribution is just the well-known box-spline. Some remarks on box-splines, such as their smoothness and the corresponding partition of unity, are made. The factorization of average operators is derived. Then, the subdivision algorithm for efficient computation of box-splines and their linear combinations follows.

  18. Scalable Robust Principal Component Analysis Using Grassmann Averages

    DEFF Research Database (Denmark)

    Hauberg, Søren; Feragen, Aasa; Enficiaud, Raffi

    2016-01-01

    We provide a simple algorithm for computing this Grassmann Average (GA), and show that the subspace estimate is less sensitive to outliers than PCA for general distributions. Because averages can be efficiently computed, we immediately gain scalability. We exploit robust averaging to formulate the Robust...
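
    A simplified sketch of the averaging idea for the leading component (assuming zero-mean data; a stripped-down illustration, not the paper's full algorithm):

        import numpy as np

        def grassmann_average_1d(X, iters=20):
            q = X[0] / np.linalg.norm(X[0])            # initial direction
            for _ in range(iters):
                signs = np.sign(X @ q)                 # align antipodal observations
                q = (signs[:, None] * X).mean(axis=0)  # update by averaging, not SVD
                q /= np.linalg.norm(q)
            return q

    Because each update is an average of data points, the cost grows linearly in the number of observations, which is the scalability claim above.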

  19. Averaging and Globalising Quotients of Informetric and Scientometric Data.

    Science.gov (United States)

    Egghe, Leo; Rousseau, Ronald

    1996-01-01

    Discussion of impact factors for "Journal Citation Reports" subject categories focuses on the difference between an average of quotients and a global average, obtained as a quotient of averages. Applications in the context of informetrics and scientometrics are given, including journal prices and subject discipline influence scores.
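
    The distinction in two lines, with hypothetical citation counts c and paper counts p for two journals:

        c, p = [100, 10], [50, 1]
        print(sum(ci / pi for ci, pi in zip(c, p)) / 2)  # average of quotients: 6.0
        print(sum(c) / sum(p))                           # quotient of averages: ~2.16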

  20. Perturbation resilience and superiorization methodology of averaged mappings

    Science.gov (United States)

    He, Hongjin; Xu, Hong-Kun

    2017-04-01

    We first prove the bounded perturbation resilience for the successive fixed point algorithm of averaged mappings, which extends the string-averaging projection and block-iterative projection methods. We then apply the superiorization methodology to a constrained convex minimization problem where the constraint set is the intersection of fixed point sets of a finite family of averaged mappings.
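
    For reference, the standard definition in play (textbook usage, not quoted from the paper): a mapping T is averaged if

        T = (1 - \alpha)\, I + \alpha\, S, \qquad \alpha \in (0, 1),

    for the identity I and some nonexpansive mapping S; projections onto closed convex sets are the motivating examples.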

  1. Spectral averaging techniques for Jacobi matrices with matrix entries

    CERN Document Server

    Sadel, Christian

    2009-01-01

    A Jacobi matrix with matrix entries is a self-adjoint block tridiagonal matrix with invertible blocks on the off-diagonals. Averaging over boundary conditions leads to explicit formulas for the averaged spectral measure which can potentially be useful for spectral analysis. Furthermore another variant of spectral averaging over coupling constants for these operators is presented.

  2. 76 FR 6161 - Annual Determination of Average Cost of Incarceration

    Science.gov (United States)

    2011-02-03

    ... No: 2011-2363] DEPARTMENT OF JUSTICE Bureau of Prisons Annual Determination of Average Cost of Incarceration AGENCY: Bureau of Prisons, Justice. ACTION: Notice. SUMMARY: The fee to cover the average cost of incarceration for Federal inmates in Fiscal Year 2009 was $25,251. The average annual cost to confine an...

  3. 20 CFR 226.62 - Computing average monthly compensation.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Computing average monthly compensation. 226... Compensation § 226.62 Computing average monthly compensation. The employee's average monthly compensation is computed by first determining the employee's highest 60 months of railroad compensation...

  4. 40 CFR 1042.710 - Averaging emission credits.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Averaging emission credits. 1042.710..., Banking, and Trading for Certification § 1042.710 Averaging emission credits. (a) Averaging is the exchange of emission credits among your engine families. (b) You may certify one or more engine families to...

  5. 27 CFR 19.37 - Average effective tax rate.

    Science.gov (United States)

    2010-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2010-04-01 2010-04-01 false Average effective tax rate..., DEPARTMENT OF THE TREASURY LIQUORS DISTILLED SPIRITS PLANTS Taxes Effective Tax Rates § 19.37 Average effective tax rate. (a) The proprietor may establish an average effective tax rate for any...

  6. 20 CFR 404.220 - Average-monthly-wage method.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Average-monthly-wage method. 404.220 Section... INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.220 Average-monthly-wage method. (a) Who is eligible for this method. You...

  7. 7 CFR 1410.44 - Average adjusted gross income.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 10 2010-01-01 2010-01-01 false Average adjusted gross income. 1410.44 Section 1410... Average adjusted gross income. (a) Benefits under this part will not be available to persons or legal entities whose average adjusted gross income exceeds $1,000,000 or as further specified in part...

  8. 18 CFR 301.7 - Average System Cost methodology functionalization.

    Science.gov (United States)

    2010-04-01

    ... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Average System Cost... REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS FOR FEDERAL POWER MARKETING ADMINISTRATIONS AVERAGE... ACT § 301.7 Average System Cost methodology functionalization. (a) Functionalization of each...

  9. 47 CFR 80.759 - Average terrain elevation.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 5 2010-10-01 2010-10-01 false Average terrain elevation. 80.759 Section 80... Average terrain elevation. (a)(1) Draw radials from the antenna site for each 45 degrees of azimuth.... (d) Average the values by adding them and dividing by the number of readings along each radial....

  10. 34 CFR 668.196 - Average rates appeals.

    Science.gov (United States)

    2010-07-01

    ... 34 Education 3 2010-07-01 2010-07-01 false Average rates appeals. 668.196 Section 668.196....196 Average rates appeals. (a) Eligibility. (1) You may appeal a notice of a loss of eligibility under... calculated as an average rate under § 668.183(d)(2). (2) You may appeal a notice of a loss of...

  11. 20 CFR 404.221 - Computing your average monthly wage.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Computing your average monthly wage. 404.221... DISABILITY INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.221 Computing your average monthly wage. (a) General. Under the...

  12. 34 CFR 668.215 - Average rates appeals.

    Science.gov (United States)

    2010-07-01

    ... 34 Education 3 2010-07-01 2010-07-01 false Average rates appeals. 668.215 Section 668.215... Average rates appeals. (a) Eligibility. (1) You may appeal a notice of a loss of eligibility under § 668... as an average rate under § 668.202(d)(2). (2) You may appeal a notice of a loss of eligibility...

  13. Lagrangian theory of structure formation in relativistic cosmology II: average properties of a generic evolution model

    CERN Document Server

    Buchert, Thomas; Wiegand, Alexander

    2013-01-01

    Kinematical and dynamical properties of a generic inhomogeneous cosmological model, spatially averaged with respect to free-falling (generalized fundamental) observers, are investigated for the matter model `irrotational dust'. Paraphrasing a previous Newtonian investigation, we present a relativistic generalization of a backreaction model based on volume-averaging the `Relativistic Zel'dovich Approximation'. In this model we investigate the effect of `kinematical backreaction' on the evolution of cosmological parameters as they are defined in an averaged inhomogeneous cosmology, and we show that the backreaction model interpolates between orthogonal symmetry properties by covering subcases of the plane-symmetric solution, the Lemaitre-Tolman-Bondi solution and the Szekeres solution. We thus obtain a powerful model that lays the foundations for quantitatively addressing curvature inhomogeneities as they would be interpreted as `Dark Energy' or `Dark Matter' in a quasi-Newtonian cosmology. The present model, havi...

  14. Source of non-arrhenius average relaxation time in glass-forming liquids

    DEFF Research Database (Denmark)

    Dyre, Jeppe

    1998-01-01

    A major mystery of glass-forming liquids is the non-Arrhenius temperature-dependence of the average relaxation time. This paper briefly reviews the classical phenomenological models for non-Arrhenius behavior – the free volume model and the entropy model – and critiques against these models. We then discuss a recently proposed model according to which the activation energy of the average relaxation time is determined by the work done in shoving aside the surrounding liquid to create space needed for a "flow event". In this model, which is based on the fact that intermolecular interactions are anharmonic, the non-Arrhenius temperature-dependence of the average relaxation time is a consequence of the fact that the instantaneous shear modulus increases upon cooling.

  15. Status and Habitat Requirements of the White Sturgeon Populations in the Columbia River Downstream from McNary Dam Volume II; Supplemental Papers and Data Documentation, 1986-1992 Final Report.

    Energy Technology Data Exchange (ETDEWEB)

    Beamesderfer, Raymond C.; Nigro, Anthony A. [Oregon Dept. of Fish and Wildlife, Clackamas, OR (US)

    1995-01-01

    This is the final report for research on white sturgeon Acipenser transmontanus from 1986--92 and conducted by the National Marine Fisheries Service (NMFS), Oregon Department of Fish and Wildlife (ODFW), US Fish and Wildlife Service (USFWS), and Washington Department of Fisheries (WDF). Findings are presented as a series of papers, each detailing objectives, methods, results, and conclusions for a portion of this research. This volume includes supplemental papers which provide background information needed to support results of the primary investigations addressed in Volume 1. This study addresses measure 903(e)(1) of the Northwest Power Planning Council's 1987 Fish and Wildlife Program that calls for ''research to determine the impact of development and operation of the hydropower system on sturgeon in the Columbia River Basin.'' Study objectives correspond to those of the ''White Sturgeon Research Program Implementation Plan'' developed by BPA and approved by the Northwest Power Planning Council in 1985. Work was conducted on the Columbia River from McNary Dam to the estuary.

  16. Size and average density spectra of macromolecules obtained from hydrodynamic data.

    Science.gov (United States)

    Pavlov, G M

    2007-02-01

    It is proposed to normalize the Mark-Kuhn-Houwink-Sakurada type of equation relating the hydrodynamic characteristics, such as intrinsic viscosity, velocity sedimentation coefficient and translational diffusion coefficient of linear macromolecules to their molecular masses for the values of linear density M(L) and the statistical segment length A. When the set of data covering virtually all known experimental information is normalized for M(L), it is presented as a size spectrum of linear polymer molecules. Further normalization for the A value reduces all data to two regions: namely the region exhibiting volume interactions and that showing hydrodynamic draining. For chains without intrachain excluded volume effects these results may be reproduced using the Yamakawa-Fujii theory of wormlike cylinders. Data analyzed here cover a range of contour lengths of linear chains varying by three orders of magnitude, with the range of statistical segment lengths varying approximately 500 times. The plot of the dependence of [η]M on M represents the spectrum of average specific volumes occupied by linear and branched macromolecules. Dendrimers and globular proteins for which the volume occupied by the molecule in solution is directly proportional to M have the lowest specific volume. The homologous series of macromolecules in these plots are arranged following their fractal dimensionality.

  17. Energy and average power scalable optical parametric chirped-pulse amplification in yttrium calcium oxyborate.

    Science.gov (United States)

    Liao, Zhi M; Jovanovic, Igor; Ebbers, Chris A; Fei, Yiting; Chai, Bruce

    2006-05-01

    Optical parametric chirped-pulse amplification (OPCPA) in nonlinear crystals has the potential to produce extremes of peak and average power but is limited either in energy by crystal growth issues or in average power by crystal thermo-optic characteristics. Recently, large (7.5 cm diameter x 25 cm length) crystals of yttrium calcium oxyborate (YCOB) have been grown and utilized for high-average-power second-harmonic generation. Further, YCOB has the necessary thermo-optic properties required for scaling OPCPA systems to high peak and average power operation for wavelengths near 1 μm. We report what is believed to be the first use of YCOB for OPCPA. Scalability to higher peak and average power is addressed.

  18. On the averaging of ratios of specific heats in a multicomponent planetary atmosphere

    Science.gov (United States)

    Dubisch, R.

    1974-01-01

    The use of adiabatic relations in the calculation of planetary atmospheres requires knowledge of the ratio of specific heats of a mixture of gases under various pressure and temperature conditions. It is shown that errors introduced by simple averaging of the ratio of specific heats in a multicomponent atmosphere can be roughly 0.4%. Therefore, the gamma-averaging error can become important when integrating through the atmosphere to a large depth.

  19. Measuring skew in average surface roughness as a function of surface preparation

    Science.gov (United States)

    Stahl, Mark T.

    2015-08-01

    Characterizing surface roughness is important for predicting optical performance. Better measurement of surface roughness reduces polishing time, saves money and allows the science requirements to be better defined. This study characterized statistics of average surface roughness as a function of polishing time. Average surface roughness was measured at 81 locations using a Zygo® white light interferometer at regular intervals during the polishing process. Each data set was fit to a normal and Largest Extreme Value (LEV) distribution; then tested for goodness of fit. We show that the skew in the average data changes as a function of polishing time.

  20. Fluctuations of trading volume in a stock market

    Science.gov (United States)

    Hong, Byoung Hee; Lee, Kyoung Eun; Hwang, Jun Kyung; Lee, Jae Woo

    2009-03-01

    We consider the probability distribution function of the trading volume and the volume changes in the Korean stock market. The probability distribution function of the trading volume shows double peaks and follows a power law, P(V/⟨V⟩) ∼ (V/⟨V⟩)^(-α), at the tail part of the distribution with α=4.15(4) for the KOSPI (Korea composite Stock Price Index) and α=4.22(2) for the KOSDAQ (Korea Securities Dealers Automated Quotations), where V is the trading volume and ⟨V⟩ is the monthly average value of the trading volume. The second peaks originate from the increasing trends of the average volume. The probability distribution function of the volume changes also follows a power law, P(V_r) ∼ V_r^(-β), where V_r = V(t) - V(t-T) and T is a time lag. The exponents β depend on the time lag T. We observe that the exponents β for the KOSDAQ are larger than those for the KOSPI.
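
    Tail exponents of the kind reported here are commonly estimated from the upper order statistics; the sketch below uses the standard Hill estimator, which is not necessarily the authors' procedure. Note it returns the survival-function exponent a, which for a pure power law relates to the quoted density exponent by α = a + 1.

    ```python
    import numpy as np

    def hill_tail_index(sample, k=200):
        """Hill estimate of a for P(X > x) ~ x^(-a), using k upper order statistics."""
        x = np.sort(np.asarray(sample, dtype=float))
        threshold = x[-(k + 1)]                        # the (k+1)-th largest value
        gamma = np.mean(np.log(x[-k:] / threshold))    # mean log-excess above threshold
        return 1.0 / gamma
    ```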

  1. Averaging and sampling for magnetic-observatory hourly data

    Directory of Open Access Journals (Sweden)

    J. J. Love

    2010-11-01

    Full Text Available A time and frequency-domain analysis is made of the effects of averaging and sampling methods used for constructing magnetic-observatory hourly data values. Using 1-min data as a proxy for continuous geomagnetic variation, we construct synthetic hourly values of two standard types: instantaneous "spot" measurements and simple 1-h "boxcar" averages. We compare these average-sample types with others: 2-h average, Gaussian, and "brick-wall" low-frequency-pass. Hourly spot measurements provide a statistically unbiased representation of the amplitude range of geomagnetic-field variation, but as a representation of continuous field variation over time, they are significantly affected by aliasing, especially at high latitudes. The 1-h, 2-h, and Gaussian average-samples are affected by a combination of amplitude distortion and aliasing. Brick-wall values are not affected by either amplitude distortion or aliasing, but constructing them is, in an operational setting, relatively more difficult than it is for other average-sample types. It is noteworthy that 1-h average-samples, the present standard for observatory hourly data, have properties similar to Gaussian average-samples that have been optimized for a minimum residual sum of amplitude distortion and aliasing. For 1-h average-samples from medium and low-latitude observatories, the average of the combination of amplitude distortion and aliasing is less than the 5.0 nT accuracy standard established by Intermagnet for modern 1-min data. For medium and low-latitude observatories, average differences between monthly means constructed from 1-min data and monthly means constructed from any of the hourly average-sample types considered here are less than the 1.0 nT resolution of standard databases. We recommend that observatories and World Data Centers continue the standard practice of reporting simple 1-h-average hourly values.
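
    The difference between spot samples and boxcar averages is easy to reproduce on synthetic data; in the sketch below (signal and periods are made up) a fast component aliases into the hourly spot values while the 1-h mean damps it.

    ```python
    import numpy as np

    t = np.arange(24 * 60, dtype=float)    # one day of 1-min samples, in minutes
    # slow 12-h variation plus a fast 50-min component that spot sampling aliases
    signal = np.sin(2 * np.pi * t / 720) + 0.3 * np.sin(2 * np.pi * t / 50)

    spot = signal[::60]                            # instantaneous value on the hour
    boxcar = signal.reshape(24, 60).mean(axis=1)   # simple 1-h "boxcar" averages
    ```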

  2. A novel approach for the averaging of magnetocardiographically recorded heart beats

    Science.gov (United States)

    Di Pietro Paolo, D.; Müller, H.-P.; Erné, S. N.

    2005-05-01

    Performing signal averaging in an efficient and correct way is indispensable since it is a prerequisite for a broad variety of magnetocardiographic (MCG) analysis methods. One of the most common procedures for performing the signal averaging to increase the signal-to-noise ratio (SNR) in magnetocardiography, as well as in electrocardiography (ECG), is done by means of spatial or temporal techniques. In this paper, an improvement of the temporal averaging method is presented. In order to obtain an accurate signal detection, temporal alignment methods and objective classification criteria are developed. The processing technique based on hierarchical clustering is introduced to take into account the non-stationarity of the noise and, to some extent, the biological variability of the signals reaching the optimum SNR. The method implemented is especially designed to run fast and does not require any interaction from the operator. The averaging procedure described in this work is applied to the averaging of MCG data as an example, but with its intrinsic properties it can also be applied to the averaging of ECG recording, averaging of body-surface-potential mapping (BSPM) and averaging of magnetoencephalographic (MEG) or electroencephalographic (EEG) signals.
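
    A stripped-down version of temporal averaging with alignment is sketched below: each beat is shifted by the lag maximizing its cross-correlation with a reference, and the aligned beats are averaged. The published method additionally classifies beats by hierarchical clustering before averaging, which is omitted here.

    ```python
    import numpy as np

    def align_and_average(beats):
        """Align equal-length beats to the first one, then average pointwise."""
        ref = np.asarray(beats[0], dtype=float)
        aligned = []
        for b in beats:
            b = np.asarray(b, dtype=float)
            xc = np.correlate(b - b.mean(), ref - ref.mean(), mode="full")
            lag = int(np.argmax(xc)) - (len(ref) - 1)   # shift of best match
            aligned.append(np.roll(b, -lag))
        return np.mean(aligned, axis=0)
    ```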

  3. High volume data storage architecture analysis

    Science.gov (United States)

    Malik, James M.

    1990-01-01

    A High Volume Data Storage Architecture Analysis was conducted. The results, presented in this report, will be applied to problems of high volume data requirements such as those anticipated for the Space Station Control Center. High volume data storage systems at several different sites were analyzed for archive capacity, storage hierarchy and migration philosophy, and retrieval capabilities. Proposed architectures were solicited from the sites selected for in-depth analysis. Model architectures for a hypothetical data archiving system, for a high speed file server, and for high volume data storage are attached.

  4. Averaging VMAT treatment plans for multi-criteria navigation

    CERN Document Server

    Craft, David; Unkelbach, Jan

    2013-01-01

    The main approach to smooth Pareto surface navigation for radiation therapy multi-criteria treatment planning involves taking real-time averages of pre-computed treatment plans. In fluence-based treatment planning, fluence maps themselves can be averaged, which leads to the dose distributions being averaged due to the linear relationship between fluence and dose. This works for fluence-based photon plans and proton spot scanning plans. In this technical note, we show that two or more sliding window volumetric modulated arc therapy (VMAT) plans can be combined by averaging leaf positions in a certain way, and we demonstrate that the resulting dose distribution for the averaged plan is approximately the average of the dose distributions of the original plans. This leads to the ability to do Pareto surface navigation, i.e. interactive multi-criteria exploration of VMAT plan dosimetric tradeoffs.
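
    The core operation can be sketched as a convex combination of matched leaf trajectories. This is a naive rendering: the paper's "certain way" of averaging requires the sliding-window apertures of the plans to be matched consistently, which is simply assumed here, and the array layout is an assumption as well.

    ```python
    import numpy as np

    # Leaf positions assumed shaped (control_points, leaf_pairs, 2),
    # the last axis holding the two leaf banks.
    def average_vmat_plans(leaves_a, leaves_b, w=0.5):
        """Navigated plan as a weighted average of matched leaf positions."""
        return w * np.asarray(leaves_a) + (1.0 - w) * np.asarray(leaves_b)
    ```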

  5. Averaging and exact perturbations in LTB dust models

    CERN Document Server

    Sussman, Roberto A

    2012-01-01

    We introduce a scalar weighted average ("q-average") acting on concentric comoving domains in spherically symmetric Lemaitre-Tolman-Bondi (LTB) dust models. The resulting averaging formalism allows for an elegant coordinate independent dynamical study of the models, providing as well a valuable theoretical insight on the properties of scalar averaging in inhomogeneous spacetimes. The q-averages of those covariant scalars common to FLRW models (the "q-scalars") identically satisfy FLRW evolution laws and determine for every domain a unique FLRW background state. All curvature and kinematic proper tensors and their invariant contractions are expressible in terms of the q-scalars and their linear and quadratic local fluctuations, which convey the effects of inhomogeneity through the ratio of Weyl to Ricci curvature invariants and the magnitude of radial gradients. We define also non-local fluctuations associated with the intuitive notion of a "contrast" with respect to FLRW reference averaged values assigned to a...

  6. A Flight Investigation of Control, Display, and Guidance Requirements for Decelerating Descending VTOL Instrument Transitions using the X-22A Variable Stability Aircraft. Volume 1. Technical Discussion and Results

    Science.gov (United States)

    1975-09-01

    airspeed/ground speed switching logic, configuration change command. 5.0 Selection of control system types, individual system characteristics... all five control system types in combination with the three most sophisticated display presentations is intended to provide some guidance in... deceleration profiles. 2. The required dynamic characteristics of the generic control system types investigated in this experiment should be

  7. Thermodynamic volume of cosmological solitons

    Science.gov (United States)

    Mbarek, Saoussen; Mann, Robert B.

    2017-02-01

    We present explicit expressions of the thermodynamic volume inside and outside the cosmological horizon of Eguchi-Hanson solitons in general odd dimensions. These quantities are calculable and well-defined regardless of whether or not the regularity condition for the soliton is imposed. For the inner case, we show that the reverse isoperimetric inequality is not satisfied for general values of the soliton parameter a, though a narrow range exists for which the inequality does hold. For the outer case, we find that the mass Mout satisfies the maximal mass conjecture and the volume is positive. We also show that, by requiring Mout to yield the mass of dS spacetime when the soliton parameter vanishes, the associated cosmological volume is always positive.

  8. Thermodynamic Volume of Cosmological Solitons

    CERN Document Server

    Mbarek, Saoussen

    2016-01-01

    We present explicit expressions of the thermodynamic volume inside and outside the cosmological horizon of Eguchi-Hanson solitons in general odd dimensions. These quantities are calculable and well-defined regardless of whether or not the regularity condition for the soliton is imposed. For the inner case, we show that the reverse isoperimetric inequality is not satisfied for general values of the soliton parameter $a$, though a narrow range exists for which the inequality does hold. For the outer case, we find that the mass $M_{out}$ satisfies the maximal mass conjecture and the volume is positive. We also show that, by requiring $M_{out}$ to yield the mass of dS spacetime when the soliton parameter vanishes, the associated cosmological volume is always positive.

  9. Distributed Weighted Parameter Averaging for SVM Training on Big Data

    OpenAIRE

    Das, Ayan; Bhattacharya, Sourangshu

    2015-01-01

    Two popular approaches for distributed training of SVMs on big data are parameter averaging and ADMM. Parameter averaging is efficient but suffers from loss of accuracy with increase in number of partitions, while ADMM in the feature space is accurate but suffers from slow convergence. In this paper, we report a hybrid approach called weighted parameter averaging (WPA), which optimizes the regularized hinge loss with respect to weights on parameters. The problem is shown to be the same as solving...
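
    The combination step of weighted parameter averaging can be sketched as below; the per-partition SVM weight vectors and the averaging weights are taken as given, whereas the paper obtains the latter by optimizing the regularized hinge loss.

    ```python
    import numpy as np

    def weighted_parameter_average(partition_models, a):
        """Combine per-partition weight vectors w_k with normalized weights a_k."""
        W = np.stack(partition_models)     # shape: (partitions, features)
        a = np.asarray(a, dtype=float)
        return (a / a.sum()) @ W           # combined SVM weight vector
    ```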

  10. On the average crosscap number Ⅱ: Bounds for a graph

    Institute of Scientific and Technical Information of China (English)

    Yi-chao CHEN; Yan-pei LIU

    2007-01-01

    The bounds are obtained for the average crosscap number. Let G be a graph which is not a tree. It is shown that the average crosscap number of G is not less than (2^(β(G)-1) - 1)/2^(β(G)-1) · β(G) and not larger than β(G). Furthermore, we also describe the structure of the graphs which attain the bounds of the average crosscap number.

  11. On the average crosscap number II: Bounds for a graph

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    The bounds are obtained for the average crosscap number. Let G be a graph which is not a tree. It is shown that the average crosscap number of G is not less than (2^(β(G)-1) - 1)/2^(β(G)-1) · β(G) and not larger than β(G). Furthermore, we also describe the structure of the graphs which attain the bounds of the average crosscap number.

  12. SPATIAL DISTRIBUTION OF THE AVERAGE RUNOFF IN THE IZA AND VIȘEU WATERSHEDS

    Directory of Open Access Journals (Sweden)

    HORVÁTH CS.

    2015-03-01

    Full Text Available The average runoff represents the main parameter with which one can best evaluate an area’s water resources, and it is also an important characteristic in all river runoff research. In this paper we choose a GIS methodology for assessing the spatial evolution of the average runoff; using validity curves we identify three validity areas in which the runoff changes differently with altitude. The three curves were charted using the average runoff values of 16 hydrometric stations from the area, eight in the Vișeu and eight in the Iza river catchment. Identifying the appropriate areas of the obtained correlation curves (between specific average runoff and catchment mean altitude) allowed the assessment of potential runoff at catchment level and on altitudinal intervals. By integrating the curves’ functions into GIS we created an average runoff map for the area, from which one can easily extract runoff data using GIS spatial analyst functions. The study shows that of the three areas the highest runoff corresponds to the third zone, but because of its small area the water volume is also minor. It is also shown that with the use of the created runoff map we can quickly compute correct runoff values for areas without hydrologic control.

  13. Decision trees with minimum average depth for sorting eight elements

    KAUST Repository

    AbouEisha, Hassan

    2015-11-19

    We prove that the minimum average depth of a decision tree for sorting 8 pairwise different elements is equal to 620160/8!. We show also that each decision tree for sorting 8 elements, which has minimum average depth (the number of such trees is approximately equal to 8.548×10^326365), has also minimum depth. Both problems were considered by Knuth (1998). To obtain these results, we use tools based on extensions of dynamic programming which allow us to make sequential optimization of decision trees relative to depth and average depth, and to count the number of decision trees with minimum average depth.
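
    The quoted value is a simple ratio of the total external path length to the number of permutations, as the quick check below shows.

    ```python
    from math import factorial

    avg_depth = 620160 / factorial(8)   # 620160 / 40320 = 15.3809... comparisons on average
    ```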

  14. Practical definition of averages of tensors in general relativity

    CERN Document Server

    Boero, Ezequiel F

    2016-01-01

    We present a definition of tensor fields which are averages of tensors over a manifold, with a straightforward and natural definition of derivative for the averaged fields, which in turn makes a suitable and practical construction for the study of averages of tensor fields that satisfy differential equations. Although we have in mind applications to general relativity, our presentation is applicable to a general n-dimensional manifold. The definition is based on the integration of scalars constructed from a physically motivated basis, making use of the least amount of geometrical structure. We also present definitions of the covariant derivative of the averaged tensors and the Lie derivative.

  15. Optimal and fast rotational alignment of volumes with missing data in Fourier space.

    Science.gov (United States)

    Shatsky, Maxim; Arbelaez, Pablo; Glaeser, Robert M; Brenner, Steven E

    2013-11-01

    Electron tomography of intact cells has the potential to reveal the entire cellular content at a resolution corresponding to individual macromolecular complexes. Characterization of macromolecular complexes in tomograms is nevertheless an extremely challenging task due to the high level of noise, and due to the limited tilt angle that results in missing data in Fourier space. By identifying particles of the same type and averaging their 3D volumes, it is possible to obtain a structure at a more useful resolution for biological interpretation. Currently, classification and averaging of sub-tomograms are limited by the speed of computational methods that optimize alignment between two sub-tomographic volumes. The alignment optimization is hampered by the fact that the missing data in Fourier space has to be taken into account during the rotational search. A similar problem appears in single particle electron microscopy where the random conical tilt procedure may require averaging of volumes with a missing cone in Fourier space. We present a fast implementation of a method guaranteed to find an optimal rotational alignment that maximizes the constrained cross-correlation function (cCCF) computed over the actual overlap of data in Fourier space.
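
    The scoring function at the heart of such a search can be sketched as a normalized cross-correlation restricted to the overlap of the two Fourier-space sampling masks; the rotational search itself and the fast implementation are omitted, so the function below only illustrates the cCCF idea.

    ```python
    import numpy as np

    def constrained_cc(vol1, vol2, mask1, mask2):
        """Normalized correlation over frequencies measured in both volumes."""
        F1, F2 = np.fft.fftn(vol1), np.fft.fftn(vol2)
        overlap = mask1 & mask2                  # boolean masks of present data
        a, b = F1[overlap], F2[overlap]
        return np.real(np.vdot(a, b)) / (np.linalg.norm(a) * np.linalg.norm(b))
    ```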

  16. Image plane sweep volume illumination.

    Science.gov (United States)

    Sundén, Erik; Ynnerman, Anders; Ropinski, Timo

    2011-12-01

    In recent years, many volumetric illumination models have been proposed, which have the potential to simulate advanced lighting effects and thus support improved image comprehension. Although volume ray-casting is widely accepted as the volume rendering technique which achieves the highest image quality, so far no volumetric illumination algorithm has been designed to be directly incorporated into the ray-casting process. In this paper we propose image plane sweep volume illumination (IPSVI), which allows the integration of advanced illumination effects into a GPU-based volume ray-caster by exploiting the plane sweep paradigm. Thus, we are able to reduce the problem complexity and achieve interactive frame rates, while supporting scattering as well as shadowing. Since all illumination computations are performed directly within a single rendering pass, IPSVI does not require any preprocessing nor does it need to store intermediate results within an illumination volume. It therefore has a significantly lower memory footprint than other techniques. This makes IPSVI directly applicable to large data sets. Furthermore, the integration into a GPU-based ray-caster allows for high image quality as well as improved rendering performance by exploiting early ray termination. This paper discusses the theory behind IPSVI, describes its implementation, demonstrates its visual results and provides performance measurements.

  17. Ovarian volume throughout life

    DEFF Research Database (Denmark)

    Kelsey, Thomas W; Dodwell, Sarah K; Wilkinson, A Graham

    2013-01-01

    cancer. To date there is no normative model of ovarian volume throughout life. By searching the published literature for ovarian volume in healthy females, and using our own data from multiple sources (combined n=59,994) we have generated and robustly validated the first model of ovarian volume from...... to about 2.8 mL (95% CI 2.7-2.9 mL) at the menopause and smaller volumes thereafter. Our model allows us to generate normal values and ranges for ovarian volume throughout life. This is the first validated normative model of ovarian volume from conception to old age; it will be of use in the diagnosis...

  18. Experimental analysis of fuzzy controlled energy efficient demand controlled ventilation economizer cycle variable air volume air conditioning system

    Directory of Open Access Journals (Sweden)

    Rajagopalan Parameshwaran

    2008-01-01

    Full Text Available In the quest for energy-conservative building design, there is now a great opportunity for a flexible and sophisticated air conditioning system capable of addressing the better thermal comfort, indoor air quality, and energy efficiency that are strongly desired. The variable refrigerant volume air conditioning system provides considerable energy savings, cost effectiveness and reduced space requirements. Applications of intelligent control like the fuzzy logic controller, especially adapted to variable air volume air conditioning systems, have drawn more interest in recent years than classical control systems. An experimental analysis was performed to investigate the inherent operational characteristics of the combined variable refrigerant volume and variable air volume air conditioning systems under fixed ventilation, demand controlled ventilation, and combined demand controlled ventilation and economizer cycle techniques for two seasonal conditions. The test results of the variable refrigerant volume and variable air volume air conditioning system for each technique are presented. The test results indicate that the system controlled by the fuzzy logic methodology and operated under the CO2-based mechanical ventilation scheme effectively yields average energy savings of 37% and 56% per day in summer and winter conditions, respectively. Based on the experimental results, the fuzzy based combined system can be considered an alternative energy-efficient air conditioning scheme, having significant energy-saving potential compared to the conventional constant air volume air conditioning system.

  19. Disc volume reduction with percutaneous nucleoplasty in an animal model.

    Directory of Open Access Journals (Sweden)

    Richard Kasch

    Full Text Available STUDY DESIGN: We assessed volume following nucleoplasty disc decompression in lower lumbar spines from cadaveric pigs using 7.1-Tesla magnetic resonance imaging (MRI). PURPOSE: To investigate coblation-induced volume reductions as a possible mechanism underlying nucleoplasty. METHODS: We assessed volume following nucleoplastic disc decompression in pig spines using 7.1-Tesla MRI. Volumetry was performed in lumbar discs of 21 postmortem pigs. A preoperative image data set was obtained, volume was determined, and either disc decompression or placebo therapy was performed in a randomized manner. Group 1 (nucleoplasty group) was treated according to the usual nucleoplasty protocol with coblation current applied to 6 channels for 10 seconds each in an application field of 360°; in group 2 (placebo group) the same procedure was performed but without coblation current. After the procedure, a second data set was generated and volumes calculated and matched with the preoperative measurements in a blinded manner. To analyze the effectiveness of nucleoplasty, volumes between treatment and placebo groups were compared. RESULTS: The average preoperative nucleus volume was 0.994 ml (SD: 0.298 ml). In the nucleoplasty group (n = 21) volume was reduced by an average of 0.087 ml (SD: 0.110 ml) or 7.14%. In the placebo group (n = 21) volume was increased by an average of 0.075 ml (SD: 0.075 ml) or 8.94%. The average nucleoplasty-induced volume reduction was 0.162 ml (SD: 0.124 ml) or 16.08%. Volume reduction in lumbar discs was significant in favor of the nucleoplasty group (p<0.0001). CONCLUSIONS: Our study demonstrates that nucleoplasty has a volume-reducing effect on the lumbar nucleus pulposus in an animal model. Furthermore, we show the volume reduction to be a coblation effect of nucleoplasty in porcine discs.

  20. Measurements of real-world vehicle CO and NOx fleet average emissions in urban tunnels of two cities in China

    Science.gov (United States)

    Deng, Yiwen; Chen, Chao; Li, Qiong; Hu, Qinqiang; Yuan, Haoting; Li, Junmei; Li, Yan

    2015-12-01

    Urban tunnels located in city center areas can alleviate traffic pressure and provide more convenient travel. Vehicles emit pollutants that are significant contributors to air pollution inside and at the outlet of tunnels. Ventilation is the most widely used method to dilute pollutants in tunnels. To accurately calculate the required design air volume flow, vehicle emissions should be exactly determined. Emission factors are important parameters to estimate vehicle emissions. To characterize carbon monoxide (CO) and nitrogen oxides (NOX) emission factors for a mixed vehicle fleet under real-world driving conditions of urban China, we measured CO and NOX concentrations in the Shanghai East Yan'an Road tunnel and the Changsha Yingpan Road tunnel in 2012 and 2013. In-use fleet average CO and NOX emission factors were calculated according to tunnel pollutant mass balance models. The results showed that the maximum CO concentration in August was 86 ppm, while in October it was 45 ppm in the Shanghai East Yan'an Road tunnel. The maximum concentrations of CO and NOX were 33 ppm and 2 ppm in the Changsha Yingpan Road tunnel, respectively. In-use fleet average CO emission factors of the East Yan'an Road tunnel, with gradient of -3% ∼ 3%, were 1.266 (±0.889) ∼ 3.974 (±2.189) g km-1 vehicle-1. In-use fleet average CO and NOX emission factors of the Yingpan Road tunnel with gradient of -6% ∼ 6% amounted to 0.754 (±0.561) ∼ 6.050 (±5.940) g km-1 vehicle-1 and 0.121 (±0.022) ∼ 0.818 (±0.755) g km-1 vehicle-1, respectively. The average CO and NOX emission factors increased with roadway gradient and decreased with vehicle speed. These findings provide a meaningful reference for ventilation design and environmental assessment of urban tunnels, and further help provide basic data to formulate relevant standards and norms.
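
    A tunnel mass-balance emission factor reduces to "excess pollutant mass removed by ventilation per vehicle-kilometre". The sketch below assumes simplified steady conditions and particular units, so it is an illustration rather than the study's model.

    ```python
    # Assumed units: concentrations in mg/m^3, airflow in m^3/s, interval in s.
    def fleet_emission_factor(c_out, c_in, airflow_m3_s, dt_s, n_vehicles, length_km):
        """Fleet-average emission factor in g km-1 vehicle-1."""
        emitted_g = (c_out - c_in) * airflow_m3_s * dt_s / 1000.0   # mg -> g
        return emitted_g / (n_vehicles * length_km)
    ```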

  1. Variational theory of average-atom and superconfigurations in quantum plasmas.

    Science.gov (United States)

    Blenski, T; Cichocki, B

    2007-05-01

    Models of screened ions in equilibrium plasmas with all quantum electrons are important in opacity and equation of state calculations. Although such models have to be derived from variational principles, up to now existing models have not been fully variational. In this paper a fully variational theory respecting the virial theorem is proposed: all variables are variational except the parameters defining the equilibrium, i.e., the temperature T, the ion density ni and the atomic number Z. The theory is applied to the quasiclassical Thomas-Fermi (TF) atom, the quantum average atom (QAA), and the superconfigurations (SC) in plasmas. Both the self-consistent-field (SCF) equations for the electronic structure and the condition for the mean ionization Z* are found from minimization of a thermodynamic potential. This potential is constructed using the cluster expansion of the plasma free energy, from which the zero- and first-order terms are retained. In the zero order the free energy per ion is that of the quantum homogeneous plasma of an unknown free-electron density n0 = Z* ni occupying the volume 1/ni. In the first order, ions submerged in this plasma are considered and local neutrality is assumed. These ions are considered in the infinite space without imposing the neutrality of the Wigner-Seitz (WS) cell. As in the Inferno model, a central cavity of a radius R is introduced; however, the value of R is unknown a priori. The charge density due to noncentral ions is zero inside the cavity and equals en0 outside. The first-order contribution to free energy per ion is the difference between the free energy of the system "central ion + infinite plasma" and the free energy of the system "infinite plasma." An important part of the approach is an "ionization model" (IM), which is a relation between the mean ionization charge Z* and the first-order structure variables. Both the IM and the local neutrality are respected in the minimization procedure. The correct IM in the TF case

  2. The average visual response in patients with cerebrovascular disease

    NARCIS (Netherlands)

    Oostehuis, H.J.G.H.; Ponsen, E.J.; Jonkman, E.J.; Magnus, O.

    1969-01-01

    The average visual response (AVR) was recorded in thirty patients after a cerebrovascular accident and in fourteen control subjects from the same age group. The AVR was obtained with the aid of a 16-channel EEG machine, a Computer of Average Transients and a tape recorder with 13 FM channels. This

  3. Charging for computer usage with average cost pricing

    CERN Document Server

    Landau, K

    1973-01-01

    This preliminary report, which is mainly directed to commercial computer centres, gives an introduction to the application of average cost pricing when charging for using computer resources. A description of the cost structure of a computer installation shows advantages and disadvantages of average cost pricing. This is completed by a discussion of the different charging-rates which are possible. (10 refs).

  4. On the Average-Case Complexity of Shellsort

    NARCIS (Netherlands)

    Vitányi, P.M.B.

    2015-01-01

    We prove a lower bound expressed in the increment sequence on the average-case complexity (number of inversions which is proportional to the running time) of Shellsort. This lower bound is sharp in every case where it could be checked. We obtain new results e.g. determining the average-case complexi

  5. Interpreting Bivariate Regression Coefficients: Going beyond the Average

    Science.gov (United States)

    Halcoussis, Dennis; Phillips, G. Michael

    2010-01-01

    Statistics, econometrics, investment analysis, and data analysis classes often review the calculation of several types of averages, including the arithmetic mean, geometric mean, harmonic mean, and various weighted averages. This note shows how each of these can be computed using a basic regression framework. By recognizing when a regression model…
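
    The regression view of averaging is easy to demonstrate: an intercept-only OLS fit returns the arithmetic mean, and row-scaling by square-root weights turns it into a weighted average (transforming y, e.g. taking logs or reciprocals, yields geometric or harmonic means the same way). The numbers are illustrative.

    ```python
    import numpy as np

    y = np.array([2.0, 4.0, 8.0])
    X = np.ones((len(y), 1))                       # intercept-only design matrix

    # Arithmetic mean: the OLS coefficient of the intercept-only model.
    arithmetic = np.linalg.lstsq(X, y, rcond=None)[0][0]

    # Weighted average: weighted least squares via sqrt-weight row scaling.
    w = np.array([1.0, 2.0, 1.0])
    weighted = np.linalg.lstsq(np.sqrt(w)[:, None] * X, np.sqrt(w) * y, rcond=None)[0][0]
    ```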

  6. Analytic computation of average energy of neutrons inducing fission

    Energy Technology Data Exchange (ETDEWEB)

    Clark, Alexander Rich [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-08-12

    The objective of this report is to describe how I analytically computed the average energy of neutrons that induce fission in the bare BeRP ball. The motivation of this report is to resolve a discrepancy between the average energy computed via the FMULT and F4/FM cards in MCNP6 by comparison to the analytic results.

  7. Safety Impact of Average Speed Control in the UK

    DEFF Research Database (Denmark)

    Lahrmann, Harry Spaabæk; Brassøe, Bo; Johansen, Jonas Wibert

    2016-01-01

    in the UK. The study demonstrates that the introduction of average speed control results in statistically significant and substantial reductions both in speed and in number of accidents. The evaluation indicates that average speed control has a higher safety effect than point-based automatic speed control....

  8. A Statistical Mechanics Approach to Approximate Analytical Bootstrap Averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, Manfred

    2003-01-01

    We apply the replica method of Statistical Physics combined with a variational method to the approximate analytical computation of bootstrap averages for estimating the generalization error. We demonstrate our approach on regression with Gaussian processes and compare our results with averages...

  9. A Simple Geometrical Derivation of the Spatial Averaging Theorem.

    Science.gov (United States)

    Whitaker, Stephen

    1985-01-01

    The connection between single phase transport phenomena and multiphase transport phenomena is easily accomplished by means of the spatial averaging theorem. Although different routes to the theorem have been used, this paper provides a route to the averaging theorem that can be used in undergraduate classes. (JN)

  10. Averaged EMG profiles in jogging and running at different speeds

    NARCIS (Netherlands)

    Gazendam, Marnix G. J.; Hof, At L.

    2007-01-01

    EMGs were collected from 14 muscles with surface electrodes in 10 subjects walking 1.25-2.25 m s(-1) and running 1.25-4.5 m s(-1). The EMGs were rectified, interpolated in 100% of the stride, and averaged over all subjects to give an average profile. In running, these profiles could be decomposed in
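
    The rectify-interpolate-average pipeline described here is short to write down; the sketch below assumes one EMG trace per stride and a common 100-point stride axis.

    ```python
    import numpy as np

    def average_emg_profile(strides, n_points=100):
        """Rectify each stride's EMG, resample to 0-100% of the stride, average."""
        profiles = []
        for s in strides:
            s = np.abs(np.asarray(s, dtype=float))      # full-wave rectification
            x_old = np.linspace(0.0, 1.0, len(s))
            x_new = np.linspace(0.0, 1.0, n_points)     # percent of the stride
            profiles.append(np.interp(x_new, x_old, s))
        return np.mean(profiles, axis=0)
    ```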

  11. Average widths of anisotropic Besov-Wiener classes

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    This paper concerns the problem of average σ-K width and average σ-L width of some anisotropic Besov-Wiener classes S^r_{pqθ}b(R^d) and S^r_{pqθ}B(R^d) in L_q(R^d) (1 ≤ q ≤ p < ∞). The weak asymptotic behavior is established for the corresponding quantities.

  12. 7 CFR 701.17 - Average adjusted gross income limitation.

    Science.gov (United States)

    2010-01-01

    ... 9003), each applicant must meet the provisions of the Adjusted Gross Income Limitations at 7 CFR part... 7 Agriculture 7 2010-01-01 2010-01-01 false Average adjusted gross income limitation. 701.17... RELATED PROGRAMS PREVIOUSLY ADMINISTERED UNDER THIS PART § 701.17 Average adjusted gross income...

  13. A note on moving average models for Gaussian random fields

    DEFF Research Database (Denmark)

    Hansen, Linda Vadgård; Thorarinsdottir, Thordis L.

    The class of moving average models offers a flexible modeling framework for Gaussian random fields with many well known models such as the Matérn covariance family and the Gaussian covariance falling under this framework. Moving average models may also be viewed as a kernel smoothing of a Lévy...

  14. (Average-) convexity of common pool and oligopoly TU-games

    NARCIS (Netherlands)

    Driessen, T.S.H.; Meinhardt, H.

    2000-01-01

    The paper studies both the convexity and average-convexity properties for a particular class of cooperative TU-games called common pool games. The common pool situation involves a cost function as well as a (weakly decreasing) average joint production function. Firstly, it is shown that, if the rele

  15. Average widths of anisotropic Besov-Wiener classes

    Institute of Scientific and Technical Information of China (English)

    蒋艳杰

    2000-01-01

    This paper concerns the problem of average σ-K width and average σ-L width of some anisotropic Besov-Wiener classes S^r_{pqθ}b(R^d) and S^r_{pqθ}B(R^d) in L_q(R^d) (1 ≤ q ≤ p < ∞). The weak asymptotic behavior is established for the corresponding quantities.

  16. Remarks on the Lower Bounds for the Average Genus

    Institute of Scientific and Technical Information of China (English)

    Yi-chao Chen

    2011-01-01

    Let G be a graph of maximum degree at most four. By using the overlap matrix method introduced by B. Mohar, we show that the average genus of G is not less than 1/3 of its maximum genus, and the bound is best possible. Also, a new lower bound on the average genus in terms of girth is derived.

  17. Delineating the Average Rate of Change in Longitudinal Models

    Science.gov (United States)

    Kelley, Ken; Maxwell, Scott E.

    2008-01-01

    The average rate of change is a concept that has been misunderstood in the literature. This article attempts to clarify the concept and show unequivocally the mathematical definition and meaning of the average rate of change in longitudinal models. The slope from the straight-line change model has at times been interpreted as if it were always the…

  18. Time-dependent density functional theory with twist-averaged boundary conditions

    CERN Document Server

    Schuetrumpf, B; Reinhard, P -G

    2016-01-01

    Time-dependent density functional theory is widely used to describe excitations of many-fermion systems. In its many applications, 3D coordinate-space representation is used, and infinite-domain calculations are limited to a finite volume represented by a box. For finite quantum systems (atoms, molecules, nuclei), the commonly used periodic or reflecting boundary conditions introduce spurious quantization of the continuum states and artificial reflections from the boundary; hence, an incorrect treatment of evaporated particles. These artifacts can be practically cured by introducing absorbing boundary conditions (ABC) through an absorbing potential in a certain boundary region sufficiently far from the described system. But the calculations of infinite matter (crystal electrons, quantum fluids, neutron star crust) also suffer artifacts from a finite computational box. In this regime, twist-averaged boundary conditions (TABC) have been used successfully to diminish the finite-volume effects. In this work, we exte...

  19. Vibrational resonance: a study with high-order word-series averaging

    CERN Document Server

    Murua, Ander

    2016-01-01

    We study a model problem describing vibrational resonance by means of a high-order averaging technique based on so-called word series. With the technique applied here, the tasks of constructing the averaged system and the associated change of variables are divided into two parts. It is first necessary to build recursively a set of so-called word basis functions and, after that, all the required manipulations involve only scalar coefficients that are computed by means of simple recursions. As distinct from the situation with other approaches, with word series, high-order averaged systems may be derived without having to compute the associated change of variables. In the system considered here, the construction of high-order averaged systems makes it possible to obtain very precise approximations to the true dynamics.

  20. Average cross-responses in correlated financial markets

    Science.gov (United States)

    Wang, Shanshan; Schäfer, Rudi; Guhr, Thomas

    2016-09-01

    There are non-vanishing price responses across different stocks in correlated financial markets, reflecting non-Markovian features. We further study this issue by performing different averages, which identify active and passive cross-responses. The two average cross-responses show different characteristic dependences on the time lag. The passive cross-response exhibits a shorter response period with sizeable volatilities, while the corresponding period for the active cross-response is longer. The average cross-responses for a given stock are evaluated either with respect to the whole market or to different sectors. Using the response strength, the influences of individual stocks are identified and discussed. Moreover, the various cross-responses as well as the average cross-responses are compared with the self-responses. In contrast to the short-memory trade sign cross-correlations for each pair of stocks, the sign cross-correlations averaged over different pairs of stocks show long memory.

  1. The Optimal Selection for Restricted Linear Models with Average Estimator

    Directory of Open Access Journals (Sweden)

    Qichang Xie

    2014-01-01

    Full Text Available The essential task of risk investment is to select an optimal tracking portfolio among various portfolios. Statistically, this process can be achieved by choosing an optimal restricted linear model. This paper develops a statistical procedure to do this, based on selecting appropriate weights for averaging approximately restricted models. The method of weighted average least squares is adopted to estimate the approximately restricted models under a dependent error setting. The optimal weights are selected by minimizing a k-class generalized information criterion (k-GIC), which is an estimate of the average squared error from the model average fit. This model selection procedure is shown to be asymptotically optimal in the sense of obtaining the lowest possible average squared error. Monte Carlo simulations illustrate that the suggested method has comparable efficiency to some alternative model selection techniques.

  2. Do Diurnal Aerosol Changes Affect Daily Average Radiative Forcing?

    Energy Technology Data Exchange (ETDEWEB)

    Kassianov, Evgueni I.; Barnard, James C.; Pekour, Mikhail S.; Berg, Larry K.; Michalsky, Joseph J.; Lantz, K.; Hodges, G. B.

    2013-06-17

    Strong diurnal variability of aerosol has been observed frequently for many urban/industrial regions. How this variability may alter the direct aerosol radiative forcing (DARF), however, is largely unknown. To quantify changes in the time-averaged DARF, we perform an assessment of 29 days of high temporal resolution ground-based data collected during the Two-Column Aerosol Project (TCAP) on Cape Cod, which is downwind of metropolitan areas. We demonstrate that strong diurnal changes of aerosol loading (about 20% on average) have a negligible impact on the 24-h average DARF, when daily averaged optical properties are used to find this quantity. However, when there is a sparse temporal sampling of aerosol properties, which may preclude the calculation of daily averaged optical properties, large errors (up to 100%) in the computed DARF may occur. We describe a simple way of reducing these errors, which suggests the minimal temporal sampling needed to accurately find the forcing.

  3. SPARSE: A Subgrid Particle Averaged Reynolds Stress Equivalent Model: Testing with A Priori Closure

    CERN Document Server

    Davis, Sean; Sen, Oishik; Udaykumar, H S

    2016-01-01

    A Lagrangian particle cloud model is proposed that accounts for the effects of Reynolds-averaged particle and turbulent stresses and the averaged carrier-phase velocity of the sub-particle-cloud scale on the averaged motion and velocity of the cloud. The SPARSE (Subgrid Particle Average Reynolds Stress Equivalent) model is based on a combination of a truncated Taylor expansion of a drag correction function and Reynolds averaging. It reduces the required number of computational parcels to trace a cloud of particles in Eulerian-Lagrangian methods for the simulation of particle-laden flow. Closure is performed in an a priori manner using a reference simulation where all particles in the cloud are traced individually with a point particle model. Comparison of a first-order model and SPARSE with the reference simulation in one-dimension shows that both the stress and the averaging of the carrier-phase velocity on the cloud subscale affect the averaged motion of the particle. A three-dimensional isotropic turbulenc...

  4. 16 CFR Appendix K to Part 305 - Representative Average Unit Energy Costs

    Science.gov (United States)

    2010-01-01

    ... 16 Commercial Practices 1 2010-01-01 2010-01-01 false Representative Average Unit Energy Costs K... CONGRESS RULE CONCERNING DISCLOSURES REGARDING ENERGY CONSUMPTION AND WATER USE OF CERTAIN HOME APPLIANCES AND OTHER PRODUCTS REQUIRED UNDER THE ENERGY POLICY AND CONSERVATION ACT (“APPLIANCE LABELING...

  5. The Effects of Use of Average Instead of Daily Weather Data in Crop Growth Simulation Models

    NARCIS (Netherlands)

    Nonhebel, Sanderine

    1994-01-01

    Development and use of crop growth simulation models has increased in the last decades. Most crop growth models require daily weather data as input values. These data are not easy to obtain and therefore in many studies daily data are generated, or average values are used as input data for these

  6. 47 CFR 36.622 - National and study area average unseparated loop costs.

    Science.gov (United States)

    2010-10-01

    ... nationwide average shall be used in determining the additional interstate expense allocation for companies... reflect the update filings shall not affect the amount of the additional interstate expense allocation for... study area. (1) If a company elects to, or is required to, update the data which it has filed with...

  7. OSCILLATION RESULTS RELATED TO INTEGRAL AVERAGING TECHNIQUE FOR EVEN ORDER NEUTRAL DIFFERENTIAL EQUATION WITH DEVIATING ARGUMENTS

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    In this paper, we study an even order neutral differential equation with deviating arguments, and obtain new oscillation results without the assumptions which were required for related results given before. Our results extend and improve many known oscillation criteria, based on the standard integral averaging technique.

  8. Revised analyses of decommissioning for the reference pressurized Water Reactor Power Station. Volume 2, Effects of current regulatory and other considerations on the financial assurance requirements of the decommissioning rule and on estimates of occupational radiation exposure: Appendices, Final report

    Energy Technology Data Exchange (ETDEWEB)

    Konzek, G.J.; Smith, R.I.; Bierschbach, M.C.; McDuffie, P.N.

    1995-11-01

    With the issuance of the final Decommissioning Rule (July 27, 1988), owners and operators of licensed nuclear power plants are required to prepare, and submit to the US Nuclear Regulatory Commission (NRC) for review, decommissioning plans and cost estimates. The NRC staff is in need of bases documentation that will assist them in assessing the adequacy of the licensee submittals, from the viewpoint of both the planned actions, including occupational radiation exposure, and the probable costs. The purpose of this reevaluation study is to provide some of the needed bases documentation. This report contains the results of a review and reevaluation of the 1978 PNL decommissioning study of the Trojan nuclear power plant (NUREG/CR-0130), including all identifiable factors and cost assumptions which contribute significantly to the total cost of decommissioning the nuclear power plant for the DECON, SAFSTOR, and ENTOMB decommissioning alternatives. These alternatives now include an initial 5--7 year period during which time the spent fuel is stored in the spent fuel pool, prior to beginning major disassembly or extended safe storage of the plant. Included for information (but not presently part of the license termination cost) is an estimate of the cost to demolish the decontaminated and clean structures on the site and to restore the site to a ``green field`` condition. This report also includes consideration of the NRC requirement that decontamination and decommissioning activities leading to termination of the nuclear license be completed within 60 years of final reactor shutdown, consideration of packaging and disposal requirements for materials whose radionuclide concentrations exceed the limits for Class C low-level waste (i.e., Greater-Than-Class C), and reflects 1993 costs for labor, materials, transport, and disposal activities.

  9. Revised analyses of decommissioning for the reference pressurized Water Reactor Power Station. Effects of current regulatory and other considerations on the financial assurance requirements of the decommissioning rule and on estimates of occupational radiation exposure, Volume 1, Final report

    Energy Technology Data Exchange (ETDEWEB)

    Konzek, G.J.; Smith, R.I.; Bierschbach, M.C.; McDuffie, P.N. [Pacific Northwest Lab., Richland, WA (United States)

    1995-11-01

    With the issuance of the final Decommissioning Rule (July 27, 1988), owners and operators of licensed nuclear power plants are required to prepare, and submit to the US Nuclear Regulatory Commission (NRC) for review, decommissioning plans and cost estimates. The NRC staff is in need of bases documentation that will assist them in assessing the adequacy of the licensee submittals, from the viewpoint of both the planned actions, including occupational radiation exposure, and the probable costs. The purpose of this reevaluation study is to provide some of the needed bases documentation. This report contains the results of a review and reevaluation of the 1978 PNL decommissioning study of the Trojan nuclear power plant (NUREG/CR-0130), including all identifiable factors and cost assumptions which contribute significantly to the total cost of decommissioning the nuclear power plant for the DECON, SAFSTOR, and ENTOMB decommissioning alternatives. These alternatives now include an initial 5--7 year period during which time the spent fuel is stored in the spent fuel pool, prior to beginning major disassembly or extended safe storage of the plant. Included for information (but not presently part of the license termination cost) is an estimate of the cost to demolish the decontaminated and clean structures on the site and to restore the site to a "green field" condition. This report also includes consideration of the NRC requirement that decontamination and decommissioning activities leading to termination of the nuclear license be completed within 60 years of final reactor shutdown, consideration of packaging and disposal requirements for materials whose radionuclide concentrations exceed the limits for Class C low-level waste (i.e., Greater-Than-Class C), and reflects 1993 costs for labor, materials, transport, and disposal activities.

  10. Mean nuclear volume

    DEFF Research Database (Denmark)

    Mogensen, O.; Sørensen, Flemming Brandt; Bichel, P.

    1999-01-01

    We evaluated the following nine parameters with respect to their prognostic value in females with endometrial cancer: four stereologic parameters [mean nuclear volume (MNV), nuclear volume fraction, nuclear index and mitotic index], the immunohistochemical expression of cancer antigen (CA125...

  11. APPLICATION OF FDS SCHEME TO 2D DEPTH-AVERAGED FLOW-POLLUTANTS SIMULATION

    Institute of Scientific and Technical Information of China (English)

    Zhang Li-qiong; Zhao Di-hua; Lai Jihn-sung; Yao Qi; Xiao Jun-ying

    2003-01-01

    A Flux Difference Splitting (FDS) scheme was used in a 2D depth-averaged flow-pollutant model. Within the framework of the Finite Volume Method (FVM), the 2D simulation was transferred into solving a series of local 1D problems based on the rotational invariance property of the flux. The FDS scheme was employed to estimate the normal numerical flux of variables including water mass, momentum and pollutant concentration across the interface between cells. The scheme was checked against exact solutions and verified by observations in the Nantong reach of the Yangtze River. Calculated results match both the exact solutions and the observations well.
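
    The abstract only outlines the scheme, but the core idea, splitting the flux difference at each cell interface by wave direction and reducing the 2D problem to local 1D updates, can be illustrated with a minimal sketch. The Python fragment below is a generic first-order upwind FDS update for a single 1D linear advection problem, not the paper's actual model; all function names and parameter values are hypothetical.

```python
import numpy as np

def fds_interface_flux(u_left, u_right, a):
    """Flux-difference-splitting (upwind) flux for u_t + a*u_x = 0.

    The flux difference a*(u_right - u_left) is split by the sign of the
    wave speed a, so only the upwind contribution crosses the face.
    """
    f_left, f_right = a * u_left, a * u_right
    return 0.5 * (f_left + f_right) - 0.5 * abs(a) * (u_right - u_left)

def advect(u, a, dx, dt, n_steps):
    """First-order finite-volume update using the FDS interface flux."""
    for _ in range(n_steps):
        flux = fds_interface_flux(u[:-1], u[1:], a)  # fluxes at interior faces
        u[1:-1] -= dt / dx * (flux[1:] - flux[:-1])  # conservative update
    return u

# Advect a square pulse of pollutant concentration (CFL = a*dt/dx = 0.4).
x = np.linspace(0.0, 10.0, 201)
u0 = np.where(np.abs(x - 2.0) < 0.5, 1.0, 0.0)
u = advect(u0.copy(), a=1.0, dx=0.05, dt=0.02, n_steps=100)
```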

  12. Quantifying Water Stress Using Total Water Volumes and GRACE

    Science.gov (United States)

    Richey, A. S.; Famiglietti, J. S.; Druffel-Rodriguez, R.

    2011-12-01

    Water will follow oil as the next critical resource leading to unrest and uprisings globally. To better manage this threat, an improved understanding of the distribution of water stress is required today. This study builds upon previous efforts to characterize water stress by improving both the quantification of human water use and the definition of water availability. Current statistics on human water use are often outdated or inaccurately reported nationally, especially for groundwater. This study improves these estimates by defining human water use in two ways. First, we use NASA's Gravity Recovery and Climate Experiment (GRACE) to isolate the anthropogenic signal in water storage anomalies, which we equate to water use. Second, we quantify an ideal water demand by using average water requirements for the domestic, industrial, and agricultural water use sectors. Water availability has traditionally been limited to "renewable" water, which ignores large, stored water sources that humans use. We compare water stress estimates derived using either renewable water or the total volume of water globally. We use the best-available data to quantify total aquifer and surface water volumes, as compared to groundwater recharge and surface water runoff from land-surface models. The work presented here should provide a more realistic image of water stress by explicitly quantifying groundwater, defining water availability as total water supply, and using GRACE to more accurately quantify water use.

  13. Non-chain pulsed DF laser with an average power of the order of 100 W

    Science.gov (United States)

    Pan, Qikun; Xie, Jijiang; Wang, Chunrui; Shao, Chunlei; Shao, Mingzhen; Chen, Fei; Guo, Jin

    2016-07-01

    The design and performance of a closed-cycle repetitively pulsed DF laser are described. A Fitch circuit and a thyratron switch are introduced to realize self-sustained volume discharge in SF6-D2 mixtures. The influences of gas parameters and charging voltage on the output characteristics of the non-chain pulsed DF laser are experimentally investigated. In order to improve the laser power stability over long working periods, zeolites with different apertures are used to scrub out the de-excitation particles produced in the electric discharge. An average output power of the order of 100 W was obtained at an operating repetition rate of 50 Hz, with an amplitude difference between laser pulses of <8 %. Under the action of micropore alkaline zeolites, the average power fell by 20 % after the laser had been working continuously for 100 s at a repetition frequency of 50 Hz.

  14. Exploring the Best Classification from Average Feature Combination

    Directory of Open Access Journals (Sweden)

    Jian Hou

    2014-01-01

    Feature combination is a powerful approach to improve object classification performance. While various combination algorithms have been proposed, average combination is almost always selected as the baseline algorithm to be compared with. In previous work we found that it is better to use only a sample of the most powerful features in average combination than to use all of them. In this paper, we continue this work and further show that the behaviors of features in average combination can be integrated into the k-Nearest-Neighbor (kNN) framework. Based on the kNN framework, we then propose a selection-based average combination algorithm to obtain the best classification performance from average combination. Our experiments on four diverse datasets indicate that this selection-based average combination performs clearly better than the ordinary average combination, and thus serves as a better baseline. Comparing with this new and better baseline makes the claimed superiority of newly proposed combination algorithms more convincing. Furthermore, the kNN framework is helpful in understanding the underlying mechanism of feature combination and motivating novel feature combination algorithms.
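
    As a rough sketch of what a selection-based average combination might look like (the paper's exact ranking and normalization rules are not given here, so the details below are assumptions), one can rank each feature channel by its individual leave-one-out kNN accuracy and average only the strongest channels:

```python
import numpy as np

def loo_knn_accuracy(D, y, k=5):
    """Leave-one-out kNN accuracy from a precomputed distance matrix D.

    y must be an integer-coded NumPy array of class labels.
    """
    n = len(y)
    correct = 0
    for i in range(n):
        order = np.argsort(D[i])
        neighbors = [j for j in order if j != i][:k]
        votes = np.bincount(y[neighbors], minlength=y.max() + 1)
        correct += votes.argmax() == y[i]
    return correct / n

def selective_average(D_list, y, top_m=2):
    """Selection-based average combination (sketch): rank each feature
    channel by its individual kNN accuracy, keep only the top_m channels,
    and average their scale-normalized distance matrices."""
    scores = [loo_knn_accuracy(D, y) for D in D_list]
    best = np.argsort(scores)[::-1][:top_m]
    return np.mean([D_list[i] / D_list[i].mean() for i in best], axis=0)
```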

  15. Bounds on Average Time Complexity of Decision Trees

    KAUST Repository

    Chikalov, Igor

    2011-01-01

    In this chapter, bounds on the average depth and the average weighted depth of decision trees are considered. Similar problems are studied in search theory [1], coding theory [77], and the design and analysis of algorithms (e.g., sorting) [38]. For any diagnostic problem, the minimum average depth of a decision tree is bounded from below by the entropy of the probability distribution (with a multiplier 1/log2 k for a problem over a k-valued information system). Among diagnostic problems, the problems with a complete set of attributes have the lowest minimum average depth of decision trees (e.g., the problem of building an optimal prefix code [1] and a blood test study under the assumption that exactly one patient is ill [23]). For such problems, the minimum average depth of a decision tree exceeds the lower bound by at most one. The minimum average depth reaches its maximum on problems in which each attribute is "indispensable" [44] (e.g., a diagnostic problem with n attributes and k^n pairwise different rows in the decision table, and the problem of implementing the modulo 2 summation function). These problems have a minimum average depth of decision tree equal to the number of attributes in the problem description. © Springer-Verlag Berlin Heidelberg 2011.
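
    The entropy lower bound quoted above is easy to evaluate numerically. A minimal sketch (Python, with hypothetical inputs) for a problem with outcome probabilities p over a k-valued information system:

```python
from math import log2

def entropy_lower_bound(probs, k=2):
    """Lower bound on the minimum average depth of a decision tree:
    H(p) / log2(k) for a problem over a k-valued information system."""
    H = -sum(p * log2(p) for p in probs if p > 0)
    return H / log2(k)

# Eight equally likely diagnoses with binary attributes: the bound is 3.0,
# matched to within one, e.g., by an optimal prefix code over 8 symbols.
print(entropy_lower_bound([1 / 8] * 8, k=2))
```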

  16. The structure, thermodynamic and electrochemical properties of hydrogen-storage alloys: An empirical methodology of average numbers of total electrons

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Yanhui [The Institute of Electrochemical Power Sources, Soochow University, Moye Road 688, Suzhou 215006 (China); Ju, Hua [School of Urban Rail Transportation, Soochow University, Ganjiang East Road 178, Suzhou 215021 (China)

    2009-02-15

    It is just the best time to compile and compare the experimental data and to explore the possible laws and to discover the relation between the crystallographic parameters, thermodynamic and electrochemical properties, based on lot of the published experimental data about the hydrogen-storage alloys. An empirical correlation between the unit cell volume and the enthalpy change, equilibrium pressure, discharge capacity has been constructed. The violent change of the equilibrium pressure with the unit cell volume might indicate the change in the interaction nature between the host alloy atoms and the intercalated hydrogen atoms. The dependence of unit cell volume vs. average numbers of total electrons for AB{sub 5}-type alloys exhibits same change tendency as that of the Vanderwaals radius vs. atomic numbers from Fe to Se in the elements' periodic table. It is possible that the total numbers of the electrons decides the unit cell volume. (author)

  17. Unbiased Cultural Transmission in Time-Averaged Archaeological Assemblages

    CERN Document Server

    Madsen, Mark E

    2012-01-01

    Unbiased models are foundational in the archaeological study of cultural transmission. Applications have assumed that archaeological data represent synchronic samples, despite the accretional nature of the archaeological record. I document the circumstances under which time-averaging alters the distribution of model predictions. Richness is inflated in long-duration assemblages, and evenness is "flattened" compared to unaveraged samples. Tests of neutrality, employed to differentiate biased and unbiased models, suffer serious problems with Type I error under time-averaging. Finally, the time-scale over which time-averaging alters predictions is determined by the mean trait lifetime, providing a way to evaluate the impact of these effects upon archaeological samples.

  18. Sample Selected Averaging Method for Analyzing the Event Related Potential

    Science.gov (United States)

    Taguchi, Akira; Ono, Youhei; Kimura, Tomoaki

    The event related potential (ERP) is often measured through the oddball task, in which subjects are given a “rare stimulus” and a “frequent stimulus”. Measured ERPs are analyzed with the averaging technique; the amplitude of the ERP P300 becomes large when the “rare stimulus” is given. However, the measured ERPs include samples that lack the original features of the ERP, so it is necessary to reject unsuitable measured ERPs before averaging. In this paper, we propose a rejection method that removes unsuitable measured ERPs prior to averaging. Moreover, we combine the proposed method with Woody's adaptive filter method.
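
    The paper's exact rejection rule is not reproduced here; a common variant, and one plausible reading of the method, is to correlate each trial with a provisional average and drop trials below a threshold before re-averaging. A minimal sketch under that assumption:

```python
import numpy as np

def selected_average(trials, threshold=0.3):
    """Average only the trials that resemble a provisional template.

    trials: array (n_trials, n_samples), one measured ERP per trial.
    Returns the selected average and the boolean mask of kept trials.
    """
    template = trials.mean(axis=0)                        # provisional average
    r = np.array([np.corrcoef(t, template)[0, 1] for t in trials])
    keep = r >= threshold                                 # reject unsuitable ERPs
    return trials[keep].mean(axis=0), keep
```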

  19. Oppugning the assumptions of spatial averaging of segment and joint orientations.

    Science.gov (United States)

    Pierrynowski, Michael Raymond; Ball, Kevin Arthur

    2009-02-09

    Movement scientists frequently calculate "arithmetic averages" when examining body segment or joint orientations. Such calculations appear routinely, yet are fundamentally flawed. Three-dimensional orientation data are computed as matrices, yet three-ordered Euler/Cardan/Bryant angle parameters are frequently used for interpretation. These parameters are not geometrically independent; thus, the conventional process of averaging each parameter is incorrect. The process of arithmetic averaging also assumes that the distances between data are linear (Euclidean); however, for orientation data these distances are geodesically curved (Riemannian). Therefore we question (oppugn) whether the conventional averaging approach is an appropriate statistic. Fortunately, exact methods of averaging orientation data have been developed which both circumvent the parameterization issue and explicitly acknowledge the Euclidean or Riemannian distance measures. The details of these matrix-based averaging methods are presented and their theoretical advantages discussed. The Euclidean and Riemannian approaches offer appealing advantages over the conventional technique. With respect to practical biomechanical relevancy, examinations of simulated data suggest that for sets of orientation data with low dispersion, an isotropic distribution, and second and third angle parameters below 30 degrees, discrepancies with the conventional approach are less than 1.1 degrees. However, beyond these limits, arithmetic averaging can have substantive non-linear inaccuracies in all three parameterized angles. The biomechanics community is encouraged to recognize that limitations exist with the use of the conventional method of averaging orientations. Investigations requiring more robust spatial averaging over a broader range of orientations may benefit from the use of matrix-based Euclidean or Riemannian calculations.
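
    For the Euclidean case, one concrete matrix-based average is the chordal mean: average the rotation matrices entrywise and project the result back onto SO(3) with an SVD. The sketch below illustrates this standard construction (not necessarily the authors' exact formulation); the Riemannian (geodesic) mean would instead iterate using matrix logarithms.

```python
import numpy as np

def average_rotation(rotations):
    """Chordal (Euclidean) mean of rotation matrices: average the 3x3
    matrices entrywise, then project back onto SO(3) with an SVD, instead
    of averaging Euler/Cardan angle parameters directly."""
    M = np.mean(rotations, axis=0)
    U, _, Vt = np.linalg.svd(M)
    R = U @ Vt
    if np.linalg.det(R) < 0:        # enforce a proper rotation (det = +1)
        U[:, -1] *= -1
        R = U @ Vt
    return R
```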

  20. Percutaneous Ethanol Sclerotherapy of Symptomatic Nodules Is Effective and Safe in Pregnant Women: A Study of 13 Patients with an Average Follow-Up of 6.8 Years

    Directory of Open Access Journals (Sweden)

    Tamas Solymosi

    2015-01-01

    Background. Because of the increased risk of surgery, thyroid nodules causing compression signs and/or hyperthyroidism are concerning during pregnancy. Patients and Methods. Six patients with nontoxic cystic nodules, four with nontoxic solid nodules, and three with overt hyperthyroidism caused by toxic nodules were treated with percutaneous ethanol injection therapy (PEI). An average of 0.68 mL ethanol per 1 mL nodule volume was administered. The mean number of PEI treatments per patient was 2.9. Success was defined as the shrinkage of the nodule by more than 50% of the pretreatment volume (V0) and the normalization of TSH and FT4 levels. The average V0 was 15.3 mL. Short-term success was measured prior to labor, whereas long-term success was determined at the final follow-up (an average of 6.8 years). Results. The pressure symptoms decreased in all but one patient after PEI and did not worsen until delivery. The PEI was successful in 11 (85%) and 7 (54%) patients at short-term and long-term follow-up, respectively. Three patients underwent repeat PEI, which was successful in 2 patients. Conclusions. PEI is a safe tool and seems to have good short-term results in treating selected symptomatic pregnant patients. Long-term success may require repeat PEI.

  1. Mathematical model of diffusion-limited gas bubble dynamics in unstirred tissue with finite volume.

    Science.gov (United States)

    Srinivasan, R Srini; Gerth, Wayne A; Powell, Michael R

    2002-02-01

    Models of gas bubble dynamics for studying decompression sickness have been developed by considering the bubble to be immersed in an extravascular tissue with diffusion-limited gas exchange between the bubble and the surrounding unstirred tissue. In previous versions of this two-region model, the tissue volume must be theoretically infinite, which renders the model inapplicable to analysis of bubble growth in a finite-sized tissue. We herein present a new two-region model that is applicable to problems involving finite tissue volumes. By introducing radial deviations to gas tension in the diffusion region surrounding the bubble, the concentration gradient can be zero at a finite distance from the bubble, thus limiting the tissue volume that participates in bubble-tissue gas exchange. It is shown that these deviations account for the effects of heterogeneous perfusion on gas bubble dynamics, and are required for the tissue volume to be finite. The bubble growth results from a difference between the bubble gas pressure and an average gas tension in the surrounding diffusion region that explicitly depends on gas uptake and release by the bubble. For any given decompression, the diffusion region volume must stay above a certain minimum in order to sustain bubble growth.

  2. Experimental study on the relationship between average isotopic fractionation factor and evaporation rate

    Directory of Open Access Journals (Sweden)

    Tao WANG

    2010-12-01

    Isotopic fractionation is the foundation of tracing the water cycle using hydrogen and oxygen isotopes. Isotopic fractionation factors in evaporation from a free water body are mainly affected by temperature and relative humidity, and vary greatly with these atmospheric factors within a day. The evaporation rate properly reflects the combined effects of the atmospheric factors; therefore, a functional relationship should exist between isotopic fractionation factors and the evaporation rate. An average isotopic fractionation factor was defined to describe isotopic differences between the vapor and liquid phases in evaporation over time intervals of hours or days. The relationship between the average isotopic fractionation factor and evaporation, based on isotopic mass balance, was investigated through an evaporation pan experiment with no inflow. The experimental results showed that the isotopic composition of the residual water became more enriched with time; the average isotopic fractionation factor was affected by air temperature, relative humidity and other atmospheric factors, and had a good functional relationship with the evaporation rate. Values of the average isotopic fractionation factor can easily be calculated from the known evaporation rate, the initial volume of water in the pan, and the isotopic composition of the residual water.
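
    A minimal sketch of the mass-balance calculation described above, assuming a simple two-time-point balance with no inflow (the paper's exact formulation may differ, and the variable names are hypothetical):

```python
def average_fractionation_factor(V0, delta0, V1, delta1):
    """Average isotopic fractionation factor over one step of pan
    evaporation with no inflow, from isotopic mass balance.

    V0, V1        : water volumes before/after the step
    delta0, delta1: isotopic compositions (per mil) of the residual water
    """
    R0, R1 = 1 + delta0 / 1000.0, 1 + delta1 / 1000.0  # normalized isotope ratios
    R_liq = 0.5 * (R0 + R1)                    # mean ratio of the liquid phase
    R_evap = (V0 * R0 - V1 * R1) / (V0 - V1)   # ratio of the evaporated vapor
    return R_evap / R_liq                      # alpha_bar = R_vapor / R_liquid

# Example: 10 L evaporating to 8 L while delta rises from -8 to -5 per mil.
print(average_fractionation_factor(10.0, -8.0, 8.0, -5.0))  # ~0.986 (< 1)
```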

  3. The use of noncrystallographic symmetry averaging to solve structures from data affected by perfect hemihedral twinning

    Energy Technology Data Exchange (ETDEWEB)

    Sabin, Charles; Plevka, Pavel, E-mail: pavel.plevka@ceitec.muni.cz [Central European Institute of Technology – Masaryk University, Kamenice 653/25, 625 00 Brno (Czech Republic)

    2016-02-16

    Molecular replacement and noncrystallographic symmetry averaging were used to detwin a data set affected by perfect hemihedral twinning. The noncrystallographic symmetry averaging of the electron-density map corrected errors in the detwinning introduced by the differences between the molecular-replacement model and the crystallized structure. Hemihedral twinning is a crystal-growth anomaly in which a specimen is composed of two crystal domains that coincide with each other in three dimensions. However, the orientations of the crystal lattices in the two domains differ in a specific way. In diffraction data collected from hemihedrally twinned crystals, each observed intensity contains contributions from both of the domains. With perfect hemihedral twinning, the two domains have the same volumes and the observed intensities do not contain sufficient information to detwin the data. Here, the use of molecular replacement and of noncrystallographic symmetry (NCS) averaging to detwin a 2.1 Å resolution data set for Aichi virus 1 affected by perfect hemihedral twinning is described. The NCS averaging enabled the correction of errors in the detwinning introduced by the differences between the molecular-replacement model and the crystallized structure. The procedure permitted the structure to be determined from a molecular-replacement model that had 16% sequence identity and a 1.6 Å r.m.s.d. for C{sup α} atoms in comparison to the crystallized structure. The same approach could be used to solve other data sets affected by perfect hemihedral twinning from crystals with NCS.

  4. 40 CFR 63.7943 - How do I determine the average VOHAP concentration of my remediation material?

    Science.gov (United States)

    2010-07-01

    ... concentration of my remediation material? 63.7943 Section 63.7943 Protection of Environment ENVIRONMENTAL... Remediation Performance Tests § 63.7943 How do I determine the average VOHAP concentration of my remediation material? (a) General requirements. You must determine the average total VOHAP concentration of a...

  5. Grade-Average Method: A Statistical Approach for Estimating ...

    African Journals Online (AJOL)

    Grade-Average Method: A Statistical Approach for Estimating Missing Value for Continuous Assessment Marks. ... Journal of the Nigerian Association of Mathematical Physics. Journal Home · ABOUT ... Open Access DOWNLOAD FULL TEXT ...

  6. United States Average Annual Precipitation, 2000-2004 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer shows polygons of average annual precipitation in the contiguous United States, for the climatological period 2000-2004. Parameter-elevation...

  7. On the average sensitivity of laced Boolean functions

    CERN Document Server

    jiyou, Li

    2011-01-01

    In this paper we obtain the average sensitivity of the laced Boolean functions. This confirms a conjecture of Shparlinski. We also compute the weights of the laced Boolean functions and show that they are almost balanced.

  8. Distribution of population-averaged observables in stochastic gene expression

    Science.gov (United States)

    Bhattacharyya, Bhaswati; Kalay, Ziya

    2014-01-01

    Observation of phenotypic diversity in a population of genetically identical cells is often linked to the stochastic nature of chemical reactions involved in gene regulatory networks. We investigate the distribution of population-averaged gene expression levels as a function of population, or sample, size for several stochastic gene expression models to find out to what extent population-averaged quantities reflect the underlying mechanism of gene expression. We consider three basic gene regulation networks corresponding to transcription with and without gene state switching and translation. Using analytical expressions for the probability generating function of observables and large deviation theory, we calculate the distribution and first two moments of the population-averaged mRNA and protein levels as a function of model parameters, population size, and number of measurements contained in a data set. We validate our results using stochastic simulations and also report exact results on the asymptotic properties of population averages, which show qualitative differences among the models.

  9. on the performance of Autoregressive Moving Average Polynomial ...

    African Journals Online (AJOL)

    Timothy Ademakinwa

    Moving Average Polynomial Distributed Lag (ARMAPDL) model. The parameters of these models were estimated using least squares and Newton Raphson iterative methods. ..... Global Journal of Mathematics and Statistics. Vol. 1. No.

  10. Medicare Part B Drug Average Sales Pricing Files

    Data.gov (United States)

    U.S. Department of Health & Human Services — Manufacturer reporting of Average Sales Price (ASP) data - A manufacturers ASP must be calculated by the manufacturer every calendar quarter and submitted to CMS...

  11. The Partial Averaging of Fuzzy Differential Inclusions on Finite Interval

    Directory of Open Access Journals (Sweden)

    Andrej V. Plotnikov

    2014-01-01

    The possibility of applying the partial averaging method on a finite interval to differential inclusions with a fuzzy right-hand side containing a small parameter is substantiated.

  12. United States Average Annual Precipitation, 2005-2009 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer shows polygons of average annual precipitation in the contiguous United States, for the climatological period 2005-2009. Parameter-elevation...

  13. Average monthly and annual climate maps for Bolivia

    KAUST Repository

    Vicente-Serrano, Sergio M.

    2015-02-24

    This study presents monthly and annual climate maps for relevant hydroclimatic variables in Bolivia. We used the most complete network of precipitation and temperature stations available in Bolivia, which passed a careful quality control and temporal homogenization procedure. Monthly average maps at the spatial resolution of 1 km were modeled by means of a regression-based approach using topographic and geographic variables as predictors. The monthly average maximum and minimum temperatures, precipitation and potential exoatmospheric solar radiation under clear sky conditions are used to estimate the monthly average atmospheric evaporative demand by means of the Hargreaves model. Finally, the average water balance is estimated on a monthly and annual scale for each 1 km cell by means of the difference between precipitation and atmospheric evaporative demand. The digital layers used to create the maps are available in the digital repository of the Spanish National Research Council.
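
    The Hargreaves step can be sketched directly from its standard form, PET = 0.0023 · Ra · (Tmean + 17.8) · sqrt(Tmax − Tmin), and the water balance is then P − PET. The fragment below is an illustrative implementation, not the authors' code, and assumes Ra is already expressed as an equivalent water depth in mm/day.

```python
import numpy as np

def hargreaves_pet(t_max, t_min, ra):
    """Hargreaves atmospheric evaporative demand (mm/day):
    PET = 0.0023 * Ra * (Tmean + 17.8) * sqrt(Tmax - Tmin),
    with temperatures in deg C and Ra as equivalent evaporation (mm/day)."""
    t_mean = 0.5 * (t_max + t_min)
    return 0.0023 * ra * (t_mean + 17.8) * np.sqrt(t_max - t_min)

def monthly_water_balance(precip_mm, t_max, t_min, ra, days=30):
    """Climatic water balance for one grid cell and month: P - PET."""
    return precip_mm - days * hargreaves_pet(t_max, t_min, ra)
```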

  14. United States Average Annual Precipitation, 1961-1990 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer shows polygons of average annual precipitation in the contiguous United States, for the climatological period 1961-1990. Parameter-elevation...

  15. The average-shadowing property and topological ergodicity for flows

    Energy Technology Data Exchange (ETDEWEB)

    Gu Rongbao [School of Finance, Nanjing University of Finance and Economics, Nanjing 210046 (China)]. E-mail: rbgu@njue.edu.cn; Guo Wenjing [School of Finance, Nanjing University of Finance and Economics, Nanjing 210046 (China)

    2005-07-01

    In this paper, the transitive property for a flow without sensitive dependence on initial conditions is studied and it is shown that a Lyapunov stable flow with the average-shadowing property on a compact metric space is topologically ergodic.

  16. Time averaging, ageing and delay analysis of financial time series

    Science.gov (United States)

    Cherstvy, Andrey G.; Vinod, Deepak; Aghion, Erez; Chechkin, Aleksei V.; Metzler, Ralf

    2017-06-01

    We introduce three strategies for the analysis of financial time series based on time averaged observables. These comprise the time averaged mean squared displacement (MSD) as well as the ageing and delay time methods for varying fractions of the financial time series. We explore these concepts via statistical analysis of historic time series for several Dow Jones Industrial indices for the period from the 1960s to 2015. Remarkably, we discover a simple universal law for the delay time averaged MSD. The observed features of the financial time series dynamics agree well with our analytical results for the time averaged measurables for geometric Brownian motion, underlying the famed Black-Scholes-Merton model. The concepts we promote here are shown to be useful for financial data analysis and enable one to unveil new universal features of stock market dynamics.
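
    The first of the three strategies, the time averaged MSD of a single series, has a compact definition: for lag L, average the squared increments (x(t+L) − x(t))² over all start times t. A minimal sketch (illustrative only; the ageing and delay variants are not shown):

```python
import numpy as np

def time_averaged_msd(x, lags):
    """Time averaged MSD of one trajectory x(t):
    TAMSD(L) = mean over t of (x(t+L) - x(t))**2."""
    x = np.asarray(x, dtype=float)
    return np.array([np.mean((x[L:] - x[:-L]) ** 2) for L in lags])

# Example on geometric Brownian motion (the log-price is Brownian motion
# with drift), the process underlying the Black-Scholes-Merton model.
rng = np.random.default_rng(0)
log_price = np.cumsum(0.0005 + 0.01 * rng.standard_normal(4096))
tamsd = time_averaged_msd(log_price, lags=[1, 2, 4, 8, 16, 32, 64])
```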

  17. Ensemble vs. time averages in financial time series analysis

    Science.gov (United States)

    Seemann, Lars; Hua, Jia-Chen; McCauley, Joseph L.; Gunaratne, Gemunu H.

    2012-12-01

    Empirical analysis of financial time series suggests that the underlying stochastic dynamics are not only non-stationary, but also exhibit non-stationary increments. However, financial time series are commonly analyzed using the sliding interval technique that assumes stationary increments. We propose an alternative approach that is based on an ensemble over trading days. To determine the effects of time averaging techniques on analysis outcomes, we create an intraday activity model that exhibits periodic variable diffusion dynamics and we assess the model data using both ensemble and time averaging techniques. We find that ensemble averaging techniques detect the underlying dynamics correctly, whereas sliding intervals approaches fail. As many traded assets exhibit characteristic intraday volatility patterns, our work implies that ensemble averages approaches will yield new insight into the study of financial markets’ dynamics.

  18. On the average exponent of elliptic curves modulo $p$

    CERN Document Server

    Freiberg, Tristan

    2012-01-01

    Given an elliptic curve $E$ defined over $\mathbb{Q}$ and a prime $p$ of good reduction, let $\tilde{E}(\mathbb{F}_p)$ denote the group of $\mathbb{F}_p$-points of the reduction of $E$ modulo $p$, and let $e_p$ denote the exponent of said group. Assuming a certain form of the Generalized Riemann Hypothesis (GRH), we study the average of $e_p$ as $p \le X$ ranges over primes of good reduction, and find that the average exponent essentially equals $p \cdot c_{E}$, where the constant $c_{E} > 0$ depends on $E$. For $E$ without complex multiplication (CM), $c_{E}$ can be written as a rational number (depending on $E$) times a universal constant. Without assuming GRH, we can determine the average exponent when $E$ has CM, as well as give an upper bound on the average in the non-CM case.

  19. United States Average Annual Precipitation, 1995-1999 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer shows polygons of average annual precipitation in the contiguous United States, for the climatological period 1995-1999. Parameter-elevation...

  20. United States Average Annual Precipitation, 1990-1994 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer shows polygons of average annual precipitation in the contiguous United States, for the climatological period 1990-1994. Parameter-elevation...

  1. United States Average Annual Precipitation, 1990-2009 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer shows polygons of average annual precipitation in the contiguous United States, for the climatological period 1990-2009. Parameter-elevation...

  2. Does subduction zone magmatism produce average continental crust

    Science.gov (United States)

    Ellam, R. M.; Hawkesworth, C. J.

    1988-01-01

    The question of whether present day subduction zone magmatism produces material of average continental crust composition, which perhaps most would agree is andesitic, is addressed. It was argued that modern andesitic to dacitic rocks in Andean-type settings are produced by plagioclase fractionation of mantle derived basalts, leaving a complementary residue with low Rb/Sr and a positive Eu anomaly. This residue must be removed, for example by delamination, if the average crust produced in these settings is andesitic. The author argued against this, pointing out the absence of evidence for such a signature in the mantle. Either the average crust is not andesitic, a conclusion the author was not entirely comfortable with, or other crust forming processes must be sought. One possibility is that during the Archean, direct slab melting of basaltic or eclogitic oceanic crust produced felsic melts, which together with about 65 percent mafic material, yielded an average crust of andesitic composition.

  3. Historical Data for Average Processing Time Until Hearing Held

    Data.gov (United States)

    Social Security Administration — This dataset provides historical data for average wait time (in days) from the hearing request date until a hearing was held. This dataset includes data from fiscal...

  4. The effect of sensor sheltering and averaging techniques on wind measurements at the Shuttle Landing Facility

    Science.gov (United States)

    Merceret, Francis J.

    1995-01-01

    This document presents results of a field study of the effect of sheltering of wind sensors by nearby foliage on the validity of wind measurements at the Space Shuttle Landing Facility (SLF). Standard measurements are made at one second intervals from 30-feet (9.1-m) towers located 500 feet (152 m) from the SLF centerline. The centerline winds are not exactly the same as those measured by the towers. A companion study, Merceret (1995), quantifies the differences as a function of statistics of the observed winds and distance between the measurements and points of interest. This work examines the effect of nearby foliage on the accuracy of the measurements made by any one sensor, and the effects of averaging on interpretation of the measurements. The field program used logarithmically spaced portable wind towers to measure wind speed and direction over a range of conditions as a function of distance from the obstructing foliage. Appropriate statistics were computed. The results suggest that accurate measurements require foliage be cut back to OFCM standards. Analysis of averaging techniques showed that there is no significant difference between vector and scalar averages. Longer averaging periods reduce measurement error but do not otherwise change the measurement in reasonably steady flow regimes. In rapidly changing conditions, shorter averaging periods may be required to capture trends.
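
    The vector/scalar comparison mentioned above can be made concrete: a scalar average takes the mean of the speeds (with a circular mean for direction), while a vector average resolves each observation into components first. The sketch below is illustrative and assumes one common meteorological sign convention; conventions vary.

```python
import numpy as np

def scalar_and_vector_average(speed, direction_deg):
    """Scalar vs. vector averages of wind observations.

    Directions are taken as the bearing the wind blows toward, in degrees.
    """
    theta = np.deg2rad(np.asarray(direction_deg, dtype=float))
    speed = np.asarray(speed, dtype=float)
    u, v = speed * np.sin(theta), speed * np.cos(theta)   # east/north components
    vector_speed = np.hypot(u.mean(), v.mean())
    vector_dir = np.rad2deg(np.arctan2(u.mean(), v.mean())) % 360.0
    scalar_speed = speed.mean()                           # mean of speeds
    scalar_dir = np.rad2deg(np.arctan2(np.sin(theta).mean(),
                                       np.cos(theta).mean())) % 360.0
    return (scalar_speed, scalar_dir), (vector_speed, vector_dir)
```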

  5. Bivariate copulas on the exponentially weighted moving average control chart

    Directory of Open Access Journals (Sweden)

    Sasigarn Kuvattana

    2016-10-01

    This paper proposes four types of copulas on the Exponentially Weighted Moving Average (EWMA) control chart when observations are from an exponential distribution, using a Monte Carlo simulation approach. The performance of the control chart is based on the Average Run Length (ARL), which is compared for each copula. Copula functions are used to specify the dependence between random variables, measured by Kendall’s tau. The results show that the Normal copula can be used for almost all shifts.
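
    The EWMA statistic and its run length, from which the ARL is estimated by Monte Carlo, take a standard form. The sketch below shows that standard chart for a mean shift; it does not reproduce the paper's copula-based dependence structure, and the in-control parameters and shift are hypothetical.

```python
import numpy as np

def ewma_run_length(x, mu0, sigma, lam=0.1, L=3.0):
    """EWMA control chart: z_i = lam*x_i + (1-lam)*z_{i-1}, signalling when
    |z_i - mu0| exceeds L*sigma*sqrt(lam/(2-lam)*(1-(1-lam)**(2i))).
    Returns the run length (index of the first signal) or None."""
    z = mu0
    for i, xi in enumerate(x, start=1):
        z = lam * xi + (1 - lam) * z
        width = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * i)))
        if abs(z - mu0) > width:
            return i
    return None

# Estimate the ARL by Monte Carlo for exponential observations
# (in-control mean 1, standard deviation 1) with a +0.5 mean shift.
rng = np.random.default_rng(0)
runs = [ewma_run_length(rng.exponential(1.0, 5000) + 0.5, mu0=1.0, sigma=1.0)
        for _ in range(500)]
arl = np.mean([r for r in runs if r is not None])
```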

  6. Time averages, recurrence and transience in the stochastic replicator dynamics

    CERN Document Server

    Hofbauer, Josef; 10.1214/08-AAP577

    2009-01-01

    We investigate the long-run behavior of a stochastic replicator process, which describes game dynamics for a symmetric two-player game under aggregate shocks. We establish an averaging principle that relates time averages of the process and Nash equilibria of a suitably modified game. Furthermore, a sufficient condition for transience is given in terms of mixed equilibria and definiteness of the payoff matrix. We also present necessary and sufficient conditions for stochastic stability of pure equilibria.

  7. On the relativistic mass function and averaging in cosmology

    CERN Document Server

    Ostrowski, Jan J; Roukema, Boudewijn F

    2016-01-01

    The general relativistic description of cosmological structure formation is an important challenge from both the theoretical and the numerical points of view. In this paper we present a brief prescription for a general relativistic treatment of structure formation and a resulting mass function on galaxy cluster scales in a highly generic scenario. To obtain this we use an exact scalar averaging scheme together with the relativistic generalization of Zel'dovich's approximation (RZA), which serves as a closure condition for the averaged equations.

  8. Use of a Correlation Coefficient for Conditional Averaging.

    Science.gov (United States)

    1997-04-01

    A method of collecting ensembles for conditional averaging is presented that uses data collected from a plane mixing layer. The correlation ... data. Selection of the sine function period and a correlation coefficient threshold are discussed. Also examined are the effects of the period and ... threshold level on the number of ensembles captured for inclusion for conditional averaging. Both the selection of threshold correlation coefficient and the ...

  9. Estimation of annual average daily traffic with optimal adjustment factors

    OpenAIRE

    Alonso Oreña, Borja; Moura Berodia, José Luis; Ibeas Portilla, Ángel; Romero Junquera, Juan Pablo

    2014-01-01

    This study aimed to estimate the annual average daily traffic in inter-urban networks by determining the best correlation (affinity) between the short period traffic counts and permanent traffic counters. A bi-level optimisation problem is proposed in which, at the upper level, an agent prefixes the affinities between short period traffic count stations and permanent traffic counter stations and seeks to minimise the annual average daily traffic calculation error while, at the lower level, an origin–destina...

  10. The averaging of nonlocal Hamiltonian structures in Whitham's method

    Directory of Open Access Journals (Sweden)

    Andrei Ya. Maltsev

    2002-01-01

    We consider the m-phase Whitham averaging method and propose a procedure for “averaging” nonlocal Hamiltonian structures. The procedure is based on the existence of a sufficient number of local commuting integrals of the system and gives a Poisson bracket of Ferapontov type for Whitham's system. The method can be considered as a generalization of the Dubrovin-Novikov procedure for the local field-theoretical brackets.

  11. Separability criteria with angular and Hilbert space averages

    Science.gov (United States)

    Fujikawa, Kazuo; Oh, C. H.; Umetsu, Koichiro; Yu, Sixia

    2016-05-01

    The practically useful criteria of separable states ρ = ∑k wk ρk in d = 2 × 2 are discussed. The equality G(a, b) = 4[⟨ψ|P(a)P(b)|ψ⟩ − ⟨ψ|P(a)|ψ⟩⟨ψ|P(b)|ψ⟩] = 0 for any two projection operators P(a) and P(b) provides a necessary and sufficient separability criterion in the case of a separable pure state ρ = |ψ⟩⟨ψ|. When the criterion is applied to the Werner state in two photon systems, it is shown that the Hilbert space average can judge its inseparability but the geometrical angular average cannot.

  12. A precise measurement of the average b hadron lifetime

    CERN Document Server

    Buskulic, Damir; De Bonis, I; Décamp, D; Ghez, P; Goy, C; Lees, J P; Lucotte, A; Minard, M N; Odier, P; Pietrzyk, B; Ariztizabal, F; Chmeissani, M; Crespo, J M; Efthymiopoulos, I; Fernández, E; Fernández-Bosman, M; Gaitan, V; Garrido, L; Martínez, M; Orteu, S; Pacheco, A; Padilla, C; Palla, Fabrizio; Pascual, A; Perlas, J A; Sánchez, F; Teubert, F; Colaleo, A; Creanza, D; De Palma, M; Farilla, A; Gelao, G; Girone, M; Iaselli, Giuseppe; Maggi, G; Maggi, M; Marinelli, N; Natali, S; Nuzzo, S; Ranieri, A; Raso, G; Romano, F; Ruggieri, F; Selvaggi, G; Silvestris, L; Tempesta, P; Zito, G; Huang, X; Lin, J; Ouyang, Q; Wang, T; Xie, Y; Xu, R; Xue, S; Zhang, J; Zhang, L; Zhao, W; Bonvicini, G; Cattaneo, M; Comas, P; Coyle, P; Drevermann, H; Engelhardt, A; Forty, Roger W; Frank, M; Hagelberg, R; Harvey, J; Jacobsen, R; Janot, P; Jost, B; Knobloch, J; Lehraus, Ivan; Markou, C; Martin, E B; Mato, P; Meinhard, H; Minten, Adolf G; Miquel, R; Oest, T; Palazzi, P; Pater, J R; Pusztaszeri, J F; Ranjard, F; Rensing, P E; Rolandi, Luigi; Schlatter, W D; Schmelling, M; Schneider, O; Tejessy, W; Tomalin, I R; Venturi, A; Wachsmuth, H W; Wiedenmann, W; Wildish, T; Witzeling, W; Wotschack, J; Ajaltouni, Ziad J; Bardadin-Otwinowska, Maria; Barrès, A; Boyer, C; Falvard, A; Gay, P; Guicheney, C; Henrard, P; Jousset, J; Michel, B; Monteil, S; Montret, J C; Pallin, D; Perret, P; Podlyski, F; Proriol, J; Rossignol, J M; Saadi, F; Fearnley, Tom; Hansen, J B; Hansen, J D; Hansen, J R; Hansen, P H; Nilsson, B S; Kyriakis, A; Simopoulou, Errietta; Siotis, I; Vayaki, Anna; Zachariadou, K; Blondel, A; Bonneaud, G R; Brient, J C; Bourdon, P; Passalacqua, L; Rougé, A; Rumpf, M; Tanaka, R; Valassi, Andrea; Verderi, M; Videau, H L; Candlin, D J; Parsons, M I; Focardi, E; Parrini, G; Corden, M; Delfino, M C; Georgiopoulos, C H; Jaffe, D E; Antonelli, A; Bencivenni, G; Bologna, G; Bossi, F; Campana, P; Capon, G; Chiarella, V; Felici, G; Laurelli, P; Mannocchi, G; Murtas, F; Murtas, G P; Pepé-Altarelli, M; Dorris, S J; Halley, A W; ten Have, I; Knowles, I G; Lynch, J G; Morton, W T; O'Shea, V; Raine, C; Reeves, P; Scarr, J M; Smith, K; Smith, M G; Thompson, A S; Thomson, F; Thorn, S; Turnbull, R M; Becker, U; Braun, O; Geweniger, C; Graefe, G; Hanke, P; Hepp, V; Kluge, E E; Putzer, A; Rensch, B; Schmidt, M; Sommer, J; Stenzel, H; Tittel, K; Werner, S; Wunsch, M; Beuselinck, R; Binnie, David M; Cameron, W; Colling, D J; Dornan, Peter J; Konstantinidis, N P; Moneta, L; Moutoussi, A; Nash, J; San Martin, G; Sedgbeer, J K; Stacey, A M; Dissertori, G; Girtler, P; Kneringer, E; Kuhn, D; Rudolph, G; Bowdery, C K; Brodbeck, T J; Colrain, P; Crawford, G; Finch, A J; Foster, F; Hughes, G; Sloan, Terence; Whelan, E P; Williams, M I; Galla, A; Greene, A M; Kleinknecht, K; Quast, G; Raab, J; Renk, B; Sander, H G; Wanke, R; Van Gemmeren, P; Zeitnitz, C; Aubert, Jean-Jacques; Bencheikh, A M; Benchouk, C; Bonissent, A; Bujosa, G; Calvet, D; Carr, J; Diaconu, C A; Etienne, F; Thulasidas, M; Nicod, D; Payre, P; Rousseau, D; Talby, M; Abt, I; Assmann, R W; Bauer, C; Blum, Walter; Brown, D; Dietl, H; Dydak, Friedrich; Ganis, G; Gotzhein, C; Jakobs, K; Kroha, H; Lütjens, G; Lutz, Gerhard; Männer, W; Moser, H G; Richter, R H; Rosado-Schlosser, A; Schael, S; Settles, Ronald; Seywerd, H C J; Stierlin, U; Saint-Denis, R; Wolf, G; Alemany, R; Boucrot, J; Callot, O; Cordier, A; Courault, F; Davier, M; Duflot, L; Grivaz, J F; Heusse, P; Jacquet, M; Kim, D W; Le Diberder, F R; Lefrançois, J; Lutz, A M; Musolino, G; Nikolic, I A; Park, H J; Park, I C; 
Schune, M H; Simion, S; Veillet, J J; Videau, I; Abbaneo, D; Azzurri, P; Bagliesi, G; Batignani, G; Bettarini, S; Bozzi, C; Calderini, G; Carpinelli, M; Ciocci, M A; Ciulli, V; Dell'Orso, R; Fantechi, R; Ferrante, I; Foà, L; Forti, F; Giassi, A; Giorgi, M A; Gregorio, A; Ligabue, F; Lusiani, A; Marrocchesi, P S; Messineo, A; Rizzo, G; Sanguinetti, G; Sciabà, A; Spagnolo, P; Steinberger, Jack; Tenchini, Roberto; Tonelli, G; Triggiani, G; Vannini, C; Verdini, P G; Walsh, J; Betteridge, A P; Blair, G A; Bryant, L M; Cerutti, F; Gao, Y; Green, M G; Johnson, D L; Medcalf, T; Mir, L M; Perrodo, P; Strong, J A; Bertin, V; Botterill, David R; Clifft, R W; Edgecock, T R; Haywood, S; Edwards, M; Maley, P; Norton, P R; Thompson, J C; Bloch-Devaux, B; Colas, P; Duarte, H; Emery, S; Kozanecki, Witold; Lançon, E; Lemaire, M C; Locci, E; Marx, B; Pérez, P; Rander, J; Renardy, J F; Rosowsky, A; Roussarie, A; Schuller, J P; Schwindling, J; Si Mohand, D; Trabelsi, A; Vallage, B; Johnson, R P; Kim, H Y; Litke, A M; McNeil, M A; Taylor, G; Beddall, A; Booth, C N; Boswell, R; Cartwright, S L; Combley, F; Dawson, I; Köksal, A; Letho, M; Newton, W M; Rankin, C; Thompson, L F; Böhrer, A; Brandt, S; Cowan, G D; Feigl, E; Grupen, Claus; Lutters, G; Minguet-Rodríguez, J A; Rivera, F; Saraiva, P; Smolik, L; Stephan, F; Apollonio, M; Bosisio, L; Della Marina, R; Giannini, G; Gobbo, B; Ragusa, F; Rothberg, J E; Wasserbaech, S R; Armstrong, S R; Bellantoni, L; Elmer, P; Feng, P; Ferguson, D P S; Gao, Y S; González, S; Grahl, J; Harton, J L; Hayes, O J; Hu, H; McNamara, P A; Nachtman, J M; Orejudos, W; Pan, Y B; Saadi, Y; Schmitt, M; Scott, I J; Sharma, V; Turk, J; Walsh, A M; Wu Sau Lan; Wu, X; Yamartino, J M; Zheng, M; Zobernig, G

    1996-01-01

    An improved measurement of the average b hadron lifetime is performed using a sample of 1.5 million hadronic Z decays, collected during the 1991-1993 runs of ALEPH, with the silicon vertex detector fully operational. This uses the three-dimensional impact parameter distribution of lepton tracks coming from semileptonic b decays and yields an average b hadron lifetime of 1.533 ± 0.013 ± 0.022 ps.

  13. Average life of oxygen vacancies of quartz in sediments

    Institute of Scientific and Technical Information of China (English)

    DIAO; Shaobo(刁少波); YE; Yuguang(业渝光)

    2002-01-01

    Average life of oxygen vacancies of quartz in sediments is estimated by using the ESR (electron spin resonance) signals of E′ centers from the thermal activation technique. The experimental results show that the second-order kinetics equation is more applicable to the life estimation than the first-order equation. The average life of oxygen vacancies of quartz from 4895 to 4908 m deep sediments in the Tarim Basin is about 10{sup 18} a at 27℃.

  14. Software requirements

    CERN Document Server

    Wiegers, Karl E

    2003-01-01

    Without formal, verifiable software requirements (and an effective system for managing them), the programs that developers think they've agreed to build often will not be the same products their customers are expecting. In SOFTWARE REQUIREMENTS, Second Edition, requirements engineering authority Karl Wiegers amplifies the best practices presented in his original award-winning text, now a mainstay for anyone participating in the software development process. In this book, you'll discover effective techniques for managing the requirements engineering process all the way through the development cycle.

  15. Side chain conformational averaging in human dihydrofolate reductase.

    Science.gov (United States)

    Tuttle, Lisa M; Dyson, H Jane; Wright, Peter E

    2014-02-25

    The three-dimensional structures of the dihydrofolate reductase enzymes from Escherichia coli (ecDHFR or ecE) and Homo sapiens (hDHFR or hE) are very similar, despite a rather low level of sequence identity. Whereas the active site loops of ecDHFR undergo major conformational rearrangements during progression through the reaction cycle, hDHFR remains fixed in a closed loop conformation in all of its catalytic intermediates. To elucidate the structural and dynamic differences between the human and E. coli enzymes, we conducted a comprehensive analysis of side chain flexibility and dynamics in complexes of hDHFR that represent intermediates in the major catalytic cycle. Nuclear magnetic resonance relaxation dispersion experiments show that, in marked contrast to the functionally important motions that feature prominently in the catalytic intermediates of ecDHFR, millisecond time scale fluctuations cannot be detected for hDHFR side chains. Ligand flux in hDHFR is thought to be mediated by conformational changes between a hinge-open state when the substrate/product-binding pocket is vacant and a hinge-closed state when this pocket is occupied. Comparison of X-ray structures of hinge-open and hinge-closed states shows that helix αF changes position by sliding between the two states. Analysis of χ1 rotamer populations derived from measurements of (3)JCγCO and (3)JCγN couplings indicates that many of the side chains that contact helix αF exhibit rotamer averaging that may facilitate the conformational change. The χ1 rotamer adopted by the Phe31 side chain depends upon whether the active site contains the substrate or product. In the holoenzyme (the binary complex of hDHFR with reduced nicotinamide adenine dinucleotide phosphate), a combination of hinge opening and a change in the Phe31 χ1 rotamer opens the active site to facilitate entry of the substrate. Overall, the data suggest that, unlike ecDHFR, hDHFR requires minimal backbone conformational rearrangement as

  16. Cycle Average Peak Fuel Temperature Prediction Using CAPP/GAMMA+

    Energy Technology Data Exchange (ETDEWEB)

    Tak, Nam-il; Lee, Hyun Chul; Lim, Hong Sik [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-10-15

    In order to obtain a cycle average maximum fuel temperature without rigorous effort, a neutronics/thermo-fluid coupled calculation with depletion capability is needed. Recently, a CAPP/GAMMA+ coupled code system has been developed and the initial core of PMR200 was analyzed using it. The GAMMA+ code is a system thermo-fluid analysis code and the CAPP code is a neutronics code. General Atomics proposed that the design limit of the fuel temperature under normal operating conditions should be a cycle-averaged maximum value. Nonetheless, the existing works of the Korea Atomic Energy Research Institute (KAERI) only calculated the maximum fuel temperature at a fixed time point, e.g., the beginning of cycle (BOC), because the capability to calculate a cycle average value was not available. In this work, a cycle average maximum fuel temperature has been calculated using the CAPP/GAMMA+ code system for the equilibrium core of PMR200, with the coupled calculation carried out from BOC to the end of cycle (EOC). The peak fuel temperature was predicted to be 1372 °C near the middle of cycle (MOC). However, the cycle average peak fuel temperature was calculated as 1181 °C, which is below the design target of 1250 °C.

  17. Cesium telluride cathodes for the next generation of high-average current high-brightness photoinjectors

    Energy Technology Data Exchange (ETDEWEB)

    Filippetto, D., E-mail: dfilippetto@lbl.gov; Qian, H.; Sannibale, F. [Lawrence Berkeley National Laboratory, One Cyclotron Road, Berkeley, California 94720 (United States)

    2015-07-27

    We report on the performances of a Cs{sub 2}Te photocathode under extreme conditions of high peak time-dependent accelerating fields, continuous wave operations, and MHz pulse extraction with up to 0.3 mA average current. The measurements, performed in a normal conducting cavity, show extended lifetime and robustness, elucidate the main mechanisms for cathode degradation, and set the required system vacuum performance for compatibility with the operations of a high average power X-ray free electron laser user facility, opening the doors to the next generation of MHz-scale ultrafast scientific instruments.

  18. Energy requirements

    NARCIS (Netherlands)

    Hulzebos, Christian V.; Sauer, Pieter J. J.

    2007-01-01

    The determination of the appropriate energy and nutritional requirements of a newborn infant requires a clear goal of the energy and other compounds to be administered, valid methods to measure energy balance and body composition, and knowledge of the neonatal metabolic capacities. Providing an appr

  20. Profile of Mean Platelet Volume in Type 2 Diabetes Mellitus Inpatients at RSUP. Haji Adam Malik Medan in 2014

    OpenAIRE

    Simanjuntak, Fitri

    2016-01-01

    Introduction: Epidemiological studies in various parts of the world have shown a trend of increasing incidence and prevalence of Type 2 Diabetes Mellitus. Type 2 Diabetes Mellitus is a degenerative disease that requires proper and serious handling because it can cause complications, one of which is platelet disorder. Mean Platelet Volume (MPV) reflects the average size of platelets in circulation and can be used to assess platelet activity. Obj...

  1. Exact Averaging of Stochastic Equations for Flow in Porous Media

    Energy Technology Data Exchange (ETDEWEB)

    Karasaki, Kenzi; Shvidler, Mark; Karasaki, Kenzi

    2008-03-15

    It is well known that exact averaging of the equations for flow and transport in random porous media has so far been achieved only for a limited set of special fields. Moreover, approximate averaging methods (for example, the convergence behavior and the accuracy of truncated perturbation series) are not well studied, and in addition, calculation of high-order perturbations is very complicated. These problems have long stimulated attempts to answer the question: do there exist exact and sufficiently general forms of averaged equations? Here, we present an approach for finding the general exactly averaged system of basic equations for steady flow with sources in unbounded stochastically homogeneous fields. We do this by using (1) the existence and some general properties of Green's functions for the appropriate stochastic problem, and (2) some information about the random field of conductivity. This approach enables us to find the form of the averaged equations without directly solving the stochastic equations or invoking the usual assumptions about small parameters. For the common case of a stochastically homogeneous conductivity field we present the exactly averaged new basic nonlocal equation with a unique kernel-vector. We show that in the case of some type of global symmetry (isotropy, transversal isotropy, or orthotropy), we can in the same way derive the exact averaged nonlocal equations with a unique kernel-tensor for three-dimensional and two-dimensional flow. When global symmetry does not exist, the nonlocal equation with a kernel-tensor involves complications and leads to an ill-posed problem.

  2. Simple Moving Average: A Method of Reporting Evolving Complication Rates.

    Science.gov (United States)

    Harmsen, Samuel M; Chang, Yu-Hui H; Hattrup, Steven J

    2016-09-01

    Surgeons often cite published complication rates when discussing surgery with patients. However, these rates may not truly represent current results or an individual surgeon's experience with a given procedure. This study proposes a novel method to more accurately report current complication trends that may better represent the patient's potential experience: simple moving average. Reverse shoulder arthroplasty (RSA) is an increasingly popular and rapidly evolving procedure with highly variable reported complication rates. The authors used an RSA model to test and evaluate the usefulness of simple moving average. This study reviewed 297 consecutive RSA procedures performed by a single surgeon and noted complications in 50 patients (16.8%). Simple moving average for total complications as well as minor, major, acute, and chronic complications was then calculated using various lag intervals. These findings showed trends toward fewer total, major, and chronic complications over time, and these trends were represented best with a lag of 75 patients. Average follow-up within this lag was 26.2 months. Rates for total complications decreased from 17.3% to 8% at the most recent simple moving average. The authors' traditional complication rate with RSA (16.8%) is consistent with reported rates. However, the use of simple moving average shows that this complication rate decreased over time, with current trends (8%) markedly lower, giving the senior author a more accurate picture of his evolving complication trends with RSA. Compared with traditional methods, simple moving average can be used to better reflect current trends in complication rates associated with a surgical procedure and may better represent the patient's potential experience. [Orthopedics. 2016; 39(5):e869-e876.].
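
    The statistic itself is a windowed mean of a binary complication indicator taken in chronological order. A minimal sketch with synthetic data (the declining risk profile below is invented for illustration, not the study's data):

```python
import numpy as np

def moving_complication_rate(complication, lag=75):
    """Simple moving average of a binary complication indicator over the
    last `lag` consecutive procedures, in chronological order."""
    c = np.asarray(complication, dtype=float)
    kernel = np.ones(lag) / lag
    return np.convolve(c, kernel, mode="valid")   # one rate per full window

# Example: 297 procedures whose underlying risk declines over time.
rng = np.random.default_rng(1)
p = np.linspace(0.25, 0.08, 297)
rates = moving_complication_rate(rng.random(297) < p, lag=75)
```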

  3. Site Environmental Report for 1999 - Volume 1

    Energy Technology Data Exchange (ETDEWEB)

    Ruggieri, M

    2000-08-12

    Each year, Ernest Orlando Lawrence Berkeley National Laboratory prepares an integrated report on its environmental programs to satisfy the requirements of United States Department of Energy Order 231.1. The Site Environmental Report for 1999 is intended to summarize Berkeley Lab's compliance with environmental standards and requirements, characterize environmental management efforts through surveillance and monitoring activities, and highlight significant programs and efforts for calendar year 1999. The report is separated into two volumes. Volume I contains a general overview of the Laboratory, the status of environmental programs, and summary results from surveillance and monitoring activities. Each chapter in Volume I begins with an outline of the sections that follow, including any tables or figures found in the chapter. Readers should use section numbers (e.g., §1.5) as navigational tools to find topics of interest in either the printed or the electronic version of the report. Volume II contains the individual data results from monitoring programs.

  4. Identification of Large-Scale Structure Fluctuations in IC Engines using POD-Based Conditional Averaging

    Directory of Open Access Journals (Sweden)

    Buhl Stefan

    2016-01-01

    Cycle-to-cycle variations (CCV) in IC engines are a well-known phenomenon, and their definition and quantification are well-established for global quantities such as the mean pressure. On the other hand, the definition of CCV for local quantities, e.g. the velocity or the mixture distribution, is less straightforward. This paper proposes a new method to identify and calculate cyclic variations of the flow field in IC engines, emphasizing the different contributions from large-scale energetic (coherent) structures, identified by a combination of Proper Orthogonal Decomposition (POD) and conditional averaging, and from small-scale fluctuations. Suitable subsets required for the conditional averaging are derived from combinations of the POD coefficients of the second and third modes. Within each subset, the velocity is averaged, and these averages are compared to the ensemble-averaged velocity field, which is based on all cycles. The resulting difference between the subset average and the global average is identified as a cyclic fluctuation of the coherent structures. Then, within each subset, the remaining fluctuations are obtained from the difference between the instantaneous fields and the corresponding subset average. The proposed methodology is tested on two data sets obtained from scale-resolving engine simulations. For the first test case, the numerical database consists of 208 independent samples of a simplified engine geometry. For the second case, 120 cycles of the well-established Transparent Combustion Chamber (TCC) benchmark engine are considered. For both applications, the suitability of the method to identify the two contributions to CCV is discussed and the results are directly linked to the observed flow field structures.
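
    The decomposition-plus-conditional-averaging pipeline can be sketched in a few lines of NumPy. The subset rule below (quadrants of the second/third POD-coefficient plane) is an assumed, simplified stand-in for the paper's subset definitions.

        import numpy as np

        def pod_conditional_average(snapshots):
            """snapshots: (n_cycles, n_points), one flattened velocity
            field per engine cycle."""
            mean_field = snapshots.mean(axis=0)
            # Snapshot POD via SVD; coeffs[i, k] is the coefficient of
            # spatial mode k in cycle i.
            u, s, vt = np.linalg.svd(snapshots - mean_field, full_matrices=False)
            coeffs = u * s
            # Assumed subset rule: quadrant of the (mode 2, mode 3)
            # coefficient plane (zero-based indices 1 and 2).
            labels = 2 * (coeffs[:, 1] > 0) + (coeffs[:, 2] > 0)
            subsets = {}
            for lab in np.unique(labels):
                members = snapshots[labels == lab]
                subset_avg = members.mean(axis=0)
                subsets[lab] = {
                    "coherent_fluctuation": subset_avg - mean_field,  # large-scale CCV
                    "small_scale": members - subset_avg,              # remaining fluctuations
                }
            return mean_field, subsets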

  5. Averaged universe confronted to cosmological observations: a fully covariant approach

    CERN Document Server

    Wijenayake, Tharake; Ishak, Mustapha

    2016-01-01

    One of the outstanding problems in general relativistic cosmology is that of averaging: how the lumpy universe that we observe at small scales averages out to a smooth Friedmann-Lemaitre-Robertson-Walker (FLRW) model. The root of the problem is that averaging does not commute with the Einstein equations that govern the dynamics of the model. This leads to the well-known question of backreaction in cosmology. In this work, we approach the problem using the covariant framework of Macroscopic Gravity (MG). We use its cosmological solution with a flat FLRW macroscopic background, where the result of averaging cosmic inhomogeneities has been encapsulated into a backreaction density parameter denoted $\Omega_\mathcal{A}$. We constrain this averaged universe using available cosmological data sets of expansion and growth including, for the first time, a full CMB analysis from Planck temperature anisotropy and polarization data, the supernovae data from Union 2.1, the galaxy power spectrum from WiggleZ, the...

  6. Perceptual averaging in individuals with Autism Spectrum Disorder

    Directory of Open Access Journals (Sweden)

    Jennifer Elise Corbett

    2016-11-01

    There is mounting evidence that observers rely on statistical summaries of visual information to maintain stable and coherent perception. Sensitivity to the mean (or other prototypical value) of a visual feature (e.g., mean size) appears to be a pervasive process in human visual perception. Previous studies of individuals diagnosed with Autism Spectrum Disorder (ASD) have uncovered characteristic patterns of visual processing that suggest they may rely more on enhanced local representations of individual objects instead of computing such perceptual averages. To further explore the fundamental nature of abstract statistical representation in visual perception, we investigated perceptual averaging of mean size in a group of 12 high-functioning individuals diagnosed with ASD, using simplified versions of two identification and adaptation tasks that elicited characteristic perceptual averaging effects in a control group of neurotypical participants. In Experiment 1, participants performed with above-chance accuracy in recalling the mean size of a set of circles (mean task) despite poor accuracy in recalling individual circle sizes (member task). In Experiment 2, their judgments of single circle size were biased by mean size adaptation. Overall, these results suggest that individuals with ASD perceptually average information about sets of objects in the surrounding environment. Our results underscore the fundamental nature of perceptual averaging in vision, and further our understanding of how autistic individuals make sense of the external environment.

  7. Transversely isotropic higher-order averaged structure tensors

    Science.gov (United States)

    Hashlamoun, Kotaybah; Federico, Salvatore

    2017-08-01

    For composites or biological tissues reinforced by statistically oriented fibres, a probability distribution function is often used to describe the orientation of the fibres. The overall effect of the fibres on the material response is accounted for by evaluating averaging integrals over all possible directions in space. The directional average of the structure tensor (tensor product of the unit vector describing the fibre direction by itself) is of high significance. Higher-order averaged structure tensors feature in several models and carry similarly important information. However, their evaluation has a quite high computational cost. This work proposes to introduce mathematical techniques to minimise the computational cost associated with the evaluation of higher-order averaged structure tensors, for the case of a transversely isotropic probability distribution of orientation. A component expression is first introduced, using which a general tensor expression is obtained, in terms of an orthonormal basis in which one of the vectors coincides with the axis of symmetry of transverse isotropy. Then, a higher-order transversely isotropic averaged structure tensor is written in an appropriate basis, constructed starting from the basis of the space of second-order transversely isotropic tensors, which is constituted by the structure tensor and its complement to the identity.
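
    As a numerical illustration of such directional averages, the sketch below brute-forces the second- and fourth-order averaged structure tensors for an assumed transversely isotropic orientation density about the $e_3$ axis; the density and the simple quadrature are illustrative choices, not the paper's.

        import numpy as np

        def averaged_structure_tensors(kappa=5.0, n=400):
            """Directional averages <a ⊗ a> and <a ⊗ a ⊗ a ⊗ a> for an
            (assumed) axially symmetric density rho ~ exp(kappa cos^2 theta)."""
            theta = np.linspace(0.0, np.pi, n)
            phi = np.linspace(0.0, 2.0 * np.pi, n)
            th, ph = np.meshgrid(theta, phi, indexing="ij")
            a = np.stack([np.sin(th) * np.cos(ph),          # unit fibre directions
                          np.sin(th) * np.sin(ph),
                          np.cos(th)], axis=-1)
            w = np.exp(kappa * np.cos(th) ** 2) * np.sin(th)  # density * area element
            w /= w.sum()                                      # normalise the measure
            A2 = np.einsum("tp,tpi,tpj->ij", w, a, a)
            A4 = np.einsum("tp,tpi,tpj,tpk,tpl->ijkl", w, a, a, a, a)
            return A2, A4

        A2, A4 = averaged_structure_tensors()
        # Transverse isotropy: A2 is diagonal, A2[0,0] == A2[1,1], trace(A2) == 1.
        print(np.round(A2, 4))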

  8. Weapon container catalog. Volumes 1 & 2

    Energy Technology Data Exchange (ETDEWEB)

    Brown, L.A.; Higuera, M.C.

    1998-02-01

    The Weapon Container Catalog describes H-gear (shipping and storage containers, bomb hand trucks, and the ancillary equipment required for loading) used for weapon programs and for special-use containers. When completed, the catalog will contain five volumes. Volume 1, covering enduring stockpile programs (B53, B61, B83, W62, W76, W78, W80, W84, W87, and W88), and Volume 2, Special Use Containers, are being released. The catalog is intended as a source of information for weapon program engineers and also provides historical information. The catalog will also be published on the SNL Internal Web and will undergo periodic updates.

  9. Volume Regulated Channels

    DEFF Research Database (Denmark)

    Klausen, Thomas Kjær

    Cl- serves a multitude of functions in the mammalian cell, regulating the membrane potential (Em), cell volume, protein activity, and the driving force for facilitated transporters, giving Cl- and Cl- channels a major potential for regulating cellular function. These functions include control of the cell cycle, controlled cell death, and cellular migration. ... In the face of volume perturbations, evolution has developed systems of channels and transporters to tightly control volume homeostasis. In the past decades evidence has been mounting that the importance of these volume regulated channels and transporters is not restricted to the defense of cellular volume. ... Volume regulatory mechanisms have long been in focus for regulating cellular proliferation, and my thesis work has focused on the role of Cl- channels in proliferation, with specific emphasis on ICl,swell. Pharmacological blockage of the ubiquitously...

  10. How do children form impressions of persons? They average.

    Science.gov (United States)

    Hendrick, C; Franz, C M; Hoving, K L

    1975-05-01

    The experiment reported was concerned with impression formation in children. Twelve subjects in each of Grades K, 2, 4, and 6 rated several sets of single trait words and trait pairs. The response scale consisted of a graded series of seven schematic faces which ranged from a deep frown to a happy smile. A basic question was whether children use an orderly integration rule in forming impressions of trait pairs. The answer was clear. At all grade levels a simple averaging model adequately accounted for pair ratings. A second question concerned how children resolve semantic inconsistencies. Responses to two highly inconsistent trait pairs suggested that subjects responded in the same fashion, essentially averaging the two traits in a pair. Overall, the data strongly supported an averaging model, and indicated that impression formation of children is similar to previous results obtained from adults.
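
    Concretely, an averaging rule predicts the pair rating as the mean of the single-trait ratings, $R_{AB} = (R_A + R_B)/2$ (possibly weighted), whereas an adding rule predicts $R_A + R_B$: on a seven-point scale, traits rated 6 and 2 yield a pair prediction of 4 under averaging, while adding would push the prediction off the top of the scale. This arithmetic contrast is a standard one in impression-formation work and is given here only as an illustration, not as a computation reported in the abstract.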

  11. Optimum orientation versus orientation averaging description of cluster radioactivity

    CERN Document Server

    Seif, W M; Refaie, A I; Amer, L H

    2016-01-01

    Background: The deformation of the nuclei involved in the cluster decay of heavy nuclei seriously affects their half-lives against the decay. Purpose: We investigate the description of the different decay stages in both the optimum-orientation and the orientation-averaged pictures of the cluster decay process. Method: We consider the decays of 232,233,234U and 236,238Pu isotopes. The quantum mechanical knocking frequency and penetration probability based on the Wentzel-Kramers-Brillouin approximation are used to find the decay width. Results: We found that the orientation-averaged decay width is one or two orders of magnitude less than its value along the non-compact optimum orientation. The difference between the two values increases with decreasing mass number of the emitted cluster. Correspondingly, the extracted preformation probability based on the averaged decay width increases by the same orders of magnitude compared to its value obtained considering the optimum orientation. The cluster preformati...

  12. Testing averaged cosmology with type Ia supernovae and BAO data

    CERN Document Server

    Santos, B; Devi, N Chandrachani; Alcaniz, J S

    2016-01-01

    An important problem in precision cosmology is the determination of the effects of averaging and backreaction on observational predictions, particularly in view of the wealth of new observational data and improved statistical techniques. In this paper we discuss the observational viability of a class of averaged cosmologies which consist of a simple parametrized phenomenological two-scale backreaction model with decoupled spatial curvature parameters. We perform a Bayesian model selection analysis and find that this class of averaged phenomenological cosmological models is favored with respect to the standard $\\Lambda$CDM cosmological scenario when a joint analysis of current SNe Ia and BAO data is performed. In particular, the analysis provides observational evidence for non-trivial spatial curvature.

  13. An n-ary λ-averaging based similarity classifier

    Directory of Open Access Journals (Sweden)

    Kurama Onesfole

    2016-06-01

    We introduce a new n-ary λ similarity classifier that is based on a new n-ary λ-averaging operator in the aggregation of similarities. This work is a natural extension of earlier research on similarity-based classification, in which aggregation is commonly performed by using the OWA operator. So far, λ-averaging has been used only in binary aggregation. Here the λ-averaging operator is extended to the n-ary aggregation case by using t-norms and t-conorms. We examine four different n-ary norms and test the new similarity classifier on five medical data sets. The new method seems to perform well when compared with the similarity classifier.

  14. Genuine non-self-averaging and ultraslow convergence in gelation

    Science.gov (United States)

    Cho, Y. S.; Mazza, M. G.; Kahng, B.; Nagler, J.

    2016-08-01

    In irreversible aggregation processes droplets or polymers of microscopic size successively coalesce until a large cluster of macroscopic scale forms. This gelation transition is widely believed to be self-averaging, meaning that the order parameter (the relative size of the largest connected cluster) attains well-defined values upon ensemble averaging with no sample-to-sample fluctuations in the thermodynamic limit. Here, we report on anomalous gelation transition types. Depending on the growth rate of the largest clusters, the gelation transition can show very diverse patterns as a function of the control parameter, which includes multiple stochastic discontinuous transitions, genuine non-self-averaging and ultraslow convergence of the transition point. Our framework may be helpful in understanding and controlling gelation.

  15. The Conservation of Area Integrals in Averaging Transformations

    Science.gov (United States)

    Kuznetsov, E. D.

    2010-06-01

    It is shown for the two-planetary version of the weakly perturbed two-body problem that, in a system defined by a finite part of a Poisson expansion of the averaged Hamiltonian, only one of the three components of the area vector is conserved: the component corresponding to the plane in which the longitudes are measured. The variability of the other two components is demonstrated in two ways. The first is based on calculating the Poisson bracket of the averaged Hamiltonian and the components of the area vector written in closed form. In the second, an echeloned Poisson series processor (EPSP) is used to calculate the Poisson bracket. The averaged Hamiltonian is taken with accuracy to second order in the small parameter of the problem, and the components of the area vector are expanded in a Poisson series.

  16. Evolution of the average avalanche shape with the universality class.

    Science.gov (United States)

    Laurson, Lasse; Illa, Xavier; Santucci, Stéphane; Tore Tallakstad, Ken; Måløy, Knut Jørgen; Alava, Mikko J

    2013-01-01

    A multitude of systems ranging from the Barkhausen effect in ferromagnetic materials to plastic deformation and earthquakes respond to slow external driving by exhibiting intermittent, scale-free avalanche dynamics or crackling noise. The avalanches are power-law distributed in size, and have a typical average shape: these are the two most important signatures of avalanching systems. Here we show how the average avalanche shape evolves with the universality class of the avalanche dynamics by employing a combination of scaling theory, extensive numerical simulations and data from crack propagation experiments. It follows a simple scaling form parameterized by two numbers, the scaling exponent relating the average avalanche size to its duration and a parameter characterizing the temporal asymmetry of the avalanches. The latter reflects a broken time-reversal symmetry in the avalanche dynamics, emerging from the local nature of the interaction kernel mediating the avalanche dynamics.
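
    For reference, the scaling form in question (stated here from the general literature on crackling noise, so the notation may differ from the paper's) reads, with $x = t/T$ the rescaled time, $\langle V(t|T)\rangle \propto T^{\gamma_{st}-1}\,[x(1-x)]^{\gamma_{st}-1}\,[1 - a(x - 1/2)]$, where $\gamma_{st}$ is the exponent relating average avalanche size to duration via $\langle s\rangle \sim T^{\gamma_{st}}$, and $a$ is the asymmetry parameter ($a = 0$ giving a symmetric, parabola-like shape).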

  17. The Health Effects of Income Inequality: Averages and Disparities.

    Science.gov (United States)

    Truesdale, Beth C; Jencks, Christopher

    2016-01-01

    Much research has investigated the association of income inequality with average life expectancy, usually finding negative correlations that are not very robust. A smaller body of work has investigated socioeconomic disparities in life expectancy, which have widened in many countries since 1980. These two lines of work should be seen as complementary because changes in average life expectancy are unlikely to affect all socioeconomic groups equally. Although most theories imply long and variable lags between changes in income inequality and changes in health, empirical evidence is confined largely to short-term effects. Rising income inequality can affect individuals in two ways. Direct effects change individuals' own income. Indirect effects change other people's income, which can then change a society's politics, customs, and ideals, altering the behavior even of those whose own income remains unchanged. Indirect effects can thus change both average health and the slope of the relationship between individual income and health.

  18. PROMIS series. Volume 8: Midlatitude ground magnetograms

    Science.gov (United States)

    Fairfield, D. H.; Russell, C. T.

    1990-01-01

    This is the eighth in a series of volumes pertaining to the Polar Region Outer Magnetosphere International Study (PROMIS). This volume contains 24 hour stack plots of 1-minute average, H and D component, ground magnetograms for the period March 10 through June 16, 1986. Nine midlatitude ground stations were selected from the UCLA magnetogram data base that was constructed from all available digitized magnetogram stations. The primary purpose of this publication is to allow users to define universal times and onset longitudes of magnetospheric substorms.

  19. DISTRIBUTION OF RIVER RUNOFF AND ITS CLIMATE FACTORS IN AVERAGE AND EXTREME YEARS

    Directory of Open Access Journals (Sweden)

    Vladimir Konovalov

    2011-01-01

    Schematic maps of the spatial distribution of seasonal precipitation amounts and average air temperatures were obtained for the areas studied in years with normal and extreme values of annual river runoff. Data on precipitation for January–December (I–XII) and on average air temperatures for June–September (VI–IX) during 1961–1990, collected at 93 meteorological stations located within 30.20°–44.08°N and 67.20°–82.98°E at altitudes of 122–4169 m above sea level, were used in compiling the maps. For each point-element (i.e., a meteorological station with proper data), the ordinates of an integral empirical function of the distribution of probabilities P were calculated from these data for a 30-year sample period, and average values and standard deviations of P were obtained for each year. Significant differences in the spatial distribution of climatic factors and runoff were revealed in characteristic years. It was also found that the spatial distribution of the total volume of glacier melting is less variable in years with extreme water yields than in average years. This peculiarity is very beneficial for the hydropower and agriculture sectors because it provides an additional natural ability to stabilize the water balance of reservoirs. Piecewise multi-factor linear equations were obtained to calculate the statistical probability of the glaciers' total melting in low- and high-flow years as a function of geographical coordinates and the average altitude of the firn boundary.

  20. The Role of the Harmonic Vector Average in Motion Integration

    Directory of Open Access Journals (Sweden)

    Alan eJohnston

    2013-10-01

    The local speeds of object contours vary systematically with the cosine of the angle between the normal component of the local velocity and the global object motion direction. An array of Gabor elements whose speed changes with local spatial orientation in accordance with this pattern can appear to move as a single surface. The apparent direction of motion of plaids and Gabor arrays has variously been proposed to result from feature tracking, vector addition, and vector averaging, in addition to the geometrically correct global velocity as indicated by the intersection of constraints (IOC) solution. Here a new combination rule, the harmonic vector average (HVA), is introduced, as well as a new algorithm for computing the IOC solution. The vector sum can be discounted as an integration strategy, as it increases with the number of elements. The vector average over local vectors that vary in direction always underestimates the true global speed. The harmonic vector average, however, provides the correct global speed and direction for an unbiased sample of local velocities with respect to the global motion direction, as is the case for a simple closed contour. The HVA over biased samples provides an aggregate velocity estimate that can still be combined through an IOC computation to give an accurate estimate of the global velocity, which is not true of the vector average. Psychophysical results for type II Gabor arrays show that perceived direction and speed fall close to the IOC direction for Gabor arrays having a wide range of orientations, but the IOC prediction fails as the mean orientation shifts away from the global motion direction and the orientation range narrows. In this case perceived velocity generally defaults to the harmonic vector average.
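
    The HVA admits a compact implementation. The "invert, average, invert back" formulation below (map each vector $v$ to $v/|v|^2$, take the arithmetic mean, invert the result) is an assumed reading of the rule, and the contour data are synthetic.

        import numpy as np

        def harmonic_vector_average(vectors):
            """Assumed HVA: map each v to v/|v|^2, average, invert back."""
            v = np.asarray(vectors, dtype=float)
            inv = v / np.sum(v ** 2, axis=1, keepdims=True)
            m = inv.mean(axis=0)
            return m / np.dot(m, m)

        # Local normal velocities of a contour translating with velocity g:
        # each local measurement is the projection of g onto the local normal.
        g = np.array([2.0, 0.0])
        th = np.linspace(-1.2, 1.2, 9)                 # local normal angles
        normals = np.stack([np.cos(th), np.sin(th)], axis=1)
        local_v = (normals @ g)[:, None] * normals
        print(harmonic_vector_average(local_v))        # recovers ~[2., 0.]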

  1. Requirements dilemma

    OpenAIRE

    2006-01-01

    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. Knowing ‘what’ to build is an integral part of Information System Development, and it is generally understood that this, which is known as Requirements, is achievable through a process of understanding, communication and management. It is currently maintained by Requirements theorists that successful system design clarifies the interrelations between information and its representations...

  2. Optimal Weights of Certain Branches of an Arbitrary Connected Network for Fastest Distributed Consensus Averaging Problem

    CERN Document Server

    Jafarizadeh, Saber

    2010-01-01

    Solving the fastest distributed consensus averaging problem over networks with different topologies has been an active area of research for a number of years. The main purpose of distributed consensus averaging is to compute the average of the initial values via a distributed algorithm in which the nodes communicate only with their neighbors. In previous works, full knowledge of the network's topology was required for finding the optimal weights and the convergence rate of the network; here, for the first time, the optimal weights are determined analytically for the edges of certain types of branches, namely the path branch, lollipop branch, semi-complete branch, and ladder branch, independent of the rest of the network. The solution procedure consists of stratification of the associated connectivity graph of the branch and semidefinite programming (SDP), in particular solving the slackness conditions, where the optimal weights are obtained by inductive comparison of the characteristic polynomials initiated by slackness c...
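
    As a minimal sketch of the setting (not of the paper's SDP-based weight optimization), the iteration below runs distributed consensus averaging with a single uniform edge weight on a small path graph; every node converges to the average of the initial values using only neighbor-to-neighbor exchanges.

        import numpy as np

        def consensus_average(x0, adjacency, weight=0.3, steps=200):
            """x <- W x with W = I - weight * L, L the graph Laplacian.
            W is symmetric and its rows sum to 1, so the iteration
            preserves the average and converges to it for suitable weight."""
            A = np.asarray(adjacency, dtype=float)
            L = np.diag(A.sum(axis=1)) - A
            W = np.eye(len(A)) - weight * L
            x = np.asarray(x0, dtype=float)
            for _ in range(steps):
                x = W @ x              # one round of local exchanges
            return x

        # Path graph on 4 nodes: all entries converge to 2.5.
        A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]])
        print(consensus_average([1.0, 2.0, 3.0, 4.0], A))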

  3. Redshift drift in an inhomogeneous universe: averaging and the backreaction conjecture

    CERN Document Server

    Koksbang, S M

    2016-01-01

    An expression for the average redshift drift in a statistically homogeneous and isotropic dust universe is given. The expression takes the same form as the expression for the redshift drift in FLRW models. It is used for a proof-of-principle study of the effects of backreaction on redshift drift measurements by combining the expression with two-region models. The study shows that backreaction can lead to positive redshift drift at low redshifts, exemplifying that a positive redshift drift at low redshifts does not require dark energy. Moreover, the study illustrates that models without a dark energy component can have an average redshift drift observationally indistinguishable from that of the standard model according to the currently expected precision of ELT measurements. In an appendix, spherically symmetric solutions to Einstein's equations with inhomogeneous dark energy and matter are used to study deviations from the average redshift drift and effects of local voids.
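
    For comparison, in an FLRW model the redshift drift of a comoving source is $\dot z = (1+z)H_0 - H(z)$, which is positive at low redshift only if the expansion accelerates; the averaged expression discussed above takes this same form with averaged quantities, which is why a positive low-redshift drift arising purely from backreaction, without dark energy, is the notable possibility here.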

  4. Measurement of the average lifetime of b hadrons

    Science.gov (United States)

    Adriani, O.; Aguilar-Benitez, M.; Ahlen, S.; Alcaraz, J.; Aloisio, A.; Alverson, G.; Alviggi, M. G.; Ambrosi, G.; An, Q.; Anderhub, H.; Anderson, A. L.; Andreev, V. P.; Angelescu, T.; Antonov, L.; Antreasyan, D.; Arce, P.; Arefiev, A.; Atamanchuk, A.; Azemoon, T.; Aziz, T.; Baba, P. V. K. S.; Bagnaia, P.; Bakken, J. A.; Ball, R. C.; Banerjee, S.; Bao, J.; Barillère, R.; Barone, L.; Baschirotto, A.; Battiston, R.; Bay, A.; Becattini, F.; Bechtluft, J.; Becker, R.; Becker, U.; Behner, F.; Behrens, J.; Bencze, Gy. L.; Berdugo, J.; Berges, P.; Bertucci, B.; Betev, B. L.; Biasini, M.; Biland, A.; Bilei, G. M.; Bizzarri, R.; Blaising, J. J.; Bobbink, G. J.; Bock, R.; Böhm, A.; Borgia, B.; Bosetti, M.; Bourilkov, D.; Bourquin, M.; Boutigny, D.; Bouwens, B.; Brambilla, E.; Branson, J. G.; Brock, I. C.; Brooks, M.; Bujak, A.; Burger, J. D.; Burger, W. J.; Busenitz, J.; Buytenhuijs, A.; Cai, X. D.; Capell, M.; Caria, M.; Carlino, G.; Cartacci, A. M.; Castello, R.; Cerrada, M.; Cesaroni, F.; Chang, Y. H.; Chaturvedi, U. K.; Chemarin, M.; Chen, A.; Chen, C.; Chen, G.; Chen, G. M.; Chen, H. F.; Chen, H. S.; Chen, M.; Chen, W. Y.; Chiefari, G.; Chien, C. Y.; Choi, M. T.; Chung, S.; Civinini, C.; Clare, I.; Clare, R.; Coan, T. E.; Cohn, H. O.; Coignet, G.; Colino, N.; Contin, A.; Costantini, S.; Cotorobai, F.; Cui, X. T.; Cui, X. Y.; Dai, T. S.; D'Alessandro, R.; de Asmundis, R.; Degré, A.; Deiters, K.; Dénes, E.; Denes, P.; DeNotaristefani, F.; Dhina, M.; DiBitonto, D.; Diemoz, M.; Dimitrov, H. R.; Dionisi, C.; Ditmarr, M.; Djambazov, L.; Dova, M. T.; Drago, E.; Duchesneau, D.; Duinker, P.; Duran, I.; Easo, S.; El Mamouni, H.; Engler, A.; Eppling, F. J.; Erné, F. C.; Extermann, P.; Fabbretti, R.; Fabre, M.; Falciano, S.; Fan, S. J.; Fackler, O.; Fay, J.; Felcini, M.; Ferguson, T.; Fernandez, D.; Fernandez, G.; Ferroni, F.; Fesefeldt, H.; Fiandrini, E.; Field, J. H.; Filthaut, F.; Fisher, P. H.; Forconi, G.; Fredj, L.; Freudenreich, K.; Friebel, W.; Fukushima, M.; Gailloud, M.; Galaktionov, Yu.; Gallo, E.; Ganguli, S. N.; Garcia-Abia, P.; Gele, D.; Gentile, S.; Gheordanescu, N.; Giagu, S.; Goldfarb, S.; Gong, Z. F.; Gonzalez, E.; Gougas, A.; Goujon, D.; Gratta, G.; Gruenewald, M.; Gu, C.; Guanziroli, M.; Guo, J. K.; Gupta, V. K.; Gurtu, A.; Gustafson, H. R.; Gutay, L. J.; Hangarter, K.; Hartmann, B.; Hasan, A.; Hauschildt, D.; He, C. F.; He, J. T.; Hebbeker, T.; Hebert, M.; Hervé, A.; Hilgers, K.; Hofer, H.; Hoorani, H.; Hu, G.; Hu, G. Q.; Ille, B.; Ilyas, M. M.; Innocente, V.; Janssen, H.; Jezequel, S.; Jin, B. N.; Jones, L. W.; Josa-Mutuberria, I.; Kasser, A.; Khan, R. A.; Kamyshkov, Yu.; Kapinos, P.; Kapustinsky, J. S.; Karyotakis, Y.; Kaur, M.; Khokhar, S.; Kienzle-Focacci, M. N.; Kim, J. K.; Kim, S. C.; Kim, Y. G.; Kinnison, W. W.; Kirkby, A.; Kirkby, D.; Kirsch, S.; Kittel, W.; Klimentov, A.; Klöckner, R.; König, A. C.; Koffeman, E.; Kornadt, O.; Koutsenko, V.; Koulbardis, A.; Kraemer, R. W.; Kramer, T.; Krastev, V. R.; Krenz, W.; Krivshich, A.; Kuijten, H.; Kumar, K. S.; Kunin, A.; Landi, G.; Lanske, D.; Lanzano, S.; Lebedev, A.; Lebrun, P.; Lecomte, P.; Lecoq, P.; Le Coultre, P.; Lee, D. M.; Lee, J. S.; Lee, K. Y.; Leedom, I.; Leggett, C.; Le Goff, J. M.; Leiste, R.; Lenti, M.; Leonardi, E.; Li, C.; Li, H. T.; Li, P. J.; Liao, J. Y.; Lin, W. T.; Lin, Z. Y.; Linde, F. L.; Lindemann, B.; Lista, L.; Liu, Y.; Lohmann, W.; Longo, E.; Lu, Y. S.; Lubbers, J. M.; Lübelsmeyer, K.; Luci, C.; Luckey, D.; Ludovici, L.; Luminari, L.; Lustermann, W.; Ma, J. M.; Ma, W. 
G.; MacDermott, M.; Malik, R.; Malinin, A.; Maña, C.; Maolinbay, M.; Marchesini, P.; Marion, F.; Marin, A.; Martin, J. P.; Martinez-Laso, L.; Marzano, F.; Massaro, G. G. G.; Mazumdar, K.; McBride, P.; McMahon, T.; McNally, D.; Merk, M.; Merola, L.; Meschini, M.; Metzger, W. J.; Mi, Y.; Mihul, A.; Mills, G. B.; Mir, Y.; Mirabelli, G.; Mnich, J.; Möller, M.; Monteleoni, B.; Morand, R.; Morganti, S.; Moulai, N. E.; Mount, R.; Müller, S.; Nadtochy, A.; Nagy, E.; Napolitano, M.; Nessi-Tedaldi, F.; Newman, H.; Neyer, C.; Niaz, M. A.; Nippe, A.; Nowak, H.; Organtini, G.; Pandoulas, D.; Paoletti, S.; Paolucci, P.; Pascale, G.; Passaleva, G.; Patricelli, S.; Paul, T.; Pauluzzi, M.; Paus, C.; Pauss, F.; Pei, Y. J.; Pensotti, S.; Perret-Gallix, D.; Perrier, J.; Pevsner, A.; Piccolo, D.; Pieri, M.; Piroué, P. A.; Plasil, F.; Plyaskin, V.; Pohl, M.; Pojidaev, V.; Postema, H.; Qi, Z. D.; Qian, J. M.; Qureshi, K. N.; Raghavan, R.; Rahal-Callot, G.; Rancoita, P. G.; Rattaggi, M.; Raven, G.; Razis, P.; Read, K.; Ren, D.; Ren, Z.; Rescigno, M.; Reucroft, S.; Ricker, A.; Riemann, S.; Riemers, B. C.; Riles, K.; Rind, O.; Rizvi, H. A.; Ro, S.; Rodriguez, F. J.; Roe, B. P.; Röhner, M.; Romero, L.; Rosier-Lees, S.; Rosmalen, R.; Rosselet, Ph.; van Rossum, W.; Roth, S.; Rubbia, A.; Rubio, J. A.; Rykaczewski, H.; Sachwitz, M.; Salicio, J.; Salicio, J. M.; Sanders, G. S.; Santocchia, A.; Sarakinos, M. S.; Sartorelli, G.; Sassowsky, M.; Sauvage, G.; Schegelsky, V.; Schmitz, D.; Schmitz, P.; Schneegans, M.; Schopper, H.; Schotanus, D. J.; Shotkin, S.; Schreiber, H. J.; Shukla, J.; Schulte, R.; Schulte, S.; Schultze, K.; Schwenke, J.; Schwering, G.; Sciacca, C.; Scott, I.; Sehgal, R.; Seiler, P. G.; Sens, J. C.; Servoli, L.; Sheer, I.; Shen, D. Z.; Shevchenko, S.; Shi, X. R.; Shumilov, E.; Shoutko, V.; Son, D.; Sopczak, A.; Soulimov, V.; Spartiotis, C.; Spickermann, T.; Spillantini, P.; Starosta, R.; Steuer, M.; Stickland, D. P.; Sticozzi, F.; Stone, H.; Strauch, K.; Stringfellow, B. C.; Sudhakar, K.; Sultanov, G.; Sun, L. Z.; Susinno, G. F.; Suter, H.; Swain, J. D.; Syed, A. A.; Tang, X. W.; Taylor, L.; Terzi, G.; Ting, Samuel C. C.; Ting, S. M.; Tonutti, M.; Tonwar, S. C.; Tóth, J.; Tsaregorodtsev, A.; Tsipolitis, G.; Tully, C.; Tung, K. L.; Ulbricht, J.; Urbán, L.; Uwer, U.; Valente, E.; Van de Walle, R. T.; Vetlitsky, I.; Viertel, G.; Vikas, P.; Vikas, U.; Vivargent, M.; Vogel, H.; Vogt, H.; Vorobiev, I.; Vorobyov, A. A.; Vuilleumier, L.; Wadhwa, M.; Wallraff, W.; Wang, C.; Wang, C. R.; Wang, X. L.; Wang, Y. F.; Wang, Z. M.; Warner, C.; Weber, A.; Weber, J.; Weill, R.; Wenaus, T. J.; Wenninger, J.; White, M.; Willmott, C.; Wittgenstein, F.; Wright, D.; Wu, S. X.; Wynhoff, S.; Wysłouch, B.; Xie, Y. Y.; Xu, J. G.; Xu, Z. Z.; Xue, Z. L.; Yan, D. S.; Yang, B. Z.; Yang, C. G.; Yang, G.; Ye, C. H.; Ye, J. B.; Ye, Q.; Yeh, S. C.; Yin, Z. W.; You, J. M.; Yunus, N.; Yzerman, M.; Zaccardelli, C.; Zaitsev, N.; Zemp, P.; Zeng, M.; Zeng, Y.; Zhang, D. H.; Zhang, Z. P.; Zhou, B.; Zhou, G. J.; Zhou, J. F.; Zhu, R. Y.; Zichichi, A.; van der Zwaan, B. C. C.; L3 Collaboration

    1993-11-01

    The average lifetime of b hadrons has been measured using the L3 detector at LEP, running at √s ≈ M_Z. A b-enriched sample was obtained from 432538 hadronic Z events collected in 1990 and 1991 by tagging electrons and muons from semileptonic b hadron decays. From maximum likelihood fits to the electron and muon impact parameter distributions, the average b hadron lifetime was measured to be τ_b = (1535 ± 35 ± 28) fs, where the first error is statistical and the second includes both the experimental and the theoretical systematic uncertainties.

  5. Calculations of canonical averages from the grand canonical ensemble.

    Science.gov (United States)

    Kosov, D S; Gelin, M F; Vdovin, A I

    2008-02-01

    Grand canonical and canonical ensembles become equivalent in the thermodynamic limit, but when the system size is finite the results obtained in the two ensembles deviate from each other. In many important cases, the canonical ensemble provides an appropriate physical description but it is often much easier to perform the calculations in the corresponding grand canonical ensemble. We present a method to compute averages in the canonical ensemble based on calculations of the expectation values in the grand canonical ensemble. The number of particles, which is fixed in the canonical ensemble, is not necessarily the same as the average number of particles in the grand canonical ensemble.
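
    One standard route to such calculations (a generic construction, not necessarily the authors' exact method) is particle-number projection: $\langle A\rangle_N = \mathrm{Tr}[\hat P_N \hat A\, e^{-\beta(\hat H-\mu\hat N)}]/\mathrm{Tr}[\hat P_N\, e^{-\beta(\hat H-\mu\hat N)}]$, with the projector $\hat P_N = (2\pi)^{-1}\int_0^{2\pi} d\varphi\, e^{i\varphi(\hat N - N)}$ filtering the fixed-$N$ sector out of the grand canonical traces.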

  6. Averaging in Parametrically Excited Systems – A State Space Formulation

    Directory of Open Access Journals (Sweden)

    Pfau Bastian

    2016-01-01

    Parametric excitation can lead to instabilities as well as to an improved stability behavior, depending on whether a parametric resonance or anti-resonance is induced. In order to calculate the stability domains and boundaries, the method of averaging is applied. The problem is reformulated in state space representation, which allows a general handling of the averaging method especially for systems with non-symmetric system matrices. It is highlighted that this approach can enhance the first order approximation significantly. Two example systems are investigated: a generic mechanical system and a flexible rotor in journal bearings with adjustable geometry.

  7. Generalized Sampling Series Approximation of Random Signals from Local Averages

    Institute of Scientific and Technical Information of China (English)

    SONG Zhanjie; HE Gaiyun; YE Peixin; YANG Deyun

    2007-01-01

    Signals are often of random character: since they cannot bear any information if they are predictable for every time t, they are usually modelled as stationary random processes. On the other hand, because of the inertia of the measurement apparatus, measured sampled values obtained in practice may not be the precise values of the signal X(t) at times tk (k ∈ Z), but only local averages of X(t) near tk. In this paper, it is shown that a wide (or weak) sense stationary stochastic process can be approximated by generalized sampling series with local average samples.
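
    Schematically, local average sampling replaces the point values $X(t_k)$ by weighted means $Y_k = \int X(t)\,u_k(t)\,dt$, where each averaging function $u_k \ge 0$ is supported in a small neighbourhood of $t_k$ with $\int u_k(t)\,dt = 1$, and the process is then approximated by a generalized sampling series $X(t) \approx \sum_k Y_k S_k(t)$ for suitable reconstruction functions $S_k$ (the notation here is generic, not the paper's).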

  8. THEORETICAL CALCULATION OF THE RELATIVISTIC SUBCONFIGURATION-AVERAGED TRANSITION ENERGIES

    Institute of Scientific and Technical Information of China (English)

    张继彦; 杨向东; 杨国洪; 张保汉; 雷安乐; 刘宏杰; 李军

    2001-01-01

    A method for calculating the average energies of relativistic subconfigurations in highly ionized heavy atoms has been developed in the framework of the multiconfigurational Dirac-Fock theory. The method is then used to calculate the average transition energies of the spin-orbit-split 3d-4p transition of Co-like tungsten, the 3d-5f transition of Cu-like tantalum, and the 3d-5f transitions of Cu-like and Zn-like gold samples. The calculated results are in good agreement with those calculated with the relativistic parametric potential method and also with the experimental results.

  9. Quantum state discrimination using the minimum average number of copies

    CERN Document Server

    Slussarenko, Sergei; Li, Jun-Gang; Campbell, Nicholas; Wiseman, Howard M; Pryde, Geoff J

    2016-01-01

    In the task of discriminating between nonorthogonal quantum states from multiple copies, the key parameters are the error probability and the resources (number of copies) used. Previous studies have considered the task of minimizing the average error probability for fixed resources. Here we consider minimizing the average resources for a fixed admissible error probability. We derive a detection scheme optimized for the latter task, and experimentally test it, along with schemes previously considered for the former task. We show that, for our new task, our new scheme outperforms all previously considered schemes.

  10. Average quantum dynamics of closed systems over stochastic Hamiltonians

    CERN Document Server

    Yu, Li

    2011-01-01

    We develop a master equation formalism to describe the evolution of the average density matrix of a closed quantum system driven by a stochastic Hamiltonian. The average over random processes generally results in decoherence effects in closed system dynamics, in addition to the usual unitary evolution. We then show that, for an important class of problems in which the Hamiltonian is proportional to a Gaussian random process, the 2nd-order master equation yields exact dynamics. The general formalism is applied to study the examples of a two-level system, two atoms in a stochastic magnetic field and the heating of a trapped ion.
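
    In the simplest special case, $H(t) = H_0 + \xi(t)V$ with delta-correlated Gaussian noise $\langle\xi(t)\xi(t')\rangle = \kappa\,\delta(t-t')$, the averaged density matrix obeys ($\hbar = 1$) $d\bar\rho/dt = -i[H_0,\bar\rho] - (\kappa/2)[V,[V,\bar\rho]]$: unitary evolution plus a double-commutator decoherence term. This white-noise form is quoted from the general literature as an illustration of how averaging over the random process produces decoherence in a closed system.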

  11. MAIN STAGES SCIENTIFIC AND PRODUCTION MASTERING THE TERRITORY AVERAGE URAL

    Directory of Open Access Journals (Sweden)

    V.S. Bochko

    2006-09-01

    Questions of the shaping of the Average Ural as an industrial territory, on the basis of its scientific study and production mastering, are considered in the article. It is shown that the study of Ural resources and of the particularities of the vital activity of its population engaged Russian and foreign scientists in the XVIII-XIX centuries. It is noted that in the XX century there was a transition to the systematic organizational-economic study of the productive forces, society, and nature of the Average Ural. More attention is addressed to the new problems of the region and to the need for their scientific solution.

  12. Precision volume measurement system.

    Energy Technology Data Exchange (ETDEWEB)

    Fischer, Erin E.; Shugard, Andrew D.

    2004-11-01

    A new precision volume measurement system based on a Kansas City Plant (KCP) design was built to support the volume measurement needs of the Gas Transfer Systems (GTS) department at Sandia National Labs (SNL) in California. An engineering study was undertaken to verify or refute KCP's claims of 0.5% accuracy. The study assesses the accuracy and precision of the system. The system uses the ideal gas law and precise pressure measurements (of low-pressure helium) in a temperature and computer controlled environment to ratio a known volume to an unknown volume.
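
    The underlying principle reduces to a one-line calculation. The sketch below assumes a single isothermal expansion of helium from a known reference volume into the evacuated unknown volume; the actual system's procedure and corrections are not described in the abstract.

        def unknown_volume(v_ref, p_initial, p_final):
            """Ideal-gas volume ratio for one isothermal expansion:
            p_initial * v_ref = p_final * (v_ref + v_unknown)."""
            return v_ref * (p_initial / p_final - 1.0)

        # 1 L reference volume, pressure drops from 100 kPa to 40 kPa:
        print(unknown_volume(1.0, 100e3, 40e3))  # -> 1.5 (litres)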

  13. Update of Ireland's national average indoor radon concentration - Application of a new survey protocol.

    Science.gov (United States)

    Dowdall, A; Murphy, P; Pollard, D; Fenton, D

    2017-04-01

    In 2002, a National Radon Survey (NRS) in Ireland established that the geographically weighted national average indoor radon concentration was 89 Bq m⁻³. Since then a number of developments have taken place which are likely to have impacted on the national average radon level. Key among these was the introduction of amending Building Regulations in 1998 requiring radon preventive measures in new buildings in High Radon Areas (HRAs). In 2014, the Irish Government adopted the National Radon Control Strategy (NRCS) for Ireland. A knowledge gap identified in the NRCS was to update the national average for Ireland given the developments since 2002. The updated national average would also be used as a baseline metric to assess the effectiveness of the NRCS over time. A new national survey protocol was required that would measure radon in a sample of homes representative of radon risk and geographical location. The design of the survey protocol took into account that it is not feasible to repeat the 11,319 measurements carried out for the 2002 NRS due to time and resource constraints. However, the existence of that comprehensive survey allowed for a new protocol to be developed, involving measurements carried out in unbiased randomly selected volunteer homes. This paper sets out the development and application of that survey protocol. The results of the 2015 survey showed that the current national average indoor radon concentration for homes in Ireland is 77 Bq m⁻³, a decrease from the 89 Bq m⁻³ reported in the 2002 NRS. Analysis of the results by build date demonstrates that the introduction of the amending Building Regulations in 1998 has led to a reduction in the average indoor radon level in Ireland. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Hanford analytical services quality assurance requirements documents

    Energy Technology Data Exchange (ETDEWEB)

    Hyatt, J.E.

    1997-09-25

    Hanford Analytical Services Quality Assurance Requirements Document (HASQARD) is issued by the Analytical Services Program of the Waste Management Division, US Department of Energy (US DOE), Richland Operations Office (DOE-RL). The HASQARD establishes quality requirements in response to DOE Order 5700.6C (DOE 1991b). The HASQARD is designed to meet the needs of DOE-RL for maintaining a consistent level of quality for sampling and field and laboratory analytical services provided by contractor and commercial field and laboratory analytical operations. The HASQARD serves as the quality basis for all sampling and field/laboratory analytical services provided to DOE-RL through the Analytical Services Program of the Waste Management Division in support of Hanford Site environmental cleanup efforts. This includes work performed by contractor and commercial laboratories and covers radiological and nonradiological analyses. The HASQARD applies to field sampling, field analysis, and research and development activities that support work conducted under the Hanford Federal Facility Agreement and Consent Order Tri-Party Agreement and regulatory permit applications and applicable permit requirements described in subsections of this volume. The HASQARD applies to work done to support process chemistry analysis (e.g., ongoing site waste treatment and characterization operations) and research and development projects related to Hanford Site environmental cleanup activities. This ensures a uniform quality umbrella for analytical site activities, predicated on the concepts contained in the HASQARD. Using the HASQARD will ensure data of known quality and technical defensibility of the methods used to obtain that data. The HASQARD is made up of four volumes: Volume 1, Administrative Requirements; Volume 2, Sampling Technical Requirements; Volume 3, Field Analytical Technical Requirements; and Volume 4, Laboratory Technical Requirements. Volume 1 describes the administrative requirements

  15. Evaluation of Average Wall Thickness of Organically Modified Mesoporous Silica

    Institute of Scientific and Technical Information of China (English)

    Yan Jun GONG; Zhi Hong LI; Bao Zhong DONG

    2005-01-01

    The small-angle X-ray scattering of organically modified MSU-X silica, prepared by co-condensation of tetraethoxysilane (TEOS) and methyltriethoxysilane (MTES), shows negative deviation from Debye's theory due to the existence of the organic interface layer. By correcting for this negative deviation in the scattering, the Debye relation may be recovered and the average wall thickness of the material evaluated.

  16. Climate Prediction Center (CPC) Zonally Average 500 MB Temperature Anomalies

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This is one of the CPC's Monthly Atmospheric and SST Indices. It is the 500-hPa temperature anomalies averaged over the latitude band 20°N–20°S. The anomalies are...

  17. Error estimates in horocycle averages asymptotics: challenges from string theory

    NARCIS (Netherlands)

    Cardella, M.A.

    2010-01-01

    For modular functions of rapid decay, a classical result connects the error estimate in their long horocycle average asymptotics to the Riemann hypothesis. We study similar asymptotics for modular functions with less mild growth conditions, such as polynomial growth and exponential growth

  18. 75 FR 78157 - Farmer and Fisherman Income Averaging

    Science.gov (United States)

    2010-12-15

    ... computing income tax liability. The regulations reflect changes made by the American Jobs Creation Act of 2004 and the Tax Extenders and Alternative Minimum Tax Relief Act of 2008. The regulations provide...) relating to the averaging of farm and fishing income in computing tax liability. A notice of proposed...

  19. The First Steps with Alexia, the Average Lexicographic Value

    NARCIS (Netherlands)

    Tijs, S.H.

    2005-01-01

    The new value AL for balanced games is discussed, which is based on averaging the lexicographic maxima of the core. Exactifications of games play a special role in finding interesting relations of AL with other solution concepts for various classes of games, such as convex games, big boss games, simplex games

  20. Maximum Likelihood Estimation of Multivariate Autoregressive-Moving Average Models.

    Science.gov (United States)

    1977-02-01

    maximizing the same have been proposed (i) in the time domain by Box and Jenkins [4], Åström [3], Wilson [23], and Phadke [16], and (ii) in the frequency domain by... moving average residuals and other covariance matrices with linear structure", Annals of Statistics, 3. 3. Åström, K. J. (1970), Introduction to