WorldWideScience

Sample records for volume averaging requires

  1. Tendon surveillance requirements - average tendon force

    International Nuclear Information System (INIS)

    Fulton, J.F.

    1982-01-01

    Proposed Rev. 3 to USNRC Reg. Guide 1.35 discusses the need for comparing, for individual tendons, the measured and predicted lift-off forces. Such a comparison is intended to detect any abnormal tendon force loss which might occur. Recognizing that there are uncertainties in the prediction of tendon losses, proposed Guide 1.35.1 has allowed specific tolerances on the fundamental losses. Thus, the lift-off force acceptance criteria for individual tendons appearing in Reg. Guide 1.35, Proposed Rev. 3, are stated relative to a lower bound predicted tendon force, which is obtained using the 'plus' tolerances on the fundamental losses. There is an additional acceptance criterion for the lift-off forces which is not specifically addressed in these two Reg. Guides; however, it is included in a proposed Subsection IWX to ASME Code Section XI. This criterion is based on the overriding requirement that the magnitude of prestress in the containment structure be sufficient to meet the minimum prestress design requirements. This design requirement can be expressed as an average tendon force for each group of vertical, hoop, or dome tendons. For the purpose of comparing the actual tendon forces with the required average tendon force, the lift-off forces measured for a sample of tendons within each group can be averaged to construct the average force for the entire group. However, the individual lift-off forces must be 'corrected' (normalized) prior to obtaining the sample average. This paper derives the correction factor to be used for this purpose. (orig./RW)

  2. Cosmological measure with volume averaging and the vacuum energy problem

    Science.gov (United States)

    Astashenok, Artyom V.; del Popolo, Antonino

    2012-04-01

    In this paper, we give a possible solution to the cosmological constant problem. It is shown that the traditional approach, based on volume weighting of probabilities, leads to an incoherent conclusion: the probability that a randomly chosen observer measures Λ = 0 is exactly equal to 1. Using an alternative volume-averaging measure instead of volume weighting can explain why the cosmological constant is non-zero.

  3. Cosmological measure with volume averaging and the vacuum energy problem

    International Nuclear Information System (INIS)

    Astashenok, Artyom V; Del Popolo, Antonino

    2012-01-01

    In this paper, we give a possible solution to the cosmological constant problem. It is shown that the traditional approach, based on volume weighting of probabilities, leads to an incoherent conclusion: the probability that a randomly chosen observer measures Λ = 0 is exactly equal to 1. Using an alternative volume-averaging measure instead of volume weighting can explain why the cosmological constant is non-zero. (paper)

  4. Artificial Intelligence Can Predict Daily Trauma Volume and Average Acuity.

    Science.gov (United States)

    Stonko, David P; Dennis, Bradley M; Betzold, Richard D; Peetz, Allan B; Gunter, Oliver L; Guillamondegui, Oscar D

    2018-04-19

    The goal of this study was to integrate temporal and weather data in order to create an artificial neural network (ANN) to predict trauma volume, the number of emergent operative cases, and average daily acuity at a level 1 trauma center. Trauma admission data from TRACS and weather data from the National Oceanic and Atmospheric Administration (NOAA) were collected for all adult trauma patients from July 2013-June 2016. The ANN was constructed using temporal (time, day of week) and weather factors (daily high, active precipitation) to predict four points of daily trauma activity: number of traumas, number of penetrating traumas, average ISS, and number of immediate OR cases per day. We trained a two-layer feed-forward network with 10 sigmoid hidden neurons via the Levenberg-Marquardt backpropagation algorithm, and performed k-fold cross validation and accuracy calculations on 100 randomly generated partitions. 10,612 patients over 1,096 days were identified. The ANN accurately predicted the daily trauma distribution in terms of number of traumas, number of penetrating traumas, number of OR cases, and average daily ISS (combined training correlation coefficient r = 0.9018 ± 0.002; validation r = 0.8899 ± 0.005; testing r = 0.8940 ± 0.006). We were able to successfully predict trauma volume, emergent operative volume, and acuity using an ANN by integrating local weather and trauma admission data from a level 1 center. As an example, for June 30, 2016, it predicted 9.93 traumas (actual: 10) and a mean ISS score of 15.99 (actual: 13.12); see figure 3. This may prove useful for predicting trauma needs across the system and for hospital administration when allocating limited resources. Level III. STUDY TYPE: Prognostic/Epidemiological.
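
    A minimal sketch of this type of model, assuming synthetic stand-in data (the TRACS and NOAA extracts are not public) and scikit-learn; note that scikit-learn's MLPRegressor does not implement Levenberg-Marquardt training, so the L-BFGS solver stands in for it here:

    ```python
    # Hypothetical sketch of the study's setup: a feed-forward net with 10
    # sigmoid hidden units predicting daily trauma volume from temporal and
    # weather features. All data below are synthetic placeholders.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import KFold

    rng = np.random.default_rng(0)
    n_days = 1096
    X = np.column_stack([
        np.arange(n_days) % 365,          # day of year (seasonality proxy)
        np.arange(n_days) % 7,            # day of week
        rng.normal(70, 15, n_days),       # daily high temperature (F)
        rng.integers(0, 2, n_days),       # active precipitation flag
    ])
    # Synthetic daily trauma counts with a weak weather dependence
    y = rng.poisson(np.clip(8 + 0.06 * (X[:, 2] - 70) + 1.5 * X[:, 3], 1, None)).astype(float)

    model = MLPRegressor(hidden_layer_sizes=(10,), activation="logistic",
                         solver="lbfgs", max_iter=2000, random_state=0)

    # k-fold cross validation over random partitions, as in the study
    scores = []
    for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
        model.fit(X[train], y[train])
        scores.append(np.corrcoef(model.predict(X[test]), y[test])[0, 1])
    print(f"mean validation r = {np.mean(scores):.3f}")
    ```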

  5. Optimal transformation for correcting partial volume averaging effects in magnetic resonance imaging

    International Nuclear Information System (INIS)

    Soltanian-Zadeh, H.; Windham, J.P.; Yagle, A.E.

    1993-01-01

    Segmentation of a feature of interest while correcting for partial volume averaging effects is a major tool for identification of hidden abnormalities, fast and accurate volume calculation, and three-dimensional visualization in the field of magnetic resonance imaging (MRI). The authors present the optimal transformation for simultaneous segmentation of a desired feature and correction of partial volume averaging effects, while maximizing the signal-to-noise ratio (SNR) of the desired feature. It is proved that correction of partial volume averaging effects requires the removal of the interfering features from the scene. It is also proved that correction of partial volume averaging effects can be achieved merely by a linear transformation. It is finally shown that the optimal transformation matrix is easily obtained using the Gram-Schmidt orthogonalization procedure, which is numerically stable. Applications of the technique to MRI simulation, phantom, and brain images are shown. They show that in all cases the desired feature is segmented from the interfering features and partial volume information is visualized in the resulting transformed images.
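
    As an illustration of the linear-transformation claim, a small numpy sketch (hypothetical three-acquisition signatures, not the authors' data) builds a weight vector by Gram-Schmidt orthogonalization, realized here via a QR factorization:

    ```python
    # Each tissue has a "signature" vector of intensities across N acquired MRI
    # images. A linear combination of the images that nulls the interfering
    # signatures and responds only to the desired one recovers the partial
    # volume fraction of the desired feature in each pixel.
    import numpy as np

    # Hypothetical signatures over 3 acquisitions (e.g., T1w, T2w, PD)
    desired = np.array([1.0, 0.2, 0.5])        # feature to segment
    interferers = np.array([[0.8, 1.0, 0.3],   # tissues to suppress
                            [0.1, 0.4, 1.0]])

    # Orthonormal basis of the interfering subspace (Gram-Schmidt via QR)
    Q, _ = np.linalg.qr(interferers.T)
    # Remove from the desired signature its projection onto that subspace
    w = desired - Q @ (Q.T @ desired)
    w /= w @ desired                 # unit response to the desired feature

    # Pixels containing only interferers -> 0, pure desired feature -> 1,
    # partial-volume pixels -> their desired-feature fraction.
    pixels = np.stack([desired, interferers[0], 0.5 * desired + 0.5 * interferers[1]])
    print(pixels @ w)   # approximately [1.0, 0.0, 0.5]
    ```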

  6. The relationship between the limit of dysphagia and the average volume per swallow in patients with Parkinson's disease.

    Science.gov (United States)

    Belo, Luciana Rodrigues; Gomes, Nathália Angelina Costa; Coriolano, Maria das Graças Wanderley de Sales; de Souza, Elizabete Santos; Moura, Danielle Albuquerque Alves; Asano, Amdore Guescel; Lins, Otávio Gomes

    2014-08-01

    The goal of this study was to obtain the limit of dysphagia and the average volume per swallow in patients with mild to moderate Parkinson's disease (PD) but without swallowing complaints and in normal subjects, and to investigate the relationship between them. We hypothesize there is a direct relationship between these two measurements. The study included 10 patients with idiopathic PD and 10 age-matched normal controls. Surface electromyography was recorded over the suprahyoid muscle group. The limit of dysphagia was obtained by offering increasing volumes of water until piecemeal deglutition occurred. The average volume per swallow was calculated by dividing 100 ml of water by the number of swallows used to drink it. The PD group showed a significantly lower dysphagia limit and lower average volume per swallow. There was a significant, moderate direct correlation and association between the two measurements. About half of the PD patients had an abnormally low dysphagia limit and average volume per swallow, although none had spontaneously reported swallowing problems. Both measurements may be used as a quick objective screening test for the early identification of swallowing alterations that may lead to dysphagia in PD patients, but the determination of the average volume per swallow is much quicker and simpler.

  7. Derivation of a volume-averaged neutron diffusion equation; Atomos para el desarrollo de Mexico (Atoms for the development of Mexico)

    Energy Technology Data Exchange (ETDEWEB)

    Vazquez R, R.; Espinosa P, G. [UAM-Iztapalapa, Av. San Rafael Atlixco 186, Col. Vicentina, Mexico D.F. 09340 (Mexico); Morales S, Jaime B. [UNAM, Laboratorio de Analisis en Ingenieria de Reactores Nucleares, Paseo Cuauhnahuac 8532, Jiutepec, Morelos 62550 (Mexico)]. e-mail: rvr@xanum.uam.mx

    2008-07-01

    This paper presents a general theoretical analysis of the problem of neutron motion in a nuclear reactor, where large variations in neutron cross sections normally preclude the use of the classical neutron diffusion equation. A volume-averaged neutron diffusion equation is derived which includes correction terms for diffusion and nuclear reaction effects. A method is presented to determine closure relationships for the volume-averaged neutron diffusion equation (e.g., effective neutron diffusivity). In order to describe the distribution of neutrons in a highly heterogeneous configuration, it was necessary to extend the classical neutron diffusion equation. Thus, the volume-averaged diffusion equation includes two correction factors: the first is related to neutron absorption and the second is a contribution to neutron diffusion; both parameters are related to neutron effects at the interfaces of a heterogeneous configuration. (Author)
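
    A hedged sketch of what such an equation can look like, in generic notation rather than the authors' (the correction factors F_a and F_D merely stand for the interface effects described above):

    ```latex
    % Classical one-group diffusion equation, and a schematic volume-averaged
    % counterpart with correction factors for absorption and diffusion.
    \frac{1}{v}\frac{\partial\phi}{\partial t}
      = \nabla\cdot\left(D\,\nabla\phi\right) - \Sigma_a\,\phi + S
    \quad\longrightarrow\quad
    \frac{1}{v}\frac{\partial\langle\phi\rangle}{\partial t}
      = \nabla\cdot\left(D_{\mathrm{eff}}\,F_D\,\nabla\langle\phi\rangle\right)
        - \Sigma_a\,F_a\,\langle\phi\rangle + \langle S\rangle,
    \qquad
    \langle\phi\rangle \equiv \frac{1}{V}\int_V \phi\,\mathrm{d}V
    ```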

  8. Modelling lidar volume-averaging and its significance to wind turbine wake measurements

    Science.gov (United States)

    Meyer Forsting, A. R.; Troldborg, N.; Borraccino, A.

    2017-05-01

    Lidar velocity measurements need to be interpreted differently than conventional in-situ readings. A commonly ignored factor is "volume-averaging", which refers to lidars not sampling at a single, distinct point but along the entire beam length. This can be detrimental, especially in regions with large velocity gradients like the rotor wake. Hence, an efficient algorithm mimicking lidar flow sampling is presented, which considers both pulsed and continuous-wave lidar weighting functions. The flow field around a 2.3 MW turbine is simulated using Detached Eddy Simulation in combination with an actuator line to test the algorithm and investigate the potential impact of volume-averaging. Volume-averaging is captured accurately even with very few points discretising the lidar beam. The difference between a lidar and a point measurement is greatest at the wake edges and increases from 30% one rotor diameter (D) downstream of the rotor to 60% at 3D.
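
    The sampling idea can be sketched as a weighted line-of-sight average. The Lorentzian and Gaussian weighting functions below are common textbook choices for continuous-wave and pulsed lidars; all parameter values are illustrative, not those of the paper:

    ```python
    # The reported lidar velocity is a weighted average of the line-of-sight
    # velocity over the probe length, not a point sample.
    import numpy as np

    def lidar_sample(u_los, s, focus, weight="cw", half_width=10.0):
        """Weighted average of u_los(s) along beam coordinate s [m]."""
        if weight == "cw":      # continuous-wave: Lorentzian around the focus
            w = 1.0 / (1.0 + ((s - focus) / half_width) ** 2)
        else:                   # pulsed: Gaussian range-gate weighting
            w = np.exp(-0.5 * ((s - focus) / half_width) ** 2)
        return np.sum(w * u_los) / np.sum(w)   # uniform grid assumed

    s = np.linspace(0, 400, 4001)                   # points along the beam
    u = 8.0 - 3.0 * np.exp(-((s - 200) / 30) ** 2)  # synthetic wake-like deficit

    print("point value at focus:", 8.0 - 3.0)
    print("CW lidar estimate:   ", lidar_sample(u, s, focus=200.0))
    # The lidar estimate exceeds the point value: averaging smears the deficit,
    # illustrating the error in regions with large velocity gradients.
    ```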

  9. Studies concerning average volume flow and waterpacking anomalies in thermal-hydraulics codes

    International Nuclear Information System (INIS)

    Lyczkowski, R.W.; Ching, J.T.; Mecham, D.C.

    1977-01-01

    One-dimensional hydrodynamic codes have been observed to exhibit anomalous behavior in the form of non-physical pressure oscillations and spikes. It is our experience that this anomalous behavior can sometimes result in mass depletion, steam table failure and, in severe cases, problem abortion. In addition, these non-physical pressure spikes can result in long running times when small time steps are needed in an attempt to cope with anomalous solution behavior. The source of these pressure spikes has been conjectured to be nonuniform enthalpy distribution, wave reflection off the closed end of a pipe, or abrupt changes in pressure history when the fluid changes from subcooled to two-phase conditions. It is demonstrated in this paper that many of the faults can be attributed to inadequate modeling of the average volume flow and of the sharp fluid density front crossing a junction. General corrective models are difficult to devise since the causes of the problems touch on the very theoretical bases of the differential field equations and the associated solution scheme. For example, the fluid homogeneity assumption and the numerical extrapolation scheme have placed severe restrictions on the capability of a code to adequately model certain physical phenomena involving fluid discontinuities. The need for accurate junction and local properties to describe phenomena internal to a control volume often points to additional lengthy computations that are difficult to justify in terms of computational efficiency. Corrective models that are economical to implement and use are developed. When incorporated into the one-dimensional, homogeneous transient thermal-hydraulic analysis computer code RELAP4, they help mitigate many of the code's difficulties related to average volume flow and water-packing anomalies. An average volume flow model and a critical density model are presented. Computational improvements due to these models are also demonstrated.

  10. Non-invasive assessment of distribution volume ratios and binding potential: tissue heterogeneity and interindividually averaged time-activity curves

    Energy Technology Data Exchange (ETDEWEB)

    Reimold, M.; Mueller-Schauenburg, W.; Dohmen, B.M.; Bares, R. [Department of Nuclear Medicine, University of Tuebingen, Otfried-Mueller-Strasse 14, 72076, Tuebingen (Germany); Becker, G.A. [Nuclear Medicine, University of Leipzig, Leipzig (Germany); Reischl, G. [Radiopharmacy, University of Tuebingen, Tuebingen (Germany)

    2004-04-01

    Due to the stochastic nature of radioactive decay, any measurement of radioactivity concentration requires spatial averaging. In pharmacokinetic analysis of time-activity curves (TAC), such averaging over heterogeneous tissues may introduce a systematic error (heterogeneity error) but may also improve the accuracy and precision of parameter estimation. In addition to spatial averaging (inevitable due to limited scanner resolution and intended in ROI analysis), interindividual averaging may theoretically be beneficial, too. The aim of this study was to investigate the effect of such averaging on the binding potential (BP) calculated with Logan's non-invasive graphical analysis and the "simplified reference tissue method" (SRTM) proposed by Lammertsma and Hume, on the basis of simulated and measured positron emission tomography data: [¹¹C]d-threo-methylphenidate (dMP) and [¹¹C]raclopride (RAC) PET. dMP was not quantified with SRTM since the low k₂ (washout rate constant from the first tissue compartment) introduced a high noise sensitivity. Even for considerably different shapes of TAC (dMP PET in parkinsonian patients and healthy controls, [¹¹C]raclopride in patients with and without haloperidol medication) and a high variance in the rate constants (e.g. simulated standard deviation of K₁ = 25%), the BP obtained from the average TAC was close to the mean BP (<5%). However, unfavourably distributed parameters, especially a correlated large variance in two or more parameters, may lead to larger errors. In Monte Carlo simulations, interindividual averaging before quantification reduced the variance from the SRTM (beyond a critical signal-to-noise ratio) and the bias in Logan's method. Interindividual averaging may further increase accuracy when there is an error term in the reference tissue assumption E = DV₂ − DV′ (DV₂ = distribution volume of the first tissue compartment, DV′ …
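
    For orientation, a small sketch of Logan's non-invasive graphical analysis on synthetic TACs (the k₂ value and curve shapes are made up for illustration):

    ```python
    # After time t*, the plot of Y = int(C_roi)/C_roi(T) against
    # X = (int(C_ref) + C_ref(T)/k2ref) / C_roi(T) becomes linear with slope
    # DVR, and BP = DVR - 1.
    import numpy as np
    from scipy.integrate import cumulative_trapezoid

    t = np.linspace(0.1, 60.0, 120)              # minutes
    c_ref = t * np.exp(-t / 12.0)                # reference-region TAC (synthetic)
    c_roi = 2.0 * t * np.exp(-t / 16.0)          # target-region TAC (synthetic)
    k2ref = 0.15                                 # assumed washout rate [1/min]

    int_ref = cumulative_trapezoid(c_ref, t, initial=0.0)
    int_roi = cumulative_trapezoid(c_roi, t, initial=0.0)

    x = (int_ref + c_ref / k2ref) / c_roi
    y = int_roi / c_roi

    late = t > 20.0                              # linear regime t > t*
    dvr, intercept = np.polyfit(x[late], y[late], 1)
    print(f"DVR = {dvr:.2f}, BP = DVR - 1 = {dvr - 1:.2f}")
    ```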

  11. Greater-than-Class C low-level waste characterization. Appendix I: Impact of concentration averaging on low-level radioactive waste volume projections

    International Nuclear Information System (INIS)

    Tuite, P.; Tuite, K.; O'Kelley, M.; Ely, P.

    1991-08-01

    This study provides a quantitative framework for bounding unpackaged greater-than-Class C low-level radioactive waste types as a function of concentration averaging. The study defines the three concentration averaging scenarios that lead to base, high, and low volumetric projections; identifies those waste types that could be greater-than-Class C under the high-volume, or worst-case, concentration averaging scenario; and quantifies the impact of these scenarios on the identified waste types relative to the base-case scenario. The base-volume scenario was assumed to reflect current requirements at the disposal sites as well as regulatory views. The high-volume scenario was assumed to reflect the most conservative criteria as incorporated in some compact host state requirements. The low-volume scenario was assumed to reflect the 10 CFR Part 61 criteria as applicable both to shallow land burial facilities and to practices that could be employed to reduce the generation of Class C waste types.

  12. Angular finite volume method for solving the multigroup transport equation with piecewise average scattering cross sections

    Energy Technology Data Exchange (ETDEWEB)

    Calloo, A.; Vidal, J.F.; Le Tellier, R.; Rimpault, G., E-mail: ansar.calloo@cea.fr, E-mail: jean-francois.vidal@cea.fr, E-mail: romain.le-tellier@cea.fr, E-mail: gerald.rimpault@cea.fr [CEA, DEN, DER/SPRC/LEPh, Saint-Paul-lez-Durance (France)

    2011-07-01

    This paper deals with the solving of the multigroup integro-differential form of the transport equation for fine energy group structures. In that case, multigroup transfer cross sections display strongly peaked shapes for light scatterers, and the current Legendre polynomial expansion is not well suited to represent them. Furthermore, even with an exact representation of the scattering cross sections, the scattering source in the discrete ordinates method (also known as the Sₙ method), being calculated by sampling the angular flux at given directions, may be wrongly computed due to a lack of angular support for the angular flux. Hence, following the work of Gerts and Matthews, an angular finite volume solver has been developed for 2D Cartesian geometries. It integrates the multigroup transport equation over discrete volume elements obtained by meshing the unit sphere with a product grid over the polar and azimuthal coordinates, and considers the integrated flux per solid angle element. The convergence of this method has been compared to that of the Sₙ method for a highly anisotropic benchmark. Besides, piecewise-average scattering cross sections have been produced for non-bound Hydrogen atoms using a free gas model for thermal neutrons. LWR lattice calculations comparing Legendre representations of the Hydrogen scattering multigroup cross section at various orders against piecewise-average cross sections for this same atom are carried out (while keeping a Legendre representation for all other isotopes). (author)

  13. Angular finite volume method for solving the multigroup transport equation with piecewise average scattering cross sections

    International Nuclear Information System (INIS)

    Calloo, A.; Vidal, J.F.; Le Tellier, R.; Rimpault, G.

    2011-01-01

    This paper deals with the solving of the multigroup integro-differential form of the transport equation for fine energy group structures. In that case, multigroup transfer cross sections display strongly peaked shapes for light scatterers, and the current Legendre polynomial expansion is not well suited to represent them. Furthermore, even with an exact representation of the scattering cross sections, the scattering source in the discrete ordinates method (also known as the Sₙ method), being calculated by sampling the angular flux at given directions, may be wrongly computed due to a lack of angular support for the angular flux. Hence, following the work of Gerts and Matthews, an angular finite volume solver has been developed for 2D Cartesian geometries. It integrates the multigroup transport equation over discrete volume elements obtained by meshing the unit sphere with a product grid over the polar and azimuthal coordinates, and considers the integrated flux per solid angle element. The convergence of this method has been compared to that of the Sₙ method for a highly anisotropic benchmark. Besides, piecewise-average scattering cross sections have been produced for non-bound Hydrogen atoms using a free gas model for thermal neutrons. LWR lattice calculations comparing Legendre representations of the Hydrogen scattering multigroup cross section at various orders against piecewise-average cross sections for this same atom are carried out (while keeping a Legendre representation for all other isotopes). (author)
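
    The representation problem described in these two records can be demonstrated numerically: a truncated Legendre expansion of a sharply peaked kernel oscillates and turns negative, while piecewise bin-averages cannot. The kernel below is a toy stand-in, not an actual hydrogen transfer cross section:

    ```python
    # Truncated Legendre expansion vs. piecewise angular averages for a
    # forward-peaked scattering kernel in mu = cos(theta).
    import numpy as np
    from numpy.polynomial import legendre

    mu = np.linspace(-1, 1, 2001)
    dmu = mu[1] - mu[0]
    kernel = np.exp(-80.0 * (1.0 - mu))      # sharply peaked near mu = 1

    # Legendre expansion truncated at order L: f_l = (2l+1)/2 * int(f P_l dmu)
    L = 7
    coeffs = [(2 * l + 1) / 2.0
              * np.sum(kernel * legendre.legval(mu, np.eye(l + 1)[l])) * dmu
              for l in range(L + 1)]
    recon = legendre.legval(mu, coeffs)
    print("min of order-7 reconstruction:", recon.min())   # negative -> unphysical

    # Piecewise-average representation over 16 equal mu-bins (finite-volume idea)
    edges = np.linspace(-1, 1, 17)
    averages = [kernel[(mu >= a) & (mu < b)].mean()
                for a, b in zip(edges[:-1], edges[1:])]
    print("min of piecewise averages:", min(averages))     # always >= 0
    ```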

  14. Homogenization via formal multiscale asymptotics and volume averaging: How do the two techniques compare?

    KAUST Repository

    Davit, Yohan

    2013-12-01

    A wide variety of techniques have been developed to homogenize transport equations in multiscale and multiphase systems. This has yielded a rich and diverse field, but has also resulted in the emergence of isolated scientific communities and disconnected bodies of literature. Here, our goal is to bridge the gap between formal multiscale asymptotics and the volume averaging theory. We illustrate the methodologies via a simple example application describing a parabolic transport problem and, in so doing, compare their respective advantages/disadvantages from a practical point of view. This paper is also intended as a pedagogical guide and may be viewed as a tutorial for graduate students as we provide historical context, detail subtle points with great care, and reference many fundamental works. © 2013 Elsevier Ltd.
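
    Schematically, the two starting points being bridged are (standard textbook forms, not the paper's notation):

    ```latex
    % Formal multiscale asymptotics: two-scale expansion in the scale ratio
    % epsilon = l/L, with fast variable y = x/epsilon:
    \psi_\varepsilon(x) = \psi_0(x,y) + \varepsilon\,\psi_1(x,y)
                        + \varepsilon^2\,\psi_2(x,y) + \cdots ,
    \qquad y = x/\varepsilon .
    % Volume averaging: an average over an averaging volume V, with the
    % spatial averaging theorem carrying the interfacial geometry A_int:
    \langle\psi\rangle = \frac{1}{V}\int_{V}\psi\,\mathrm{d}V ,
    \qquad
    \langle\nabla\psi\rangle = \nabla\langle\psi\rangle
      + \frac{1}{V}\int_{A_{\mathrm{int}}} \mathbf{n}\,\psi\,\mathrm{d}A .
    ```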

  15. Measurement of average density and relative volumes in a dispersed two-phase fluid

    Science.gov (United States)

    Sreepada, Sastry R.; Rippel, Robert R.

    1992-01-01

    An apparatus and a method are disclosed for measuring the average density and relative volumes in an essentially transparent, dispersed two-phase fluid. A laser beam with a diameter no greater than 1% of the diameter of the bubbles, droplets, or particles of the dispersed phase is directed onto a diffraction grating. A single-order component of the diffracted beam is directed through the two-phase fluid and its refraction is measured. Preferably, the refracted beam exiting the fluid is incident upon an optical filter with linearly varying optical density, and the intensity of the filtered beam is measured. The invention can be combined with other laser-based measurement systems, e.g., laser Doppler anemometry.

  16. The effect of temperature on the average volume of Barkhausen jump on Q235 carbon steel

    Science.gov (United States)

    Guo, Lei; Shu, Di; Yin, Liang; Chen, Juan; Qi, Xin

    2016-06-01

    On the basis of the average volume of Barkhausen jump (AVBJ) v̄ generated by irreversible displacement of magnetic domain walls under an incentive magnetic field acting on ferromagnetic materials, the functional relationship between saturation magnetization M_s and temperature T is employed in this paper to deduce an explicit mathematical expression relating AVBJ v̄ to stress σ, incentive magnetic field H and temperature T. The variation of AVBJ v̄ with temperature T is then investigated using this expression. Moreover, tensile and compressive stress experiments are carried out on Q235 carbon steel specimens at different temperatures to verify our theory. This paper offers a series of theoretical bases to solve the temperature compensation problem of the Barkhausen testing method.

  17. Formula for average energy required to produce a secondary electron in an insulator

    International Nuclear Information System (INIS)

    Xie Ai-Gen; Zhan Yu; Gao Zhi-Yong; Wu Hong-Yan

    2013-01-01

    Based on a simple classical model specifying that the primary electrons interact with the electrons of a lattice through the Coulomb force, and the conclusion that lattice scattering can be ignored, a formula for the average energy required to produce a secondary electron is obtained. On the basis of the energy band structure of an insulator and this formula, the average energy required to produce a secondary electron in an insulator is deduced as a function of the width of the forbidden band (E_g) and the electron affinity (χ). Experimental values and the values calculated with the formula are compared, and the results validate the theory that explains the relationships among E_g, χ, and the average energy, and suggest that the formula is universal on the condition that primary electrons at any energy hit the insulator. (condensed matter: electronic structure, electrical, magnetic, and optical properties)

  18. Instantaneous equations for multiphase flow in porous media without length-scale restrictions using a non-local averaging volume

    International Nuclear Information System (INIS)

    Espinosa-Paredes, Gilberto

    2010-01-01

    The aim of this paper is to propose a framework to obtain a new formulation for multiphase flow conservation equations without length-scale restrictions, based on the non-local form of the volume-averaged conservation equations. The simplification of the local averaging volume of the conservation equations to obtain practical equations is subject to the following length-scale restrictions: d << l << L, where d is the characteristic length of the dispersed phases, l is the characteristic length of the averaging volume, and L is the characteristic length of the physical system. If the foregoing inequality does not hold, or if the scale of the problem of interest is of the order of l, the averaging technique and, therefore, the macroscopic theories of multiphase flow should be modified in order to include appropriate considerations and terms in the corresponding equations. In these cases the local form of the volume-averaged conservation equations is not appropriate to describe the multiphase system. As an example of the conservation equations without length-scale restrictions, a natural-circulation boiling water reactor was considered in order to study the non-local effects on thermal-hydraulic core performance during steady-state and transient behavior, and the results were compared with those of the classic local volume-averaged conservation equations.

  19. Averages, Areas and Volumes; Cambridge Conference on School Mathematics Feasibility Study No. 45.

    Science.gov (United States)

    Cambridge Conference on School Mathematics, Newton, MA.

    Presented is an elementary approach to areas, volumes and other mathematical concepts usually treated in calculus. The approach is based on the idea of the average, and this concept is utilized throughout the report. In the beginning, the average (arithmetic mean) of a set of numbers is considered, along with two properties of the average which often simplify…

  20. SACALCCYL, Calculates the average solid angle subtended by a volume; SACALC2B, Calculates the average solid angle for source-detector geometries

    International Nuclear Information System (INIS)

    Whitcher, Ralph

    2007-01-01

    1 - Description of program or function: SACALC2B calculates the average solid angle subtended by a rectangular or circular detector window to a coaxial or non-coaxial rectangular, circular or point source, including where the source and detector planes are not parallel. SACALCCYL calculates the average solid angle subtended by a cylinder to a rectangular or circular source, plane or thick, at any location and orientation. This is needed, for example, in calculating the intrinsic gamma efficiency of a detector such as a GM tube. The program also calculates the number of hits on the cylinder side and on each end, and the average path length through the detector volume (assuming no scattering or absorption). Point sources can be modelled by using a circular source of zero radius. NEA-1688/03: Documentation has been updated (January 2006). 2 - Methods: The program uses a Monte Carlo method to calculate the average solid angle for source-detector geometries that are difficult to analyse by analytical methods. The values of solid angle are calculated to accuracies of typically better than 0.1%. The calculated values from the Monte Carlo method agree closely with those produced by polygon approximation and numerical integration by Gardner and Verghese, and others. 3 - Restrictions on the complexity of the problem: The program models a circular or rectangular detector in planes that are not necessarily coaxial nor parallel. Point sources can be modelled by using a circular source of zero radius. The sources are assumed to be uniformly distributed. NEA-1688/04: In SACALCCYL, to avoid rounding errors, differences less than 1E-12 are assumed to be zero.
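
    The hit-counting Monte Carlo idea is easy to reproduce in a few lines. This is a sketch of the general approach, not the released SACALC code, and the geometry values are illustrative:

    ```python
    # Sample emission points uniformly on a disk source and isotropic
    # directions; the average solid angle subtended by a coaxial disk detector
    # is 4*pi times the fraction of rays that hit it.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 2_000_000
    r_src, r_det, gap = 5.0, 10.0, 20.0    # mm: source/detector radii, separation

    # Uniform points on the source disk (z = 0)
    r = r_src * np.sqrt(rng.random(n))
    phi = 2 * np.pi * rng.random(n)
    x0, y0 = r * np.cos(phi), r * np.sin(phi)

    # Isotropic directions over the full sphere
    cos_t = rng.uniform(-1.0, 1.0, n)
    sin_t = np.sqrt(1.0 - cos_t**2)
    psi = 2 * np.pi * rng.random(n)

    # Propagate only rays heading toward the detector plane z = gap
    toward = cos_t > 0
    tt = gap / cos_t[toward]
    xh = x0[toward] + tt * sin_t[toward] * np.cos(psi[toward])
    yh = y0[toward] + tt * sin_t[toward] * np.sin(psi[toward])
    hits = np.count_nonzero(xh**2 + yh**2 <= r_det**2)

    omega = 4 * np.pi * hits / n
    # Analytical check for an on-axis point source: 2*pi*(1 - d/sqrt(d^2 + R^2))
    point = 2 * np.pi * (1 - gap / np.hypot(gap, r_det))
    print(f"average solid angle = {omega:.4f} sr (on-axis point source: {point:.4f} sr)")
    ```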

  1. Long-Term Prediction of Emergency Department Revenue and Visitor Volume Using Autoregressive Integrated Moving Average Model

    Directory of Open Access Journals (Sweden)

    Chieh-Fan Chen

    2011-01-01

    This study analyzed meteorological, clinical and economic factors in terms of their effects on monthly ED revenue and visitor volume. Monthly data from January 1, 2005 to September 30, 2009 were analyzed. Spearman correlation and cross-correlation analyses were performed to identify the correlations between each independent variable, ED revenue, and visitor volume. An autoregressive integrated moving average (ARIMA) model was used to quantify the relationship between each independent variable, ED revenue, and visitor volume. The accuracies were evaluated by comparing model forecasts to actual values with mean absolute percentage error. Sensitivity of prediction errors to model training time was also evaluated. The ARIMA models indicated that mean maximum temperature, relative humidity, rainfall, non-trauma, and trauma visits may correlate positively with ED revenue, but mean minimum temperature may correlate negatively with ED revenue. Moreover, mean minimum temperature and stock market index fluctuation may correlate positively with trauma visitor volume. Mean maximum temperature, relative humidity and stock market index fluctuation may correlate positively with non-trauma visitor volume. Mean maximum temperature and relative humidity may correlate positively with pediatric visitor volume, but mean minimum temperature may correlate negatively with pediatric visitor volume. The model also performed well in forecasting revenue and visitor volume.
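
    A hedged sketch of this workflow with statsmodels, on synthetic monthly data; the ARIMA(1, 1, 1) order and the regressor names are assumptions for illustration, not the paper's fitted model:

    ```python
    # Fit an ARIMA model to monthly ED revenue with meteorological regressors,
    # forecast a holdout window, and score with mean absolute percentage error.
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(2)
    idx = pd.date_range("2005-01-01", "2009-09-30", freq="MS")
    df = pd.DataFrame({
        "revenue":  1e6 + 5e4 * rng.standard_normal(len(idx)).cumsum(),
        "max_temp": 25 + 8 * np.sin(2 * np.pi * idx.month / 12),
        "humidity": 70 + 5 * rng.standard_normal(len(idx)),
    }, index=idx)

    train, test = df.iloc[:-6], df.iloc[-6:]
    fit = ARIMA(train["revenue"], exog=train[["max_temp", "humidity"]],
                order=(1, 1, 1)).fit()
    pred = fit.forecast(steps=6, exog=test[["max_temp", "humidity"]])

    mape = np.mean(np.abs((test["revenue"].to_numpy() - pred.to_numpy())
                          / test["revenue"].to_numpy())) * 100
    print(f"MAPE over the 6-month holdout: {mape:.1f}%")
    ```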

  2. Subsurface Contamination Focus Area technical requirements. Volume 1: Requirements summary

    International Nuclear Information System (INIS)

    Nickelson, D.; Nonte, J.; Richardson, J.

    1996-10-01

    This document summarizes functions and requirements for remediation of source term and plume sites identified by the Subsurface Contamination Focus Area. Included are detailed requirements and supporting information for source term and plume containment, stabilization, retrieval, and selective retrieval remedial activities. This information will be useful both to the decision-makers within the Subsurface Contamination Focus Area (SCFA) and to the technology providers who are developing and demonstrating technologies and systems. Requirements are often expressed as graphs or charts, which reflect the site-specific nature of the functions that must be performed. Many of the tradeoff studies associated with cost savings are identified in the text

  3. Electrical method for the measurements of volume averaged electron density and effective coupled power to the plasma bulk

    Science.gov (United States)

    Henault, M.; Wattieaux, G.; Lecas, T.; Renouard, J. P.; Boufendi, L.

    2016-02-01

    Nanoparticles growing or injected in a low-pressure cold plasma generated by a radio-frequency capacitively coupled discharge induce strong modifications in the electrical parameters of both the plasma and the discharge. In this paper, a non-intrusive method, based on the measurement of the plasma impedance, is used to determine the volume-averaged electron density and the effective power coupled to the plasma bulk. Good agreement is found when the results are compared to those given by other well-known and established methods.

  4. MARS code manual volume II: input requirements

    International Nuclear Information System (INIS)

    Chung, Bub Dong; Kim, Kyung Doo; Bae, Sung Won; Jeong, Jae Jun; Lee, Seung Wook; Hwang, Moon Kyu

    2010-02-01

    The Korea Atomic Energy Research Institute (KAERI) conceived and started the development of the MARS code with the main objective of producing a state-of-the-art realistic thermal-hydraulic systems analysis code with multi-dimensional analysis capability. MARS achieves this objective by very tightly integrating the one-dimensional RELAP5/MOD3 with the multi-dimensional COBRA-TF codes. The method of integration of the two codes is based on dynamic link library techniques, and the system pressure equation matrices of both codes are implicitly integrated and solved simultaneously. In addition, the equation of state (EOS) for light water was unified by replacing the EOS of COBRA-TF with that of RELAP5. This input manual provides a complete list of the input required to run MARS. The manual is divided largely into two parts, namely, the one-dimensional part and the multi-dimensional part. The inputs for auxiliary parts, such as minor edit requests and graph formatting inputs, are shared by the two parts, and as such mixed input is possible. The overall structure of the input is modeled on that of RELAP5, and as such the layout of the manual is very similar to that of the RELAP5 manual. This similitude to the RELAP5 input is intentional, as this input scheme will allow minimal modification between the inputs of RELAP5 and MARS3.1. The MARS3.1 development team would like to express its appreciation to the RELAP5 Development Team and the USNRC for making this manual possible.

  5. The disk averaged star formation relation for Local Volume dwarf galaxies

    Science.gov (United States)

    López-Sánchez, Á. R.; Lagos, C. D. P.; Young, T.; Jerjen, H.

    2018-05-01

    Spatially resolved H I studies of dwarf galaxies have provided a wealth of precision data. However, these high-quality, resolved observations are only possible for a handful of dwarf galaxies in the Local Volume, and future H I surveys are unlikely to improve the current situation. We therefore explore a method for estimating the surface density of the atomic gas from global H I parameters, which, conversely, are widely available. We perform empirical tests using galaxies with resolved H I maps, and find that our approximation produces values for the surface density of atomic hydrogen typically within 0.5 dex of the true value. We apply this method to a sample of 147 galaxies drawn from modern near-infrared stellar photometric surveys. With this sample we confirm a strict correlation between the atomic gas surface density and the star formation rate surface density that is vertically offset from the Kennicutt-Schmidt relation by a factor of 10-30 and significantly steeper than the classical N = 1.4 of Kennicutt (1998). We further infer the molecular fraction in this sample of low surface brightness, predominantly dwarf galaxies by assuming that the star formation relationship with molecular gas observed for spiral galaxies also holds in these galaxies, finding a molecular-to-atomic gas mass fraction within the range of 5-15%. Comparison of the data to available models shows that a model in which the thermal pressure balances the vertical gravitational field better captures the shape of the ΣSFR-Σgas relationship. However, such models fail to reproduce the data completely, suggesting that thermal pressure plays an important role in the disks of dwarf galaxies.
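
    The quoted slope comes from a straight-line fit in log-log space, since the Kennicutt-Schmidt relation Σ_SFR = A Σ_gas^N is linear there; a minimal sketch with synthetic points:

    ```python
    # Fit the power-law index N of a Kennicutt-Schmidt-type relation by linear
    # regression in log-log space. The data points are synthetic placeholders.
    import numpy as np

    rng = np.random.default_rng(3)
    log_gas = rng.uniform(0.0, 1.5, 147)               # log10 Sigma_gas [Msun/pc^2]
    log_sfr = -4.0 + 2.5 * log_gas + 0.3 * rng.standard_normal(147)

    N, logA = np.polyfit(log_gas, log_sfr, 1)
    print(f"fitted slope N = {N:.2f} (cf. the classical N = 1.4 of Kennicutt 1998)")
    ```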

  6. Determination of average fission fraction produced by 14 MeV neutrons in assemblies with large volume of depleted uranium

    International Nuclear Information System (INIS)

    Wang Dalun; Li Benci; Wang Xiuchun; Li Yijun; Zhang Shaohua; He Yongwu

    1991-07-01

    The average fission fraction of ²³⁸U caused by 14 MeV neutrons in assemblies with a large volume of depleted uranium has been determined. The measured value of P_f(²³⁸U; R_∞, depleted) at 14 MeV was 0.897 ± 0.036. Measurements were also completed for the neutron flux distribution and the average fission fraction of the ²³⁵U isotope in the depleted uranium sphere. Values of P_f(²³⁸U; R, depleted) have been obtained using a series of uranium spheres. For a sphere of Φ600, P_f(²³⁸U; R_300, depleted) is 0.823 ± 0.041; the density of the depleted uranium assembly is 18.8 g/cm³ and the total weight of the assembly is about 2.8 t.

  7. Positive outcome of average volume-assured pressure support mode of a Respironics V60 Ventilator in acute exacerbation of chronic obstructive pulmonary disease: a case report

    Directory of Open Access Journals (Sweden)

    Okuda Miyuki

    2012-09-01

    obstructive pulmonary disease was complicated by obstructive sleep apnea syndrome. Conclusion In cases such as this, in which patients with severe acute respiratory failure requiring full-time noninvasive positive pressure ventilation therapy also show sleep-disordered breathing, different ventilator settings must be used for waking and sleeping. On such occasions, the Respironics V60 Ventilator, which is equipped with an average volume-assured pressure support mode, may be useful in improving gas exchange and may achieve good patient compliance, because that mode allows ventilation to be maintained by automatically adjusting the inspiratory force to within an acceptable range whenever ventilation falls below target levels.

  8. Hanford analytical services quality assurance requirements documents. Volume 1: Administrative Requirements

    International Nuclear Information System (INIS)

    Hyatt, J.E.

    1997-01-01

    Hanford Analytical Services Quality Assurance Requirements Document (HASQARD) is issued by the Analytical Services Program of the Waste Management Division, US Department of Energy (US DOE), Richland Operations Office (DOE-RL). The HASQARD establishes quality requirements in response to DOE Order 5700.6C (DOE 1991b). The HASQARD is designed to meet the needs of DOE-RL for maintaining a consistent level of quality for sampling and field and laboratory analytical services provided by contractor and commercial field and laboratory analytical operations. The HASQARD serves as the quality basis for all sampling and field/laboratory analytical services provided to DOE-RL through the Analytical Services Program of the Waste Management Division in support of Hanford Site environmental cleanup efforts. This includes work performed by contractor and commercial laboratories and covers radiological and nonradiological analyses. The HASQARD applies to field sampling, field analysis, and research and development activities that support work conducted under the Hanford Federal Facility Agreement and Consent Order (Tri-Party Agreement), regulatory permit applications, and applicable permit requirements described in subsections of this volume. The HASQARD applies to work done to support process chemistry analysis (e.g., ongoing site waste treatment and characterization operations) and research and development projects related to Hanford Site environmental cleanup activities. This ensures a uniform quality umbrella for analytical site activities predicated on the concepts contained in the HASQARD. Using the HASQARD will ensure data of known quality and technical defensibility of the methods used to obtain that data. The HASQARD is made up of four volumes: Volume 1, Administrative Requirements; Volume 2, Sampling Technical Requirements; Volume 3, Field Analytical Technical Requirements; and Volume 4, Laboratory Technical Requirements. Volume 1 describes the administrative requirements.

  9. A stereotaxic, population-averaged T1w ovine brain atlas including cerebral morphology and tissue volumes

    Directory of Open Access Journals (Sweden)

    Björn Nitzsche

    2015-06-01

    Standard stereotaxic reference systems play a key role in human brain studies. Stereotaxic coordinate systems have also been developed for experimental animals, including non-human primates, dogs and rodents. However, they are lacking for other species relevant in experimental neuroscience, including sheep. Here, we present a spatial, unbiased ovine brain template with tissue probability maps (TPM) that offer a detailed stereotaxic reference frame for anatomical features and localization of brain areas, thereby enabling inter-individual and cross-study comparability. Three-dimensional data sets from healthy adult Merino sheep (Ovis orientalis aries; 12 ewes and 26 neutered rams) were acquired on a 1.5T Philips MRI using a T1w sequence. Data were averaged by linear and non-linear registration algorithms. Moreover, animals were subjected to detailed brain volume analysis, including examinations with respect to body weight, age and sex. The created T1w brain template provides an appropriate population-averaged ovine brain anatomy in a spatial standard coordinate system. Additionally, TPM for gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF) classification enabled automatic prior-based tissue segmentation using statistical parametric mapping (SPM). Overall, a positive correlation of GM volume and body weight explained about 15% of the variance of GM, while a positive correlation between WM and age was found. Absolute tissue volume differences were not detected; however, ewes showed significantly more GM per body weight than neutered rams. The created framework, including the spatial brain template and TPM, represents a useful tool for unbiased automatic image preprocessing and morphological characterization in sheep. Therefore, the reported results may serve as a starting point for further experimental and/or translational research aiming at in vivo analysis in this species.

  10. Low Goiter Rate Associated with Small Average Thyroid Volume in Schoolchildren after the Elimination of Iodine Deficiency Disorders.

    Directory of Open Access Journals (Sweden)

    Peihua Wang

    After the implementation of the universal salt iodization (USI) program in 1996, seven cross-sectional school-based surveys have been conducted to monitor iodine deficiency disorders (IDD) among children in eastern China. This study aimed to examine the correlation of the total goiter rate (TGR) with average thyroid volume (Tvol) and urinary iodine concentration (UIC) in Jiangsu province after IDD elimination. Probability-proportional-to-size sampling was applied to select 1,200 children aged 8-10 years old in 30 clusters for each survey in 1995, 1997, 1999, 2001, 2002, 2005, 2009 and 2011. We measured Tvol using ultrasonography in 8,314 children, and measured UIC (4,767 subjects) and salt iodine (10,184 samples) using methods recommended by the World Health Organization. Tvol was used to calculate TGR based on the reference criteria specified for sex and body surface area (BSA). TGR decreased from 55.2% in 1997 to 1.0% in 2009, and geometric means of Tvol decreased from 3.63 mL to 1.33 mL, while the UIC increased from 83 μg/L in 1995 to 407 μg/L in 1999, then decreased to 243 μg/L in 2005, and then increased to 345 μg/L in 2011. In the low goiter population, a UIC above 300 μg/L was associated with a smaller average Tvol in children. After IDD elimination in Jiangsu province in 2001, lower TGR was associated with smaller average Tvol. Average Tvol was more sensitive than TGR in detecting the fluctuation of UIC. A UIC of 300 μg/L may be defined as a critical value for population-level iodine status monitoring.

  11. Effect of the averaging volume and algorithm on the in situ electric field for uniform electric- and magnetic-field exposures

    International Nuclear Information System (INIS)

    Hirata, Akimasa; Takano, Yukinori; Fujiwara, Osamu; Kamimura, Yoshitsugu

    2010-01-01

    The present study quantified the volume-averaged in situ electric field in nerve tissues of anatomically based numeric Japanese male and female models for exposure to extremely low-frequency electric and magnetic fields. A quasi-static finite-difference time-domain method was applied to analyze this problem. The motivation of our investigation is that the dependence of the electric field induced in nerve tissue on the averaging volume/distance is not clear, while a cubical volume of 5 × 5 × 5 mm³ or a straight-line segment of 5 mm is suggested in some documents. The influence of non-nerve tissue surrounding nerve tissue is also discussed by considering three algorithms for calculating the averaged in situ electric field in nerve tissue. The computational results obtained herein reveal that the volume-averaged electric field in nerve tissue decreases with the averaging volume. In addition, the 99th percentile value of the volume-averaged in situ electric field in nerve tissue is more stable than the maximal value for different averaging volumes. When non-nerve tissue surrounding nerve tissue is included in the averaging volume, the resultant in situ electric fields are less dependent on the averaging volume than in the case excluding non-nerve tissue. In situ electric fields averaged over a distance of 5 mm were comparable to or larger than those for a 5 × 5 × 5 mm³ cube, depending on the algorithm, the nerve tissue considered and the exposure scenario. (note)
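
    A minimal sketch of the cube-averaging metric (synthetic field and mask; 1-mm voxels assumed, so a 5-voxel box filter realizes the 5 × 5 × 5 mm³ average):

    ```python
    # Average |E| over 5 x 5 x 5 mm^3 cubes and take the 99th percentile over a
    # tissue mask -- the kind of metric the paper examines.
    import numpy as np
    from scipy.ndimage import uniform_filter

    rng = np.random.default_rng(4)
    e_field = rng.lognormal(mean=-3.0, sigma=0.5, size=(60, 60, 60))  # V/m, 1-mm voxels
    nerve_mask = np.zeros_like(e_field, dtype=bool)
    nerve_mask[20:40, 20:40, 20:40] = True

    # 5-voxel cubic average; near the mask boundary this mixes in surrounding
    # voxels, echoing the paper's point about including non-nerve tissue
    e_avg = uniform_filter(e_field, size=5, mode="nearest")

    print("max voxel field in nerve:        ", e_field[nerve_mask].max())
    print("99th %ile of cube-averaged field:", np.percentile(e_avg[nerve_mask], 99))
    ```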

  12. An upscaled two-equation model of transport in porous media through unsteady-state closure of volume averaged formulations

    Science.gov (United States)

    Chaynikov, S.; Porta, G.; Riva, M.; Guadagnini, A.

    2012-04-01

    We focus on a theoretical analysis of nonreactive solute transport in porous media through the volume averaging technique. Darcy-scale transport models based on continuum formulations typically include large-scale dispersive processes which are embedded in a pore-scale advection-diffusion equation through a Fickian analogy. This formulation has been extensively questioned in the literature due to its inability to depict observed solute breakthrough curves in diverse settings, ranging from the laboratory to the field scale. The heterogeneity of the pore-scale velocity field is one of the key sources of uncertainty giving rise to anomalous (non-Fickian) dispersion in macro-scale porous systems. Some of the models which are employed to interpret observed non-Fickian solute behavior make use of a continuum formulation of the porous system which assumes a two-region description and includes a bimodal velocity distribution. A first class of these models comprises the so-called "mobile-immobile" conceptualization, where convective and dispersive transport mechanisms are considered to dominate within a high-velocity region (mobile zone), while convective effects are neglected in a low-velocity region (immobile zone). The mass exchange between these two regions is assumed to be controlled by a diffusive process and is macroscopically described by a first-order kinetic. An extension of these ideas is the two-equation "mobile-mobile" model, where both transport mechanisms are taken into account in each region and a first-order mass exchange between regions is employed. Here, we provide an analytical derivation of two-region "mobile-mobile" meso-scale models through a rigorous upscaling of the pore-scale advection-diffusion equation. Among the available upscaling methodologies, we employ the Volume Averaging technique. In this approach, the heterogeneous porous medium is supposed to be pseudo-periodic, and can be represented through a (spatially) periodic unit cell.
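
    In standard textbook notation (not necessarily the authors'), the two model classes read:

    ```latex
    % Mobile-immobile: advection and dispersion act only in the mobile region,
    % with first-order mass exchange between the regions:
    \theta_m \frac{\partial c_m}{\partial t}
      + \theta_{im} \frac{\partial c_{im}}{\partial t}
      = \theta_m D \frac{\partial^2 c_m}{\partial x^2}
      - \theta_m v \frac{\partial c_m}{\partial x},
    \qquad
    \theta_{im} \frac{\partial c_{im}}{\partial t} = \alpha\,(c_m - c_{im}).
    % The "mobile-mobile" variant retains both mechanisms in each region:
    \theta_i \frac{\partial c_i}{\partial t}
      = \theta_i D_i \frac{\partial^2 c_i}{\partial x^2}
      - \theta_i v_i \frac{\partial c_i}{\partial x}
      + (-1)^{i}\,\alpha\,(c_1 - c_2), \qquad i = 1,2 .
    ```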

  13. 40 CFR 63.2854 - How do I determine the weighted average volume fraction of HAP in the actual solvent loss?

    Science.gov (United States)

    2010-07-01

    40 CFR 63.2854 (Protection of Environment; National Emission Standards for Hazardous Air Pollutants for Source Categories; National Emission Standards for Hazardous Air Pollutants: Solvent…) — How do I determine the weighted average volume fraction of HAP in the actual solvent loss? (a) This section describes the information and procedures used to determine the weighted average volume fraction of HAP in the actual solvent loss…

  14. NMR measurement of dynamic nuclear polarization: a technique to test the quality of its volume average obtained with different NMR coil configurations

    International Nuclear Information System (INIS)

    Zhao, W.H.; Cox, S.F.J.

    1980-07-01

    In the NMR measurement of dynamic nuclear polarization, a volume average is obtained where the contribution from different parts of the sample is weighted according to the local intensity of the RF field component perpendicular to the large static field. A method of mapping this quantity is described. A small metallic object whose geometry is chosen to perturb the appropriate RF component is scanned through the region to be occupied by the sample. The response of the phase angle of the impedance of a tuned circuit comprising the NMR coil gives a direct measurement of the local weighting factor. The correlation between theory and experiment was obtained by using a circular coil. The measuring method, checked in this way, was then used to investigate the field profiles of practical coils which are required to be rectangular for a proposed experimental neutron polarizing filter. This method can be used to evaluate other practical RF coils. (author)

  15. European utility requirements (EUR) volume 3 assessment for AP1000

    International Nuclear Information System (INIS)

    Saiu, G.; Demetri, K.J.

    2005-01-01

    The EUR (European Utility Requirements) Volume 3 is intended to report the Plant Description, the Compliance Assessment to EUR Volumes 1 and 2 and, finally, the Specific Requirements for each specific Nuclear Power Plant Design considered by the EUR. Five subsets of EUR Volume 3, based on EUR Revision B, are already published, all of which are next-generation plant designs being developed for Europe beyond 2000. They include: 1) EP1000 - Passive Pressurized Light Water Reactor (3-Loop, 1000 MWe); 2) EPR - Evolutionary Pressurized Light Water Reactor (1500 MWe); 3) BWR90/90+ - Evolutionary Boiling Water Reactor (1400 MWe); 4) ABWR - Evolutionary Boiling Water Reactor (1400 MWe); 5) SWR 1000 - Boiling Water Reactor with Passive Features (1000 MWe). In addition, the following subsets are currently being developed: 1) AP1000 - Passive Pressurized Light Water Reactor (2-Loop, 1117 MWe); 2) VVER AES 92 - Pressurized Water Reactor with Passive Features (1000 MWe). The purpose of this paper is to provide an overview of the program, started in January 2004 with the EUR group, to prepare an EUR Volume 3 Subset for the AP1000 nuclear plant design. The AP1000 EUR compliance assessment, to be performed against EUR Revision C requirements, is an important step in the evaluation of the AP1000 design for application in Europe. The AP1000 compliance assessment is making full use of AP1000 licensing documentation, EPP Phase 2 design activities and the EP1000 EUR detailed compliance assessment. As of today, nearly all of the EUR chapters have been discussed within the EUR Coordination Group. Based on the results of the compliance assessment, it can be stated that the AP1000 design shows a good level of compliance with the EUR Revision C requirements. Nevertheless, the compliance assessment has highlighted areas where the AP1000 plant deviates from the EUR. The EPP design group has selected the most significant ones for detailed studies to quantify the degree of compliance.

  16. The average number of alpha-particle hits to the cell nucleus required to eradicate a tumour cell population

    International Nuclear Information System (INIS)

    Roeske, John C; Stinchcomb, Thomas G

    2006-01-01

    Alpha-particle emitters are currently being considered for the treatment of micrometastatic disease. Based on in vitro studies, it has been speculated that only a few alpha-particle hits to the cell nucleus are lethal. However, such estimates do not consider the stochastic variations in the number of alpha-particle hits, in the energy deposited, or in the cell survival process itself. Using a tumour control probability (TCP) model for alpha-particle emitters, we derive an estimate of the average number of hits to the cell nucleus required to provide a high probability of eradicating a tumour cell population. In simulation studies, our results demonstrate that the average number of hits required to achieve a 90% TCP for 10⁴ clonogenic cells ranges from 18 to 108. Cells with large nuclei, high radiosensitivities and alpha-particle emissions occurring primarily in the nuclei tended to require more hits. As the clinical implementation of alpha-particle emitters is considered, this type of analysis may be useful in interpreting clinical results and in designing treatment strategies to achieve a favourable therapeutic outcome. (note)

  17. Global (volume-averaged) model of inductively coupled chlorine plasma : influence of Cl wall recombination and external heating on continuous and pulse-modulated plasmas

    NARCIS (Netherlands)

    Kemaneci, E.H.; Carbone, E.A.D.; Booth, J.P.; Graef, W.A.A.D.; Dijk, van J.; Kroesen, G.M.W.

    An inductively coupled radio-frequency plasma in chlorine is investigated via a global (volume-averaged) model, both in continuous and square wave modulated power input modes. After the power is switched off (in a pulsed mode) an ion–ion plasma appears. In order to model this phenomenon, a novel

  18. The impact of hospital market structure on patient volume, average length of stay, and the cost of care.

    Science.gov (United States)

    Robinson, J C; Luft, H S

    1985-12-01

    A variety of recent proposals rely heavily on market forces as a means of controlling hospital cost inflation. Sceptics argue, however, that increased competition might lead to cost-increasing acquisitions of specialized clinical services and other forms of non-price competition as means of attracting physicians and patients. Using data from hospitals in 1972 we analyzed the impact of market structure on average hospital costs, measured in terms of both cost per patient and cost per patient day. Under the retrospective reimbursement system in place at the time, hospitals in more competitive environments exhibited significantly higher costs of production than did those in less competitive environments.

  19. Application of method of volume averaging coupled with time resolved PIV to determine transport characteristics of turbulent flows in porous bed

    Science.gov (United States)

    Patil, Vishal; Liburdy, James

    2012-11-01

    Turbulent porous media flows are encountered in catalytic bed reactors and heat exchangers. Dispersion and mixing properties of these flows play an essential role in efficiency and performance. In an effort to understand these flows, pore-scale time-resolved PIV measurements in a refractive-index-matched porous bed were made. Pore Reynolds numbers, based on hydraulic diameter and pore-average velocity, were varied from 400 to 4000. Jet-like flows and recirculation regions associated with large-scale structures were found to exist. Coherent vortical structures which convect at approximately 0.8 times the pore-average velocity were identified. These different flow regions exhibited different turbulent characteristics and hence contributed unequally to the global transport properties of the bed. The heterogeneity present within a pore, and also from pore to pore, can be accounted for in estimating transport properties using the method of volume averaging. Eddy viscosity maps and mean velocity field maps, both obtained from PIV measurements, were used along with the method of volume averaging to predict the dispersion tensor versus Reynolds number. Asymptotic values of dispersion compare well to existing correlations. The role of molecular diffusion was explored by varying the Schmidt number; molecular diffusion was found to play an important role in tracer transport, especially in recirculation regions. Funding by NSF grant 0933857, Particulate and Multiphase Processing.

  20. SU-D-213-04: Accounting for Volume Averaging and Material Composition Effects in An Ionization Chamber Array for Patient Specific QA

    International Nuclear Information System (INIS)

    Fugal, M; McDonald, D; Jacqmin, D; Koch, N; Ellis, A; Peng, J; Ashenafi, M; Vanek, K

    2015-01-01

    Purpose: This study explores novel methods to address two significant challenges affecting measurement of patient-specific quality assurance (QA) with IBA's Matrixx Evolution™ ionization chamber array. First, dose calculation algorithms often struggle to accurately determine dose to the chamber array due to CT artifact and algorithm limitations. Second, finite chamber size and volume averaging effects cause additional deviation from the calculated dose. Methods: QA measurements were taken with the Matrixx positioned on the treatment table in a solid-water Multi-Cube™ phantom. To reduce the effect of CT artifact, the Matrixx CT image set was masked with appropriate materials and densities. Individual ionization chambers were masked as air, while the high-Z electronic backplane and remaining solid-water material were masked as aluminum and water, respectively. Dose calculation was done using Varian's Acuros XB™ (V11) algorithm, which is capable of predicting dose more accurately in non-biologic materials due to its consideration of each material's atomic properties. Finally, the exported TPS dose was processed using an in-house algorithm (MATLAB) to assign the volume-averaged TPS dose to each element of a corresponding 2-D matrix. This matrix was used for comparison with the measured dose. Square fields at regularly spaced gantry angles, as well as selected patient plans, were analyzed. Results: Analyzed plans showed improved agreement, with the average gamma passing rate increasing from 94 to 98%. Correction factors necessary for chamber angular dependence were reduced by 67% compared to factors measured previously, indicating that previously measured factors corrected for dose calculation errors in addition to true chamber angular dependence. Conclusion: By comparing volume-averaged dose, calculated with a capable dose engine, on a phantom masked with correct materials and densities, QA results obtained with the Matrixx Evolution™ can be significantly…
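
    A sketch of the volume-averaging step described in the Methods, using a box filter as a stand-in for the in-house MATLAB algorithm; the grid spacing and chamber footprint are assumptions, not Matrixx specifications:

    ```python
    # Replace the TPS point dose with its mean over each ionization chamber's
    # footprint before comparing with the measured 2-D array.
    import numpy as np
    from scipy.ndimage import uniform_filter

    rng = np.random.default_rng(5)
    # Smooth synthetic dose plane on an assumed 1-mm grid
    tps_dose = uniform_filter(rng.random((260, 260)), size=15)

    chamber_mm = 5      # assumed averaging width per chamber
    pitch_mm = 10       # assumed detector spacing
    averaged = uniform_filter(tps_dose, size=chamber_mm, mode="nearest")

    # Sample the volume-averaged plane at the chamber centers -> 2-D matrix
    centers = np.arange(pitch_mm // 2, 260, pitch_mm)
    tps_matrix = averaged[np.ix_(centers, centers)]
    print("matrix for comparison with measurement:", tps_matrix.shape)
    ```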

  1. Satellite Power Systems (SPS) concept definition study. Volume 6: SPS technology requirements and verification

    Science.gov (United States)

    Hanley, G.

    1978-01-01

    Volume 6 of the SPS Concept Definition Study is presented and also incorporates results of NASA/MSFC in-house effort. This volume includes a supporting research and technology summary. Other volumes of the final report that provide additional detail are as follows: (1) Executive Summary; (2) SPS System Requirements; (3) SPS Concept Evolution; (4) SPS Point Design Definition; (5) Transportation and Operations Analysis; and Volume 7, SPS Program Plan and Economic Analysis.

  2. Subsurface contamination focus area technical requirements. Volume II

    International Nuclear Information System (INIS)

    Nickelson, D.; Nonte, J.; Richardson, J.

    1996-10-01

    This is our vision, a vision that replaces the ad hoc or "delphi" method, which is to get a group of "experts" together and make decisions based upon opinion. To fulfill our vision for the Subsurface Contaminants Focus Area (SCFA), it is necessary to generate technical requirements or performance measures which are quantitative or measurable. Decisions can be supported if they are based upon requirements or performance measures which can be traced to the origin (documented) and are verifiable, i.e., prove that requirements are satisfied by inspection (show me), demonstration, analysis, monitoring, or test. The data from which these requirements are derived must also reflect the characteristics of individual landfills or plumes so that technologies that meet these requirements will necessarily work at specific sites. Other subjective factors, such as stakeholder concerns, do influence decisions. Using the requirements as a basic approach, the SCFA can depend upon objective criteria to help influence the areas of subjectivity, like the stakeholders. In the past, traceable requirements were not generated, probably because it seemed too difficult to do so. There are risks that the requirements approach will not be accepted because it is new and represents a departure from the historical paradigm.

  3. Subsurface contamination focus area technical requirements. Volume II

    Energy Technology Data Exchange (ETDEWEB)

    Nickelson, D.; Nonte, J.; Richardson, J.

    1996-10-01

    This is our vision, a vision that replaces the ad hoc or "delphi" method, which is to get a group of "experts" together and make decisions based upon opinion. To fulfill our vision for the Subsurface Contaminants Focus Area (SCFA), it is necessary to generate technical requirements or performance measures which are quantitative or measurable. Decisions can be supported if they are based upon requirements or performance measures which can be traced to the origin (documented) and are verifiable, i.e., prove that requirements are satisfied by inspection (show me), demonstration, analysis, monitoring, or test. The data from which these requirements are derived must also reflect the characteristics of individual landfills or plumes so that technologies that meet these requirements will necessarily work at specific sites. Other subjective factors, such as stakeholder concerns, do influence decisions. Using the requirements as a basic approach, the SCFA can depend upon objective criteria to help influence the areas of subjectivity, like the stakeholders. In the past, traceable requirements were not generated, probably because it seemed too difficult to do so. There are risks that the requirements approach will not be accepted because it is new and represents a departure from the historical paradigm.

  4. Tank Farms Technical Safety Requirements. Volume 1 and 2

    International Nuclear Information System (INIS)

    CASH, R.J.

    2000-01-01

    The Technical Safety Requirements (TSRs) define the acceptable conditions, safe boundaries, basis thereof, and controls to ensure safe operation during authorized activities, for facilities within the scope of the Tank Waste Remediation System (TWRS) Final Safety Analysis Report (FSAR)

  5. State Averages

    Data.gov (United States)

    U.S. Department of Health & Human Services — A list of a variety of averages for each state or territory as well as the national average, including each quality measure, staffing, fine amount and number of...

  6. Space Tug avionics definition study. Volume 2: Avionics functional requirements

    Science.gov (United States)

    1975-01-01

    Flight and ground operational phases of the tug/shuttle system are analyzed to determine the general avionics support functions that are needed during each of the mission phases and sub-phases. Each of these general support functions is then expanded into specific avionics system requirements, which are then allocated to the appropriate avionics subsystems. This process is then repeated at the next lower level of detail where these subsystem requirements are allocated to each of the major components that comprise a subsystem.

  7. A highly detailed FEM volume conductor model based on the ICBM152 average head template for EEG source imaging and TCS targeting.

    Science.gov (United States)

    Haufe, Stefan; Huang, Yu; Parra, Lucas C

    2015-08-01

    In electroencephalographic (EEG) source imaging as well as in transcranial current stimulation (TCS), it is common to model the head using either three-shell boundary element (BEM) or more accurate finite element (FEM) volume conductor models. Since building FEMs is computationally demanding and labor intensive, they are often extensively reused as templates even for subjects with mismatching anatomies. BEMs can in principle be used to efficiently build individual volume conductor models; however, the limiting factor for such individualization is the high acquisition cost of structural magnetic resonance images. Here, we build a highly detailed (0.5 mm³ resolution, 6-tissue-type segmentation, 231 electrodes) FEM based on the ICBM152 template, a nonlinear average of 152 adult human heads, which we call ICBM-NY. We show that, through more realistic electrical modeling, our model is similarly accurate as individual BEMs. Moreover, through using an unbiased population average, our model is also more accurate than FEMs built from mismatching individual anatomies. Our model is made available in Matlab format.

  8. Improvement of internal tumor volumes of non-small cell lung cancer patients for radiation treatment planning using interpolated average CT in PET/CT.

    Directory of Open Access Journals (Sweden)

    Yao-Ching Wang

    Respiratory motion causes uncertainties in tumor edges on either computed tomography (CT) or positron emission tomography (PET) images and causes misalignment when registering PET and CT images. This phenomenon may cause radiation oncologists to delineate tumor volume inaccurately in radiotherapy treatment planning. The purpose of this study was to analyze radiology applications using interpolated average CT (IACT) as attenuation correction (AC) to diminish the occurrence of this scenario. Thirteen non-small cell lung cancer patients were recruited for the present comparison study. Each patient had full-inspiration and full-expiration CT images and free-breathing PET images from an integrated PET/CT scan. IACT for AC in PET(IACT) was used to reduce the PET/CT misalignment. The standardized uptake value (SUV) correction with a low radiation dose was applied, and its tumor volume delineation was compared to those from HCT/PET(HCT). The misalignment between the PET(IACT) and IACT was reduced when compared to the difference between PET(HCT) and HCT. The range of tumor motion was from 4 to 17 mm in the patient cohort. For HCT and PET(HCT), correction was from 72% to 91%, while for IACT and PET(IACT), correction was from 73% to 93% (*p<0.0001). The maximum and minimum differences in SUVmax were 0.18% and 27.27% for PET(HCT) and PET(IACT), respectively. The largest percentage differences in the tumor volumes between HCT/PET and IACT/PET were observed in tumors located in the lowest lobe of the lung. Internal tumor volume defined by functional information using IACT/PET(IACT) fusion images for lung cancer would reduce the inaccuracy of tumor delineation in radiation therapy planning.

  9. Space transfer vehicle concepts and requirements, volume 2, book 1

    Science.gov (United States)

    1991-01-01

    The objective of the systems engineering task was to develop and implement an approach that would generate the required study products as defined by program directives. This product list included a set of system and subsystem requirements, a complete set of optimized trade studies and analyses resulting in a recommended system configuration, and the definition of an integrated system/technology and advanced development growth path. A primary ingredient in the approach was the TQM philosophy stressing job quality from the inception. Included throughout the Systems Engineering, Programmatics, Concepts, Flight Design, and Technology sections are data supporting the original objectives as well as supplemental information resulting from program activities. The primary result of the analyses and studies was the recommendation of a single propulsion stage Lunar Transportation System (LTS) configuration that supports several different operations scenarios with minor element changes. This concept has the potential to support two additional scenarios with complex element changes. The space based LTS concept consists of three primary configurations--Piloted, Reusable Cargo, and Expendable Cargo.

  10. Advanced light water reactor utility requirements document: Volume 1--ALWR policy and summary of top-tier requirements

    International Nuclear Information System (INIS)

    Anon.

    1990-01-01

    The U.S. utilities are leading an industry-wide effort to establish the technical foundation for the design of the Advanced Light Water Reactor (ALWR). This effort, the ALWR Program, is being managed for the U.S. electric utility industry by the Electric Power Research Institute (EPRI) and includes participation and sponsorship of several international utility companies and close cooperation with the U.S. Department of Energy (DOE). The cornerstone of the ALWR Program is a set of utility design requirements which are contained in the ALWR Requirements Document. The purpose of the Requirements Document is to present a clear, complete statement of utility desires for their next generation of nuclear plants. The Requirements Document covers the entire plant up to the grid interface. It therefore is the basis for an integrated plant design, i.e., nuclear steam supply system and balance of plant, and it emphasizes those areas which are most important to the objective of achieving an ALWR which is excellent with respect to safety, performance, constructibility, and economics. The document applies to both Pressurized Water Reactors (PWRs) and Boiling Water Reactors (BWRs). The Requirements Document is organized in three volumes. Volume 1 summarizes ALWR Program policy statements and top-tier requirements. The top-tier design requirements are categorized by major functions, including safety and investment protection, performance, and design process and constructibility. There is also a set of general design requirements, such as simplification and proven technology, which apply broadly to the ALWR design, and a set of economic goals for the ALWR program. The top-tier design requirements are described further in Volume 1 and are formally invoked as requirements in Volumes 2 and 3.

  11. Modelling of an intermediate pressure microwave oxygen discharge reactor: from stationary two-dimensional to time-dependent global (volume-averaged) plasma models

    International Nuclear Information System (INIS)

    Kemaneci, Efe; Graef, Wouter; Rahimi, Sara; Van Dijk, Jan; Kroesen, Gerrit; Carbone, Emile; Jimenez-Diaz, Manuel

    2015-01-01

    A microwave-induced oxygen plasma is simulated using both stationary and time-resolved modelling strategies. The stationary model is spatially resolved and it is self-consistently coupled to the microwaves (Jimenez-Diaz et al 2012 J. Phys. D: Appl. Phys. 45 335204), whereas the time-resolved description is based on a global (volume-averaged) model (Kemaneci et al 2014 Plasma Sources Sci. Technol. 23 045002). We observe agreement of the global model data with several published measurements of microwave-induced oxygen plasmas in both continuous and modulated power inputs. Properties of the microwave plasma reactor are investigated and corresponding simulation data based on two distinct models shows agreement on the common parameters. The role of the square wave modulated power input is also investigated within the time-resolved description. (paper)

  12. Effects of Respiration-Averaged Computed Tomography on Positron Emission Tomography/Computed Tomography Quantification and its Potential Impact on Gross Tumor Volume Delineation

    International Nuclear Information System (INIS)

    Chi, Pai-Chun Melinda; Mawlawi, Osama; Luo Dershan; Liao Zhongxing; Macapinlac, Homer A.; Pan Tinsu

    2008-01-01

    Purpose: Patient respiratory motion can cause image artifacts in positron emission tomography (PET) from PET/computed tomography (CT) and change the quantification of PET for thoracic patients. In this study, respiration-averaged CT (ACT) was used to remove the artifacts, and the changes in standardized uptake value (SUV) and gross tumor volume (GTV) were quantified. Methods and Materials: We incorporated the ACT acquisition in a PET/CT session for 216 lung patients, generating two PET/CT data sets for each patient. The first data set (PET(HCT)/HCT) contained the clinical PET/CT in which PET was attenuation corrected with a helical CT (HCT). The second data set (PET(ACT)/ACT) contained the PET/CT in which PET was corrected with ACT. We quantified the differences between the two data sets in image alignment, maximum SUV (SUVmax), and GTV contours. Results: Of the patients, 68% demonstrated respiratory artifacts in the PET(HCT), and for all patients the artifact was removed or reduced in the corresponding PET(ACT). The impact of respiration artifact was the worst for lesions less than 50 cm³ and located below the dome of the diaphragm. For lesions in this group, the mean SUVmax difference, GTV volume change, shift in GTV centroid location, and concordance index were 21%, 154%, 2.4 mm, and 0.61, respectively. Conclusion: This study benchmarked the differences between the PET data with and without artifacts. It is important to pay attention to the potential existence of these artifacts during GTV contouring, as such artifacts may increase the uncertainties in the lesion volume and the centroid location.
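
    As a rough illustration of how a respiration-averaged CT is formed, the sketch below simply takes the voxel-wise mean of the 4D-CT phase images. The phase count and HU values are hypothetical, and the clinical pipeline involves additional resampling and registration steps not shown here.

```python
import numpy as np

def respiration_averaged_ct(phases):
    """Voxel-wise mean of the 4D-CT phase images; the result approximates the
    time-averaged attenuation seen by a free-breathing PET acquisition."""
    return np.mean(np.stack(phases, axis=0), axis=0)

# Hypothetical: ten phase volumes of a 4D CT (HU values drift across phases).
phases = [np.full((4, 4, 4), -700.0 + 10 * k) for k in range(10)]
act = respiration_averaged_ct(phases)
print(act[0, 0, 0])  # -655.0, the mean attenuation over the breathing cycle
```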

  13. Experimental validation of heterogeneity-corrected dose-volume prescription on respiratory-averaged CT images in stereotactic body radiotherapy for moving tumors

    International Nuclear Information System (INIS)

    Nakamura, Mitsuhiro; Miyabe, Yuki; Matsuo, Yukinori; Kamomae, Takeshi; Nakata, Manabu; Yano, Shinsuke; Sawada, Akira; Mizowaki, Takashi; Hiraoka, Masahiro

    2012-01-01

    The purpose of this study was to experimentally assess the validity of heterogeneity-corrected dose-volume prescription on respiratory-averaged computed tomography (RACT) images in stereotactic body radiotherapy (SBRT) for moving tumors. Four-dimensional computed tomography (CT) data were acquired while a dynamic anthropomorphic thorax phantom with a solitary target moved. Motion pattern was based on cos(t) with a constant respiration period of 4.0 sec along the longitudinal axis of the CT couch. The extent of motion (A1) was set in the range of 0.0–12.0 mm at 3.0-mm intervals. Treatment planning with the heterogeneity-corrected dose-volume prescription was designed on RACT images. A new commercially available Monte Carlo algorithm of a well-commissioned 6-MV photon beam was used for dose calculation. Dosimetric effects of intrafractional tumor motion were then investigated experimentally under the same conditions as the 4D CT simulation using the dynamic anthropomorphic thorax phantom, films, and an ionization chamber. The passing rate of the γ index was 98.18%, with criteria of 3 mm/3%. The dose error between the planned and the measured isocenter dose under moving conditions was within ±0.7%. From the dose-area histograms on the film, the mean ± standard deviation of the dose covering 100% of the cross section of the target was 102.32 ± 1.20% (range, 100.59–103.49%). By contrast, the irradiated areas receiving more than 95% dose for A1 = 12 mm were 1.46 and 1.33 times larger than those for A1 = 0 mm in the coronal and sagittal planes, respectively. This phantom study demonstrated that the cross section of the target received 100% dose under moving conditions in both the coronal and sagittal planes, suggesting that the heterogeneity-corrected dose-volume prescription on RACT images is acceptable in SBRT for moving tumors.

  14. PET imaging of thin objects: measuring the effects of positron range and partial-volume averaging in the leaf of Nicotiana tabacum

    Energy Technology Data Exchange (ETDEWEB)

    Alexoff, David L., E-mail: alexoff@bnl.gov; Dewey, Stephen L.; Vaska, Paul; Krishnamoorthy, Srilalan; Ferrieri, Richard; Schueller, Michael; Schlyer, David J.; Fowler, Joanna S.

    2011-02-15

    Introduction: PET imaging in plants is receiving increased interest as a new strategy to measure plant responses to environmental stimuli and as a tool for phenotyping genetically engineered plants. PET imaging in plants, however, poses new challenges. In particular, the leaves of most plants are so thin that a large fraction of positrons emitted from PET isotopes (¹⁸F, ¹¹C, ¹³N) escape, while even state-of-the-art PET cameras have significant partial-volume errors for such thin objects. Although these limitations are acknowledged by researchers, little data have been published on them. Methods: Here we measured the magnitude and distribution of escaping positrons from the leaf of Nicotiana tabacum for the radionuclides ¹⁸F, ¹¹C and ¹³N using a commercial small-animal PET scanner. Imaging results were compared to radionuclide concentrations measured from dissection and counting and to a Monte Carlo simulation using GATE (Geant4 Application for Tomographic Emission). Results: Simulated and experimentally determined escape fractions were consistent. The fractions of positrons (mean±S.D.) escaping the leaf parenchyma were measured to be 59±1.1%, 64±4.4% and 67±1.9% for ¹⁸F, ¹¹C and ¹³N, respectively. Escape fractions were lower in thicker leaf areas like the midrib. Partial-volume averaging underestimated activity concentrations in the leaf blade by a factor of 10 to 15. Conclusions: The foregoing effects combine to yield PET images whose contrast does not reflect the actual activity concentrations. These errors can be largely corrected by integrating activity along the PET axis perpendicular to the leaf surface, including detection of escaped positrons, and calculating concentration using a measured leaf thickness.
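
    The correction suggested in the conclusions, integrating activity along the axis perpendicular to the leaf and dividing by the measured thickness, can be sketched as below. The array sizes, voxel pitch, and activity values are hypothetical, and the handling of escaped positrons outside the slab is omitted.

```python
import numpy as np

def leaf_concentration(pet_slab, voxel_mm, leaf_thickness_mm):
    """Integrate apparent activity concentration through the slab of voxels
    spanning the leaf (recapturing activity smeared by positron range and
    partial-volume averaging), then divide by measured leaf thickness."""
    areal_activity = pet_slab.sum(axis=0) * voxel_mm   # Bq/mm^2 per pixel column
    return areal_activity / leaf_thickness_mm          # Bq/mm^3 within the leaf

# Hypothetical: activity smeared over a 4 mm slab (8 voxels at 0.5 mm pitch)
# around a 0.3 mm thick leaf.
slab = np.full((8, 16, 16), 2.0)  # apparent concentration, Bq/mm^3
print(leaf_concentration(slab, voxel_mm=0.5, leaf_thickness_mm=0.3)[0, 0])  # ~26.7
```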

  15. Detailed requirements document for the Interactive Financial Management System (IFMS), volume 1

    Science.gov (United States)

    Dodson, D. B.

    1975-01-01

    The detailed requirements for phase 1 (online fund control, subauthorization accounting, and accounts receivable functional capabilities) of the Interactive Financial Management System (IFMS) are described. This includes information on the following: systems requirements, performance requirements, test requirements, and production implementation. Most of the work is centered on systems requirements, and includes discussions on the following processes: resources authority, allotment, primary work authorization, reimbursable order acceptance, purchase request, obligation, cost accrual, cost distribution, disbursement, subauthorization performance, travel, accounts receivable, payroll, property, edit table maintenance, end-of-year, backup input. Other subjects covered include: external systems interfaces, general inquiries, general report requirements, communication requirements, and miscellaneous. Subjects covered under performance requirements include: response time, processing volumes, system reliability, and accuracy. Under test requirements come test data sources, general test approach, and acceptance criteria. Under production implementation come data base establishment, operational stages, and operational requirements.

  16. Design requirements for SRB production control system. Volume 2: System requirements and conceptual description

    Science.gov (United States)

    1981-01-01

    In the development of the business system for the SRB automated production control system, special attention had to be paid to the unique environment posed by the space shuttle. The issues posed by this environment, and the means by which they were addressed, are reviewed. The change in management philosophy which will be required as NASA switches from one-of-a-kind launches to multiple launches is discussed. The implications of the assembly process on the business system are described. These issues include multiple missions, multiple locations and facilities, maintenance and refurbishment, multiple sources, and multiple contractors. The implications of these aspects on the automated production control system are reviewed, including an assessment of the six major subsystems, as well as four other subsystems. Some general system requirements which flow through the entire business system are described.

  17. TH-E-BRE-03: A Novel Method to Account for Ion Chamber Volume Averaging Effect in a Commercial Treatment Planning System Through Convolution

    International Nuclear Information System (INIS)

    Barraclough, B; Li, J; Liu, C; Yan, G

    2014-01-01

    Purpose: Fourier-based deconvolution approaches used to eliminate the ion chamber volume averaging effect (VAE) suffer from measurement noise. This work aims to investigate a novel method to account for ion chamber VAE through convolution in a commercial treatment planning system (TPS). Methods: Beam profiles of various field sizes and depths of an Elekta Synergy were collected with a finite-size ion chamber (CC13) to derive a clinically acceptable beam model for a commercial TPS (Pinnacle³), following the vendor-recommended modeling process. The TPS-calculated profiles were then externally convolved with a Gaussian function representing the chamber (σ = chamber radius). The agreement between the convolved profiles and measured profiles was evaluated with a one-dimensional Gamma analysis (1%/1 mm) as an objective function for optimization. TPS beam model parameters for focal and extra-focal sources were optimized and loaded back into the TPS for new calculation. This process was repeated until the objective function converged using a Simplex optimization method. Planar doses of 30 IMRT beams were calculated with both the clinical and the re-optimized beam models and compared with MapCHECK™ measurements to evaluate the new beam model. Results: After re-optimization, the two orthogonal source sizes for the focal source reduced from 0.20/0.16 cm to 0.01/0.01 cm, which were the minimal allowed values in Pinnacle. No significant change in the parameters for the extra-focal source was observed. With the re-optimized beam model, the average Gamma passing rate for the 30 IMRT beams increased from 92.1% to 99.5% with a 3%/3 mm criterion and from 82.6% to 97.2% with a 2%/2 mm criterion. Conclusion: We proposed a novel method to account for ion chamber VAE in a commercial TPS through convolution. The re-optimized beam model, with VAE accounted for through a reliable and easy-to-implement convolution and optimization approach, outperforms the original beam model in standard IMRT QA.
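
    A minimal sketch of the central convolution step, blurring a calculated profile with a Gaussian whose width equals the chamber radius, is shown below in Python. The profile shape and the ~3 mm CC13 radius are illustrative assumptions; the study's Gamma-analysis objective and optimization loop around this step are omitted.

```python
import numpy as np

def convolve_with_chamber(profile, spacing_mm, sigma_mm):
    """Blur a calculated beam profile with a normalized Gaussian detector
    kernel (sigma taken as the chamber radius) to mimic volume averaging."""
    x = np.arange(-5 * sigma_mm, 5 * sigma_mm + spacing_mm, spacing_mm)
    kernel = np.exp(-x**2 / (2 * sigma_mm**2))
    kernel /= kernel.sum()
    return np.convolve(profile, kernel, mode="same")

xs = np.arange(-80, 80, 0.5)                         # crossline positions, mm
profile = 50 * (1 - np.tanh((np.abs(xs) - 50) / 2))  # idealized 10 cm field edge
blurred = convolve_with_chamber(profile, spacing_mm=0.5, sigma_mm=3.0)

i = np.argmin(np.abs(xs - 53.0))    # a point in the penumbra
print(profile[i], blurred[i])       # chamber blurring raises the penumbra tail
```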

  18. The Gift Code User Manual. Volume I. Introduction and Input Requirements

    Science.gov (United States)

    1975-07-01

    The GIFT code is a FORTRAN computer program. The basic input to the GIFT code is data called ...

  19. Cell-Averaged discretization for incompressible Navier-Stokes with embedded boundaries and locally refined Cartesian meshes: a high-order finite volume approach

    Science.gov (United States)

    Bhalla, Amneet Pal Singh; Johansen, Hans; Graves, Dan; Martin, Dan; Colella, Phillip; Applied Numerical Algorithms Group Team

    2017-11-01

    We present a consistent cell-averaged discretization for incompressible Navier-Stokes equations on complex domains using embedded boundaries. The embedded boundary is allowed to freely cut the locally-refined background Cartesian grid. An implicit-function representation is used for the embedded boundary, which allows us to convert the required geometric moments in the Taylor series expansion (up to arbitrary order) of polynomials into an algebraic problem in lower dimensions. The computed geometric moments are then used to construct stencils for various operators like the Laplacian, divergence, gradient, etc., by solving a least-squares system locally. We also construct the inter-level data-transfer operators like prolongation and restriction for multigrid solvers using the same least-squares system approach. This allows us to retain high order of accuracy near coarse-fine interfaces and near embedded boundaries. Canonical problems like Taylor-Green vortex flow and flow past bluff bodies will be presented to demonstrate the proposed method. U.S. Department of Energy, Office of Science, ASCR (Award Number DE-AC02-05CH11231).

  20. Technical Note: Impact of the geometry dependence of the ion chamber detector response function on a convolution-based method to address the volume averaging effect

    Energy Technology Data Exchange (ETDEWEB)

    Barraclough, Brendan; Lebron, Sharon [Department of Radiation Oncology, University of Florida, Gainesville, Florida 32608 and J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, Florida 32611 (United States); Li, Jonathan G.; Fan, Qiyong; Liu, Chihray; Yan, Guanghua, E-mail: yangua@shands.ufl.edu [Department of Radiation Oncology, University of Florida, Gainesville, Florida 32608 (United States)

    2016-05-15

    Purpose: To investigate the geometry dependence of the detector response function (DRF) of three commonly used scanning ionization chambers and its impact on a convolution-based method to address the volume averaging effect (VAE). Methods: A convolution-based approach has been proposed recently to address the ionization chamber VAE. It simulates the VAE in the treatment planning system (TPS) by iteratively convolving the calculated beam profiles with the DRF while optimizing the beam model. Since the convolved and the measured profiles are subject to the same VAE, the calculated profiles match the implicit “real” ones when the optimization converges. Three DRFs (Gaussian, Lorentzian, and parabolic function) were used for three ionization chambers (CC04, CC13, and SNC125c) in this study. Geometry dependent/independent DRFs were obtained by minimizing the difference between the ionization chamber-measured profiles and the diode-measured profiles convolved with the DRFs. These DRFs were used to obtain eighteen beam models for a commercial TPS. Accuracy of the beam models were evaluated by assessing the 20%–80% penumbra width difference (PWD) between the computed and diode-measured beam profiles. Results: The convolution-based approach was found to be effective for all three ionization chambers with significant improvement for all beam models. Up to 17% geometry dependence of the three DRFs was observed for the studied ionization chambers. With geometry dependent DRFs, the PWD was within 0.80 mm for the parabolic function and CC04 combination and within 0.50 mm for other combinations; with geometry independent DRFs, the PWD was within 1.00 mm for all cases. When using the Gaussian function as the DRF, accounting for geometry dependence led to marginal improvement (PWD < 0.20 mm) for CC04; the improvement ranged from 0.38 to 0.65 mm for CC13; for SNC125c, the improvement was slightly above 0.50 mm. Conclusions: Although all three DRFs were found adequate to
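
    The DRF-fitting idea, choosing the kernel that maps the (near point-like) diode-measured profile onto the chamber-measured one, can be sketched for a Gaussian DRF as a one-parameter least-squares problem. The profiles below are synthetic and the function names hypothetical; the paper's parabolic and Lorentzian DRFs and geometry-dependent fits follow the same pattern with different kernels.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def gaussian_kernel(sigma, spacing):
    x = np.arange(-4 * sigma, 4 * sigma + spacing, spacing)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def fit_drf_sigma(diode_profile, chamber_profile, spacing_mm):
    """Find the Gaussian DRF width whose convolution with the diode profile
    best reproduces the ion chamber profile (least squares)."""
    def cost(sigma):
        blurred = np.convolve(diode_profile, gaussian_kernel(sigma, spacing_mm),
                              mode="same")
        return np.sum((blurred - chamber_profile) ** 2)
    return minimize_scalar(cost, bounds=(0.5, 6.0), method="bounded").x

# Synthetic test: chamber profile is the diode profile blurred with sigma = 2.4 mm.
xs = np.arange(-40, 40, 0.5)
diode = 1 / (1 + np.exp((np.abs(xs) - 25) / 1.5))
chamber = np.convolve(diode, gaussian_kernel(2.4, 0.5), mode="same")
print(fit_drf_sigma(diode, chamber, 0.5))  # recovers ~2.4
```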

  1. Functional Requirements for Onboard Management of Space Shuttle Consumables. Volume 2

    Science.gov (United States)

    Graf, P. J.; Herwig, H. A.; Neel, L. W.

    1973-01-01

    This report documents the results of the study "Functional Requirements for Onboard Management of Space Shuttle Consumables." The study was conducted for the Mission Planning and Analysis Division of the NASA Lyndon B. Johnson Space Center, Houston, Texas, between 3 July 1972 and 16 November 1973. The overall study program objective was two-fold. The first objective was to define a generalized consumable management concept which is applicable to advanced spacecraft. The second objective was to develop a specific consumables management concept for the Space Shuttle vehicle and to generate the functional requirements for the onboard portion of that concept. Consumables management is the process of controlling or influencing the usage of expendable materials involved in vehicle subsystem operation. The report consists of two volumes. Volume I presents a description of the study activities related to general approaches for developing consumable management, concepts for advanced spacecraft applications, and functional requirements for a Shuttle consumables management concept. Volume II presents a detailed description of the onboard consumables management concept proposed for use on the Space Shuttle.

  2. Clinical validity of the estimated energy requirement and the average protein requirement for nutritional status change and wound healing in older patients with pressure ulcers: A multicenter prospective cohort study.

    Science.gov (United States)

    Iizaka, Shinji; Kaitani, Toshiko; Nakagami, Gojiro; Sugama, Junko; Sanada, Hiromi

    2015-11-01

    Adequate nutritional intake is essential for pressure ulcer healing. Recently, the estimated energy requirement (30 kcal/kg) and the average protein requirement (0.95 g/kg) necessary to maintain metabolic balance have been reported. The purpose was to evaluate the clinical validity of these requirements in older hospitalized patients with pressure ulcers by assessing nutritional status and wound healing. This multicenter prospective study, carried out as a secondary analysis of a clinical trial, included 194 patients with pressure ulcers aged ≥65 years from 29 institutions. Nutritional status, including anthropometry and biochemical tests, and wound status, assessed by a structured severity tool, were evaluated over 3 weeks. Energy and protein intake were determined from medical records on a typical day and dichotomized by meeting the estimated average requirement. Longitudinal data were analyzed with a multivariate mixed-effects model. Meeting the energy requirement was associated with changes in weight (P …); the requirements were thus clinically validated for prevention of nutritional decline and of impaired healing of deep pressure ulcers. © 2014 Japan Geriatrics Society.
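
    The reported requirements translate into simple per-patient daily targets. The sketch below hard-codes the abstract's 30 kcal/kg and 0.95 g/kg figures; the function name and the example body weight are hypothetical.

```python
def pressure_ulcer_targets(weight_kg, energy_kcal_per_kg=30.0, protein_g_per_kg=0.95):
    """Daily energy and protein targets from the reported requirements
    (30 kcal/kg and 0.95 g/kg, per the abstract)."""
    return weight_kg * energy_kcal_per_kg, weight_kg * protein_g_per_kg

energy, protein = pressure_ulcer_targets(50.0)
print(f"{energy:.0f} kcal/day, {protein:.1f} g protein/day")  # 1500 kcal, 47.5 g
```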

  3. High-level waste storage tank farms/242-A evaporator Standards/Requirements Identification Document (S/RID), Volume 5

    International Nuclear Information System (INIS)

    1994-04-01

    The High-Level Waste Storage Tank Farms/242-A Evaporator Standards/Requirements Identification Document (S/RID) is contained in multiple volumes. This document (Volume 5) outlines the standards and requirements for the Fire Protection and Packaging and Transportation sections

  4. High-level waste storage tank farms/242-A evaporator Standards/Requirements Identification Document (S/RID), Volume 4

    Energy Technology Data Exchange (ETDEWEB)

    1994-04-01

    The High-Level Waste Storage Tank Farms/242-A Evaporator Standards/Requirements Identification Document (S/RID) is contained in multiple volumes. This document (Volume 4) presents the standards and requirements for the following sections: Radiation Protection and Operations.

  5. High level waste storage tank farms/242-A evaporator Standards/Requirements Identification Document (S/RID), Volume 6

    Energy Technology Data Exchange (ETDEWEB)

    1994-04-01

    The High-Level Waste Storage Tank Farms/242-A Evaporator Standards/Requirements Identification Document (S/RID) is contained in multiple volumes. This document (Volume 6) outlines the standards and requirements for the sections on: Environmental Restoration and Waste Management, Research and Development and Experimental Activities, and Nuclear Safety.

  6. High-level waste storage tank farms/242-A evaporator Standards/Requirements Identification Document (S/RID), Volume 2

    Energy Technology Data Exchange (ETDEWEB)

    1994-04-01

    The High-Level Waste Storage Tank Farms/242-A Evaporator Standards/Requirements Document (S/RID) is contained in multiple volumes. This document (Volume 2) presents the standards and requirements for the following sections: Quality Assurance, Training and Qualification, Emergency Planning and Preparedness, and Construction.

  7. High-level waste storage tank farms/242-A evaporator Standards/Requirements Identification Document (S/RID), Volume 2

    International Nuclear Information System (INIS)

    1994-04-01

    The High-Level Waste Storage Tank Farms/242-A Evaporator Standards/Requirements Document (S/RID) is contained in multiple volumes. This document (Volume 2) presents the standards and requirements for the following sections: Quality Assurance, Training and Qualification, Emergency Planning and Preparedness, and Construction

  8. High-level waste storage tank farms/242-A evaporator Standards/Requirements Identification Document (S/RID), Volume 4

    International Nuclear Information System (INIS)

    1994-04-01

    The High-Level Waste Storage Tank Farms/242-A Evaporator Standards/Requirements Identification Document (S/RID) is contained in multiple volumes. This document (Volume 4) presents the standards and requirements for the following sections: Radiation Protection and Operations

  9. High level waste storage tank farms/242-A evaporator Standards/Requirements Identification Document (S/RID), Volume 6

    International Nuclear Information System (INIS)

    1994-04-01

    The High-Level Waste Storage Tank Farms/242-A Evaporator Standards/Requirements Identification Document (S/RID) is contained in multiple volumes. This document (Volume 6) outlines the standards and requirements for the sections on: Environmental Restoration and Waste Management, Research and Development and Experimental Activities, and Nuclear Safety

  10. Validation of a novel protocol for calculating estimated energy requirements and average daily physical activity ratio for the US population: 2005-2006.

    Science.gov (United States)

    Archer, Edward; Hand, Gregory A; Hébert, James R; Lau, Erica Y; Wang, Xuewen; Shook, Robin P; Fayad, Raja; Lavie, Carl J; Blair, Steven N

    2013-12-01

    To validate the PAR protocol, a novel method for calculating population-level estimated energy requirements (EERs) and average physical activity ratio (APAR), in a nationally representative sample of US adults. Estimates of EER and APAR values were calculated via a factorial equation from a nationally representative sample of 2597 adults aged 20 to 74 years (US National Health and Nutrition Examination Survey; data collected between January 1, 2005, and December 31, 2006). Validation of the PAR protocol-derived EER (EER(PAR)) values was performed via comparison with values from the Institute of Medicine EER equations (EER(IOM)). The correlation between EER(PAR) and EER(IOM) was high (0.98; P …) … men to 148 kcal/d (5.7% higher) in obese women. The 2005-2006 EERs for the US population were 2940 kcal/d for men and 2275 kcal/d for women and ranged from 3230 kcal/d in obese (BMI ≥30) men to 2026 kcal/d in normal weight (BMI …) women. There were significant inverse relationships between APAR and both obesity and age. For men and women, the APAR values were 1.53 and 1.52, respectively. Obese men and women had lower APAR values than normal weight individuals (P=.023 and P=.015, respectively) [corrected], and younger individuals had higher APAR values than older individuals (P …) … physical activity and health. Copyright © 2013 Mayo Foundation for Medical Education and Research. Published by Elsevier Inc. All rights reserved.
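
    APAR in this protocol is, in essence, total energy expenditure expressed as a multiple of basal metabolic rate. A minimal sketch follows; the BMR value is an assumed illustrative input, since the study derives its terms from a factorial equation not reproduced here.

```python
def apar(eer_kcal_per_day, bmr_kcal_per_day):
    """Average physical activity ratio: energy expenditure as a multiple of BMR."""
    return eer_kcal_per_day / bmr_kcal_per_day

# 2940 kcal/d is the reported male EER; the 1920 kcal/d BMR is assumed here.
print(round(apar(2940, 1920), 2))  # ~1.53, matching the reported male APAR
```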

  11. The Partial Molar Volume and Thermal Expansivity of Fe2O3 in Alkali Silicate Liquids: Evidence for the Average Coordination of Fe3+

    Science.gov (United States)

    Liu, Q.; Lange, R.

    2003-12-01

    Ferric iron is an important component in magmatic liquids, especially in those formed at subduction zones. Although it has long been known that Fe3+ occurs in four-, five- and six-fold coordination in crystalline compounds, only recently have all three Fe3+ coordination sites been confirmed in silicate glasses utilizing XANES spectroscopy at the Fe K-edge (Farges et al., 2003). Because the density of a magmatic liquid is largely determined by the geometrical packing of its network-forming cations (e.g., Si4+, Al3+, Ti4+, and Fe3+), the capacity of Fe3+ to undergo composition-induced coordination change affects the partial molar volume of the Fe2O3 component, which must be known to calculate how the ferric-ferrous ratio in magmatic liquids changes with pressure. Previous work has shown that the partial molar volume of Fe2O3 (VFe2O3) varies between calcic and sodic silicate melts (Mo et al., 1982; Dingwell and Brearley, 1988; Dingwell et al., 1988). The purpose of this study is to extend the data set in order to search for systematic variations in VFe2O3 with melt composition. High-temperature (867-1534 °C) density measurements were performed on eleven liquids in the Na2O-Fe2O3-FeO-SiO2 (NFS) system and five liquids in the K2O-Fe2O3-FeO-SiO2 (KFS) system using the Pt double-bob Archimedean method. The ferric-ferrous ratio in the sodic and potassic liquids at each temperature of density measurement was calculated from the experimentally calibrated models of Lange and Carmichael (1989) and Tangeman et al. (2001), respectively. Compositions range (in mol%) from 4-18 Fe2O3, 0-3 FeO, 12-39 Na2O, 25-37 K2O, and 43-78 SiO2. Our density data are consistent with those of Dingwell et al. (1988) on similar sodic liquids. Our results indicate that for all five KFS liquids and for eight of the eleven NFS liquids, the partial molar volume of the Fe2O3 component is a constant (41.57 ± 0.14 cm³/mol) and exhibits zero thermal expansivity (similar to that for the SiO2 component). This value

  12. Average Revisited in Context

    Science.gov (United States)

    Watson, Jane; Chick, Helen

    2012-01-01

    This paper analyses the responses of 247 middle school students to items requiring the concept of average in three different contexts: a city's weather reported in maximum daily temperature, the number of children in a family, and the price of houses. The mixed but overall disappointing performance on the six items in the three contexts indicates…

  13. Relationship between partial and average atomic volumes of components in Au-Ni alloys%Au-Ni合金中组元的平均原子体积和偏摩尔体积的关系

    Institute of Scientific and Technical Information of China (English)

    谢佑卿

    2011-01-01

    In the framework of the systematic science of alloys, average molar property (volume and potential energy) functions of disordered alloys were established. From these functions were derived the average molar volume function, the partial molar volume functions, derivative functions with respect to composition, a general equation relating the partial and average molar properties of the components, a difference equation and constraining equations between partial and average molar properties, and a general Gibbs-Duhem formula. It was proved that the partial molar properties calculated from the various combinative functions of the average molar properties of alloys are equal, but that in general the partial molar properties are not equal to the average molar properties of a given component; that is, the partial molar properties cannot represent the corresponding molar properties of the component. All the equations and functions established in this work were verified by calculating the partial and average atomic volumes of the components, as well as the average atomic volumes of the alloys, in the Au-Ni system.
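
    For a binary system such as Au-Ni, the standard relations connecting partial molar volumes to the average (mean) molar volume, consistent with the Gibbs-Duhem constraint mentioned above, can be written compactly. This is a textbook statement of the method of intercepts, not the paper's full function set:

```latex
% Binary solution (x_1 + x_2 = 1): partial molar volumes recovered from the
% average molar volume V_m(x_2), and the Gibbs--Duhem constraint they satisfy.
\bar{V}_1 = V_m - x_2 \frac{\mathrm{d}V_m}{\mathrm{d}x_2}, \qquad
\bar{V}_2 = V_m + (1 - x_2)\,\frac{\mathrm{d}V_m}{\mathrm{d}x_2}, \qquad
x_1\,\mathrm{d}\bar{V}_1 + x_2\,\mathrm{d}\bar{V}_2 = 0 .
```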

  14. Development of an Inline Dry Powder Inhaler That Requires Low Air Volume.

    Science.gov (United States)

    Farkas, Dale; Hindle, Michael; Longest, P Worth

    2017-12-20

    Inline dry powder inhalers (DPIs) are actuated by an external air source and have distinct advantages for delivering aerosols to infants and children, and to individuals with compromised lung function or who require ventilator support. However, current inline DPIs either perform poorly, are difficult to operate, and/or require large volumes (∼1 L) of air. The objective of this study was to develop and characterize a new inline DPI for aerosolizing spray-dried formulations with powder masses of 10 mg and higher using a dispersion air volume of 10 mL per actuation that is easy to load (capsule-based) and operate. Primary features of the new low air volume (LV) DPIs are fixed hollow capillaries that both pierce the capsule and provide a continuous flow path for air and aerosol passing through the device. Two different configurations were evaluated: a straight-through (ST) device, with the inlet and outlet capillaries on opposite ends of the capsule, and a single-sided (SS) device, with both the inlet and outlet capillaries on the same side of the capsule. The devices were operated with five actuations of a 10 mL air syringe using an albuterol sulfate (AS) excipient-enhanced growth (EEG) formulation. Device emptying and aerosol characteristics were evaluated for multiple device outlet configurations. Each device had specific advantages. The best-case ST device produced the smallest aerosol [mean mass median aerodynamic diameter (MMAD) = 1.57 μm; fine particle fraction <5 μm (FPF<5μm) = 95.2%], but the mean emitted dose (ED) was 61.9%. The best-case SS device improved ED (84.8%) but produced a larger aerosol (MMAD = 2.13 μm; FPF<5μm = 89.3%) that was marginally higher than the initial deaggregation target. The new LV-DPIs produced an acceptable high-quality aerosol with only 10 mL of dispersion air per actuation and were easy to load and operate. This performance should enable application in high and low flow

  15. 77 FR 28281 - Withdrawal of Revocation of TSCA Section 4 Testing Requirements for One High Production Volume...

    Science.gov (United States)

    2012-05-14

    ... Withdrawal of Revocation of TSCA Section 4 Testing Requirements for One High Production Volume Chemical...]amino]- (CAS No. 1324-76-1), also known as C.I. Pigment Blue 61. EPA received an adverse comment regarding C.I. Pigment Blue 61. This document withdraws the revocation of testing requirements for C.I...

  16. Geostatistical approach for assessing soil volumes requiring remediation: validation using lead-polluted soils underlying a former smelting works.

    Science.gov (United States)

    Demougeot-Renard, Helene; De Fouquet, Chantal

    2004-10-01

    Assessing the volume of soil requiring remediation and the accuracy of this assessment constitutes an essential step in polluted site management. If this remediation volume is not properly assessed, misclassification may lead to both environmental risks (polluted soils may not be remediated) and financial risks (unexpected discovery of polluted soils may generate additional remediation costs). To minimize such risks, this paper proposes a geostatistical methodology based on stochastic simulations that allows the remediation volume and its uncertainty to be assessed using investigation data. The methodology thoroughly reproduces the conditions in which the soils are classified and extracted at the remediation stage. The validity of the approach is tested by applying it to the data collected during the investigation phase of a former lead smelting works and by comparing the results with the volume that was actually remediated. This real remediated volume was composed of all the remediation units that were classified as polluted after systematic sampling and analysis during the clean-up stage. The volume estimated from the 75 samples collected during site investigation slightly overestimates (5.3% relative error) the remediated volume deduced from 212 remediation units. Furthermore, the real volume falls within the range of uncertainty predicted using the proposed methodology.
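
    A stripped-down sketch of the classification step in such a stochastic-simulation workflow is given below. The simulation ensemble, threshold, unit volume, and probability cutoff are hypothetical placeholders; the geostatistical machinery that produces the conditional simulations is not shown.

```python
import numpy as np

def remediation_volume(simulations, threshold, unit_volume_m3, p_cut=0.5):
    """Classify each remediation unit as polluted when its probability of
    exceeding the regulatory threshold (across simulations) exceeds p_cut,
    then sum the unit volumes. 'simulations' has shape (n_sims, n_units)."""
    p_exceed = (simulations > threshold).mean(axis=0)
    polluted = p_exceed > p_cut
    return polluted.sum() * unit_volume_m3, p_exceed

# Hypothetical: 200 conditional simulations of lead (mg/kg) over 50 units.
rng = np.random.default_rng(1)
sims = rng.lognormal(mean=5.5, sigma=0.8, size=(200, 50))
volume, p = remediation_volume(sims, threshold=300.0, unit_volume_m3=10.0)
print(volume)  # estimated volume of soil to remediate, m^3
```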

  17. Measuring Blood Glucose Concentrations in Photometric Glucometers Requiring Very Small Sample Volumes.

    Science.gov (United States)

    Demitri, Nevine; Zoubir, Abdelhak M

    2017-01-01

    Glucometers present an important self-monitoring tool for diabetes patients and, therefore, must exhibit high accuracy as well as good usability features. Based on an invasive photometric measurement principle that drastically reduces the volume of the blood sample needed from the patient, we present a framework that is capable of dealing with small blood samples while maintaining the required accuracy. The framework consists of two major parts: 1) image segmentation; and 2) convergence detection. Step 1 is based on iterative mode-seeking methods to estimate the intensity value of the region of interest. We present several variations of these methods and give theoretical proofs of their convergence. Our approach is able to deal with changes in the number and position of clusters without any prior knowledge. Furthermore, we propose a method based on sparse approximation to decrease the computational load while maintaining accuracy. Step 2 is achieved by employing temporal tracking and prediction, thereby decreasing the measurement time and thus improving usability. Our framework is tested on several real datasets with different characteristics. We show that we are able to estimate the underlying glucose concentration from much smaller blood samples than is currently state of the art, with sufficient accuracy according to the most recent ISO standards, and to reduce measurement time significantly compared to state-of-the-art methods.
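
    The iterative mode-seeking idea in step 1, together with a simple convergence test, can be sketched in one dimension as below. This is a mean-shift-style illustration with an assumed bandwidth and synthetic intensities, not the authors' exact algorithm.

```python
import numpy as np

def seek_mode(samples, start, bandwidth, tol=1e-4, max_iter=100):
    """Iterative mode seeking on 1-D intensity samples: move the estimate to
    the mean of samples within the bandwidth window and stop when the shift
    falls below tol (convergence detection)."""
    x = float(start)
    for _ in range(max_iter):
        window = samples[np.abs(samples - x) <= bandwidth]
        if window.size == 0:
            break
        new_x = window.mean()
        if abs(new_x - x) < tol:  # converged
            return new_x
        x = new_x
    return x

# Hypothetical: pixel intensities from the reaction region vs. background.
rng = np.random.default_rng(2)
samples = np.concatenate([rng.normal(0.30, 0.02, 500), rng.normal(0.70, 0.02, 300)])
print(seek_mode(samples, start=0.65, bandwidth=0.1))  # converges near 0.70
```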

  18. Average-energy games

    Directory of Open Access Journals (Sweden)

    Patricia Bouyer

    2015-09-01

    Two-player quantitative zero-sum games provide a natural framework to synthesize controllers with performance guarantees for reactive systems within an uncontrollable environment. Classical settings include mean-payoff games, where the objective is to optimize the long-run average gain per action, and energy games, where the system has to avoid running out of energy. We study average-energy games, where the goal is to optimize the long-run average of the accumulated energy. We show that this objective arises naturally in several applications, and that it yields interesting connections with previous concepts in the literature. We prove that deciding the winner in such games is in NP ∩ coNP and at least as hard as solving mean-payoff games, and we establish that memoryless strategies suffice to win. We also consider the case where the system has to minimize the average-energy while maintaining the accumulated energy within predefined bounds at all times: this corresponds to operating with a finite-capacity storage for energy. We give results for one-player and two-player games, and establish complexity bounds and memory requirements.
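
    The average-energy payoff itself is easy to state in code: for a finite play it is the mean of the running energy levels. A tiny sketch with hypothetical edge weights follows; solving the game, which is the hard part, is not shown.

```python
def average_energy(weights):
    """Average-energy payoff of a finite play: the mean of the accumulated
    energy level after each step."""
    energy, total = 0, 0
    for w in weights:
        energy += w      # running accumulated energy
        total += energy
    return total / len(weights)

print(average_energy([2, -1, 3, -2]))  # levels 2, 1, 4, 2 -> payoff 2.25
```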

  19. Caltrans Average Annual Daily Traffic Volumes (2004)

    Data.gov (United States)

    California Environmental Health Tracking Program — [ from http://www.ehib.org/cma/topic.jsp?topic_key=79 ] Traffic exhaust pollutants include compounds such as carbon monoxide, nitrogen oxides, particulates (fine...

  20. Lagrangian averaging with geodesic mean.

    Science.gov (United States)

    Oliver, Marcel

    2017-11-01

    This paper revisits the derivation of the Lagrangian averaged Euler (LAE), or Euler-α, equations in the light of an intrinsic definition of the averaged flow map as the geodesic mean on the volume-preserving diffeomorphism group. Under the additional assumption that first-order fluctuations are statistically isotropic and transported by the mean flow as a vector field, averaging of the kinetic energy Lagrangian of an ideal fluid yields the LAE Lagrangian. The derivation presented here assumes a Euclidean spatial domain without boundaries.

  1. Fusion Engineering Device. Volume V. Technology R and D requirements for construction

    International Nuclear Information System (INIS)

    1981-10-01

    This volume covers the following areas: (1) nuclear systems, (2) auxiliary heating, (3) magnet systems, (4) remote maintenance, (5) fueling, (6) diagnostics, instrumentation, information and control, and (7) safety and environment

  2. Atlas-Based Segmentation Improves Consistency and Decreases Time Required for Contouring Postoperative Endometrial Cancer Nodal Volumes

    International Nuclear Information System (INIS)

    Young, Amy V.; Wortham, Angela; Wernick, Iddo; Evans, Andrew; Ennis, Ronald D.

    2011-01-01

    Purpose: Accurate target delineation of the nodal volumes is essential for three-dimensional conformal and intensity-modulated radiotherapy planning for endometrial cancer adjuvant therapy. We hypothesized that atlas-based segmentation ('autocontouring') would lead to time savings and more consistent contours among physicians. Methods and Materials: A reference anatomy atlas was constructed using the data from 15 postoperative endometrial cancer patients by contouring the pelvic nodal clinical target volume on the simulation computed tomography scan according to the Radiation Therapy Oncology Group 0418 trial using commercially available software. On the simulation computed tomography scans from 10 additional endometrial cancer patients, the nodal clinical target volume autocontours were generated. Three radiation oncologists corrected the autocontours and delineated the manual nodal contours under timed conditions while unaware of the other contours. The time difference was determined, and the overlap of the contours was calculated using Dice's coefficient. Results: For all physicians, manual contouring of the pelvic nodal target volumes and editing the autocontours required a mean ± standard deviation of 32 ± 9 vs. 23 ± 7 minutes, respectively (p = .000001), a 26% time savings. For each physician, the time required to delineate the manual contours vs. correcting the autocontours was 30 ± 3 vs. 21 ± 5 min (p = .003), 39 ± 12 vs. 30 ± 5 min (p = .055), and 29 ± 5 vs. 20 ± 5 min (p = .0002). The mean overlap increased from manual contouring (0.77) to correcting the autocontours (0.79; p = .038). Conclusion: The results of our study have shown that autocontouring leads to increased consistency and time savings when contouring the nodal target volumes for adjuvant treatment of endometrial cancer, although the autocontours still required careful editing to ensure that the lymph nodes at risk of recurrence are properly included in the target volume.
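
    Dice's coefficient, used above to quantify contour overlap, is simple to compute from binary masks. A short sketch with hypothetical contours:

```python
import numpy as np

def dice(a, b):
    """Dice's coefficient 2|A ∩ B| / (|A| + |B|) between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Hypothetical masks: two physicians' nodal CTV contours on one CT slice.
m1 = np.zeros((100, 100), dtype=bool); m1[20:60, 20:60] = True
m2 = np.zeros((100, 100), dtype=bool); m2[25:65, 25:65] = True
print(round(dice(m1, m2), 2))  # 0.77 for these offset squares
```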

  3. Fusion Engineering Device. Volume IV. Physics basis and physics R and D requirements

    International Nuclear Information System (INIS)

    1981-10-01

    This volume covers the following issues: (1) confinement scaling, (2) cross section shaping, limits on β and q, (3) ion cyclotron heating, (4) neutral beam heating, (5) mechanical pump limiter, (6) poloidal divertor, and (7) non-divertor active impurity control

  4. Geothermal power development in Hawaii. Volume II. Infrastructure and community-services requirements, Island of Hawaii

    Energy Technology Data Exchange (ETDEWEB)

    Chapman, G.A.; Buevens, W.R.

    1982-06-01

    The requirements of infrastructure and community services necessary to accommodate the development of geothermal energy on the Island of Hawaii for electricity production are identified. The following aspects are covered: Puna District-1981, labor resources, geothermal development scenarios, geothermal land use, the impact of geothermal development on Puna, labor resource requirements, and the requirements for government activity.

  5. Design requirements for SRB production control system. Volume 1: Study background and overview

    Science.gov (United States)

    1981-01-01

    The solid rocket booster assembly environment is described in terms of the constraints it places upon an automated production control system. The business system generated for the SRB assembly and the computer system which meets the business system requirements are described. The software selection process and the modifications required to the recommended software are addressed, as well as the hardware and configuration requirements necessary to support the system.

  6. FBI fingerprint identification automation study: AIDS 3 evaluation report. Volume 9: Functional requirements

    Science.gov (United States)

    1980-01-01

    The current system and subsystem used by the Identification Division are described. System constraints that dictate the system environment are discussed and boundaries within which solutions must be found are described. The functional requirements were related to the performance requirements. These performance requirements were then related to their applicable subsystems. The flow of data, documents, or other pieces of information from one subsystem to another or from the external world into the identification system is described. Requirements and design standards for a computer based system are presented.

  7. Neutron resonance averaging

    International Nuclear Information System (INIS)

    Chrien, R.E.

    1986-10-01

    The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of resonance averaging to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs

  8. High-level waste storage tank farms/242-A evaporator Standards/Requirements Identification Document (S/RID), Volume 7. Revision 1

    International Nuclear Information System (INIS)

    Burt, D.L.

    1994-04-01

    The High-Level Waste Storage Tank Farms/242-A Evaporator Standards/Requirements Identification Document (S/RID) is contained in multiple volumes. This document (Volume 7) presents the standards and requirements for the following sections: Occupational Safety and Health, and Environmental Protection

  9. High-level waste storage tank farms/242-A evaporator Standards/Requirements Identification Document (S/RID), Volume 7. Revision 1

    Energy Technology Data Exchange (ETDEWEB)

    Burt, D.L.

    1994-04-01

    The High-Level Waste Storage Tank Farms/242-A Evaporator Standards/Requirements Identification Document (S/RID) is contained in multiple volumes. This document (Volume 7) presents the standards and requirements for the following sections: Occupational Safety and Health, and Environmental Protection.

  10. Space station needs, attributes and architectural options study. Volume 3: Mission requirements

    Science.gov (United States)

    1983-04-01

    User missions that are enabled or enhanced by a manned space station are identified. The mission capability requirements imposed on the space station by these users are delineated. The accommodation facilities, equipment, and functional requirements necessary to achieve these capabilities are identified, and the economic, performance, and social benefits which accrue from the space station are defined.

  11. Spacelab Level 4 Programmatic Implementation Assessment Study. Volume 2: Ground Processing requirements

    Science.gov (United States)

    1978-01-01

    Alternate ground processing options are summarized, including installation and test requirements for payloads, space processing, combined astronomy, and life sciences. The level 4 integration resource requirements are also reviewed for: personnel, temporary relocation, transportation, ground support equipment, and Spacelab flight hardware.

  12. Transaction-based building controls framework, Volume 2: Platform descriptive model and requirements

    Energy Technology Data Exchange (ETDEWEB)

    Akyol, Bora A. [Pacific Northwest National Laboratory (PNNL), Richland, WA (United States); Haack, Jereme N. [Pacific Northwest National Laboratory (PNNL), Richland, WA (United States); Carpenter, Brandon J. [Pacific Northwest National Laboratory (PNNL), Richland, WA (United States); Katipamula, Srinivas [Pacific Northwest National Laboratory (PNNL), Richland, WA (United States); Lutes, Robert G. [Pacific Northwest National Laboratory (PNNL), Richland, WA (United States); Hernandez, George [Pacific Northwest National Laboratory (PNNL), Richland, WA (United States)

    2015-07-31

    Transaction-based Building Controls (TBC) offer a control systems platform that provides an agent execution environment meeting the growing requirements for security, resource utilization, and reliability. This report outlines the requirements for a platform to meet these needs and describes an illustrative implementation.

  13. Spacelab user implementation assessment study. (Software requirements analysis). Volume 2: Technical report

    Science.gov (United States)

    1976-01-01

    The engineering analyses and evaluation studies conducted for the Software Requirements Analysis are discussed. Included are the development of the study data base, synthesis of implementation approaches for software required by both mandatory onboard computer services and command/control functions, and identification and implementation of software for ground processing activities.

  14. RELAP5/MOD3 code manual: User's guide and input requirements. Volume 2

    International Nuclear Information System (INIS)

    1995-08-01

    The RELAP5 code has been developed for best-estimate transient simulation of light water reactor coolant systems during postulated accidents. The code models the coupled behavior of the reactor coolant system and the core for loss-of-coolant accidents and operational transients such as anticipated transient without scram, loss of offsite power, loss of feedwater, and loss of flow. A generic modeling approach is used that permits simulating a variety of thermal hydraulic systems. Control system and secondary system components are included to permit modeling of plant controls, turbines, condensers, and secondary feedwater systems. Volume II contains detailed instructions for code application and input data preparation.

  15. Station set requirements document. Volume 82: Fire support. Book 2: Preliminary functional fire plan

    Science.gov (United States)

    Gray, N. C.

    1974-01-01

    The fire prevention/protection requirements for all shuttle facility and ground support equipment are presented for the hazardous operations. These include: preparing the orbiter for launch, launch operations, landing operations, safing operations, and associated off-line activities.

  16. Design requirements for SRB production control system. Volume 3: Package evaluation, modification and hardware

    Science.gov (United States)

    1981-01-01

    The software package evaluation was designed to analyze commercially available, field-proven production control or manufacturing resource planning management technology and software packages. The analysis was conducted by comparing SRB production control software requirements and the conceptual system design to software package capabilities. The methodology of evaluation and the findings at each stage of evaluation are described. Topics covered include: vendor listing; request for information (RFI) document; RFI response rate and quality; RFI evaluation process; and capabilities versus requirements.

  17. Mission requirements for a manned earth observatory. Task 2: Reference mission definition and analysis, volume 2

    Science.gov (United States)

    1973-01-01

    The mission requirements and conceptual design of manned earth observatory payloads for the 1980 time period are discussed. Projections of 1980 sensor technology and user data requirements were used to formulate typical basic criteria pertaining to experiments, sensor complements, and reference missions. The subjects discussed are: (1) mission selection and prioritization, (2) baseline mission analysis, (3) earth observation data handling and contingency plans, and (4) analysis of low cost mission definition and rationale.

  18. Space station automation study: Automation requirements derived from space manufacturing concepts. Volume 1: Executive summary

    Science.gov (United States)

    1984-01-01

    The electroepitaxial process and the Very Large Scale Integration (VLSI) circuits (chips) facilities were chosen because each requires a very high degree of automation, and therefore involves extensive use of teleoperators, robotics, process mechanization, and artificial intelligence. Both cover a raw materials process and a sophisticated multi-step process and are therefore highly representative of the kinds of difficult operation, maintenance, and repair challenges which can be expected for any type of space manufacturing facility. Generic areas were identified which will require significant further study. The initial design will be based on terrestrial state-of-the-art hard automation. One hundred candidate missions were evaluated on the basis of automation potential and availability of meaningful knowledge. The design requirements and unconstrained design concepts developed for the two missions are presented.

  19. Averaging in spherically symmetric cosmology

    International Nuclear Information System (INIS)

    Coley, A. A.; Pelavas, N.

    2007-01-01

    The averaging problem in cosmology is of fundamental importance. When applied to study cosmological evolution, the theory of macroscopic gravity (MG) can be regarded as a long-distance modification of general relativity. In the MG approach to the averaging problem in cosmology, the Einstein field equations on cosmological scales are modified by appropriate gravitational correlation terms. We study the averaging problem within the class of spherically symmetric cosmological models. That is, we shall take the microscopic equations and effect the averaging procedure to determine the precise form of the correlation tensor in this case. In particular, by working in volume-preserving coordinates, we calculate the form of the correlation tensor under some reasonable assumptions on the form of the inhomogeneous gravitational field and matter distribution. We find that the correlation tensor in a Friedmann-Lemaitre-Robertson-Walker (FLRW) background must take the form of a spatial curvature. Inhomogeneities and spatial averaging, through this spatial curvature correction term, can have a very significant effect on the dynamics of the Universe and on cosmological observations; in particular, we discuss whether spatial averaging might lead to a more conservative explanation of the observed acceleration of the Universe (without the introduction of exotic dark energy fields). We also find that the correlation tensor for a non-FLRW background can be interpreted as the sum of a spatial curvature and an anisotropic fluid. This may lead to interesting effects of averaging on astrophysical scales. We also discuss the results of averaging an inhomogeneous Lemaitre-Tolman-Bondi solution as well as calculations of linear perturbations (that is, the backreaction) in an FLRW background, which support the main conclusions of the analysis.

  20. 40 CFR 799.5085 - Chemical testing requirements for certain high production volume chemicals.

    Science.gov (United States)

    2010-07-01

    ... Ave., NW., Washington, DC or at the National Archives and Records Administration (NARA). For...) Method B: ASTM E 1147 (liquid chromatography) Method C: 40 CFR 799.6756 (generator column) 5. Water... (generator column) n-Octanol/water Partition Coefficient or log Kow: Which method is required, if any, is...

  1. Functional requirements for onboard management of space shuttle consumables, volume 1

    Science.gov (United States)

    Graf, P. J.; Herwig, H. A.; Neel, L. W.

    1973-01-01

    A study was conducted to determine the functional requirements for onboard management of space shuttle consumables. A generalized consumable management concept was developed for application to advanced spacecraft. The subsystems and related consumables selected for inclusion in the consumables management system are: (1) propulsion, (2) power generation, and (3) environmental and life support.

  2. Space station automation study. Automation requirements derived from space manufacturing concepts. Volume 1: Executive summary

    Science.gov (United States)

    1984-01-01

    The two manufacturing concepts developed represent innovative, technologically advanced manufacturing schemes. The concepts were selected to facilitate an in depth analysis of manufacturing automation requirements in the form of process mechanization, teleoperation and robotics, and artificial intelligence. While the cost effectiveness of these facilities has not been analyzed as part of this study, both appear entirely feasible for the year 2000 timeframe. The growing demand for high quality gallium arsenide microelectronics may warrant the ventures.

  3. Probabilistic Requirements (Partial) Verification Methods Best Practices Improvement. Variables Acceptance Sampling Calculators: Empirical Testing. Volume 2

    Science.gov (United States)

    Johnson, Kenneth L.; White, K. Preston, Jr.

    2012-01-01

    The NASA Engineering and Safety Center was requested to improve on the Best Practices document produced for the NESC assessment, Verification of Probabilistic Requirements for the Constellation Program, by giving a recommended procedure for using acceptance sampling by variables techniques as an alternative to the potentially resource-intensive acceptance sampling by attributes method given in the document. In this paper, the results of empirical tests intended to assess the accuracy of acceptance sampling plan calculators implemented for six variable distributions are presented.

  4. AAFE man-made noise experiment project. Volume 1: Introduction experiment definition and requirements

    Science.gov (United States)

    1974-01-01

    An experiment was conducted to measure and map the man-made radio frequency emanations which exist at earth orbital altitudes. The major objectives of the program are to develop a complete conceptual experiment and developmental hardware for the collection and processing of data required to produce meaningful statistics on man-made noise level variations as functions of time, frequency, and geographic location. A wide dispersion measurement receiver mounted in a spacecraft operating in a specialized orbit is used to obtain the data. A summary of the experiment design goals and constraints is provided. The recommended orbit for the spacecraft is defined. The characteristics of the receiver and the antennas are analyzed.

  5. Space Station Furnace Facility. Volume 1: Requirements definition and conceptual design study, executive summary

    Science.gov (United States)

    1992-05-01

    The Space Station Freedom Furnace (SSFF) Study was awarded on June 2, 1989, to Teledyne Brown Engineering (TBE) to define an advanced facility for materials research in the microgravity environment of Space Station Freedom (SSF). The SSFF will be designed for research in the solidification of metals and alloys, the crystal growth of electronic and electro-optical materials, and research in glasses and ceramics. The SSFF is one of the first 'facility' class payloads planned by the Microgravity Science and Applications Division (MSAD) of the Office of Space Science and Applications of NASA Headquarters. This facility is planned for early deployment during man-tended operations of the SSF with continuing operations through the Permanently Manned Configuration (PMC). The SSFF will be built around a general 'Core' facility which provides common support functions not provided by SSF, common subsystems which are best centralized, and common subsystems which are best distributed with each experiment module. The intent of the facility approach is to reduce the overall cost associated with implementing and operating a variety of experiments. This is achieved by reducing the launch mass and simplifying the hardware development and qualification processes associated with each experiment. The Core will remain on orbit and will require only periodic maintenance and upgrading while new Furnace Modules, samples, and consumables are developed, qualified, and transported to the SSF. The SSFF Study was divided into two phases: phase 1, a definition study phase, and phase 2, a design and development phase. Phase 1, the definition phase, is addressed here. Phase 1 was divided into two parts. The first part, the basic part of the effort, covered the preliminary definition and assessment of requirements; conceptual design of the SSFF; fabrication of mockups; and the preparation for and support of the Conceptual Design Review (CoDR). The second part, the option part, covered requirements update and

  6. Corporate Data Network (CDN) data requirements task. Enterprise Model. Volume 1

    International Nuclear Information System (INIS)

    1985-11-01

    The NRC has initiated a multi-year program to centralize its information processing in a Corporate Data Network (CDN). The new information processing environment will include shared databases, telecommunications, office automation tools, and state-of-the-art software. Touche Ross and Company was contracted to perform a general data requirements analysis for shared databases and to develop a preliminary plan for implementation of the CDN concept. The Enterprise Model (Vol. 1) provided the NRC with agency-wide information requirements in the form of data entities and organizational demand patterns as the basis for clustering the entities into logical groups. The Data Dictionary (Vol. 2) provided the NRC with definitions and example attributes and properties for each entity. The Data Model (Vol. 3) defined logical databases and entity relationships within and between databases. The Preliminary Strategic Data Plan (Vol. 4) prioritized the development of databases and included a workplan and approach for implementation of the shared database component of the Corporate Data Network.

  7. Corporate Data Network (CDN). Data Requirements Task. Preliminary Strategic Data Plan. Volume 4

    International Nuclear Information System (INIS)

    1985-11-01

    The NRC has initiated a multi-year program to centralize its information processing in a Corporate Data Network (CDN). The new information processing environment will include shared databases, telecommunications, office automation tools, and state-of-the-art software. Touche Ross and Company was contracted to perform a general data requirements analysis for shared databases and to develop a preliminary plan for implementation of the CDN concept. The Enterprise Model (Vol. 1) provided the NRC with agency-wide information requirements in the form of data entities and organizational demand patterns as the basis for clustering the entities into logical groups. The Data Dictionary (Vol. 2) provided the NRC with definitions and example attributes and properties for each entity. The Data Model (Vol. 3) defined logical databases and entity relationships within and between databases. The Preliminary Strategic Data Plan (Vol. 4) prioritized the development of databases and included a workplan and approach for implementation of the shared database component of the Corporate Data Network.

  8. Numerical investigation on critical heat flux and coolant volume required for transpiration cooling with phase change

    International Nuclear Information System (INIS)

    He, Fei; Wang, Jianhua

    2014-01-01

    Highlights: • Five states during transpiration cooling are discussed. • A suite of applicable programs is developed. • The variations of the two-phase region thickness and the pressure are analyzed. • The relationship between heat flux and coolant mass flow rate is presented. • An approach is given to define the desired case of transpiration cooling. - Abstract: The mechanism of transpiration cooling with liquid phase change is numerically investigated as a means of protecting thermal structures exposed to extremely high heat flux. According to the theoretical analysis, there are a lower and an upper critical external heat flux corresponding to a given coolant mass flow rate; between the two critical values, the phase change of the liquid coolant occurs within the porous structure. A broadly applicable in-house program is developed to solve for the fluid flow and heat transfer states that may occur during the phase-change process. The distributions of temperature and saturation in these states are presented. The variations of the two-phase region thickness and of the pressure, including the capillary contribution, are analyzed, and capillary pressure is found to be the main factor causing pressure change. From the relationships between external heat flux and coolant mass flow rate obtained for different cooling cases, an approach is given to estimate the maximum heat flux that can be sustained and the minimum coolant consumption required by the desired case of transpiration cooling. Thus the pressure and coolant consumption required in a given thermal environment can be determined, which is important in the practical application of transpiration cooling.

  9. Southwest Project: resource/institutional requirements analysis. Volume II. Technical studies

    Energy Technology Data Exchange (ETDEWEB)

    Ormsby, L. S.; Sawyer, T. G.; Brown, Dr., M. L.; Daviet, II, L. L.; Weber, E. R.; Brown, J. E.; Arlidge, J. W.; Novak, H. R.; Sanesi, Norman; Klaiman, H. C.; Spangenberg, Jr., D. T.; Groves, D. J.; Maddox, J. D.; Hayslip, R. M.; Ijams, G.; Lacy, R. G.; Montgomery, J.; Carito, J. A.; Ballance, J. W.; Bluemle, C. F.; Smith, D. N.; Wehrey, M. C.; Ladd, K. L.; Evans, Dr., S. K.; Guild, D. H.; Brodfeld, B.; Cleveland, J. A.; Hicks, K. L.; Noga, M. W.; Ross, A. M.

    1979-12-01

    The project provides information which could be used to accelerate the commercialization and market penetration of solar electric generation plants in the southwestern region of the United States. The area of concern includes Arizona, California, Colorado, Nevada, New Mexico, Utah, and sections of Oklahoma and Texas. The project evaluated the potential integration of solar electric generating facilities into the existing electric grids of the region through the year 2000. The technologies included wind energy conversion, solar thermal electric, solar photovoltaic conversion, and hybrid solar electric systems. Each of the technologies considered, except hybrid solar electric, was paired with a compatible energy storage system to improve plant performance and enhance applicability to a utility grid system. The hybrid concept utilizes a conventionally-fueled steam generator as well as a solar steam generator so it is not as dependent upon the availability of solar energy as are the other concepts. Operation of solar electric generating plants in conjunction with existing hydroelectric power facilities was also studied. The participants included 12 electric utility companies and a state power authority in the southwestern US, as well as a major consulting engineering firm. An assessment of the state-of-the-art of solar electric generating plants from an electric utility standpoint; identification of the electric utility industry's technical requirements and considerations for solar electric generating plants; estimation of the capital investment, operation, and maintenance costs for solar electric generating plants; and determination of the capital investment of conventional fossil and nuclear electric generating plants are presented. (MCW)

  10. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    1999-01-01

    In this article two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong to a non-linear manifold. It is shown that the common approaches correspond to natural approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation. Keywords: averaging rotations, Riemannian metric, matrix, quaternion
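
    A minimal sketch of the barycenter approach the abstract critiques, in Python with NumPy: the normalized Euclidean mean of unit quaternions, which is only a first-order approximation to the Riemannian mean. The function name and the hemisphere-alignment step are this note's choices, not from the paper.

```python
import numpy as np

def quat_chordal_mean(quats):
    """Barycenter-style average of unit quaternions (a first-order
    approximation to the Riemannian mean). `quats` is an (N, 4) array."""
    q = np.asarray(quats, dtype=float)
    # q and -q encode the same rotation: flip samples onto one hemisphere.
    signs = np.where(q @ q[0] < 0.0, -1.0, 1.0)
    q = q * signs[:, None]
    m = q.mean(axis=0)
    return m / np.linalg.norm(m)  # project back onto the unit sphere
```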

  11. Averaged RMHD equations

    International Nuclear Information System (INIS)

    Ichiguchi, Katsuji

    1998-01-01

    A new reduced set of resistive MHD equations is derived by averaging the full MHD equations on specified flux coordinates, which is consistent with 3D equilibria. It is confirmed that the total energy is conserved and the linearized equations for ideal modes are self-adjoint. (author)

  12. Determining average yarding distance.

    Science.gov (United States)

    Roger H. Twito; Charles N. Mann

    1979-01-01

    Emphasis on environmental and esthetic quality in timber harvesting has brought about increased use of complex boundaries of cutting units and a consequent need for a rapid and accurate method of determining the average yarding distance and area of these units. These values, needed for evaluation of road and landing locations in planning timber harvests, are easily and...

  13. Averaging operations on matrices

    Indian Academy of Sciences (India)

    2014-07-03

    Role of positive definite matrices: in diffusion tensor imaging, 3 × 3 pd matrices model water flow at each voxel of a brain scan; in elasticity, 6 × 6 pd matrices model stress tensors; in machine learning, n × n pd matrices occur as kernel matrices. (Tanvi Jain, Averaging operations on matrices.)

  14. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    2001-01-01

    In this paper two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong to a non-linear manifold. It is shown that the two common approaches can be derived from natural approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation.

  15. Average is Over

    Science.gov (United States)

    Eliazar, Iddo

    2018-02-01

    The popular perception of statistical distributions is depicted by the iconic bell curve which comprises a massive bulk of 'middle-class' values, and two thin tails - one of small left-wing values, and one of large right-wing values. The shape of the bell curve is unimodal, and its peak represents both the mode and the mean. Thomas Friedman, the famous New York Times columnist, recently asserted that we have entered a human era in which "Average is Over". In this paper we present mathematical models for the phenomenon that Friedman highlighted. While the models are derived via different modeling approaches, they share a common foundation. Inherent tipping points cause the models to phase-shift from a 'normal' bell-shape statistical behavior to an 'anomalous' statistical behavior: the unimodal shape changes to an unbounded monotone shape, the mode vanishes, and the mean diverges. Hence: (i) there is an explosion of small values; (ii) large values become super-large; (iii) 'middle-class' values are wiped out, leaving an infinite rift between the small and the super large values; and (iv) "Average is Over" indeed.

  16. Average nuclear surface properties

    International Nuclear Information System (INIS)

    Groote, H. von.

    1979-01-01

    The definition of the nuclear surface energy is discussed for semi-infinite matter. This definition is extended also to the case where there is a neutron gas instead of vacuum on one side of the plane surface. The calculations were performed with the Thomas-Fermi model of Seyler and Blanchard. The parameters of the interaction of this model were determined by a least squares fit to experimental masses. The quality of this fit is discussed with respect to nuclear masses and density distributions. The average surface properties were calculated for different particle asymmetries of the nucleon matter, ranging from symmetry to beyond the neutron-drip line, until the system can no longer maintain the surface boundary and becomes homogeneous. The results of the calculations are incorporated in the nuclear Droplet Model, which then was fitted to experimental masses. (orig.)

  17. Americans' Average Radiation Exposure

    International Nuclear Information System (INIS)

    2000-01-01

    We live with radiation every day. We receive radiation exposures from cosmic rays from outer space, from radon gas, and from other naturally radioactive elements in the earth. This is called natural background radiation. It includes the radiation we get from plants, animals, and from our own bodies. We also are exposed to man-made sources of radiation, including medical and dental treatments, television sets and emissions from coal-fired power plants. Generally, radiation exposures from man-made sources are only a fraction of those received from natural sources. One exception is the high exposures used by doctors to treat cancer patients. Each year in the United States, the average dose to people from natural and man-made radiation sources is about 360 millirem. A millirem is an extremely tiny amount of energy absorbed by tissues in the body.

  18. Multiple-level defect species evaluation from average carrier decay

    Science.gov (United States)

    Debuf, Didier

    2003-10-01

    An expression for the average decay is determined by solving the carrier continuity equations, which include terms for multiple defect recombination. This expression is the decay measured by techniques such as the contactless photoconductance decay method, which determines the average or volume-integrated decay. Implicit in the above is the requirement for good surface passivation such that only bulk properties are observed. A proposed experimental configuration is given to achieve the intended goal of an assessment of the type of defect in an n-type Czochralski-grown silicon semiconductor with an unusually high relative lifetime. The high lifetime is explained in terms of a ground excited state multiple-level defect system. Also, minority carrier trapping is investigated.

  19. The difference between alternative averages

    Directory of Open Access Journals (Sweden)

    James Vaupel

    2012-09-01

    Full Text Available BACKGROUND: Demographers have long been interested in how compositional change, e.g., change in age structure, affects population averages. OBJECTIVE: We want to deepen understanding of how compositional change affects population averages. RESULTS: The difference between two averages of a variable, calculated using alternative weighting functions, equals the covariance between the variable and the ratio of the weighting functions, divided by the average of the ratio. We compare weighted and unweighted averages and also provide examples of use of the relationship in analyses of fertility and mortality. COMMENTS: Other uses of covariances in formal demography are worth exploring.
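
    The covariance identity stated in the RESULTS paragraph can be written compactly. A sketch in generic notation (the symbols below are this note's choice, not necessarily the authors'): let a variable x be averaged under two weighting functions w1 and w2.

```latex
\bar{x}_k = \frac{\int x\,w_k}{\int w_k}, \qquad R = \frac{w_2}{w_1},
\qquad
\bar{x}_2 - \bar{x}_1
  \;=\; \frac{\overline{xR}_1 - \bar{x}_1\,\bar{R}_1}{\bar{R}_1}
  \;=\; \frac{\operatorname{Cov}_1(x,R)}{\bar{R}_1},
```

    where subscript-1 bars denote w1-weighted averages. The middle step holds because the w1-average of xR equals (∫x w2)/(∫w1) and the w1-average of R equals (∫w2)/(∫w1), so their ratio is exactly the w2-weighted average of x.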

  20. Rotational averaging of multiphoton absorption cross sections

    Energy Technology Data Exchange (ETDEWEB)

    Friese, Daniel H., E-mail: daniel.h.friese@uit.no; Beerepoot, Maarten T. P.; Ruud, Kenneth [Centre for Theoretical and Computational Chemistry, University of Tromsø — The Arctic University of Norway, N-9037 Tromsø (Norway)

    2014-11-28

    Rotational averaging of tensors is a crucial step in the calculation of molecular properties in isotropic media. We present a scheme for the rotational averaging of multiphoton absorption cross sections. We extend existing literature on rotational averaging to even-rank tensors of arbitrary order and derive equations that require only the number of photons as input. In particular, we derive the first explicit expressions for the rotational average of five-, six-, and seven-photon absorption cross sections. This work is one of the required steps in making the calculation of these higher-order absorption properties possible. The results can be applied to any even-rank tensor provided linearly polarized light is used.

  1. The flattening of the average potential in models with fermions

    International Nuclear Information System (INIS)

    Bornholdt, S.

    1993-01-01

    The average potential is a scale dependent scalar effective potential. In a phase with spontaneous symmetry breaking its inner region becomes flat as the averaging extends over infinite volume and the average potential approaches the convex effective potential. Fermion fluctuations affect the shape of the average potential in this region and its flattening with decreasing physical scale. They have to be taken into account to find the true minimum of the scalar potential which determines the scale of spontaneous symmetry breaking. (orig.)

  2. Computation of the bounce-average code

    International Nuclear Information System (INIS)

    Cutler, T.A.; Pearlstein, L.D.; Rensink, M.E.

    1977-01-01

    The bounce-average computer code simulates the two-dimensional velocity transport of ions in a mirror machine. The code evaluates and bounce-averages the collision operator and sources along the field line. A self-consistent equilibrium magnetic field is also computed using the long-thin approximation. Optionally included are terms that maintain μ, J invariance as the magnetic field changes in time. The assumptions and analysis that form the foundation of the bounce-average code are described. When references can be cited, the required results are merely stated and explained briefly. A listing of the code is appended

  3. Autonomous Integrated Receive System (AIRS) requirements definition. Volume 4: Functional specification for the prototype Automated Integrated Receive System (AIRS)

    Science.gov (United States)

    Chie, C. M.

    1984-01-01

    The functional requirements for the performance, design, and testing for the prototype Automated Integrated Receive System (AIRS) to be demonstrated for the TDRSS S-Band Single Access Return Link are presented.

  4. C-Band Airport Surface Communications System Standards Development. Phase II Final Report. Volume 1: Concepts of Use, Initial System Requirements, Architecture, and AeroMACS Design Considerations

    Science.gov (United States)

    Hall, Edward; Isaacs, James; Henriksen, Steve; Zelkin, Natalie

    2011-01-01

    This report is provided as part of ITT's NASA Glenn Research Center Aerospace Communication Systems Technical Support (ACSTS) contract NNC05CA85C, Task 7: New ATM Requirements-Future Communications, C-Band and L-Band Communications Standard Development, and was based on direction provided by FAA project-level agreements for New ATM Requirements-Future Communications. Task 7 included two subtasks. Subtask 7-1 addressed C-band (5091- to 5150-MHz) airport surface data communications standards development, systems engineering, test bed and prototype development, and tests and demonstrations to establish operational capability for the Aeronautical Mobile Airport Communications System (AeroMACS). Subtask 7-2 focused on systems engineering and development support of the L-band digital aeronautical communications system (L-DACS). Subtask 7-1 consisted of two phases. Phase I included development of AeroMACS concepts of use, requirements, architecture, and an initial high-level safety risk assessment. Phase II builds on Phase I results and is presented in two volumes. Volume I (this document) is devoted to concepts of use, system requirements, and architecture, including AeroMACS design considerations. Volume II describes an AeroMACS prototype evaluation and presents final AeroMACS recommendations. This report also describes airport categorization and channelization methodologies. The purposes of the airport categorization task were (1) to facilitate initial AeroMACS architecture designs and enable budgetary projections by creating a set of airport categories based on common airport characteristics and design objectives, and (2) to offer high-level guidance to potential AeroMACS technology and policy development sponsors and service providers. A channelization plan methodology was developed because a common global methodology is needed to assure seamless interoperability among diverse AeroMACS services potentially supplied by multiple service providers.

  5. Solar Heating And Cooling Of Buildings (SHACOB): Requirements definition and impact analysis-2. Volume 3: Customer load management systems

    Science.gov (United States)

    Cretcher, C. K.; Rountredd, R. C.

    1980-11-01

    Customer Load Management Systems, using off-peak storage and control at the residences, are analyzed to determine their potential for capacity and energy savings by the electric utility. Areas broadly representative of utilities in the regions around Washington, DC and Albuquerque, NM were of interest. Near optimum tank volumes were determined for both service areas, and charging duration/off-time were identified as having the greatest influence on tank performance. The impacts on utility operations and corresponding utility/customer economics were determined in terms of delta demands used to estimate the utilities' generating capacity differences between the conventional load management (CLM), direct solar with load management (DSLM), and electric resistive systems. Energy differences are also determined. These capacity and energy deltas are translated into changes in utility costs due to penetration of the CLM or DSLM systems into electric resistive markets in the snapshot years of 1990 and 2000.

  6. Averaging hydraulic head, pressure head, and gravitational head in subsurface hydrology, and implications for averaged fluxes, and hydraulic conductivity

    Directory of Open Access Journals (Sweden)

    G. H. de Rooij

    2009-07-01

    Full Text Available Current theories for water flow in porous media are valid for scales much smaller than those at which problems of public interest manifest themselves. This provides a drive for upscaled flow equations with their associated upscaled parameters. Upscaling is often achieved through volume averaging, but the solution to the resulting closure problem imposes severe restrictions on the flow conditions that limit the practical applicability. Here, the derivation of a closed expression for the effective hydraulic conductivity is forgone to circumvent the closure problem. Thus, more limited but practical results can be derived. At the Representative Elementary Volume scale and larger scales, the gravitational potential and fluid pressure are treated as additive potentials. The necessary requirement that the superposition be maintained across scales is combined with conservation of energy during volume integration to establish consistent upscaling equations for the various heads. The power of these upscaling equations is demonstrated by the derivation of upscaled water content-matric head relationships and the resolution of an apparent paradox reported in the literature that is shown to have arisen from a violation of the superposition principle. Applying the upscaling procedure to Darcy's Law leads to the general definition of an upscaled hydraulic conductivity. By examining this definition in detail for porous media with different degrees of heterogeneity, a series of criteria is derived that must be satisfied for Darcy's Law to remain valid at a larger scale.

  7. Regional averaging and scaling in relativistic cosmology

    International Nuclear Information System (INIS)

    Buchert, Thomas; Carfora, Mauro

    2002-01-01

    Averaged inhomogeneous cosmologies lie at the forefront of interest, since cosmological parameters such as the rate of expansion or the mass density are to be considered as volume-averaged quantities and only these can be compared with observations. For this reason the relevant parameters are intrinsically scale-dependent and one wishes to control this dependence without restricting the cosmological model by unphysical assumptions. In the latter respect we contrast our way to approach the averaging problem in relativistic cosmology with shortcomings of averaged Newtonian models. Explicitly, we investigate the scale-dependence of Eulerian volume averages of scalar functions on Riemannian three-manifolds. We propose a complementary view of a Lagrangian smoothing of (tensorial) variables as opposed to their Eulerian averaging on spatial domains. This programme is realized with the help of a global Ricci deformation flow for the metric. We explain rigorously the origin of the Ricci flow which, on heuristic grounds, has already been suggested as a possible candidate for smoothing the initial dataset for cosmological spacetimes. The smoothing of geometry implies a renormalization of averaged spatial variables. We discuss the results in terms of effective cosmological parameters that would be assigned to the smoothed cosmological spacetime. In particular, we find that the cosmological parameters evaluated on the smoothed spatial domain B̄ obey Ω̄_m + Ω̄_R + Ω̄_Λ + Ω̄_Q = 1, where Ω̄_m, Ω̄_R and Ω̄_Λ correspond to the standard Friedmannian parameters, while Ω̄_Q is a remnant of cosmic variance of expansion and shear fluctuations on the averaging domain. All these parameters are 'dressed' after smoothing out the geometrical fluctuations, and we give the relations of the 'dressed' to the 'bare' parameters. While the former provide the framework of interpreting observations with a 'Friedmannian bias

  8. Development of flight experiment task requirements. Volume 2: Technical Report. Part 2: Appendix H: Tasks-skills data series

    Science.gov (United States)

    Hatterick, G. R.

    1972-01-01

    The data sheets presented contain the results of the task analysis portion of the study to identify skill requirements of space shuttle crew personnel. A comprehensive data base is provided of crew functions, operating environments, task dependencies, and task-skills applicable to a representative cross section of earth orbital research experiments.

  9. 77 FR 28340 - Revocation of TSCA Section 4 Testing Requirements for One High Production Volume Chemical Substance

    Science.gov (United States)

    2012-05-14

    ... data on C.I. Pigment Blue 61 found the study on mammalian acute toxicity and the bacterial mutation..., water solubility, biodegradation, fish acute toxicity, mammalian acute toxicity, bacterial reverse... toxicity with a reproduction/developmental toxicity screen. Studies responding to those test requirements...

  10. A Forecast of Army Aviation Training Research and Development Requirements for the Period 1985 to 2000. Volume 1

    Science.gov (United States)

    1981-08-01

    cinematic simulation, and interactive computer-control display devices have been reviewed by Roscoe [35]. Developments on some of these training...

  11. How to average logarithmic retrievals?

    Directory of Open Access Journals (Sweden)

    B. Funke

    2012-04-01

    Full Text Available Calculations of mean trace gas contributions from profiles obtained by retrievals of the logarithm of the abundance, rather than retrievals of the abundance itself, are prone to biases. By means of a system simulator, biases of linear versus logarithmic averaging were evaluated for both maximum likelihood and maximum a posteriori retrievals, for various signal to noise ratios and atmospheric variabilities. These biases can easily reach ten percent or more. As a rule of thumb we found for maximum likelihood retrievals that linear averaging better represents the true mean value in cases of large local natural variability and high signal to noise ratios, while for small local natural variability logarithmic averaging is often superior. In the case of maximum a posteriori retrievals, the mean is dominated by the a priori information used in the retrievals and the method of averaging is of minor concern. For larger natural variabilities, the appropriateness of one or the other method of averaging depends on the particular case because the various biasing mechanisms partly compensate in an unpredictable manner. This complication arises mainly from the fact that in logarithmic retrievals the weight of the prior information depends on the abundance of the gas itself. No simple rule was found on which kind of averaging is superior, and instead of suggesting simple recipes we cannot do much more than create awareness of the traps associated with averaging of mixing ratios obtained from logarithmic retrievals.
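
    The basic linear-versus-logarithmic bias is easy to reproduce; a toy sketch (not the authors' system simulator), assuming lognormal abundances:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy illustration of the linear-vs-logarithmic averaging bias:
# for lognormal "abundances", exponentiating the mean of the logs
# (the geometric mean) underestimates the arithmetic mean, and the
# gap grows with the natural variability sigma.
for sigma in (0.1, 0.5, 1.0):
    x = rng.lognormal(mean=0.0, sigma=sigma, size=100_000)
    linear_avg = x.mean()
    log_avg = np.exp(np.log(x).mean())
    print(f"sigma={sigma}: linear={linear_avg:.3f}, "
          f"exp(mean(log))={log_avg:.3f}, bias={log_avg/linear_avg - 1:+.1%}")
```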

  12. Probabilistic Requirements (Partial) Verification Methods Best Practices Improvement. Variables Acceptance Sampling Calculators: Derivations and Verification of Plans. Volume 1

    Science.gov (United States)

    Johnson, Kenneth L.; White, K, Preston, Jr.

    2012-01-01

    The NASA Engineering and Safety Center was requested to improve on the Best Practices document produced for the NESC assessment, Verification of Probabilistic Requirements for the Constellation Program, by giving a recommended procedure for using acceptance sampling by variables techniques. This recommended procedure would be used as an alternative to the potentially resource-intensive acceptance sampling by attributes method given in the document. This document contains the outcome of the assessment.

  13. Development of optimized techniques and requirements for computer enhancement of structural weld radiographs. Volume 1: Technical report

    Science.gov (United States)

    Adams, J. R.; Hawley, S. W.; Peterson, G. R.; Salinger, S. S.; Workman, R. A.

    1971-01-01

    A hardware and software specification covering requirements for the computer enhancement of structural weld radiographs was considered. Three scanning systems were used to digitize more than 15 weld radiographs. The performance of these systems was evaluated by determining modulation transfer functions and noise characteristics. Enhancement techniques were developed and applied to the digitized radiographs. The scanning parameters of spot size and spacing and film density were studied to optimize the information content of the digital representation of the image.

  14. Baseline-dependent averaging in radio interferometry

    Science.gov (United States)

    Wijnholds, S. J.; Willis, A. G.; Salvini, S.

    2018-05-01

    This paper presents a detailed analysis of the applicability and benefits of baseline-dependent averaging (BDA) in modern radio interferometers and in particular the Square Kilometre Array. We demonstrate that BDA does not affect the information content of the data other than a well-defined decorrelation loss for which closed form expressions are readily available. We verify these theoretical findings using simulations. We therefore conclude that BDA can be used reliably in modern radio interferometry allowing a reduction of visibility data volume (and hence processing costs for handling visibility data) by more than 80 per cent.
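
    The trade that BDA exploits (long baselines keep short averaging intervals, short baselines can be averaged much harder for the same decorrelation budget) can be sketched in a few lines. A minimal sketch assuming a boxcar average, a worst-case fringe rate of 2π·ω_E·B/λ, and the small-drift approximation 1 − sinc(Δφ/2) ≈ Δφ²/24; the parameter values and names are illustrative, not from the paper.

```python
import numpy as np

OMEGA_E = 7.2921e-5  # Earth's sidereal rotation rate [rad/s]

def max_avg_time(baseline_m, wavelength_m, max_decorr=0.01):
    """Longest averaging interval keeping boxcar time-smearing loss
    below `max_decorr`, via the small-phase-drift approximation."""
    phase_rate = 2.0 * np.pi * OMEGA_E * baseline_m / wavelength_m  # rad/s
    dphi = np.sqrt(24.0 * max_decorr)  # allowed phase drift per interval
    return dphi / phase_rate

dump = 0.1  # correlator dump time [s], illustrative
for b in (100.0, 1_000.0, 10_000.0, 100_000.0):
    t = max_avg_time(b, wavelength_m=0.21)
    n = max(1, int(t / dump))  # average n dumps on this baseline
    print(f"baseline {b:>8.0f} m: average {n} dumps (~{n * dump:.1f} s)")
```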

  15. Averaging models: parameters estimation with the R-Average procedure

    Directory of Open Access Journals (Sweden)

    S. Noventa

    2010-01-01

    Full Text Available The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto & Vicentini, 2007) can be used to estimate the parameters of these models. By the use of multiple information criteria in the model selection procedure, R-Average allows for the identification of the best subset of parameters that account for the data. After a review of the general method, we present an implementation of the procedure in the framework of the R project, followed by some experiments using a Monte Carlo method.

  16. Sample size requirements for one-year treatment effects using deep gray matter volume from 3T MRI in progressive forms of multiple sclerosis.

    Science.gov (United States)

    Kim, Gloria; Chu, Renxin; Yousuf, Fawad; Tauhid, Shahamat; Stazzone, Lynn; Houtchens, Maria K; Stankiewicz, James M; Severson, Christopher; Kimbrough, Dorlan; Quintana, Francisco J; Chitnis, Tanuja; Weiner, Howard L; Healy, Brian C; Bakshi, Rohit

    2017-11-01

    The subcortical deep gray matter (DGM) develops selective, progressive, and clinically relevant atrophy in progressive forms of multiple sclerosis (PMS). This patient population is the target of active neurotherapeutic development, requiring the availability of outcome measures. We tested a fully automated MRI analysis pipeline to assess DGM atrophy in PMS. Consistent 3D T1-weighted high-resolution 3T brain MRI was obtained over one year in 19 consecutive patients with PMS [15 secondary progressive, 4 primary progressive, 53% women, age (mean±SD) 50.8±8.0 years, Expanded Disability Status Scale (median, range) 5.0, 2.0-6.5]. DGM segmentation applied the fully automated FSL-FIRST pipeline (http://fsl.fmrib.ox.ac.uk). Total DGM volume was the sum of the caudate, putamen, globus pallidus, and thalamus. On-study change was calculated using a random-effects linear regression model. We detected one-year decreases in raw [mean (95% confidence interval): -0.749 ml (-1.455, -0.043), p = 0.039] and annualized [-0.754 ml/year (-1.492, -0.016), p = 0.046] total DGM volumes. A treatment trial for an intervention that would show a 50% reduction in DGM brain atrophy would require a sample size of 123 patients for a single-arm study (one-year run-in followed by one-year on-treatment). For a two-arm placebo-controlled one-year study, 242 patients would be required per arm. The use of DGM fraction required more patients. The thalamus, putamen, and globus pallidus showed smaller effect sizes in their on-study changes than the total DGM; however, for the caudate, the effect sizes were somewhat larger. DGM atrophy may prove efficient as a short-term outcome for proof-of-concept neurotherapeutic trials in PMS.
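
    Sample sizes of this kind can be approximated with a generic normal-approximation power calculation. A sketch under textbook assumptions (two-sided α = 0.05, 80% power, SD back-calculated from the reported confidence interval); it will not reproduce the paper's exact numbers, since the authors' random-effects model and design details differ.

```python
import math

def n_required(sd, effect, z_alpha=1.959964, z_beta=0.841621):
    """Generic normal-approximation sample size for detecting a mean
    change `effect` given standard deviation `sd` (one group)."""
    return math.ceil(((z_alpha + z_beta) * sd / effect) ** 2)

# Illustrative inputs loosely based on the abstract: annualized DGM
# change -0.754 ml/yr with 95% CI (-1.492, -0.016) in n=19 patients,
# so sd ~= sqrt(19) * (CI half-width / 1.96).
se = (1.492 - 0.016) / 2 / 1.959964
sd = se * math.sqrt(19)
print(n_required(sd=sd, effect=0.5 * 0.754))  # 50% treatment effect
```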

  17. 40 CFR 80.205 - How is the annual refinery or importer average and corporate pool average sulfur level determined?

    Science.gov (United States)

    2010-07-01

    The annual average sulfur level is the volume-weighted average over all batches, S̄ = Σ(Vi × Si) / Σ(Vi), where: Vi = the volume of gasoline produced or imported in batch i; Si = the sulfur content of batch i determined under § 80.330; n = the number of batches of gasoline produced or imported during the averaging period; i = individual batch of gasoline produced or imported during the averaging period. (b) All annual refinery or...
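
    The volume-weighted average is straightforward to compute; a minimal sketch with hypothetical batch data (the function name and units are this note's choices):

```python
def average_sulfur(batches):
    """Volume-weighted average sulfur level over (volume, sulfur_ppm)
    batch tuples, per the S-bar = sum(Vi*Si) / sum(Vi) form."""
    total_vs = sum(v * s for v, s in batches)
    total_v = sum(v for v, _ in batches)
    return total_vs / total_v

# hypothetical batches: (gallons, sulfur ppm)
print(average_sulfur([(50_000, 30.0), (20_000, 15.0), (30_000, 25.0)]))
# -> 25.5 ppm
```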

  18. Grassmann Averages for Scalable Robust PCA

    DEFF Research Database (Denmark)

    Hauberg, Søren; Feragen, Aasa; Black, Michael J.

    2014-01-01

    As the collection of large datasets becomes increasingly automated, the occurrence of outliers will increase—“big data” implies “big outliers”. While principal component analysis (PCA) is often used to reduce the size of data, and scalable solutions exist, it is well-known that outliers can arbitrarily corrupt the results. ... Robustness can be with respect to vectors (subspaces) or elements of vectors; we focus on the latter and use a trimmed average. The resulting Trimmed Grassmann Average (TGA) is particularly appropriate for computer vision because it is robust to pixel outliers. The algorithm has low computational complexity and minimal memory requirements...
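
    The element-wise trimmed average mentioned in the abstract is the robustness ingredient of TGA; a minimal sketch of just that step (not the full Grassmann Average iteration), with illustrative names:

```python
import numpy as np

def trimmed_mean(X, trim=0.1):
    """Element-wise trimmed average: in each coordinate, drop the
    `trim` fraction of smallest and largest values, then average.
    `X` is an (N, d) array of N observations."""
    X = np.sort(np.asarray(X, dtype=float), axis=0)
    k = int(trim * X.shape[0])
    return X[k:X.shape[0] - k].mean(axis=0)
```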

  19. Evaluations of average level spacings

    International Nuclear Information System (INIS)

    Liou, H.I.

    1980-01-01

    The average level spacing for highly excited nuclei is a key parameter in cross section formulas based on statistical nuclear models, and also plays an important role in determining many physics quantities. Various methods to evaluate average level spacings are reviewed. Because of the finite experimental resolution, to detect a complete sequence of levels without mixing other parities is extremely difficult, if not totally impossible. Most methods derive the average level spacings by applying a fit, with different degrees of generality, to the truncated Porter-Thomas distribution for reduced neutron widths. A method that tests both distributions of level widths and positions is discussed extensively with an example using ¹⁶⁸Er data. 19 figures, 2 tables

  20. Ergodic averages via dominating processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Mengersen, Kerrie

    2006-01-01

    We show how the mean of a monotone function (defined on a state space equipped with a partial ordering) can be estimated, using ergodic averages calculated from upper and lower dominating processes of a stationary irreducible Markov chain. In particular, we do not need to simulate the stationary Markov chain and we eliminate the problem of whether an appropriate burn-in is determined or not. Moreover, when a central limit theorem applies, we show how confidence intervals for the mean can be estimated by bounding the asymptotic variance of the ergodic average based on the equilibrium chain.

  1. High average power supercontinuum sources

    Indian Academy of Sciences (India)

    The physical mechanisms and basic experimental techniques for the creation of high average spectral power supercontinuum sources is briefly reviewed. We focus on the use of high-power ytterbium-doped fibre lasers as pump sources, and the use of highly nonlinear photonic crystal fibres as the nonlinear medium.

  2. Improved averaging for non-null interferometry

    Science.gov (United States)

    Fleig, Jon F.; Murphy, Paul E.

    2013-09-01

    Arithmetic averaging of interferometric phase measurements is a well-established method for reducing the effects of time varying disturbances, such as air turbulence and vibration. Calculating a map of the standard deviation for each pixel in the average map can provide a useful estimate of its variability. However, phase maps of complex and/or high density fringe fields frequently contain defects that severely impair the effectiveness of simple phase averaging and bias the variability estimate. These defects include large or small-area phase unwrapping artifacts, large alignment components, and voids that change in number, location, or size. Inclusion of a single phase map with a large area defect into the average is usually sufficient to spoil the entire result. Small-area phase unwrapping and void defects may not render the average map metrologically useless, but they pessimistically bias the variance estimate for the overwhelming majority of the data. We present an algorithm that obtains phase average and variance estimates that are robust against both large and small-area phase defects. It identifies and rejects phase maps containing large area voids or unwrapping artifacts. It also identifies and prunes the unreliable areas of otherwise useful phase maps, and removes the effect of alignment drift from the variance estimate. The algorithm has several run-time adjustable parameters to adjust the rejection criteria for bad data. However, a single nominal setting has been effective over a wide range of conditions. This enhanced averaging algorithm can be efficiently integrated with the phase map acquisition process to minimize the number of phase samples required to approach the practical noise floor of the metrology environment.
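
    A minimal sketch of the reject-and-prune idea described above, assuming voids are encoded as NaN; the thresholds are illustrative, and the alignment-drift removal the authors mention is omitted:

```python
import numpy as np

def robust_phase_average(maps, max_bad_frac=0.2, k_sigma=3.0):
    """Average a stack of phase maps (NaN = void pixel): reject whole
    maps dominated by voids/unwrapping artifacts, then prune per-pixel
    outliers beyond k_sigma of the provisional mean."""
    stack = np.asarray(maps, dtype=float)
    # Reject maps with too many invalid pixels (large-area defects).
    bad_frac = np.isnan(stack).mean(axis=(1, 2))
    stack = stack[bad_frac <= max_bad_frac]
    # Provisional per-pixel statistics, ignoring voids.
    mu = np.nanmean(stack, axis=0)
    sd = np.nanstd(stack, axis=0)
    # Prune per-pixel outliers, then recompute mean and variability.
    stack = np.where(np.abs(stack - mu) > k_sigma * sd, np.nan, stack)
    return np.nanmean(stack, axis=0), np.nanstd(stack, axis=0)
```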

  3. Late Noachian fluvial erosion on Mars: Cumulative water volumes required to carve the valley networks and grain size of bed-sediment

    Science.gov (United States)

    Rosenberg, Eliott N.; Head, James W., III

    2015-11-01

    Our goal is to quantify the cumulative water volume that was required to carve the Late Noachian valley networks on Mars. We employ an improved methodology in which fluid/sediment flux ratios are based on empirical data, not assumed. We use a large quantity of data from terrestrial rivers to assess the variability of actual fluid/sediment flux ratios. We find the flow depth by using an empirical relationship to estimate the fluid flux from the estimated channel width, and then using estimated grain sizes (theoretical sediment grain size predictions and comparison with observations by the Curiosity rover) to find the flow depth to which the resulting fluid flux corresponds. Assuming that the valley networks contained alluvial bed rivers, we find, from their current slopes and widths, that the onset of suspended transport occurs near the sand-gravel boundary. Thus, any bed sediment must have been fine gravel or coarser, whereas fine sediment would be carried downstream. Subsequent to the cessation of fluvial activity, aeolian processes have partially redistributed fine-grain particles in the valleys, often forming dunes. It seems likely that the dominant bed sediment size was near the threshold for suspension, and assuming that this was the case could make our final results underestimates, which is the same tendency that our other assumptions have. Making this assumption, we find a global equivalent layer (GEL) of 3-100 m of water to be the most probable cumulative volume that passed through the valley networks. This value is similar to the ∼34 m water GEL currently on the surface and in the near-surface in the form of ice. Note that the amount of water required to carve the valley networks could represent the same water recycled through a surface valley network hydrological system many times in separate or continuous precipitation/runoff/collection/evaporation/precipitation cycles.

  4. When good = better than average

    Directory of Open Access Journals (Sweden)

    Don A. Moore

    2007-10-01

    Full Text Available People report themselves to be above average on simple tasks and below average on difficult tasks. This paper proposes an explanation for this effect that is simpler than prior explanations. The new explanation is that people conflate relative with absolute evaluation, especially on subjective measures. The paper then presents a series of four studies that test this conflation explanation. These tests distinguish conflation from other explanations, such as differential weighting and selecting the wrong referent. The results suggest that conflation occurs at the response stage during which people attempt to disambiguate subjective response scales in order to choose an answer. This is because conflation has little effect on objective measures, which would be equally affected if the conflation occurred at encoding.

  5. Autoregressive Moving Average Graph Filtering

    OpenAIRE

    Isufi, Elvin; Loukas, Andreas; Simonetto, Andrea; Leus, Geert

    2016-01-01

    One of the cornerstones of the field of signal processing on graphs is the graph filter, a direct analogue of classical filters but intended for signals defined on graphs. This work brings forth new insights on the distributed graph filtering problem. We design a family of autoregressive moving average (ARMA) recursions, which (i) are able to approximate any desired graph frequency response, and (ii) give exact solutions for tasks such as graph signal denoising and interpolation. The design phi...
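
    The ARMA recursions referred to in the abstract build on a simple first-order update that is standard in the graph-filtering literature; a minimal sketch (names and parameters are illustrative, and convergence requires |psi|·‖L‖ < 1):

```python
import numpy as np

def arma1_graph_filter(L, x, phi, psi, iters=100):
    """ARMA(1) graph-filter recursion y <- psi * L @ y + phi * x.
    At the fixed point this applies the rational frequency response
    phi / (1 - psi * lam) to each eigenvalue lam of the graph shift L."""
    y = np.zeros_like(x)
    for _ in range(iters):
        y = psi * (L @ y) + phi * x
    return y
```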

  6. Averaging Robertson-Walker cosmologies

    International Nuclear Information System (INIS)

    Brown, Iain A.; Robbers, Georg; Behrend, Juliane

    2009-01-01

    The cosmological backreaction arises when one directly averages the Einstein equations to recover an effective Robertson-Walker cosmology, rather than assuming a background a priori. While usually discussed in the context of dark energy, strictly speaking any cosmological model should be recovered from such a procedure. We apply the scalar spatial averaging formalism for the first time to linear Robertson-Walker universes containing matter, radiation and dark energy. The formalism employed is general and incorporates systems of multiple fluids with ease, allowing us to consider quantitatively the universe from deep radiation domination up to the present day in a natural, unified manner. Employing modified Boltzmann codes we evaluate numerically the discrepancies between the assumed and the averaged behaviour arising from the quadratic terms, finding the largest deviations for an Einstein-de Sitter universe, increasing rapidly with Hubble rate to a 0.01% effect for h = 0.701. For the ΛCDM concordance model, the backreaction is of the order of Ω_eff ≈ 4 × 10⁻⁶, with those for dark energy models being within a factor of two or three. The impacts at recombination are of the order of 10⁻⁸ and those in deep radiation domination asymptote to a constant value. While the effective equations of state of the backreactions in Einstein-de Sitter, concordance and quintessence models are generally dust-like, a backreaction with an equation of state w_eff < −1/3 can be found for strongly phantom models.

  7. 7 CFR 51.2561 - Average moisture content.

    Science.gov (United States)

    2010-01-01

    7 CFR 51.2561 (2010), United States Standards for Grades of Shelled Pistachio Nuts: Average moisture content. (a) Determining average moisture content of the lot is not a requirement of the grades, except when...

  8. Programmatic implications of implementing the relational algebraic capacitated location (RACL) algorithm outcomes on the allocation of laboratory sites, test volumes, platform distribution and space requirements

    Directory of Open Access Journals (Sweden)

    Naseem Cassim

    2017-02-01

    Full Text Available Introduction: CD4 testing in South Africa is based on an integrated tiered service delivery model that matches testing demand with capacity. The National Health Laboratory Service has predominantly implemented laboratory-based CD4 testing. Coverage gaps, over-/under-capacitation and optimal placement of point-of-care (POC) testing sites need investigation. Objectives: We assessed the impact of relational algebraic capacitated location (RACL) algorithm outcomes on the allocation of laboratory and POC testing sites. Methods: The RACL algorithm was developed to allocate laboratories and POC sites to ensure coverage using a set coverage approach for a defined travel time (T). The algorithm was repeated for three scenarios (A: T = 4; B: T = 3; C: T = 2 hours). Drive times for a representative sample of health facility clusters were used to approximate T. Outcomes included allocation of testing sites, Euclidian distances and test volumes. Additional analysis included platform distribution and space requirement assessment. Scenarios were reported as fusion table maps. Results: Scenario A would offer a fully-centralised approach with 15 CD4 laboratories without any POC testing. A significant increase in volumes would result in a four-fold increase at busier laboratories. CD4 laboratories would increase to 41 in scenario B and 61 in scenario C. POC testing would be offered at two sites in scenario B and 20 sites in scenario C. Conclusion: The RACL algorithm provides an objective methodology to address coverage gaps through the allocation of CD4 laboratories and POC sites for a given T. The algorithm outcomes need to be assessed in the context of local conditions.
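
    The RACL formulation itself is not reproduced in the record; as a hedged sketch of the underlying set-coverage idea, a greedy heuristic can pick testing sites until every facility cluster lies within the travel-time limit T (all names and data below are hypothetical):

```python
# Greedy set-coverage sketch (illustrative, not the RACL algorithm itself):
# repeatedly choose the candidate site that covers the most still-uncovered
# facility clusters within drive time T.
def greedy_cover(drive_time, sites, clusters, T):
    uncovered, chosen = set(clusters), []
    while uncovered:
        best = max(sites,
                   key=lambda s: sum(drive_time[s][c] <= T for c in uncovered))
        covered = {c for c in uncovered if drive_time[best][c] <= T}
        if not covered:
            raise ValueError("some clusters are unreachable within T")
        chosen.append(best)
        uncovered -= covered
    return chosen

# Hypothetical 2-site x 3-cluster drive-time matrix (hours)
drive_time = {"lab_A": {"c1": 1.5, "c2": 3.5, "c3": 1.0},
              "lab_B": {"c1": 4.0, "c2": 1.8, "c3": 2.5}}
print(greedy_cover(drive_time, ["lab_A", "lab_B"], ["c1", "c2", "c3"], T=2.0))
```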

  9. Model averaging, optimal inference and habit formation

    Directory of Open Access Journals (Sweden)

    Thomas H B FitzGerald

    2014-06-01

    Full Text Available Postulating that the brain performs approximate Bayesian inference generates principled and empirically testable models of neuronal function – the subject of much current interest in neuroscience and related disciplines. Current formulations address inference and learning under some assumed and particular model. In reality, organisms are often faced with an additional challenge – that of determining which model or models of their environment are the best for guiding behaviour. Bayesian model averaging – which says that an agent should weight the predictions of different models according to their evidence – provides a principled way to solve this problem. Importantly, because model evidence is determined by both the accuracy and complexity of the model, optimal inference requires that these be traded off against one another. This means an agent’s behaviour should show an equivalent balance. We hypothesise that Bayesian model averaging plays an important role in cognition, given that it is both optimal and realisable within a plausible neuronal architecture. We outline model averaging and how it might be implemented, and then explore a number of implications for brain and behaviour. In particular, we propose that model averaging can explain a number of apparently suboptimal phenomena within the framework of approximate (bounded) Bayesian inference, focussing particularly upon the relationship between goal-directed and habitual behaviour.
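
    The evidence-weighting scheme described above is easy to state concretely; a minimal numerical sketch (the per-model evidences and predictions below are made up for illustration):

```python
import numpy as np

# Bayesian model averaging sketch: posterior model weights are proportional
# to model evidence times model prior; predictions are weight-averaged.
log_evidence = np.array([-10.2, -11.0, -14.5])  # hypothetical per-model values
prior = np.array([1.0, 1.0, 1.0]) / 3.0
w = np.exp(log_evidence - log_evidence.max()) * prior
w /= w.sum()                                    # posterior model probabilities

model_predictions = np.array([0.8, 0.5, 0.1])   # hypothetical predictions
bma_prediction = float(w @ model_predictions)   # evidence-weighted average
```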

  10. Topological quantization of ensemble averages

    International Nuclear Information System (INIS)

    Prodan, Emil

    2009-01-01

    We define the current of a quantum observable and, under well-defined conditions, we connect its ensemble average to the index of a Fredholm operator. The present work builds on a formalism developed by Kellendonk and Schulz-Baldes (2004 J. Funct. Anal. 209 388) to study the quantization of edge currents for continuous magnetic Schroedinger operators. The generalization given here may be a useful tool to scientists looking for novel manifestations of the topological quantization. As a new application, we show that the differential conductance of atomic wires is given by the index of a certain operator. We also comment on how the formalism can be used to probe the existence of edge states

  11. Flexible time domain averaging technique

    Science.gov (United States)

    Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng

    2013-09-01

    Time domain averaging (TDA) is essentially a comb filter; it cannot extract the specified harmonics which may be caused by some faults, such as gear eccentricity. Meanwhile, TDA always suffers from period cutting error (PCE) to varying extents. Several improved TDA methods have been proposed; however, they cannot completely eliminate the waveform reconstruction error caused by PCE. In order to overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal by adjusting each harmonic of the comb filter. In this technique, the explicit form of FTDA is first constructed by frequency domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the FTDA algorithm, which improves the calculating efficiency significantly. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of FTDA in signal de-noising, interpolation and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise is processed by FTDA. The simulation results show that the FTDA is capable of recovering the periodic components from the background noise effectively. Moreover, it can improve the signal-to-noise ratio by 7.9 dB compared with conventional methods. Experiments are also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. It is shown that the FTDA can identify the direction and severity of the gear eccentricity, and further enhances the amplitudes of impulses by 35%. The proposed technique not only solves the problem of PCE, but also provides a useful tool for the fault symptom extraction of rotating machinery.
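
    For contrast with the FTDA described above (whose comb-filter adjustment via the CZT is not reproduced here), conventional time domain averaging is just segment-and-average, which is also where the period cutting error comes from when the true period is not an integer number of samples:

```python
import numpy as np

# Conventional time domain averaging: slice the signal into segments of the
# known period and average them; components not periodic with that period
# cancel out. If the true period is not an integer number of samples, the
# truncation introduces the period cutting error (PCE) that FTDA avoids.
def time_domain_average(x, period_samples):
    n_periods = len(x) // period_samples
    segments = x[: n_periods * period_samples].reshape(n_periods, period_samples)
    return segments.mean(axis=0)
```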

  12. Proposed average values of some engineering properties of palm

    African Journals Online (AJOL)

    2012-07-02

    Jul 2, 2012 ... ture resistance, toughness, deformation, and hardness of palm ... useful in the determination of the amount of heat required for ... to measure the volume required for the investigation. A result of ... galvanized steel, and glass.

  13. The average Indian female nose.

    Science.gov (United States)

    Patil, Surendra B; Kale, Satish M; Jaiswal, Sumeet; Khare, Nishant; Math, Mahantesh

    2011-12-01

    This study aimed to delineate the anthropometric measurements of the noses of young women of an Indian population and to compare them with the published ideals and average measurements for white women. This anthropometric survey included a volunteer sample of 100 young Indian women ages 18 to 35 years with Indian parents and no history of previous surgery or trauma to the nose. Standardized frontal, lateral, oblique, and basal photographs of the subjects' noses were taken, and 12 standard anthropometric measurements of the nose were determined. The results were compared with published standards for North American white women. In addition, nine nasal indices were calculated and compared with the standards for North American white women. The nose of Indian women differs significantly from the white nose. All the nasal measurements for the Indian women were found to be significantly different from those for North American white women. Seven of the nine nasal indices also differed significantly. Anthropometric analysis suggests differences between the Indian female nose and the North American white nose. Thus, a single aesthetic ideal is inadequate. Noses of Indian women are smaller and wider, with a less projected and rounded tip than the noses of white women. This study established the nasal anthropometric norms for nasal parameters, which will serve as a guide for cosmetic and reconstructive surgery in Indian women.

  14. Operator product expansion and its thermal average

    Energy Technology Data Exchange (ETDEWEB)

    Mallik, S [Saha Inst. of Nuclear Physics, Calcutta (India)

    1998-05-01

    QCD sum rules at finite temperature, like the ones at zero temperature, require the coefficients of local operators, which arise in the short distance expansion of the thermal average of two-point functions of currents. We extend the configuration space method, applied earlier at zero temperature, to the case at finite temperature. We find that, up to dimension four, two new operators arise, in addition to the two appearing already in the vacuum correlation functions. It is argued that the new operators would contribute substantially to the sum rules, when the temperature is not too low. (orig.) 7 refs.

  15. Effect of tank geometry on its average performance

    Science.gov (United States)

    Orlov, Aleksey A.; Tsimbalyuk, Alexandr F.; Malyugin, Roman V.; Leontieva, Daria A.; Kotelnikova, Alexandra A.

    2018-03-01

    The mathematical model of non-stationary filling of vertical submerged tanks with gaseous uranium hexafluoride is presented in the paper. There are calculations of the average productivity, heat exchange area, and filling time of tanks of various volumes with smooth inner walls, depending on their height-to-radius (H/R) ratio, as well as the average productivity, filling degree, and filling time of a tank with horizontal ribbing and volume 6×10⁻² m³ as the central hole diameter of the ribs is varied. It has been shown that increasing the H/R ratio in tanks with smooth inner walls up to the limiting values significantly increases average tank productivity and reduces filling time. Raising the H/R ratio of a tank with volume 1.0 m³ to the limiting value (in comparison with the standard tank having H/R equal to 3.49) augments tank productivity by 23.5% and the heat exchange area by 20%. Besides, we have demonstrated that maximum average productivity and minimum filling time are reached for the tank with volume 6×10⁻² m³ having a central hole diameter of the horizontal ribs of 6.4×10⁻² m.

  16. Changing mortality and average cohort life expectancy

    Directory of Open Access Journals (Sweden)

    Robert Schoen

    2005-10-01

    Full Text Available Period life expectancy varies with changes in mortality, and should not be confused with the life expectancy of those alive during that period. Given past and likely future mortality changes, a recent debate has arisen on the usefulness of the period life expectancy as the leading measure of survivorship. An aggregate measure of period mortality seen as less sensitive to period changes, the cross-sectional average length of life (CAL), has been proposed as an alternative, but has received only limited empirical or analytical examination. Here, we introduce a new measure, the average cohort life expectancy (ACLE), to provide a precise measure of the average length of life of cohorts alive at a given time. To compare the performance of ACLE with CAL and with period and cohort life expectancy, we first use population models with changing mortality. Then the four aggregate measures of mortality are calculated for England and Wales, Norway, and Switzerland for the years 1880 to 2000. CAL is found to be sensitive to past and present changes in death rates. ACLE requires the most data, but gives the best representation of the survivorship of cohorts present at a given time.

  17. 47 CFR 1.959 - Computation of average terrain elevation.

    Science.gov (United States)

    2010-10-01

    47 CFR 1.959 (2010), Wireless Radio Services Applications and Proceedings, Application Requirements and Procedures: Computation of average terrain elevation. Except a...

  18. Beta-energy averaging and beta spectra

    International Nuclear Information System (INIS)

    Stamatelatos, M.G.; England, T.R.

    1976-07-01

    A simple yet highly accurate method for approximately calculating spectrum-averaged beta energies and beta spectra for radioactive nuclei is presented. This method should prove useful for users who wish to obtain accurate answers without complicated calculations of Fermi functions, complex gamma functions, and time-consuming numerical integrations as required by the more exact theoretical expressions. Therefore, this method should be a good time-saving alternative for investigators who need to make calculations involving large numbers of nuclei (e.g., fission products) as well as for occasional users interested in a restricted number of nuclides. The average beta-energy values calculated by this method differ from those calculated by "exact" methods by no more than 1 percent for nuclides with atomic numbers in the 20 to 100 range and which emit betas of energies up to approximately 8 MeV. These include all fission products and the actinides. The beta-energy spectra calculated by the present method are also of the same quality.
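
    The report's approximation scheme itself is not reproduced in the record; in general, a spectrum-averaged beta energy is just the first moment of the spectrum, for example (with an illustrative, stand-in spectrum shape):

```python
import numpy as np

# Spectrum-averaged energy on a uniform grid: E_avg = sum(E*N(E)) / sum(N(E)).
E = np.linspace(0.0, 2.0, 201)      # MeV grid (illustrative)
N = E * (2.0 - E) ** 2              # stand-in beta spectrum shape, not a real one
E_avg = float((E * N).sum() / N.sum())
```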

  19. Recent advances in the development of high average power induction accelerators for industrial and environmental applications

    International Nuclear Information System (INIS)

    Neau, E.L.

    1994-01-01

    Short-pulse accelerator technology developed during the early 1960s through the late 1980s is being extended to high average power systems capable of use in industrial and environmental applications. Processes requiring high dose levels and/or high volume throughput will require systems with beam power levels from several hundreds of kilowatts to megawatts. Beam accelerating potentials can range from less than 1 MeV to as much as 10 MeV depending on the type of beam, depth of penetration required, and the density of the product being treated. This paper addresses the present status of a family of high average power systems, with output beam power levels up to 200 kW, now in operation that use saturable core switches to achieve output pulse widths of 50 to 80 nanoseconds. Inductive adders and field emission cathodes are used to generate beams of electrons or x-rays at up to 2.5 MeV over areas of 1000 cm². Similar high average power technology is being used at ≤ 1 MeV to drive repetitive ion beam sources for treatment of material surfaces over hundreds of cm².

  20. The balanced survivor average causal effect.

    Science.gov (United States)

    Greene, Tom; Joffe, Marshall; Hu, Bo; Li, Liang; Boucher, Ken

    2013-05-07

    Statistical analysis of longitudinal outcomes is often complicated by the absence of observable values in patients who die prior to their scheduled measurement. In such cases, the longitudinal data are said to be "truncated by death" to emphasize that the longitudinal measurements are not simply missing, but are undefined after death. Recently, the truncation by death problem has been investigated using the framework of principal stratification to define the target estimand as the survivor average causal effect (SACE), which in the context of a two-group randomized clinical trial is the mean difference in the longitudinal outcome between the treatment and control groups for the principal stratum of always-survivors. The SACE is not identified without untestable assumptions. These assumptions have often been formulated in terms of a monotonicity constraint requiring that the treatment does not reduce survival in any patient, in conjunction with assumed values for mean differences in the longitudinal outcome between certain principal strata. In this paper, we introduce an alternative estimand, the balanced-SACE, which is defined as the average causal effect on the longitudinal outcome in a particular subset of the always-survivors that is balanced with respect to the potential survival times under the treatment and control. We propose a simple estimator of the balanced-SACE that compares the longitudinal outcomes between equivalent fractions of the longest surviving patients between the treatment and control groups and does not require a monotonicity assumption. We provide expressions for the large sample bias of the estimator, along with sensitivity analyses and strategies to minimize this bias. We consider statistical inference under a bootstrap resampling procedure.
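
    A minimal sketch of the comparison the estimator performs (variable names and the fraction q are illustrative; the paper's bias corrections and bootstrap inference are omitted):

```python
import numpy as np

# Balanced-SACE-style point estimate: compare mean longitudinal outcomes
# between equal fractions q of the longest survivors in each arm.
def balanced_sace(outcome_t, surv_t, outcome_c, surv_c, q=0.5):
    k_t = int(np.ceil(q * len(surv_t)))
    k_c = int(np.ceil(q * len(surv_c)))
    longest_t = np.argsort(surv_t)[-k_t:]   # longest-surviving treated patients
    longest_c = np.argsort(surv_c)[-k_c:]   # longest-surviving controls
    return outcome_t[longest_t].mean() - outcome_c[longest_c].mean()
```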

  1. Averaging of nonlinearity-managed pulses

    International Nuclear Information System (INIS)

    Zharnitsky, Vadim; Pelinovsky, Dmitry

    2005-01-01

    We consider the nonlinear Schroedinger equation with the nonlinearity management which describes Bose-Einstein condensates under Feshbach resonance. By using an averaging theory, we derive the Hamiltonian averaged equation and compare it with other averaging methods developed for this problem. The averaged equation is used for analytical approximations of nonlinearity-managed solitons

  2. Task Analysis and Descriptions of Required Job Competencies for Robotics/Automated Systems Technicians. Final Report. Volume 2. Curriculum Planning Guide.

    Science.gov (United States)

    Hull, Daniel M.; Lovett, James E.

    This volume of the final report for the Robotics/Automated Systems Technician (RAST) curriculum project is a curriculum planning guide intended for school administrators, faculty, and student counselors/advisors. It includes step-by-step procedures to help institutions evaluate their community's needs and their capabilities to meet these needs in…

  3. The average size of ordered binary subgraphs

    NARCIS (Netherlands)

    van Leeuwen, J.; Hartel, Pieter H.

    To analyse the demands made on the garbage collector in a graph reduction system, the change in size of an average graph is studied when an arbitrary edge is removed. In ordered binary trees the average number of deleted nodes as a result of cutting a single edge is equal to the average size of a

  4. Efficient processing of CFRP with a picosecond laser with up to 1.4 kW average power

    Science.gov (United States)

    Onuseit, V.; Freitag, C.; Wiedenmann, M.; Weber, R.; Negel, J.-P.; Löscher, A.; Abdou Ahmed, M.; Graf, T.

    2015-03-01

    Laser processing of carbon fiber reinforced plastic (CFRP) is a very promising method to solve many of the challenges for large-volume production of lightweight constructions in the automotive and airplane industries. However, the laser process is currently limited by two main issues. First, quality may be reduced by thermal damage; second, the high process energy needed for sublimation of the carbon fibers requires laser sources with high average power for productive processing. To keep thermal damage of the CFRP below 10 μm, intensities above 10⁸ W/cm² are needed. To reach these high intensities in the processing area, ultra-short pulse laser systems are favored. Unfortunately, the average power of commercially available laser systems is up to now in the range of several tens to a few hundred watts. To sublimate the carbon fibers, a large volume-specific enthalpy of 85 J/mm³ is necessary. This means, for example, that cutting 2 mm thick material with a kerf width of 0.2 mm at an industry-typical 100 mm/s requires several kilowatts of average power. At the IFSW, a thin-disk multipass amplifier yielding a maximum average output power of 1100 W (300 kHz, 8 ps, 3.7 mJ) allowed for the first time processing of CFRP at this average power and pulse energy level with picosecond pulse duration. With this unique laser system, cutting of CFRP with a thickness of 2 mm at an effective average cutting speed of 150 mm/s with thermal damage below 10 μm was demonstrated.
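
    The several-kilowatt figure follows directly from the numbers quoted above; a back-of-envelope check:

```python
# Cutting 2 mm CFRP with a 0.2 mm kerf at 100 mm/s removes 40 mm^3/s of
# material; at 85 J/mm^3 volume-specific enthalpy that demands ~3.4 kW average.
thickness_mm, kerf_mm, speed_mm_s = 2.0, 0.2, 100.0
enthalpy_J_per_mm3 = 85.0
removal_rate_mm3_s = thickness_mm * kerf_mm * speed_mm_s   # 40 mm^3/s
average_power_W = removal_rate_mm3_s * enthalpy_J_per_mm3  # 3400 W
```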

  5. Industrial Applications of High Average Power FELS

    CERN Document Server

    Shinn, Michelle D

    2005-01-01

    The use of lasers for material processing continues to expand, and the annual sales of such lasers exceed $1B (US). Large scale (many m²) processing of materials requires the economical production of laser powers in the tens of kilowatts, and therefore these are not yet commercial processes, although they have been demonstrated. The development of FELs based on superconducting RF (SRF) linac technology provides a scaleable path to laser outputs above 50 kW in the IR, rendering these applications economically viable, since the cost/photon drops as the output power increases. This approach also enables high average power (~1 kW) output in the UV spectrum. Such FELs will provide quasi-cw (PRFs in the tens of MHz), ultrafast (pulse width ~1 ps) output with very high beam quality. This talk will provide an overview of applications tests by our facility's users, such as pulsed laser deposition, laser ablation, and laser surface modification, as well as present plans that will be tested with our upgraded FELs. These upg...

  6. Calculating Free Energies Using Average Force

    Science.gov (United States)

    Darve, Eric; Pohorille, Andrew; DeVincenzi, Donald L. (Technical Monitor)

    2001-01-01

    A new, general formula that connects the derivatives of the free energy along the selected, generalized coordinates of the system with the instantaneous force acting on these coordinates is derived. The instantaneous force is defined as the force acting on the coordinate of interest so that when it is subtracted from the equations of motion the acceleration along this coordinate is zero. The formula applies to simulations in which the selected coordinates are either unconstrained or constrained to fixed values. It is shown that in the latter case the formula reduces to the expression previously derived by den Otter and Briels. If simulations are carried out without constraining the coordinates of interest, the formula leads to a new method for calculating the free energy changes along these coordinates. This method is tested in two examples - rotation around the C-C bond of 1,2-dichloroethane immersed in water and transfer of fluoromethane across the water-hexane interface. The calculated free energies are compared with those obtained by two commonly used methods. One of them relies on determining the probability density function of finding the system at different values of the selected coordinate and the other requires calculating the average force at discrete locations along this coordinate in a series of constrained simulations. The free energies calculated by these three methods are in excellent agreement. The relative advantages of each method are discussed.
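
    The unconstrained variant amounts to integrating the negative mean force along the selected coordinate; a numerical sketch with a stand-in force profile (the simulations themselves and the instantaneous-force definition are beyond a few lines):

```python
import numpy as np

# Free energy from average force: dA/dxi = -<F_xi>, so A(xi) is the
# cumulative integral of the negative mean force along the coordinate.
xi = np.linspace(0.0, 1.0, 21)            # generalized coordinate grid
mean_force = np.sin(2.0 * np.pi * xi)     # stand-in for simulation averages
dA_dxi = -mean_force
free_energy = np.concatenate(
    ([0.0],
     np.cumsum(0.5 * (dA_dxi[1:] + dA_dxi[:-1]) * np.diff(xi))))  # trapezoid rule
```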

  7. Self-similarity of higher-order moving averages

    Science.gov (United States)

    Arianos, Sergio; Carbone, Anna; Türk, Christian

    2011-10-01

    In this work, higher-order moving average polynomials are defined by straightforward generalization of the standard moving average. The self-similarity of the polynomials is analyzed for fractional Brownian series and quantified in terms of the Hurst exponent H by using the detrending moving average method. We prove that the exponent H of the fractional Brownian series and of the detrending moving average variance asymptotically agree for the first-order polynomial. Such asymptotic values are compared with the results obtained by the simulations. The higher-order polynomials correspond to trend estimates at shorter time scales as the degree of the polynomial increases. Importantly, increasing the polynomial degree does not require changing the moving average window. Thus trends at different time scales can be obtained on data sets of the same size. These polynomials could be interesting for those applications relying on trend estimates over different time horizons (financial markets) or on filtering at different frequencies (image analysis).
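
    A sketch of the first-order detrending moving average estimate of H (the paper's higher-order polynomial generalization is omitted; the exponent is read off a log-log fit of the detrended variance):

```python
import numpy as np

# First-order DMA: sigma^2(n) = mean[(x - moving_average_n(x))^2] ~ n^(2H),
# so the slope of log(sigma) versus log(n) estimates the Hurst exponent H.
def dma_hurst(x, windows):
    ln_n, ln_sigma = [], []
    for n in windows:
        ma = np.convolve(x, np.ones(n) / n, mode="valid")  # simple moving average
        resid = x[n - 1:] - ma                             # detrended series
        ln_n.append(np.log(n))
        ln_sigma.append(0.5 * np.log(np.mean(resid ** 2)))
    return np.polyfit(ln_n, ln_sigma, 1)[0]

walk = np.cumsum(np.random.default_rng(0).standard_normal(10_000))
print(dma_hurst(walk, windows=[8, 16, 32, 64, 128]))       # ~0.5 for Brownian
```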

  8. Averaging Bias Correction for Future IPDA Lidar Mission MERLIN

    Directory of Open Access Journals (Sweden)

    Tellier Yoann

    2018-01-01

    Full Text Available The CNES/DLR MERLIN satellite mission aims at measuring methane dry-air mixing ratio column (XCH4) and thus improving surface flux estimates. In order to get a 1% precision on XCH4 measurements, MERLIN signal processing assumes an averaging of data over 50 km. The induced biases due to the non-linear IPDA lidar equation are not compliant with accuracy requirements. This paper analyzes averaging biases issues and suggests correction algorithms tested on realistic simulated scenes.

  9. Averaging Bias Correction for Future IPDA Lidar Mission MERLIN

    Science.gov (United States)

    Tellier, Yoann; Pierangelo, Clémence; Wirth, Martin; Gibert, Fabien

    2018-04-01

    The CNES/DLR MERLIN satellite mission aims at measuring methane dry-air mixing ratio column (XCH4) and thus improving surface flux estimates. In order to get a 1% precision on XCH4 measurements, MERLIN signal processing assumes an averaging of data over 50 km. The induced biases due to the non-linear IPDA lidar equation are not compliant with accuracy requirements. This paper analyzes averaging biases issues and suggests correction algorithms tested on realistic simulated scenes.
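
    The records do not spell out the bias mechanism, but it is the familiar non-linear-averaging effect: the IPDA retrieval involves a logarithm of the measured signals, and averaging before the logarithm differs from averaging after it. A toy illustration (not the MERLIN processing chain, just the underlying effect):

```python
import numpy as np

# Jensen-type averaging bias with a concave retrieval function (here ln):
# ln(mean(P)) and mean(ln(P)) disagree for noisy signals P, so averaging
# shots before the non-linear retrieval step biases the result.
rng = np.random.default_rng(1)
P = rng.lognormal(mean=0.0, sigma=0.2, size=500)  # noisy positive signals (toy)
print(np.log(P.mean()))                           # average first, retrieve once
print(np.log(P).mean())                           # retrieve per shot, then average
```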

  10. Optimal bounds and extremal trajectories for time averages in dynamical systems

    Science.gov (United States)

    Tobasco, Ian; Goluskin, David; Doering, Charles

    2017-11-01

    For systems governed by differential equations it is natural to seek extremal solution trajectories, maximizing or minimizing the long-time average of a given quantity of interest. A priori bounds on optima can be proved by constructing auxiliary functions satisfying certain point-wise inequalities, the verification of which does not require solving the underlying equations. We prove that for any bounded autonomous ODE, the problems of finding extremal trajectories on the one hand and optimal auxiliary functions on the other are strongly dual in the sense of convex duality. As a result, auxiliary functions provide arbitrarily sharp bounds on optimal time averages. Furthermore, nearly optimal auxiliary functions provide volumes in phase space where maximal and nearly maximal trajectories must lie. For polynomial systems, such functions can be constructed by semidefinite programming. We illustrate these ideas using the Lorenz system, producing explicit volumes in phase space where extremal trajectories are guaranteed to reside. Supported by NSF Award DMS-1515161, Van Loo Postdoctoral Fellowships, and the John Simon Guggenheim Foundation.

  11. Orbital transfer vehicle concept definition and system analysis study. Volume 2: OTV concept definition and evaluation. Book 1: Mission and system requirements

    Science.gov (United States)

    Kofal, Allen E.

    1987-01-01

    The mission and system requirements for the concept definition and system analysis of the Orbital Transfer Vehicle (OTV) are established. The requirements set forth constitute the single authority for the selection, evaluation, and optimization of the technical performance and design of the OTV. This requirements document forms the basis for the Ground and Space Based OTV concept definition analyses and establishes the physical, functional, performance and design relationships to STS, Space Station, Orbital Maneuvering Vehicle (OMV), and payloads.

  12. Analysis and comparison of safety models using average daily, average hourly, and microscopic traffic.

    Science.gov (United States)

    Wang, Ling; Abdel-Aty, Mohamed; Wang, Xuesong; Yu, Rongjie

    2018-02-01

    There have been plenty of traffic safety studies based on average daily traffic (ADT), average hourly traffic (AHT), or microscopic traffic at 5 min intervals. Nevertheless, not enough research has compared the performance of these three types of safety studies, and few previous studies have examined whether the results of one type of study are transferable to the other two. First, this study built three models: a Bayesian Poisson-lognormal model to estimate the daily crash frequency using ADT, a Bayesian Poisson-lognormal model to estimate the hourly crash frequency using AHT, and a Bayesian logistic regression model for the real-time safety analysis using microscopic traffic. The model results showed that the crash contributing factors found by the different models were comparable but not the same. Four variables, i.e., the logarithm of volume, the standard deviation of speed, the logarithm of segment length, and the existence of a diverge segment, were positively significant in all three models. Additionally, weaving segments experienced higher daily and hourly crash frequencies than merge and basic segments. Then, each of the ADT-based, AHT-based, and real-time models was used to estimate safety conditions at different levels: daily and hourly; meanwhile, the real-time model was also used at 5 min intervals. The results uncovered that the ADT- and AHT-based safety models performed similarly in predicting daily and hourly crash frequencies, and the real-time safety model was able to provide hourly crash frequency. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Time average vibration fringe analysis using Hilbert transformation

    International Nuclear Information System (INIS)

    Kumar, Upputuri Paul; Mohan, Nandigana Krishna; Kothiyal, Mahendra Prasad

    2010-01-01

    Quantitative phase information from a single interferogram can be obtained using the Hilbert transform (HT). We have applied the HT method for quantitative evaluation of Bessel fringes obtained in time average TV holography. The method requires only one fringe pattern for the extraction of vibration amplitude, and it reduces the complexity of quantifying the data compared with the time average reference bias modulation method, which uses multiple fringe frames. The technique is demonstrated for the measurement of out-of-plane vibration amplitude on a small scale specimen using a time average microscopic TV holography system.
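
    A minimal 1-D sketch of the idea (the paper's Bessel-fringe specifics are omitted; SciPy's analytic-signal routine performs the Hilbert transform step, and the fringe pattern below is synthetic):

```python
import numpy as np
from scipy.signal import hilbert

# Single-interferogram phase extraction sketch: form the analytic signal
# fringe + i*HT(fringe), then read off the envelope and unwrapped phase.
x = np.linspace(0.0, 1.0, 2048)
fringe = np.cos(40.0 * np.pi * x ** 2)   # synthetic 1-D fringe pattern
analytic = hilbert(fringe)
envelope = np.abs(analytic)              # fringe amplitude
phase = np.unwrap(np.angle(analytic))    # quantitative phase
```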

  14. State policies and requirements for management of uranium mining and milling in New Mexico. Volume V. State policy needs for community impact assistance

    International Nuclear Information System (INIS)

    Vandevender, S.G.

    1980-04-01

    The report contained in this volume describes a program for management of the community impacts resulting from the growth of uranium mining and milling in New Mexico. The report, submitted to Sandia Laboratories by the New Mexico Department of Energy and Minerals, is reproduced without modification. The state recommends that federal funding and assistance be provided to implement a growth management program comprised of these seven components: (1) an early warning system, (2) a community planning and technical assistance capability, (3) flexible financing, (4) a growth monitoring system, (5) manpower training, (6) economic diversification planning, and (7) new technology testing

  15. Defense Manpower Commission Staff Studies and Supporting Papers. Volume 2. The Total Force and Its Manpower Requirements Including Overviews of Each Service

    Science.gov (United States)

    1976-05-01

    ...also develop a means to inspect tube internals to insure they are clean. 11. Develop a deep tank, high volume, high head hydraulic driven pump to... and procedures that have been implemented at this Command and have enhanced productivity are: Power floor cleaners, Pneumatic/hydraulic ram tire...

  16. Averaging for solitons with nonlinearity management

    International Nuclear Information System (INIS)

    Pelinovsky, D.E.; Kevrekidis, P.G.; Frantzeskakis, D.J.

    2003-01-01

    We develop an averaging method for solitons of the nonlinear Schroedinger equation with a periodically varying nonlinearity coefficient, which is used to effectively describe solitons in Bose-Einstein condensates, in the context of the recently proposed technique of Feshbach resonance management. Using the derived local averaged equation, we study matter-wave bright and dark solitons and demonstrate a very good agreement between solutions of the averaged and full equations

  17. DSCOVR Magnetometer Level 2 One Minute Averages

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Interplanetary magnetic field observations collected from magnetometer on DSCOVR satellite - 1-minute average of Level 1 data

  18. DSCOVR Magnetometer Level 2 One Second Averages

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Interplanetary magnetic field observations collected from magnetometer on DSCOVR satellite - 1-second average of Level 1 data

  19. Spacetime averaging of exotic singularity universes

    International Nuclear Information System (INIS)

    Dabrowski, Mariusz P.

    2011-01-01

    Taking a spacetime average as a measure of the strength of singularities we show that big-rips (type I) are stronger than big-bangs. The former have infinite spacetime averages while the latter have them equal to zero. The sudden future singularities (type II) and w-singularities (type V) have finite spacetime averages. The finite scale factor (type III) singularities for some values of the parameters may have an infinite average and in that sense they may be considered stronger than big-bangs.

  20. NOAA Average Annual Salinity (3-Zone)

    Data.gov (United States)

    California Natural Resource Agency — The 3-Zone Average Annual Salinity Digital Geography is a digital spatial framework developed using geographic information system (GIS) technology. These salinity...

  1. A Safety and Health Guide for Vocational Educators. Incorporating Requirements of the Occupational Safety and Health Act of 1970, Relevant Pennsylvania Requirements with Particular Emphasis for Those Concerned with Cooperative Education and Work Study Programs. Volume 15. Number 1.

    Science.gov (United States)

    Wahl, Ray

    Intended as a guide for vocational educators to incorporate the requirements of the Occupational Safety and Health Act (1970) and the requirements of various Pennsylvania safety and health regulations with their cooperative vocational programs, the first chapter of this document presents the legal implications of these safety and health…

  2. High-Average, High-Peak Current Injector Design

    CERN Document Server

    Biedron, S G; Virgo, M

    2005-01-01

    There is increasing interest in high-average-power (>100 kW), μm-range FELs. These machines require high peak current (~1 kA), modest transverse emittance, and beam energies of ~100 MeV. High average currents (~1 A) place additional constraints on the design of the injector. We present a design for an injector intended to produce the required peak currents at the injector, eliminating the need for magnetic compression within the linac. This reduces the potential for beam quality degradation due to CSR and space charge effects within magnetic chicanes.

  3. Status and habitat requirements of the white sturgeon populations in the Columbia River downstream from McNary Dam. Volume 2: Supplemental papers and data documentation; Final report

    International Nuclear Information System (INIS)

    Beamesderfer, R.C.; Nigro, A.A.

    1995-01-01

    This is the final report for research on white sturgeon Acipenser transmontanus from 1986--92 and conducted by the National Marine Fisheries Service (NMFS), Oregon Department of Fish and Wildlife (ODFW), US Fish and Wildlife Service (USFWS), and Washington Department of Fisheries (WDF). Findings are presented as a series of papers, each detailing objectives, methods, results, and conclusions for a portion of this research. This volume includes supplemental papers which provide background information needed to support results of the primary investigations addressed in Volume 1. This study addresses measure 903(e)(1) of the Northwest Power Planning Council's 1987 Fish and Wildlife Program that calls for ''research to determine the impact of development and operation of the hydropower system on sturgeon in the Columbia River Basin.'' Study objectives correspond to those of the ''White Sturgeon Research Program Implementation Plan'' developed by BPA and approved by the Northwest Power Planning Council in 1985. Work was conducted on the Columbia River from McNary Dam to the estuary

  4. Improving consensus structure by eliminating averaging artifacts

    Directory of Open Access Journals (Sweden)

    KC Dukka B

    2009-03-01

    Full Text Available Abstract Background Common structural biology methods (i.e., NMR and molecular dynamics) often produce ensembles of molecular structures. Consequently, averaging of 3D coordinates of molecular structures (proteins and RNA) is a frequent approach to obtain a consensus structure that is representative of the ensemble. However, when the structures are averaged, artifacts can result in unrealistic local geometries, including unphysical bond lengths and angles. Results Herein, we describe a method to derive representative structures while limiting the number of artifacts. Our approach is based on a Monte Carlo simulation technique that drives a starting structure (an extended or a 'close-by' structure) towards the 'averaged structure' using a harmonic pseudo energy function. To assess the performance of the algorithm, we applied our approach to Cα models of 1364 proteins generated by the TASSER structure prediction algorithm. The average RMSD of the refined model from the native structure for the set becomes worse by a mere 0.08 Å compared to the average RMSD of the averaged structures from the native structure (3.28 Å for refined structures and 3.36 Å for the averaged structures). However, the percentage of atoms involved in clashes is greatly reduced (from 63% to 1%); in fact, the majority of the refined proteins had zero clashes. Moreover, a small number (38) of refined structures resulted in lower RMSD to the native protein versus the averaged structure. Finally, compared to PULCHRA [1], our approach produces representative structures of similar RMSD quality, but with much fewer clashes. Conclusion The benchmarking results demonstrate that our approach for removing averaging artifacts can be very beneficial for the structural biology community. Furthermore, the same approach can be applied to almost any problem where averaging of 3D coordinates is performed. Namely, structure averaging is also commonly performed in RNA secondary prediction [2], which

  5. 40 CFR 76.11 - Emissions averaging.

    Science.gov (United States)

    2010-07-01

    40 CFR 76.11 (2010), Acid Rain Nitrogen Oxides Emission Reduction Program, § 76.11 Emissions averaging. (a) General...

  6. Determinants of College Grade Point Averages

    Science.gov (United States)

    Bailey, Paul Dean

    2012-01-01

    Chapter 2: The Role of Class Difficulty in College Grade Point Averages. Grade Point Averages (GPAs) are widely used as a measure of college students' ability. Low GPAs can remove a student from eligibility for scholarships, and even from continued enrollment at a university. However, GPAs are determined not only by student ability but also by the…

  7. Notes on Well-Posed, Ensemble Averaged Conservation Equations for Multiphase, Multi-Component, and Multi-Material Flows

    International Nuclear Information System (INIS)

    Ray A. Berry

    2005-01-01

    At the INL, researchers and engineers routinely encounter multiphase, multi-component, and/or multi-material flows. Some examples include: reactor coolant flows; molten corium flows; dynamic compaction of metal powders; spray forming and thermal plasma spraying; plasma quench reactors; subsurface flows, particularly in the vadose zone; internal flows within fuel cells; black liquor atomization and combustion; wheat-chaff classification in combine harvesters; and the Generation IV pebble bed, high temperature gas reactor. The complexity of these flows dictates that they be examined in an averaged sense. Typically one would begin with known (or at least postulated) microscopic flow relations that hold on the "small" scale. These include continuum level conservation of mass, balance of species mass and momentum, conservation of energy, and a statement of the second law of thermodynamics often in the form of an entropy inequality (such as the Clausius-Duhem inequality). The averaged or macroscopic conservation equations and entropy inequalities are then obtained from the microscopic equations through suitable averaging procedures. At this stage a stronger form of the second law may also be postulated for the mixture of phases or materials. To render the evolutionary material flow balance system unique, constitutive equations and phase or material interaction relations are introduced from experimental observation, or by postulation, through strict enforcement of the constraints or restrictions resulting from the averaged entropy inequalities. These averaged equations form the governing equation system for the dynamic evolution of these mixture flows. Most commonly, the averaging technique utilized is either volume or time averaging or a combination of the two. The flow restrictions required for volume and time averaging to be valid can be severe, and violations of these restrictions are often found. A more general, less restrictive (and far less commonly used) type of averaging known as

  8. Report of the US Nuclear Regulatory Commission Piping Review Committee. Volume 2. Evaluation of seismic designs: a review of seismic design requirements for Nuclear Power Plant Piping

    Energy Technology Data Exchange (ETDEWEB)

    1985-04-01

    This document reports the position and recommendations of the NRC Piping Review Committee, Task Group on Seismic Design. The Task Group considered overlapping conservatisms in the various steps of seismic design, the effects of using two levels of earthquake as a design criterion, and current industry practices. Issues such as damping values, spectra modification, multiple response spectra methods, nozzle and support design, design margins, inelastic piping response, and the use of snubbers are addressed. Effects of current regulatory requirements for piping design are evaluated, and recommendations for immediate licensing action, changes in existing requirements, and research programs are presented. Additional background information and suggestions given by consultants are also presented.

  9. Small Bandwidth Asymptotics for Density-Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    This paper proposes (apparently) novel standard error formulas for the density-weighted average derivative estimator of Powell, Stock, and Stoker (1989). Asymptotic validity of the standard errors developed in this paper does not require the use of higher-order kernels and the standard errors...

  10. Computation of the average energy for LXY electrons

    International Nuclear Information System (INIS)

    Grau Carles, A.; Grau, A.

    1996-01-01

    The application of an atomic rearrangement model, in which we only consider the three shells K, L and M, to compute the counting efficiency for electron capture nuclides requires a fine averaged energy value for LMN electrons. In this report, we illustrate the procedure with two examples, ¹²⁵I and ¹⁰⁹Cd. (Author) 4 refs.

  11. The Effect of Honors Courses on Grade Point Averages

    Science.gov (United States)

    Spisak, Art L.; Squires, Suzanne Carter

    2016-01-01

    High-ability entering college students give three main reasons for not choosing to become part of honors programs and colleges; they and/or their parents believe that honors classes at the university level require more work than non-honors courses, are more stressful, and will adversely affect their self-image and grade point average (GPA) (Hill;…

  12. Graphics Standards in the Computer-aided Acquisition and Logistic Support (CALS) Program, Fiscal Year 1989. Volume 1. Test Requirements Document and Extended CGM (CGEM)

    Science.gov (United States)

    1990-07-01

    at the first meeting in Munich, it was found that there exists some overlap between the stated goals of the ... Standards and the CG... ... stressing importance of liaison and harmonization with closely related groups; Japan - making suggestions that the NWI and Requirements Document be improved

  13. Sea Surface Temperature Average_SST_Master

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Sea surface temperature collected via satellite imagery from http://www.esrl.noaa.gov/psd/data/gridded/data.noaa.ersst.html and averaged for each region using ArcGIS...

  14. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-01-01

    to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic

  15. Should the average tax rate be marginalized?

    Czech Academy of Sciences Publication Activity Database

    Feldman, N. E.; Katuščák, Peter

    -, č. 304 (2006), s. 1-65 ISSN 1211-3298 Institutional research plan: CEZ:MSM0021620846 Keywords : tax * labor supply * average tax Subject RIV: AH - Economics http://www.cerge-ei.cz/pdf/wp/Wp304.pdf

  16. A practical guide to averaging functions

    CERN Document Server

    Beliakov, Gleb; Calvo Sánchez, Tomasa

    2016-01-01

    This book offers an easy-to-use and practice-oriented reference guide to mathematical averages. It presents different ways of aggregating input values given on a numerical scale, and of choosing and/or constructing aggregating functions for specific applications. Building on a previous monograph by Beliakov et al. published by Springer in 2007, it outlines new aggregation methods developed in the interim, with a special focus on the topic of averaging aggregation functions. It examines recent advances in the field, such as aggregation on lattices, penalty-based aggregation and weakly monotone averaging, and extends many of the already existing methods, such as: ordered weighted averaging (OWA), fuzzy integrals and mixture functions. A substantial mathematical background is not called for, as all the relevant mathematical notions are explained here and reported on together with a wealth of graphical illustrations of distinct families of aggregation functions. The authors mainly focus on practical applications ...

  17. MN Temperature Average (1961-1990) - Line

    Data.gov (United States)

    Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...

  18. MN Temperature Average (1961-1990) - Polygon

    Data.gov (United States)

    Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...

  19. Average Bandwidth Allocation Model of WFQ

    Directory of Open Access Journals (Sweden)

    Tomáš Balogh

    2012-01-01

    Full Text Available We present a new iterative method for the calculation of average bandwidth assignment to traffic flows using a WFQ scheduler in IP based NGN networks. The bandwidth assignment calculation is based on the link speed, assigned weights, arrival rate, and average packet length or input rate of the traffic flows. We validate the model with examples and simulation results obtained using the NS2 simulator.
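
    The record does not give the iteration explicitly; a standard weighted max-min share computation with the same flavor (flows capped at their input rate release bandwidth that is re-split by weight) looks like the following sketch, with illustrative numbers:

```python
# Illustrative weighted fair-share iteration for one link (not the paper's
# exact model): flows whose input rate is below their weighted share are
# capped at that rate, and the leftover bandwidth is re-divided by weight.
def average_bandwidth(link_rate, weights, input_rates):
    alloc = [0.0] * len(weights)
    active = set(range(len(weights)))
    remaining = link_rate
    while active:
        total_w = sum(weights[i] for i in active)
        share = {i: remaining * weights[i] / total_w for i in active}
        capped = {i for i in active if input_rates[i] <= share[i]}
        if not capped:                      # remaining flows are link-limited
            for i in active:
                alloc[i] = share[i]
            break
        for i in capped:
            alloc[i] = input_rates[i]
            remaining -= input_rates[i]
        active -= capped
    return alloc

print(average_bandwidth(10.0, [1, 2, 2], [1.0, 8.0, 8.0]))  # [1.0, 4.5, 4.5]
```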

  20. Nonequilibrium statistical averages and thermo field dynamics

    International Nuclear Information System (INIS)

    Marinaro, A.; Scarpetta, Q.

    1984-01-01

    An extension of thermo field dynamics is proposed, which permits the computation of nonequilibrium statistical averages. The Brownian motion of a quantum oscillator is treated as an example. In conclusion it is pointed out that the procedure proposed for the computation of time-dependent statistical averages gives the correct two-point Green function for the damped oscillator. A simple extension can be used to compute two-point Green functions of free particles.

  1. An approximate analytical approach to resampling averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, M.

    2004-01-01

    Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach for approximate Bayesian inference. We demonstrate our approach on regression with Gaussian processes. A comparison with averages obtained by Monte-Carlo sampling shows that our method achieves good accuracy.

  2. 40 CFR 80.825 - How is the refinery or importer annual average toxics value determined?

    Science.gov (United States)

    2010-07-01

    The annual average toxics value Ta is the volume-weighted average of the batch values, Ta = Σ(Vi × Ti) / Σ(Vi), where: Vi = the volume of applicable gasoline produced or imported in batch i; Ti = the toxics value of batch i; n = the number of batches of gasoline produced or imported during the averaging period; i = individual batch of gasoline produced or imported during the averaging period. (b) The calculation specified in paragraph (a...

  3. Life sciences payload definition and integration study. Volume 2: Requirements, design, and planning studies for the carry-on laboratories. [for Spacelab

    Science.gov (United States)

    1974-01-01

    The task phase concerned with the requirements, design, and planning studies for the carry-on laboratory (COL) began with a definition of biomedical research areas and candidate research equipment, and then went on to develop conceptual layouts for COL which were each evaluated in order to arrive at a final conceptual design. Each step in this design/evaluation process concerned itself with man/systems integration research and hardware, and life support and protective systems research and equipment selection. COL integration studies were also conducted and include attention to electrical power and data management requirements, operational considerations, and shuttle/Spacelab interface specifications. A COL program schedule was compiled, and a cost analysis was finalized which takes into account work breakdown, annual funding, and cost reduction guidelines.

  4. State policies and requirements for management of uranium mining and milling in New Mexico. Volume II. Water availability in the San Juan Structural Basin

    International Nuclear Information System (INIS)

    Vandevender, S.G.

    1980-04-01

    This volume contains two parts: Part One is an analysis of an issue paper prepared by the office of the New Mexico State Engineer on water availability for uranium production. Part Two is the issue paper itself. The State Engineer's report raises the issue of a scarce water supply in the San Juan Structural Basin acting as a constraint on the growth of the uranium mining and milling industry in New Mexico. The water issue in the structural basin is becoming acute because of the uranium industry's importance to and rapid growth within the structural basin. Its growth places heavy demands on the region's scarce water supply. The impact of mine dewatering on water supply is of particular concern. Much of the groundwater has been appropriated or applied for. The State Engineer is currently basing water rights decisions upon data which he believes to be inadequate to determine water quality and availability in the basin. He, along with the USGS and the State Bureau of Mines and Mineral Resources, recommends a well drilling program to acquire additional information about the groundwater characteristics of the basin. The information would be used to provide input data for a computer model, which is used as one of the bases for decisions concerning water rights and water use in the basin. The recommendation is that the appropriate DOE office enter into discussions with the New Mexico State Engineer to explore the potential mutual benefits of a well drilling program to determine the water availability in the San Juan Structural Basin

  5. Averaging processes in granular flows driven by gravity

    Science.gov (United States)

    Rossi, Giulia; Armanini, Aronne

    2016-04-01

    One of the more promising theoretical frames to analyse two-phase granular flows is offered by the similarity of their rheology with the kinetic theory of gases [1]. Granular flows can be considered a macroscopic equivalent of the molecular case: the collisions among molecules are compared to the collisions among grains at a macroscopic scale [2,3]. However there are important statistical differences in dealing with the two applications. In two-phase fluid mechanics, there are two main types of average: the phasic average and the mass-weighted average [4]. The kinetic theories assume that the size of atoms is so small that the number of molecules in a control volume is infinite. With this assumption, the concentration (number of particles n) doesn't change during the averaging process and the two definitions of average coincide. This hypothesis is no longer true in granular flows: contrary to gases, the dimension of a single particle becomes comparable to that of the control volume. For this reason, in a single realization the number of grains is constant and the two averages coincide; on the contrary, for more than one realization, n is no longer constant and the two types of average lead to different results. Therefore, the ensemble average used in the standard kinetic theory (which usually is the phasic average) is suitable for a single realization, but not for several realizations, as already pointed out in [5,6]. In the literature, three main length scales have been identified [7]: the smallest is the particle size, the intermediate consists in the local averaging (in order to describe some instability phenomena or secondary circulation), and the largest arises from phenomena such as large eddies in turbulence. Our aim is to resolve the intermediate scale by applying the mass-weighted average when dealing with more than one realization. This statistical approach leads to additional diffusive terms in the continuity equation: starting from experimental

  6. Determination of clothing microclimate volume

    NARCIS (Netherlands)

    Daanen, Hein; Hatcher, Kent; Havenith, George

    2005-01-01

    The average air layer thickness between human skin and clothing is an important factor in heat transfer. The trapped volume between skin and clothing is an estimator for average air layer thickness. Several techniques are available to determine trapped volume. This study investigates the reliability

  7. Power Extension Package (PEP) system definition extension, orbital service module systems analysis study. Volume 7: PEP logistics and training plan requirements

    Science.gov (United States)

    1979-01-01

    Recommendations for logistics activities and logistics planning are presented based on the assumption that a system prime contractor will perform logistics functions to support all program hardware and will implement a logistics system to include the planning and provision of products and services to assure cost effective coverage of the following: maintainability; maintenance; spares and supply support; fuels; pressurants and fluids; operations and maintenance documentation training; preservation, packaging and packing; transportation and handling; storage; and logistics management information reporting. The training courses, manpower, materials, and training aids required will be identified and implemented in a training program.

  8. Asynchronous Gossip for Averaging and Spectral Ranking

    Science.gov (United States)

    Borkar, Vivek S.; Makhijani, Rahul; Sundaresan, Rajesh

    2014-08-01

    We consider two variants of the classical gossip algorithm. The first variant is a version of asynchronous stochastic approximation. We highlight a fundamental difficulty associated with the classical asynchronous gossip scheme, viz., that it may not converge to a desired average, and suggest an alternative scheme based on reinforcement learning that has guaranteed convergence to the desired average. We then discuss a potential application to a wireless network setting with simultaneous link activation constraints. The second variant is a gossip algorithm for distributed computation of the Perron-Frobenius eigenvector of a nonnegative matrix. While the first variant draws upon a reinforcement learning algorithm for an average cost controlled Markov decision problem, the second variant draws upon a reinforcement learning algorithm for risk-sensitive control. We then discuss potential applications of the second variant to ranking schemes, reputation networks, and principal component analysis.
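
    For reference, the classical gossip update that both variants depart from fits in a few lines. This is a generic illustration (not the paper's reinforcement-learning scheme): symmetric pairwise averaging preserves the sum of the node states, which is exactly the invariant that one-sided asynchronous updates break, leading to the convergence difficulty highlighted above.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 10.0, size=20)   # initial node values
target = x.mean()

for _ in range(5000):
    i, j = rng.choice(len(x), size=2, replace=False)
    m = 0.5 * (x[i] + x[j])           # symmetric pairwise average
    x[i] = x[j] = m                   # the sum of all states is preserved

print(abs(x - target).max())          # -> ~0: consensus on the true average
```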

  9. Benchmarking statistical averaging of spectra with HULLAC

    Science.gov (United States)

    Klapisch, Marcel; Busquet, Michel

    2008-11-01

    Knowledge of radiative properties of hot plasmas is important for ICF, astrophysics, etc. When mid-Z or high-Z elements are present, the spectra are so complex that one commonly uses a statistically averaged description of atomic systems [1]. In a recent experiment on Fe [2], performed under controlled conditions, high resolution transmission spectra were obtained. The new version of HULLAC [3] allows the use of the same model with different levels of detail/averaging. We will take advantage of this feature to check the effect of averaging by comparison with experiment. [1] A Bar-Shalom, J Oreg, and M Klapisch, J. Quant. Spectros. Rad. Transf. 65, 43 (2000). [2] J. E. Bailey, G. A. Rochau, C. A. Iglesias et al., Phys. Rev. Lett. 99, 265002-4 (2007). [3] M. Klapisch, M. Busquet, and A. Bar-Shalom, AIP Conference Proceedings 926, 206-15 (2007).

  10. An approach to averaging digitized plantagram curves.

    Science.gov (United States)

    Hawes, M R; Heinemeyer, R; Sovak, D; Tory, B

    1994-07-01

    The averaging of outline shapes of the human foot for the purposes of determining information concerning foot shape and dimension within the context of comfort of fit of sport shoes is approached as a mathematical problem. An outline of the human footprint is obtained by standard procedures and the curvature is traced with a Hewlett Packard Digitizer. The paper describes the determination of an alignment axis, the identification of two ray centres and the division of the total curve into two overlapping arcs. Each arc is divided by equiangular rays which intersect chords between digitized points describing the arc. The radial distance of each ray is averaged within groups of foot lengths which vary by +/- 2.25 mm (approximately equal to 1/2 shoe size). The method has been used to determine average plantar curves in a study of 1197 North American males (Hawes and Sovak 1993).

  11. Books average previous decade of economic misery.

    Science.gov (United States)

    Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in the German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.
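
    The economic series is straightforward to reconstruct: the misery index is the sum of annual inflation and unemployment rates, smoothed by a trailing moving average of roughly a decade. A minimal sketch with hypothetical figures (the paper's actual U.S. data are not reproduced here):

```python
import numpy as np

# Hypothetical annual series; the paper uses U.S. data since the Depression.
inflation    = np.array([2.1, 3.5, 5.6, 11.0, 9.1, 5.8, 6.5, 7.6, 11.3, 13.5, 10.3])
unemployment = np.array([4.9, 5.9, 5.6,  4.9, 5.6, 8.5, 7.7, 7.1,  6.1,  5.8,  7.1])

misery = inflation + unemployment   # annual economic misery index

# Trailing moving average over the previous decade; the paper finds the
# best fit to the literary misery index at an 11-year window.
window = 11
moving_avg = np.convolve(misery, np.ones(window) / window, mode="valid")
print(moving_avg)
```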

  13. Exploiting scale dependence in cosmological averaging

    International Nuclear Information System (INIS)

    Mattsson, Teppo; Ronkainen, Maria

    2008-01-01

    We study the role of scale dependence in the Buchert averaging method, using the flat Lemaitre–Tolman–Bondi model as a testing ground. Within this model, a single averaging scale gives predictions that are too coarse, but by replacing it with the distance of the objects R(z) for each redshift z, we find an O(1%) precision at z<2 in the averaged luminosity and angular diameter distances compared to their exact expressions. At low redshifts, we show the improvement for generic inhomogeneity profiles, and our numerical computations further verify it up to redshifts z∼2. At higher redshifts, the method breaks down due to its inability to capture the time evolution of the inhomogeneities. We also demonstrate that the running smoothing scale R(z) can mimic acceleration, suggesting that it could be at least as important as the backreaction in explaining dark energy as an inhomogeneity-induced illusion.

  14. Stochastic Averaging and Stochastic Extremum Seeking

    CERN Document Server

    Liu, Shu-Jun

    2012-01-01

    Stochastic Averaging and Stochastic Extremum Seeking develops methods of mathematical analysis inspired by the interest in reverse engineering and analysis of bacterial convergence by chemotaxis, and applies similar stochastic optimization techniques in other environments. The first half of the text presents significant advances in stochastic averaging theory, necessitated by the fact that existing theorems are restricted to systems with linear growth, globally exponentially stable average models and vanishing stochastic perturbations, and do not allow analysis over an infinite time horizon. The second half of the text introduces stochastic extremum seeking algorithms for model-free optimization of systems in real time using stochastic perturbations for estimation of their gradients. Both gradient- and Newton-based algorithms are presented, offering the user the choice between the simplicity of implementation (gradient) and the ability to achieve a known, arbitrary convergence rate (Newton). The design of algorithms...

  15. Aperture averaging in strong oceanic turbulence

    Science.gov (United States)

    Gökçe, Muhsin Caner; Baykal, Yahya

    2018-04-01

    The receiver aperture averaging technique is employed in underwater wireless optical communication (UWOC) systems to mitigate the effects of oceanic turbulence and thus improve system performance. The irradiance flux variance is a measure of the intensity fluctuations over a lens of the receiver aperture. Using the modified Rytov theory, which uses small-scale and large-scale spatial filters, and our previously presented expression that relates the atmospheric structure constant to oceanic turbulence parameters, we evaluate the irradiance flux variance and the aperture averaging factor of a spherical wave in strong oceanic turbulence. Variations of the irradiance flux variance are examined versus the oceanic turbulence parameters and the receiver aperture diameter. The effect of the receiver aperture diameter on the aperture averaging factor is also presented for strong oceanic turbulence.

  16. Average: the juxtaposition of procedure and context

    Science.gov (United States)

    Watson, Jane; Chick, Helen; Callingham, Rosemary

    2014-09-01

    This paper presents recent data on the performance of 247 middle school students on questions concerning average in three contexts. Analysis includes considering levels of understanding linking definition and context, performance across contexts, the relative difficulty of tasks, and difference in performance for male and female students. The outcomes lead to a discussion of the expectations of the curriculum and its implementation, as well as assessment, in relation to students' skills in carrying out procedures and their understanding about the meaning of average in context.

  17. Average-case analysis of numerical problems

    CERN Document Server

    2000-01-01

    The average-case analysis of numerical problems is the counterpart of the more traditional worst-case approach. The analysis of average error and cost leads to new insight on numerical problems as well as to new algorithms. The book provides a survey of results that were mainly obtained during the last 10 years and also contains new results. The problems under consideration include approximation/optimal recovery and numerical integration of univariate and multivariate functions as well as zero-finding and global optimization. Background material, e.g. on reproducing kernel Hilbert spaces and random fields, is provided.

  18. Design study of RL10 derivatives. Volume 3, part 2: Operational and flight support plan. [analysis of transportation requirements for rocket engine in support of space tug program

    Science.gov (United States)

    Shubert, W. C.

    1973-01-01

    Transportation requirements are considered during the engine design layout reviews and maintenance engineering analyses. Where designs cannot be influenced to avoid transportation problems, the transportation representative is advised of the problems, permitting remedies early in the program. The transportation representative will monitor and be involved in the shipment of development engine and GSE hardware between FRDC and the vehicle manufacturing plant, and will thereby be provided an early evaluation of the transportation plans, methods and procedures to be used in the space tug support program. Unanticipated problems discovered in the shipment of development hardware will be known early enough to permit changes in packaging designs and transportation plans before the start of production hardware and engine shipments. All conventional transport media can be used for the movement of space tug engines. However, truck transport is recommended for its ready availability, variety of routes, short transit time, and low cost.

  19. Generalized Jackknife Estimators of Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    With the aim of improving the quality of asymptotic distributional approximations for nonlinear functionals of nonparametric estimators, this paper revisits the large-sample properties of an important member of that class, namely a kernel-based weighted average derivative estimator. Asymptotic...

  20. Average beta measurement in EXTRAP T1

    International Nuclear Information System (INIS)

    Hedin, E.R.

    1988-12-01

    Beginning with the ideal MHD pressure balance equation, an expression for the average poloidal beta, β_Θ, is derived. A method for unobtrusively measuring the quantities used to evaluate β_Θ in Extrap T1 is described. The results of a series of measurements yielding β_Θ as a function of externally applied toroidal field are presented. (author)

  1. HIGH AVERAGE POWER OPTICAL FEL AMPLIFIERS

    International Nuclear Information System (INIS)

    2005-01-01

    Historically, the first demonstration of the optical FEL was in an amplifier configuration at Stanford University [1]. There were other notable instances of amplifying a seed laser, such as the LLNL PALADIN amplifier [2] and the BNL ATF High-Gain Harmonic Generation FEL [3]. However, for the most part FELs are operated as oscillators or self-amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is very high average power generation, for instance FELs with average power of 100 kW or more. The high electron beam power, high brightness and high efficiency that can be achieved with photoinjectors and superconducting Energy Recovery Linacs (ERLs) combine well with the high-gain FEL amplifier to produce unprecedented average power FELs. This combination has a number of advantages. In particular, we show that for a given FEL power, an FEL amplifier can introduce lower energy spread in the beam as compared to a traditional oscillator. This property gives the ERL-based FEL amplifier a great wall-plug to optical power efficiency advantage. The optics for an amplifier are simple and compact. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier being designed to operate on the 0.5 ampere Energy Recovery Linac which is under construction at Brookhaven National Laboratory's Collider-Accelerator Department.

  2. Bayesian Averaging is Well-Temperated

    DEFF Research Database (Denmark)

    Hansen, Lars Kai

    2000-01-01

    Bayesian predictions are stochastic just like predictions of any other inference scheme that generalize from a finite sample. While a simple variational argument shows that Bayes averaging is generalization optimal given that the prior matches the teacher parameter distribution the situation is l...

  3. Gibbs equilibrium averages and Bogolyubov measure

    International Nuclear Information System (INIS)

    Sankovich, D.P.

    2011-01-01

    Application of the functional integration methods in equilibrium statistical mechanics of quantum Bose-systems is considered. We show that Gibbs equilibrium averages of Bose-operators can be represented as path integrals over a special Gauss measure defined in the corresponding space of continuous functions. We consider some problems related to integration with respect to this measure

  4. High average-power induction linacs

    International Nuclear Information System (INIS)

    Prono, D.S.; Barrett, D.; Bowles, E.; Caporaso, G.J.; Chen, Yu-Jiuan; Clark, J.C.; Coffield, F.; Newton, M.A.; Nexsen, W.; Ravenscroft, D.; Turner, W.C.; Watson, J.A.

    1989-01-01

    Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of ∼ 50-ns duration pulses to > 100 MeV. In this paper the authors report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs

  5. Function reconstruction from noisy local averages

    International Nuclear Information System (INIS)

    Chen Yu; Huang Jianguo; Han Weimin

    2008-01-01

    A regularization method is proposed for the function reconstruction from noisy local averages in any dimension. Error bounds for the approximate solution in the L²-norm are derived. A number of numerical examples are provided to show computational performance of the method, with the regularization parameters selected by different strategies.

  6. A singularity theorem based on spatial averages

    Indian Academy of Sciences (India)

    Journal of physics, July 2007, pp. 31–47. Only fragments of this record survive: the author presents a result which confirms, at least partially, a singularity theorem based on spatial averages; the full abstract is not recoverable. Financial support under grant FIS2004-01626 is acknowledged.

  7. Multiphase averaging of periodic soliton equations

    International Nuclear Information System (INIS)

    Forest, M.G.

    1979-01-01

    The multiphase averaging of periodic soliton equations is considered. Particular attention is given to the periodic sine-Gordon and Korteweg-deVries (KdV) equations. The periodic sine-Gordon equation and its associated inverse spectral theory are analyzed, including a discussion of the spectral representations of exact, N-phase sine-Gordon solutions. The emphasis is on physical characteristics of the periodic waves, with a motivation from the well-known whole-line solitons. A canonical Hamiltonian approach for the modulational theory of N-phase waves is prescribed. A concrete illustration of this averaging method is provided with the periodic sine-Gordon equation; explicit averaging results are given only for the N = 1 case, laying a foundation for a more thorough treatment of the general N-phase problem. For the KdV equation, very general results are given for multiphase averaging of the N-phase waves. The single-phase results of Whitham are extended to general N phases, and more importantly, an invariant representation in terms of Abelian differentials on a Riemann surface is provided. Several consequences of this invariant representation are deduced, including strong evidence for the Hamiltonian structure of N-phase modulational equations

  8. A dynamic analysis of moving average rules

    NARCIS (Netherlands)

    Chiarella, C.; He, X.Z.; Hommes, C.H.

    2006-01-01

    The use of various moving average (MA) rules remains popular with financial market practitioners. These rules have recently become the focus of a number of empirical studies, but there have been very few studies of financial market models where some agents employ technical trading rules of the type

  9. Essays on model averaging and political economics

    NARCIS (Netherlands)

    Wang, W.

    2013-01-01

    This thesis first investigates various issues related to model averaging, and then evaluates two policies, i.e. the West Development Drive in China and fiscal decentralization in the U.S., using econometric tools. Chapter 2 proposes a hierarchical weighted least squares (HWALS) method to address multiple

  10. 7 CFR 1209.12 - On average.

    Science.gov (United States)

    2010-01-01

    Title 7 (Agriculture), 2010 edition, § 1209.12, On average. Regulations of the Department of Agriculture (Continued), Agricultural Marketing Service (Marketing Agreements…), Mushroom Promotion, Research, and Consumer Information Order, Definitions.

  11. High average-power induction linacs

    International Nuclear Information System (INIS)

    Prono, D.S.; Barrett, D.; Bowles, E.

    1989-01-01

    Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of ∼ 50-ns duration pulses to > 100 MeV. In this paper we report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs

  12. Average Costs versus Net Present Value

    NARCIS (Netherlands)

    E.A. van der Laan (Erwin); R.H. Teunter (Ruud)

    2000-01-01

    While the net present value (NPV) approach is widely accepted as the right framework for studying production and inventory control systems, average cost (AC) models are more widely used. For the well known EOQ model it can be verified that (under certain conditions) the AC approach gives

  13. Average beta-beating from random errors

    CERN Document Server

    Tomas Garcia, Rogelio; Langner, Andy Sven; Malina, Lukas; Franchi, Andrea; CERN. Geneva. ATS Department

    2018-01-01

    The impact of random errors on average β-beating is studied via analytical derivations and simulations. A systematic positive β-beating is expected from random errors quadratic with the sources or, equivalently, with the rms β-beating. However, random errors do not have a systematic effect on the tune.

  14. Reliability Estimates for Undergraduate Grade Point Average

    Science.gov (United States)

    Westrick, Paul A.

    2017-01-01

    Undergraduate grade point average (GPA) is a commonly employed measure in educational research, serving as a criterion or as a predictor depending on the research question. Over the decades, researchers have used a variety of reliability coefficients to estimate the reliability of undergraduate GPA, which suggests that there has been no consensus…

  15. Survey mirrors and lenses and their required surface accuracy. Volume 1. Technical report. Final report for September 15, 1978-December 1, 1979

    Energy Technology Data Exchange (ETDEWEB)

    Beesing, M. E.; Buchholz, R. L.; Evans, R. A.; Jaminski, R. W.; Mathur, A. K.; Rausch, R. A.; Scarborough, S.; Smith, G. A.; Waldhauer, D. J.

    1980-01-01

    An investigation of the optical performance of a variety of concentrating solar collectors is reported. The study addresses two important issues: the accuracy of reflective or refractive surfaces required to achieve specified performance goals, and the effect of environmental exposure on the performance of concentrators. To assess the importance of surface accuracy for optical performance, 11 tracking and nontracking concentrator designs were selected for detailed evaluation. Mathematical models were developed for each design and incorporated into a Monte Carlo ray trace computer program to carry out detailed calculations. Results for the 11 concentrators are presented in graphic form. The models and computer program are provided along with a user's manual. A survey data base was established on the effect of environmental exposure on the optical degradation of mirrors and lenses. Information on environmental and maintenance effects was found to be insufficient to permit specific recommendations for operating and maintenance procedures, but the available information is compiled and reported and does contain procedures that other workers have found useful.

  16. Statistics on exponential averaging of periodograms

    Energy Technology Data Exchange (ETDEWEB)

    Peeters, T.T.J.M. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Ciftcioglu, Oe. [Istanbul Technical Univ. (Turkey). Dept. of Electrical Engineering

    1994-11-01

    The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ² distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.).
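
    The estimator being analysed is a running, exponentially weighted mean of segment periodograms. A minimal sketch of that recursion (the function and parameter names are mine, not from the paper):

```python
import numpy as np

def exp_avg_psd(x, nseg, alpha):
    """Exponentially averaged periodogram estimate of the PSD.

    x     : real-valued signal
    nseg  : samples per segment
    alpha : forgetting factor in (0, 1); larger alpha = longer memory
    """
    psd = None
    for start in range(0, len(x) - nseg + 1, nseg):
        seg = x[start:start + nseg]
        periodogram = np.abs(np.fft.rfft(seg))**2 / nseg
        # Fold each new periodogram into the running estimate.
        psd = periodogram if psd is None else alpha * psd + (1 - alpha) * periodogram
    return psd

rng = np.random.default_rng(2)
estimate = exp_avg_psd(rng.normal(size=65536), nseg=256, alpha=0.9)
```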

  17. Statistics on exponential averaging of periodograms

    International Nuclear Information System (INIS)

    Peeters, T.T.J.M.; Ciftcioglu, Oe.

    1994-11-01

    The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ² distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.)

  18. ANALYSIS OF THE FACTORS AFFECTING THE AVERAGE

    Directory of Open Access Journals (Sweden)

    Carmen BOGHEAN

    2013-12-01

    Productivity in agriculture most relevantly and concisely expresses the economic efficiency of using the factors of production. Labour productivity is affected by a considerable number of variables (including the relationship system and interdependence between factors), which differ in each economic sector and influence it, giving rise to a series of technical, economic and organizational idiosyncrasies. The purpose of this paper is to analyse the underlying factors of average work productivity in agriculture, forestry and fishing. The analysis will take into account data concerning the economically active population and the gross added value in agriculture, forestry and fishing in Romania during 2008-2011. The breakdown of average work productivity by the factors affecting it is conducted by means of the u-substitution method.

  19. Nuclear fuel management via fuel quality factor averaging

    International Nuclear Information System (INIS)

    Mingle, J.O.

    1978-01-01

    The numerical procedure of prime number averaging is applied to the fuel quality factor distribution of once- and twice-burned fuel in order to evolve a fuel management scheme. The resulting fuel shuffling arrangement produces a near-optimal flat power profile under both beginning-of-life and end-of-life conditions. The procedure is easily applied, requiring only the solution of linear algebraic equations. (author)

  20. Weighted estimates for the averaging integral operator

    Czech Academy of Sciences Publication Activity Database

    Opic, Bohumír; Rákosník, Jiří

    2010-01-01

    Roč. 61, č. 3 (2010), s. 253-262 ISSN 0010-0757 R&D Projects: GA ČR GA201/05/2033; GA ČR GA201/08/0383 Institutional research plan: CEZ:AV0Z10190503 Keywords : averaging integral operator * weighted Lebesgue spaces * weights Subject RIV: BA - General Mathematics Impact factor: 0.474, year: 2010 http://link.springer.com/article/10.1007%2FBF03191231

  1. Average Transverse Momentum Quantities Approaching the Lightfront

    OpenAIRE

    Boer, Daniel

    2015-01-01

    In this contribution to Light Cone 2014, three average transverse momentum quantities are discussed: the Sivers shift, the dijet imbalance, and the $p_T$ broadening. The definitions of these quantities involve integrals over all transverse momenta that are overly sensitive to the region of large transverse momenta, which conveys little information about the transverse momentum distributions of quarks and gluons inside hadrons. TMD factorization naturally suggests alternative definitions of su...

  2. Time-averaged MSD of Brownian motion

    OpenAIRE

    Andreanov, Alexei; Grebenkov, Denis

    2012-01-01

    We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we de...

  3. CARWASH WASTEWATERS: CHARACTERISTICS, VOLUMES, AND TREATABILITY BY GRAVITY OIL SEPARATION

    OpenAIRE

    C. Fall

    2007-01-01

    The aim of this research was to determine the characteristics, volumes and treatability of full-service carwash wastewaters in Toluca (Mexico State). The average water use for an exterior-only wash was 50 L per small-size car and 170 L per medium-size vehicle (pick-up, van or light truck). The full-service wash (exterior, engine and chassis) required 170 L per small-size car and 300 L per light truck. Wastewaters were generally emulsified and contained high contaminant loads (on average, 1100 mg...

  4. Average configuration of the geomagnetic tail

    International Nuclear Information System (INIS)

    Fairfield, D.H.

    1979-01-01

    Over 3000 hours of Imp 6 magnetic field data obtained between 20 and 33 R_E in the geomagnetic tail have been used in a statistical study of the tail configuration. A distribution of 2.5-min averages of B_z as a function of position across the tail reveals that more flux crosses the equatorial plane near the dawn and dusk flanks (B̄_z = 3.γ) than near midnight (B̄_z = 1.8γ). The tail field projected in the solar magnetospheric equatorial plane deviates from the x axis due to flaring and solar wind aberration by an angle α = -0.9 Y_SM - 2.7, where Y_SM is in earth radii and α is in degrees. After removing these effects, the B_y component of the tail field is found to depend on interplanetary sector structure. During an 'away' sector the B_y component of the tail field is on average 0.5γ greater than that during a 'toward' sector, a result that is true in both tail lobes and is independent of location across the tail. This effect means the average field reversal between northern and southern lobes of the tail is more often 178° rather than the 180° that is generally supposed.

  5. Unscrambling The "Average User" Of Habbo Hotel

    Directory of Open Access Journals (Sweden)

    Mikael Johnson

    2007-01-01

    The “user” is an ambiguous concept in human-computer interaction and information systems. Analyses of users as social actors, participants, or configured users delineate approaches to studying design-use relationships. Here, a developer's reference to a figure of speech, termed the “average user,” is contrasted with design guidelines. The aim is to create an understanding about categorization practices in design through a case study about the virtual community, Habbo Hotel. A qualitative analysis highlighted not only the meaning of the “average user,” but also the work that both the developer and the category contribute to this meaning. The average user (a) represents the unknown, (b) influences the boundaries of the target user groups, (c) legitimizes the designer in disregarding marginal user feedback, and (d) keeps the design space open, thus allowing for creativity. The analysis shows how design and use are intertwined and highlights the developers' role in governing different users' interests.

  6. Average properties of bidisperse bubbly flows

    Science.gov (United States)

    Serrano-García, J. C.; Mendez-Díaz, S.; Zenit, R.

    2018-03-01

    Experiments were performed in a vertical channel to study the properties of a bubbly flow composed of two distinct bubble size species. Bubbles were produced using a capillary bank with tubes of two distinct inner diameters; the flow through each capillary size was controlled such that the amount of large or small bubbles could be adjusted. Using water and water-glycerin mixtures, a wide range of Reynolds and Weber numbers was investigated. The gas volume fraction ranged between 0.5% and 6%. The measurements of the mean bubble velocity of each species and the liquid velocity variance were obtained and contrasted with monodisperse flows of equivalent gas volume fractions. We found that the bidispersity can induce a reduction of the mean bubble velocity of the large species; for the small size species, the bubble velocity can be increased, decreased, or remain unaffected depending on the flow conditions. The liquid velocity variance of the bidisperse flows is, in general, bound by the values of the small and large monodisperse values; interestingly, in some cases, the liquid velocity fluctuations can be larger than in either monodisperse case. A simple model for the liquid agitation for bidisperse flows is proposed, with good agreement with the experimental measurements.

  7. Ovarian volume throughout life

    DEFF Research Database (Denmark)

    Kelsey, Thomas W; Dodwell, Sarah K; Wilkinson, A Graham

    2013-01-01

    The measurement of ovarian volume has been shown to be a useful indirect indicator of the ovarian reserve in women of reproductive age, in the diagnosis and management of a number of disorders of puberty and adult reproductive function, and is under investigation as a screening tool for ovarian ... conception to 82 years of age. This model shows that 69% of the variation in ovarian volume is due to age alone. We have shown that in the average case ovarian volume rises from 0.7 mL (95% CI 0.4-1.1 mL) at 2 years of age to a peak of 7.7 mL (95% CI 6.5-9.2 mL) at 20 years of age, with a subsequent decline to about 2.8 mL (95% CI 2.7-2.9 mL) at the menopause and smaller volumes thereafter. Our model allows us to generate normal values and ranges for ovarian volume throughout life. This is the first validated normative model of ovarian volume from conception to old age; it will be of use in the diagnosis ...

  8. Physics-preserving averaging scheme based on Grunwald-Letnikov formula for gas flow in fractured media

    KAUST Repository

    Amir, Sahar Z.

    2018-01-02

    The heterogeneous nature of rock fabrics, due to the existence of multi-scale fractures and geological formations, leads to deviations from unity in the fractional exponents of the flux equations. In this paper, the resulting non-Newtonian non-Darcy fractional-derivative flux equations are solved using physics-preserving averaging schemes that incorporate both the original and shifted Grunwald-Letnikov (GL) approximation formulas, preserving the physics by reducing the shifting effects while maintaining the stability of the system by keeping one shifted expansion. The proposed way of using the GL expansions also generates symmetrical coefficient matrices, which significantly reduces the discretization complexities appearing in all shifted cases from the literature and helps considerably in 2D and 3D systems. System equation derivations and discretization details are discussed. Then, the physics-preserving averaging scheme is explained and illustrated. Finally, results are presented and reviewed. Edge-based original GL expansions are unstable, as also illustrated in the literature. Shifted GL expansions are stable but add many additional weights to both discretization sides, affecting the physical accuracy. In comparison, the physics-preserving averaging scheme balances the physical accuracy and stability requirements, leading to a more physically conservative scheme that is more stable than the original GL approximation but might be slightly less stable than the shifted GL approximations. It is a locally conservative single-continuum averaging scheme that applies a finite-volume viewpoint.
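
    The two Grunwald-Letnikov building blocks the scheme combines are compact enough to sketch. Below is a minimal numpy illustration of the original (shift = 0) and shifted (shift = 1) GL expansions and a plain average of the two, checked against the exact fractional derivative of x²; it illustrates the GL formulas only, not the paper's finite-volume discretization.

```python
import numpy as np
from math import gamma

def gl_weights(alpha, K):
    # Grunwald-Letnikov coefficients g_k = (-1)^k * binom(alpha, k),
    # computed by the standard recursion.
    w = np.empty(K + 1)
    w[0] = 1.0
    for k in range(1, K + 1):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    return w

def gl_derivative(f, x, alpha, h, shift=0):
    # GL approximation of the fractional derivative D^alpha f at x
    # (lower terminal 0). shift=0: original expansion; shift=1: shifted.
    K = round(x / h)
    w = gl_weights(alpha, K)
    pts = x - (np.arange(K + 1) - shift) * h
    return (w @ f(pts)) / h**alpha

# Check on f(x) = x^2, whose exact fractional derivative is
# Gamma(3) / Gamma(3 - alpha) * x^(2 - alpha).
alpha, x, h = 0.5, 1.0, 1e-3
f = lambda t: t**2
exact = gamma(3) / gamma(3 - alpha) * x**(2 - alpha)
avg = 0.5 * (gl_derivative(f, x, alpha, h, shift=0) +
             gl_derivative(f, x, alpha, h, shift=1))
print(avg, exact)  # the average of the two expansions is close to exact
```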

  9. Physics-preserving averaging scheme based on Grunwald-Letnikov formula for gas flow in fractured media

    KAUST Repository

    Amir, Sahar Z.; Sun, Shuyu

    2018-01-01

    The heterogeneous nature of rock fabrics, due to the existence of multi-scale fractures and geological formations, leads to deviations from unity in the fractional exponents of the flux equations. In this paper, the resulting non-Newtonian non-Darcy fractional-derivative flux equations are solved using physics-preserving averaging schemes that incorporate both the original and shifted Grunwald-Letnikov (GL) approximation formulas, preserving the physics by reducing the shifting effects while maintaining the stability of the system by keeping one shifted expansion. The proposed way of using the GL expansions also generates symmetrical coefficient matrices, which significantly reduces the discretization complexities appearing in all shifted cases from the literature and helps considerably in 2D and 3D systems. System equation derivations and discretization details are discussed. Then, the physics-preserving averaging scheme is explained and illustrated. Finally, results are presented and reviewed. Edge-based original GL expansions are unstable, as also illustrated in the literature. Shifted GL expansions are stable but add many additional weights to both discretization sides, affecting the physical accuracy. In comparison, the physics-preserving averaging scheme balances the physical accuracy and stability requirements, leading to a more physically conservative scheme that is more stable than the original GL approximation but might be slightly less stable than the shifted GL approximations. It is a locally conservative single-continuum averaging scheme that applies a finite-volume viewpoint.

  10. High average power diode pumped solid state lasers for CALIOPE

    International Nuclear Information System (INIS)

    Comaskey, B.; Halpin, J.; Moran, B.

    1994-07-01

    Diode pumping of solid state media offers the opportunity for very low maintenance, high efficiency, and compact laser systems. For remote sensing, such lasers may be used to pump tunable non-linear sources, or if tunable themselves, act directly or through harmonic crystals as the probe. The needs of long range remote sensing missions require laser performance in the several watts to kilowatts range. At these power performance levels, more advanced thermal management technologies are required for the diode pumps. The solid state laser design must now address a variety of issues arising from the thermal loads, including fracture limits, induced lensing and aberrations, induced birefringence, and laser cavity optical component performance degradation with average power loading. In order to highlight the design trade-offs involved in addressing the above issues, a variety of existing average power laser systems are briefly described. Included are two systems based on Spectra Diode Laboratory's water impingement cooled diode packages: a two times diffraction limited, 200 watt average power, 200 Hz multi-rod laser/amplifier by Fibertek, and TRW's 100 watt, 100 Hz, phase conjugated amplifier. The authors also present two laser systems built at Lawrence Livermore National Laboratory (LLNL) based on their more aggressive diode bar cooling package, which uses microchannel cooler technology capable of 100% duty factor operation. They then present the design of LLNL's first generation OPO pump laser for remote sensing. This system is specified to run at 100 Hz, 20 nsec pulses each with 300 mJ, less than two times diffraction limited, and with a stable single longitudinal mode. The performance of the first testbed version will be presented. The authors conclude with directions their group is pursuing to advance average power lasers. This includes average power electro-optics, low heat load lasing media, and heat capacity lasers

  11. Fluctuations of wavefunctions about their classical average

    International Nuclear Information System (INIS)

    Benet, L; Flores, J; Hernandez-Saldana, H; Izrailev, F M; Leyvraz, F; Seligman, T H

    2003-01-01

    Quantum-classical correspondence for the average shape of eigenfunctions and the local spectral density of states are well-known facts. In this paper, the fluctuations of the quantum wavefunctions around the classical value are discussed. A simple random matrix model leads to a Gaussian distribution of the amplitudes whose width is determined by the classical shape of the eigenfunction. To compare this prediction with numerical calculations in chaotic models of coupled quartic oscillators, we develop a rescaling method for the components. The expectations are broadly confirmed, but deviations due to scars are observed. This effect is much reduced when both Hamiltonians have chaotic dynamics

  12. Phase-averaged transport for quasiperiodic Hamiltonians

    CERN Document Server

    Bellissard, J; Schulz-Baldes, H

    2002-01-01

    For a class of discrete quasi-periodic Schroedinger operators defined by covariant representations of the rotation algebra, a lower bound on phase-averaged transport in terms of the multifractal dimensions of the density of states is proven. This result is established under a Diophantine condition on the incommensuration parameter. The relevant class of operators is distinguished by invariance with respect to symmetry automorphisms of the rotation algebra. It includes the critical Harper (almost-Mathieu) operator. As a by-product, a new solution of the frame problem associated with Weyl-Heisenberg-Gabor lattices of coherent states is given.

  13. Multistage parallel-serial time averaging filters

    International Nuclear Information System (INIS)

    Theodosiou, G.E.

    1980-01-01

    Here, a new time averaging circuit design, the 'parallel filter', is presented, which can reduce the time jitter introduced in time measurements using counters of large dimensions. This parallel filter can be considered as a single-stage unit circuit which can be repeated an arbitrary number of times in series, thus providing a parallel-serial filter. The main advantages of such a filter over a serial one are much lower electronic gate jitter and time delay for the same amount of total time uncertainty reduction. (orig.)

  14. Time-averaged MSD of Brownian motion

    International Nuclear Information System (INIS)

    Andreanov, Alexei; Grebenkov, Denis S

    2012-01-01

    We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we deduce the first four cumulant moments of the TAMSD, the asymptotic behavior of the probability density and its accurate approximation by a generalized Gamma distribution

  15. Time-dependent angularly averaged inverse transport

    International Nuclear Information System (INIS)

    Bal, Guillaume; Jollivet, Alexandre

    2009-01-01

    This paper concerns the reconstruction of the absorption and scattering parameters in a time-dependent linear transport equation from knowledge of angularly averaged measurements performed at the boundary of a domain of interest. Such measurement settings find applications in medical and geophysical imaging. We show that the absorption coefficient and the spatial component of the scattering coefficient are uniquely determined by such measurements. We obtain stability results on the reconstruction of the absorption and scattering parameters with respect to the measured albedo operator. The stability results are obtained by a precise decomposition of the measurements into components with different singular behavior in the time domain

  16. Independence, Odd Girth, and Average Degree

    DEFF Research Database (Denmark)

    Löwenstein, Christian; Pedersen, Anders Sune; Rautenbach, Dieter

    2011-01-01

    We prove several tight lower bounds in terms of the order and the average degree for the independence number of graphs that are connected and/or satisfy some odd girth condition. Our main result is the extension of a lower bound for the independence number of triangle-free graphs of maximum degree at most three due to Heckman and Thomas [Discrete Math 233 (2001), 233–237] to arbitrary triangle-free graphs. For connected triangle-free graphs of order n and size m, our result implies the existence of an independent set of order at least (4n−m−1)/7.

  17. Bootstrapping Density-Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    Employing the "small bandwidth" asymptotic framework of Cattaneo, Crump, and Jansson (2009), this paper studies the properties of a variety of bootstrap-based inference procedures associated with the kernel-based density-weighted averaged derivative estimator proposed by Powell, Stock, and Stoker...... (1989). In many cases validity of bootstrap-based inference procedures is found to depend crucially on whether the bandwidth sequence satisfies a particular (asymptotic linearity) condition. An exception to this rule occurs for inference procedures involving a studentized estimator employing a "robust...

  18. Average Nuclear properties based on statistical model

    International Nuclear Information System (INIS)

    El-Jaick, L.J.

    1974-01-01

    The rough properties of nuclei were investigated by the statistical model, in systems with the same and with different numbers of protons and neutrons, separately, considering the Coulomb energy in the latter system. Some average nuclear properties were calculated based on the energy density of nuclear matter, from the von Weizsäcker-Bethe semiempirical mass formula, generalized for compressible nuclei. In the study of the surface energy coefficient a_s, the great influence exercised by the Coulomb energy and nuclear compressibility was verified. For a good adjustment of the beta-stability lines and mass excess, the surface symmetry energy was established. (M.C.K.) [pt

  19. Time-averaged MSD of Brownian motion

    Science.gov (United States)

    Andreanov, Alexei; Grebenkov, Denis S.

    2012-07-01

    We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we deduce the first four cumulant moments of the TAMSD, the asymptotic behavior of the probability density and its accurate approximation by a generalized Gamma distribution.

  20. Bayesian model averaging and weighted average least squares : Equivariance, stability, and numerical issues

    NARCIS (Netherlands)

    De Luca, G.; Magnus, J.R.

    2011-01-01

    In this article, we describe the estimation of linear regression models with uncertainty about the choice of the explanatory variables. We introduce the Stata commands bma and wals, which implement, respectively, the exact Bayesian model-averaging estimator and the weighted-average least-squares estimator.

  1. Parents' Reactions to Finding Out That Their Children Have Average or above Average IQ Scores.

    Science.gov (United States)

    Dirks, Jean; And Others

    1983-01-01

    Parents of 41 children who had been given an individually-administered intelligence test were contacted 19 months after testing. Parents of average IQ children were less accurate in their memory of test results. Children with above average IQ experienced extremely low frequencies of sibling rivalry, conceit or pressure. (Author/HLM)

  2. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-10-01

    The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimization. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper. © Institute of Mathematical Statistics, 2010.
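
    The trajectory averaging estimator itself is just the running mean of the noisy iterates. A minimal Robbins-Monro sketch (a generic illustration, not the SAMCMC setting analysed in the paper):

```python
import numpy as np

rng = np.random.default_rng(3)

theta_true = 2.0           # root the recursion should find
theta = 0.0                # raw stochastic approximation iterate
running_sum = 0.0
n = 100_000

for k in range(1, n + 1):
    noisy_field = (theta - theta_true) + rng.normal()  # noisy observation
    theta -= noisy_field / k**0.7                      # decaying step size
    running_sum += theta

theta_avg = running_sum / n   # trajectory-averaging estimator
print(theta, theta_avg)       # the averaged iterate is markedly less noisy
```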

  3. Averaged null energy condition from causality

    Science.gov (United States)

    Hartman, Thomas; Kundu, Sandipan; Tajdini, Amirhossein

    2017-07-01

    Unitary, Lorentz-invariant quantum field theories in flat spacetime obey microcausality: commutators vanish at spacelike separation. For interacting theories in more than two dimensions, we show that this implies that the averaged null energy, ∫ du T_uu, must be non-negative. This non-local operator appears in the operator product expansion of local operators in the lightcone limit, and therefore contributes to n-point functions. We derive a sum rule that isolates this contribution and is manifestly positive. The argument also applies to certain higher spin operators other than the stress tensor, generating an infinite family of new constraints of the form ∫ du X_uuu···u ≥ 0. These lead to new inequalities for the coupling constants of spinning operators in conformal field theory, which include as special cases (but are generally stronger than) the existing constraints from the lightcone bootstrap, deep inelastic scattering, conformal collider methods, and relative entropy. We also comment on the relation to the recent derivation of the averaged null energy condition from relative entropy, and suggest a more general connection between causality and information-theoretic inequalities in QFT.

  4. Asymptotic Time Averages and Frequency Distributions

    Directory of Open Access Journals (Sweden)

    Muhammad El-Taha

    2016-01-01

    Consider an arbitrary nonnegative deterministic process {X(t), t ≥ 0} (in a stochastic setting, a fixed realization, i.e., sample path, of the underlying stochastic process) with state space S = (−∞, ∞). Using a sample-path approach, we give necessary and sufficient conditions for the long-run time average of a measurable function of the process to be equal to the expectation taken with respect to the same measurable function of its long-run frequency distribution. The results are further extended to allow an unrestricted parameter (time) space. Examples are provided to show that our condition is not superfluous and that it is weaker than uniform integrability. The case of discrete-time processes is also considered. The relationship to previously known sufficient conditions, usually given in stochastic settings, will also be discussed. Our approach is applied to regenerative processes and an extension of a well-known result is given. For researchers interested in sample-path analysis, our results will give them the choice to work with the time average of a process or its frequency distribution function and go back and forth between the two under a mild condition.

  5. Chaotic Universe, Friedmannian on the average 2

    Energy Technology Data Exchange (ETDEWEB)

    Marochnik, L S [AN SSSR, Moscow. Inst. Kosmicheskikh Issledovanij

    1980-11-01

    The cosmological solutions are found for the equations for correlators, describing a statistically chaotic Universe, Friedmannian on the average, in which delta-correlated fluctuations with amplitudes h >> 1 are excited. For the equation of state of matter p = nε, the kind of solution depends on the position of the maximum of the spectrum of the metric disturbances. The expansion of the Universe, in which long-wave potential and vortical motions and gravitational waves (modes diverging at t → 0) had been excited, tends asymptotically to the Friedmannian one at t → ∞ and depends critically on n: at n < 0.26, the solution for the scale factor lies above the Friedmannian one, and below it at n > 0.26. The influence of long-wave fluctuation modes that remain finite at t → 0 leads to an averaged quasi-isotropic solution. The contribution of quantum fluctuations and of short-wave parts of the spectrum of classical fluctuations to the expansion law is considered. Their influence is equivalent to the contribution from an ultrarelativistic gas with corresponding energy density and pressure. Restrictions are obtained for the degree of chaos (the spectrum characteristics) compatible with the observed helium abundance, which could have been retained by a completely chaotic Universe during its expansion up to the nucleosynthesis epoch.

  6. Averaging in the presence of sliding errors

    International Nuclear Information System (INIS)

    Yost, G.P.

    1991-08-01

    In many cases the precision with which an experiment can measure a physical quantity depends on the value of that quantity. Not having access to the true value, experimental groups are forced to assign their errors based on their own measured value. Procedures which attempt to derive an improved estimate of the true value by a suitable average of such measurements usually weight each experiment's measurement according to the reported variance. However, one is in a position to derive improved error estimates for each experiment from the average itself, provided an approximate idea of the functional dependence of the error on the central value is known. Failing to do so can lead to substantial biases. Techniques which avoid these biases without loss of precision are proposed and their performance is analyzed with examples. These techniques are quite general and can bring about an improvement even when the behavior of the errors is not well understood. Perhaps the most important application of the technique is in fitting curves to histograms

  7. Bounding quantum gate error rate based on reported average fidelity

    International Nuclear Information System (INIS)

    Sanders, Yuval R; Wallman, Joel J; Sanders, Barry C

    2016-01-01

    Remarkable experimental advances in quantum computing are exemplified by recent announcements of impressive average gate fidelities exceeding 99.9% for single-qubit gates and 99% for two-qubit gates. Although these high numbers engender optimism that fault-tolerant quantum computing is within reach, the connection of average gate fidelity with fault-tolerance requirements is not direct. Here we use reported average gate fidelity to determine an upper bound on the quantum-gate error rate, which is the appropriate metric for assessing progress towards fault-tolerant quantum computation, and we demonstrate that this bound is asymptotically tight for general noise. Although this bound is unlikely to be saturated by experimental noise, we demonstrate using explicit examples that the bound indicates a realistic deviation between the true error rate and the reported average fidelity. We introduce the Pauli distance as a measure of this deviation, and we show that knowledge of the Pauli distance enables tighter estimates of the error rate of quantum gates. (fast track communication)

  8. Capillary Electrophoresis Sensitivity Enhancement Based on Adaptive Moving Average Method.

    Science.gov (United States)

    Drevinskas, Tomas; Telksnys, Laimutis; Maruška, Audrius; Gorbatsova, Jelena; Kaljurand, Mihkel

    2018-06-05

    In the present work, we demonstrate a novel approach to improve the sensitivity of the "out of lab" portable capillary electrophoretic measurements. Nowadays, many signal enhancement methods are (i) underused (nonoptimal), (ii) overused (distorts the data), or (iii) inapplicable in field-portable instrumentation because of a lack of computational power. The described innovative migration velocity-adaptive moving average method uses an optimal averaging window size and can be easily implemented with a microcontroller. The contactless conductivity detection was used as a model for the development of a signal processing method and the demonstration of its impact on the sensitivity. The frequency characteristics of the recorded electropherograms and peaks were clarified. Higher electrophoretic mobility analytes exhibit higher-frequency peaks, whereas lower electrophoretic mobility analytes exhibit lower-frequency peaks. On the basis of the obtained data, a migration velocity-adaptive moving average algorithm was created, adapted, and programmed into capillary electrophoresis data-processing software. Employing the developed algorithm, each data point is processed depending on a certain migration time of the analyte. Because of the implemented migration velocity-adaptive moving average method, the signal-to-noise ratio improved up to 11 times for sampling frequency of 4.6 Hz and up to 22 times for sampling frequency of 25 Hz. This paper could potentially be used as a methodological guideline for the development of new smoothing algorithms that require adaptive conditions in capillary electrophoresis and other separation methods.
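
    The core of the method, an averaging window whose width follows the expected peak width at each migration time, can be sketched as below. The inverse scaling of window size with migration velocity is an assumption chosen for illustration; the published algorithm derives the optimal window from the measured frequency characteristics.

        import numpy as np

        def adaptive_moving_average(signal, times, velocity_of, base_window):
            """Smooth an electropherogram with a window that narrows for
            fast-migrating (high-frequency, sharp) peaks and widens for
            slow-migrating (low-frequency, broad) ones. velocity_of(t)
            must return the expected migration velocity at time t."""
            sig = np.asarray(signal, dtype=float)
            out = np.empty(len(sig), dtype=float)
            for i, t in enumerate(times):
                half = max(1, int(round(base_window / (2.0 * velocity_of(t)))))
                lo, hi = max(0, i - half), min(len(sig), i + half + 1)
                out[i] = sig[lo:hi].mean()
            return out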

  9. High Average Power, High Energy Short Pulse Fiber Laser System

    Energy Technology Data Exchange (ETDEWEB)

    Messerly, M J

    2007-11-13

    Recently, continuous wave fiber laser systems with output powers in excess of 500 W and good beam quality have been demonstrated [1]. High energy, ultrafast, chirped pulse fiber laser systems have achieved record output energies of 1 mJ [2]. However, these high-energy systems have not been scaled beyond a few watts of average output power. Fiber laser systems are attractive for many applications because they offer the promise of highly efficient, compact, robust systems that are turnkey. Applications such as cutting, drilling and materials processing, front-end systems for high energy pulsed lasers (such as petawatts), and laser-based sources of high spatial coherence, high flux x-rays all require high energy short pulses, and two of these three applications also require high average power. The challenge in creating a high energy chirped pulse fiber laser system is to find a way to scale the output energy while avoiding nonlinear effects and maintaining good beam quality in the amplifier fiber. To this end, our 3-year LDRD program sought to demonstrate a high energy, high average power fiber laser system. This work included exploring designs of large mode area optical fiber amplifiers for high energy systems as well as understanding the issues associated with chirped pulse amplification in optical fiber amplifier systems.

  10. Quantized Average Consensus on Gossip Digraphs with Reduced Computation

    Science.gov (United States)

    Cai, Kai; Ishii, Hideaki

    The authors have recently proposed a class of randomized gossip algorithms which solve the distributed averaging problem on directed graphs, with the constraint that each node has an integer-valued state. The essence of this algorithm is to maintain local records, called “surplus”, of individual state updates, thereby achieving quantized average consensus even though the state sum of all nodes is not preserved. In this paper we study a modified version of this algorithm, whose feature is primarily in reducing both computation and communication effort. Concretely, each node needs to update fewer local variables, and can transmit surplus by requiring only one bit. Under this modified algorithm we prove that reaching the average is ensured for arbitrary strongly connected graphs. The condition of arbitrary strong connection is less restrictive than those known in the literature for either real-valued or quantized states; in particular, it does not require the special structure on the network called balanced. Finally, we provide numerical examples to illustrate the convergence result, with emphasis on convergence time analysis.
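
    The surplus bookkeeping idea can be illustrated with a toy simulation: states stay integer, and the remainder that plain integer averaging would discard is parked in a local surplus variable, so the sum of states plus surpluses is invariant. This is a deliberate simplification; the paper's actual update rules, one-bit surplus transmission, and convergence proof differ in detail.

        import random

        def quantized_gossip_with_surplus(state, edges, steps, seed=0):
            """Toy surplus-based quantized averaging on a digraph.
            Invariant: sum(state) + sum(surplus) never changes."""
            rng = random.Random(seed)
            surplus = [0] * len(state)
            for _ in range(steps):
                i, j = rng.choice(edges)       # directed edge i -> j activates
                surplus[j] += surplus[i]       # j absorbs i's surplus
                surplus[i] = 0
                total = state[i] + state[j] + surplus[j]
                half, rem = divmod(total, 2)   # integer averaging step
                state[i] = state[j] = half
                surplus[j] = rem               # remainder kept, not discarded
            return state, surplus

        # Ring digraph on 4 nodes; states sum to 10, so the average is 2.5.
        edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
        print(quantized_gossip_with_surplus([7, 1, 2, 0], edges, steps=200))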

  11. Determination of the diagnostic x-ray tube practical peak voltage (PPV) from average or average peak voltage measurements

    Energy Technology Data Exchange (ETDEWEB)

    Hourdakis, C J, E-mail: khour@gaec.gr [Ionizing Radiation Calibration Laboratory-Greek Atomic Energy Commission, PO Box 60092, 15310 Agia Paraskevi, Athens, Attiki (Greece)

    2011-04-07

    The practical peak voltage (PPV) has been adopted as the reference measuring quantity for the x-ray tube voltage. However, the majority of commercial kV-meter models measure the average peak, Ū_P, the average, Ū, the effective, U_eff, or the maximum peak, U_P, tube voltage. This work proposes a method for determining the PPV from measurements with a kV-meter that measures the average, Ū, or the average peak, Ū_P, voltage. The kV-meter reading can be converted to the PPV by applying appropriate calibration coefficients and conversion factors. The average-peak (k_PPV,kVp) and average (k_PPV,Uav) conversion factors were calculated from virtual voltage waveforms for conventional diagnostic radiology (50-150 kV) and mammography (22-35 kV) tube voltages and for voltage ripples from 0% to 100%. Regression equations and coefficients provide the appropriate conversion factors at any given tube voltage and ripple. The influence of voltage waveform irregularities, like 'spikes' and pulse amplitude variations, on the conversion factors was investigated and discussed. The proposed method and the conversion factors were tested using six commercial kV-meters at several x-ray units. The deviations between the reference PPV values and those calculated according to the proposed method were less than 2%. Practical aspects of the voltage ripple measurement were addressed and discussed. The proposed method provides a rigorous basis for determining the PPV with kV-meters from Ū_P and Ū measurements. Users can benefit, since all kV-meters, irrespective of their measuring quantity, can be used to determine the PPV, complying with the IEC standard requirements.
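
    In code, the proposed conversion is a one-line multiplication once the calibration coefficient and conversion factor are known. The quadratic-in-ripple regression below is a placeholder form with made-up coefficients; the paper tabulates the actual regression equations per tube voltage and measuring quantity.

        def ppv_from_kvmeter(reading_kv, calib_coeff, ripple, a, b, c):
            """PPV = k * N * U_reading: N is the kV-meter's calibration
            coefficient, k the conversion factor (k_PPV,Uav or k_PPV,kVp)
            at the given tube voltage and ripple. The regression form for
            k is hypothetical; see the paper for the fitted coefficients."""
            k = a + b * ripple + c * ripple ** 2
            return k * calib_coeff * reading_kv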

  12. High average power linear induction accelerator development

    International Nuclear Information System (INIS)

    Bayless, J.R.; Adler, R.J.

    1987-07-01

    There is increasing interest in linear induction accelerators (LIAs) for applications including free electron lasers, high power microwave generators and other types of radiation sources. Lawrence Livermore National Laboratory has developed LIA technology in combination with magnetic pulse compression techniques to achieve very impressive performance levels. In this paper we will briefly discuss the LIA concept and describe our development program. Our goals are to improve the reliability and reduce the cost of LIA systems. An accelerator is presently under construction to demonstrate these improvements at an energy of 1.6 MeV in 2 kA, 65 ns beam pulses at an average beam power of approximately 30 kW. The unique features of this system are a low cost accelerator design and an SCR-switched, magnetically compressed, pulse power system. 4 refs., 7 figs

  13. FEL system with homogeneous average output

    Energy Technology Data Exchange (ETDEWEB)

    Douglas, David R.; Legg, Robert; Whitney, R. Roy; Neil, George; Powers, Thomas Joseph

    2018-01-16

    A method of varying the output of a free electron laser (FEL) on very short time scales to produce a slightly broader, but smooth, time-averaged wavelength spectrum. The method includes injecting into an accelerator a sequence of bunch trains at phase offsets from crest and accelerating the particles to full energy, resulting in distinct and independently controlled (by the choice of phase offset) phase-energy correlations, or chirps, on each bunch train. The earlier trains are more strongly chirped, the later trains less chirped. For an energy recovered linac (ERL), the beam may be recirculated using a transport system with linear and nonlinear momentum compactions M₅₆ selected to compress all three bunch trains at the FEL, with higher order terms managed.

  14. Quetelet, the average man and medical knowledge.

    Science.gov (United States)

    Caponi, Sandra

    2013-01-01

    Using two books by Adolphe Quetelet, I analyze his theory of the 'average man', which associates biological and social normality with the frequency with which certain characteristics appear in a population. The books are Sur l'homme et le développement de ses facultés and Du systeme social et des lois qui le régissent. Both reveal that Quetelet's ideas are permeated by explanatory strategies drawn from physics and astronomy, and also by discursive strategies drawn from theology and religion. The stability of the mean as opposed to the dispersion of individual characteristics and events provided the basis for the use of statistics in social sciences and medicine.

  16. Asymmetric network connectivity using weighted harmonic averages

    Science.gov (United States)

    Morrison, Greg; Mahadevan, L.

    2011-02-01

    We propose a non-metric measure of the "closeness" felt between two nodes in an undirected, weighted graph using a simple weighted harmonic average of connectivity, that is, a real-valued Generalized Erdös Number (GEN). While our measure is developed with a collaborative network in mind, the approach can be of use in a variety of artificial and real-world networks. We are able to distinguish between network topologies that standard distance metrics view as identical, and use our measure to study some simple analytically tractable networks. We show how this might be used to look at asymmetry in authorship networks such as those that inspired the integer Erdös numbers in mathematical coauthorships. We also show the utility of our approach in devising a ratings scheme that we apply to the data from the Netflix Prize, and find a significant improvement using our method over a baseline.

  17. Angle-averaged Compton cross sections

    International Nuclear Information System (INIS)

    Nickel, G.H.

    1983-01-01

    The scattering of a photon by an individual free electron is characterized by six quantities: α = initial photon energy in units of m₀c²; α_s = scattered photon energy in units of m₀c²; β = initial electron velocity in units of c; φ = angle between photon direction and electron direction in the laboratory frame (LF); θ = polar angle change due to Compton scattering, measured in the electron rest frame (ERF); and τ = azimuthal angle change in the ERF. We present an analytic expression for the average of the Compton cross section over φ, θ, and τ. The lowest order approximation to this equation is reasonably accurate for photons and electrons with energies of many keV.
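
    The simplest special case of such an average, a stationary electron (β = 0), reduces to integrating the Klein-Nishina differential cross section over the scattering angle, which the sketch below does numerically; the paper's analytic result additionally averages over φ, θ and τ for moving electrons.

        import numpy as np
        from scipy.integrate import quad

        R_E = 2.8179403262e-13  # classical electron radius, cm

        def klein_nishina_total(alpha):
            """Total Compton cross section (cm^2) for photon energy
            alpha = E / (m0 c^2) on an electron at rest, by integrating
            the Klein-Nishina differential cross section over angle."""
            def integrand(theta):
                ratio = 1.0 / (1.0 + alpha * (1.0 - np.cos(theta)))  # k'/k
                dsigma = 0.5 * R_E ** 2 * ratio ** 2 * (
                    ratio + 1.0 / ratio - np.sin(theta) ** 2)
                return dsigma * 2.0 * np.pi * np.sin(theta)          # dOmega
            sigma, _ = quad(integrand, 0.0, np.pi)
            return sigma

        print(klein_nishina_total(1e-4))  # ~6.65e-25, the Thomson limit
        print(klein_nishina_total(0.1))   # reduced cross section at ~51 keV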

  18. Average Gait Differential Image Based Human Recognition

    Directory of Open Access Journals (Sweden)

    Jinyan Chen

    2014-01-01

    Full Text Available The difference between adjacent frames of human walking contains useful information for human gait identification. Based on this idea, a silhouette-difference-based human gait recognition method, named the average gait differential image (AGDI), is proposed in this paper. The AGDI is generated by accumulating the silhouette differences between adjacent frames. The advantage of this method is that, as a feature image, it preserves both the kinetic and static information of walking. Compared to the gait energy image (GEI), the AGDI is better suited to representing the variation of silhouettes during walking. Two-dimensional principal component analysis (2DPCA) is used to extract features from the AGDI. Experiments on the CASIA dataset show that the AGDI has better identification and verification performance than the GEI. Compared to PCA, 2DPCA is a more efficient feature extraction method with lower memory consumption in gait-based recognition.
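
    The AGDI construction itself is a short computation; a minimal sketch, assuming aligned, size-normalized binary silhouettes, is given below (feature extraction with 2DPCA would follow separately).

        import numpy as np

        def average_gait_differential_image(silhouettes):
            """AGDI: accumulate the absolute differences between adjacent
            silhouette frames and normalize by the number of differences.
            Input: array-like of shape (n_frames, H, W), values in [0, 1]."""
            frames = np.asarray(silhouettes, dtype=float)
            diffs = np.abs(np.diff(frames, axis=0))  # |frame[t+1] - frame[t]|
            return diffs.mean(axis=0)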

  19. Reynolds averaged simulation of unsteady separated flow

    International Nuclear Information System (INIS)

    Iaccarino, G.; Ooi, A.; Durbin, P.A.; Behnia, M.

    2003-01-01

    The accuracy of Reynolds averaged Navier-Stokes (RANS) turbulence models in predicting complex flows with separation is examined. The unsteady flow around a square cylinder and over a wall-mounted cube is simulated and compared with experimental data. For the cube case, none of the previously published numerical predictions obtained by steady-state RANS produced a good match with experimental data. However, evidence exists that coherent vortex shedding occurs in this flow. Its presence demands unsteady RANS computation because the flow is not statistically stationary. The present study demonstrates that unsteady RANS does indeed predict periodic shedding, and leads to much better agreement with the available experimental data than has been achieved with steady computation.

  1. Changing mortality and average cohort life expectancy

    DEFF Research Database (Denmark)

    Schoen, Robert; Canudas-Romo, Vladimir

    2005-01-01

    measures of mortality are calculated for England and Wales, Norway, and Switzerland for the years 1880 to 2000. CAL is found to be sensitive to past and present changes in death rates. ACLE requires the most data, but gives the best representation of the survivorship of cohorts present at a given time....

  2. A collisional-radiative average atom model for hot plasmas

    International Nuclear Information System (INIS)

    Rozsnyai, B.F.

    1996-01-01

    A collisional-radiative 'average atom' (AA) model is presented for the calculation of opacities of hot plasmas not in the condition of local thermodynamic equilibrium (LTE). The electron impact and radiative rate constants are calculated using the dipole oscillator strengths of the average atom. A key element of the model is the photon escape probability, which at present is calculated for a semi-infinite slab. The Fermi statistics render the rate equation for the AA level occupancies nonlinear, which requires iterating until the steady-state AA level occupancies are found. Detailed electronic configurations are built into the model after the self-consistent non-LTE AA state is found. The model shows a continuous transition from the non-LTE to the LTE state depending on the optical thickness of the plasma. 22 refs., 13 figs., 1 tab

  3. Geographic Gossip: Efficient Averaging for Sensor Networks

    Science.gov (United States)

    Dimakis, Alexandros D. G.; Sarwate, Anand D.; Wainwright, Martin J.

    Gossip algorithms for distributed computation are attractive due to their simplicity, distributed nature, and robustness in noisy and uncertain environments. However, using standard gossip algorithms can lead to a significant waste in energy by repeatedly recirculating redundant information. For realistic sensor network model topologies like grids and random geometric graphs, the inefficiency of gossip schemes is related to the slow mixing times of random walks on the communication graph. We propose and analyze an alternative gossiping scheme that exploits geographic information. By utilizing geographic routing combined with a simple resampling method, we demonstrate substantial gains over previously proposed gossip protocols. For regular graphs such as the ring or grid, our algorithm improves standard gossip by factors of $n$ and $\\sqrt{n}$ respectively. For the more challenging case of random geometric graphs, our algorithm computes the true average to accuracy $\\epsilon$ using $O(\\frac{n^{1.5}}{\\sqrt{\\log n}} \\log \\epsilon^{-1})$ radio transmissions, which yields a $\\sqrt{\\frac{n}{\\log n}}$ factor improvement over standard gossip algorithms. We illustrate these theoretical results with experimental comparisons between our algorithm and standard methods as applied to various classes of random fields.

  4. High-average-power solid state lasers

    International Nuclear Information System (INIS)

    Summers, M.A.

    1989-01-01

    In 1987, a broad-based, aggressive R&D program was begun, aimed at developing the technologies necessary to make possible the use of solid state lasers capable of delivering medium- to high-average power in new and demanding applications. Efforts were focused along the following major lines: development of laser and nonlinear optical materials, and of coatings for parasitic suppression and evanescent wave control; development of computational design tools; verification of computational models on thoroughly instrumented test beds; and application of selected aspects of this technology to specific missions. In the laser materials area, efforts were directed towards producing strong, low-loss laser glasses and large, high quality garnet crystals. The crystal program consisted of computational and experimental efforts aimed at understanding the physics, thermodynamics, and chemistry of large garnet crystal growth. The laser experimental efforts were directed at understanding thermally induced wave front aberrations in zig-zag slabs, understanding fluid mechanics, heat transfer, and optical interactions in gas-cooled slabs, and conducting critical test-bed experiments with various electro-optic switch geometries. 113 refs., 99 figs., 18 tabs

  5. The concept of average LET values determination

    International Nuclear Information System (INIS)

    Makarewicz, M.

    1981-01-01

    A concept for determining average LET (linear energy transfer) values, i.e., ordinary moments of LET in the absorbed-dose distribution versus LET, for ionizing radiation of any kind and any spectrum (even unknown ones), is presented. The method is based on measuring the ionization current at several values of the voltage supplying an ionization chamber operating under conditions of columnar recombination of ions, or of ion recombination in clusters, while the chamber is placed in the radiation field at the point of interest. By fitting a suitable algebraic expression to the measured current values one can obtain coefficients of the expression which can be interpreted as values of the LET moments. One of the advantages of the method is its experimental and computational simplicity. It has been shown that for numerical estimation of certain effects dependent on the LET of the radiation it is not necessary to know the full dose distribution but only a number of parameters of the distribution, i.e., the LET moments. (author)

  6. On spectral averages in nuclear spectroscopy

    International Nuclear Information System (INIS)

    Verbaarschot, J.J.M.

    1982-01-01

    In nuclear spectroscopy one tries to obtain a description of systems of bound nucleons. By means of theoretical models one attempts to reproduce the eigenenergies and the corresponding wave functions, which then enable the computation of, for example, the electromagnetic moments and the transition amplitudes. Statistical spectroscopy can be used for studying nuclear systems in large model spaces. In this thesis, methods are developed and applied which enable the determination of quantities in a finite part of the Hilbert space, which is defined by specific quantum values. In the case of averages in a space defined by a partition of the nucleons over the single-particle orbits, the propagation coefficients reduce to Legendre interpolation polynomials. In chapter 1 these polynomials are derived with the help of a generating function and a generalization of Wick's theorem. One can then deduce the centroid and the variance of the eigenvalue distribution in a straightforward way. The results are used to calculate the systematic energy difference between states of even and odd parity for nuclei in the mass region A=10-40. In chapter 2 an efficient method for transforming fixed angular momentum projection traces into fixed angular momentum for the configuration space traces is developed. In chapter 3 it is shown that the secular behaviour can be represented by a Gaussian function of the energies. (Auth.)

  7. Average-case analysis of incremental topological ordering

    DEFF Research Database (Denmark)

    Ajwani, Deepak; Friedrich, Tobias

    2010-01-01

    Many applications like pointer analysis and incremental compilation require maintaining a topological ordering of the nodes of a directed acyclic graph (DAG) under dynamic updates. All known algorithms for this problem are either only analyzed for worst-case insertion sequences or only evaluated...... experimentally on random DAGs. We present the first average-case analysis of incremental topological ordering algorithms. We prove an expected runtime of under insertion of the edges of a complete DAG in a random order for the algorithms of Alpern et al. (1990) [4], Katriel and Bodlaender (2006) [18], and Pearce...

  8. Homogenization via formal multiscale asymptotics and volume averaging: How do the two techniques compare?

    KAUST Repository

    Davit, Yohan; Bell, Christopher G.; Byrne, Helen M.; Chapman, Lloyd A.C.; Kimpton, Laura S.; Lang, Georgina E.; Leonard, Katherine H.L.; Oliver, James M.; Pearson, Natalie C.; Shipley, Rebecca J.; Waters, Sarah L.; Whiteley, Jonathan P.; Wood, Brian D.; Quintard, Michel

    2013-01-01

    doing, compare their respective advantages/disadvantages from a practical point of view. This paper is also intended as a pedagogical guide and may be viewed as a tutorial for graduate students as we provide historical context, detail subtle points

  9. Modelling lidar volume-averaging and its significance to wind turbine wake measurements

    DEFF Research Database (Denmark)

    Meyer Forsting, Alexander Raul; Troldborg, Niels; Borraccino, Antoine

    2017-01-01

    gradients, like the rotor wake, can it be detrimental. Hence, an efficient algorithm mimicking lidar flow sampling is presented, which considers both pulsed and continous-wave lidar weighting functions. The flow-field around a 2.3 MW turbine is simulated using Detached Eddy Simulation in combination...

  10. MR renography : An algorithm for calculation and correction of cortical volume averaging in medullary renographs

    NARCIS (Netherlands)

    de Priester, JA; den Boer, JA; Giele, ELW; Christiaans, MHL; Kessels, A; Hasman, A; van Engelshoven, JMA

    We evaluated a mathematical algorithm for the generation of medullary signal from raw dynamic magnetic resonance (MR) data. Five healthy volunteers were studied. MR examination consisted of a run of 100 T1-weighted coronal scans (gradient echo: TR/TE 11/3.4 msec, flip angle 60 degrees; slice

  11. Reachable volume RRT

    KAUST Repository

    McMahon, Troy

    2015-05-01

    © 2015 IEEE. Reachable volumes are a new technique that allows one to efficiently restrict sampling to feasible/reachable regions of the planning space even for high degree of freedom and highly constrained problems. However, they have so far only been applied to graph-based sampling-based planners. In this paper we develop the methodology to apply reachable volumes to tree-based planners such as Rapidly-Exploring Random Trees (RRTs). In particular, we propose a reachable volume RRT called RVRRT that can solve high degree of freedom problems and problems with constraints. To do so, we develop a reachable volume stepping function, a reachable volume expand function, and a distance metric based on these operations. We also present a reachable volume local planner to ensure that local paths satisfy constraints for methods such as PRMs. We show experimentally that RVRRTs can solve constrained problems with as many as 64 degrees of freedom and unconstrained problems with as many as 134 degrees of freedom. RVRRTs can solve problems more efficiently than existing methods, requiring fewer nodes and collision detection calls. We also show that it is capable of solving difficult problems that existing methods cannot.

  13. Compositional dependences of average positron lifetime in binary As-S/Se glasses

    International Nuclear Information System (INIS)

    Ingram, A.; Golovchak, R.; Kostrzewa, M.; Wacke, S.; Shpotyuk, M.; Shpotyuk, O.

    2012-01-01

    Compositional dependence of the average positron lifetime is studied systematically in typical representatives of binary As-S and As-Se glasses. This dependence is shown to be opposite to the evolution of the molar volume. The origin of this anomaly is discussed in terms of the bond free solid angle concept applied to different types of structurally-intrinsic nanovoids in a glass.

  15. Actuator disk model of wind farms based on the rotor average wind speed

    DEFF Research Database (Denmark)

    Han, Xing Xing; Xu, Chang; Liu, De You

    2016-01-01

    Due to the difficulty of estimating the reference wind speed for wake modeling in a wind farm, this paper proposes a new method to calculate the momentum source based on the rotor average wind speed. The proposed model applies a volume correction factor to reduce the influence of the mesh recognition of ...

  16. Calculating ensemble averaged descriptions of protein rigidity without sampling.

    Directory of Open Access Journals (Sweden)

    Luis C González

    Full Text Available Previous works have demonstrated that protein rigidity is related to thermodynamic stability, especially under conditions that favor formation of native structure. Mechanical network rigidity properties of a single conformation are efficiently calculated using the integer body-bar Pebble Game (PG) algorithm. However, thermodynamic properties require averaging over many samples from the ensemble of accessible conformations to accurately account for fluctuations in network topology. We have developed a mean field Virtual Pebble Game (VPG) that represents the ensemble of networks by a single effective network. That is, all possible numbers of distance constraints (or bars) that can form between a pair of rigid bodies are replaced by the average number. The resulting effective network is viewed as having weighted edges, where the weight of an edge quantifies its capacity to absorb degrees of freedom. The VPG is interpreted as a flow problem on this effective network, which eliminates the need to sample. Across a nonredundant dataset of 272 protein structures, we apply the VPG to proteins for the first time. Our results show numerically and visually that the rigidity characterizations of the VPG accurately reflect the ensemble averaged [Formula: see text] properties. This result positions the VPG as an efficient alternative to understand the mechanical role that chemical interactions play in maintaining protein stability.

  18. Ultra-low noise miniaturized neural amplifier with hardware averaging.

    Science.gov (United States)

    Dweiri, Yazan M; Eggers, Thomas; McCallum, Grant; Durand, Dominique M

    2015-08-01

    Peripheral nerves carry neural signals that could be used to control hybrid bionic systems. Cuff electrodes provide a robust and stable interface, but the recorded signal amplitude is small. This work applies the concept of hardware averaging to nerve recordings obtained with cuff electrodes. An optimization procedure is developed to minimize noise and power simultaneously. The novel design was based on existing neural amplifiers (Intan Technologies, LLC) and is validated with signals obtained from the FINE in chronic dog experiments. We showed that hardware averaging leads to a reduction in the total recording noise by a factor of 1/√N or less, depending on the source resistance. Chronic recording of physiological activity with the FINE using the presented design showed significant improvement in the recorded baseline noise with at least two parallel operational transconductance amplifiers, leading to a 46.1% reduction at N = 8. The functionality of these recordings was quantified by the SNR improvement and shown to be significant for N = 3 or more. The present design was shown to be capable of realizing the noise improvement of hardware averaging for neural recording with cuff electrodes, and can accommodate the presence of high source impedances that are associated with the miniaturized contacts and the high channel count in electrode arrays. This technique can be adopted for other applications where miniaturized and implantable multichannel acquisition systems with ultra-low noise and low power are required.
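
    The ideal 1/√N behavior is easy to reproduce with a toy simulation of N parallel amplifiers whose noise is independent; as the record notes, a finite source resistance correlates part of the noise, so real designs gain less.

        import numpy as np

        rng = np.random.default_rng(42)

        def averaged_noise_std(n_amps, n_samples=200_000, sigma=1.0):
            """Std of the average of n_amps independent amplifier noise
            channels; ideally sigma / sqrt(n_amps)."""
            noise = rng.normal(0.0, sigma, size=(n_amps, n_samples))
            return noise.mean(axis=0).std()

        for n in (1, 2, 4, 8):
            print(n, round(averaged_noise_std(n), 3))  # ~1.0, 0.707, 0.5, 0.354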

  19. Recent developments in high average power driver technology

    International Nuclear Information System (INIS)

    Prestwich, K.R.; Buttram, M.T.; Rohwein, G.J.

    1979-01-01

    Inertial confinement fusion (ICF) reactors will require driver systems operating with tens to hundreds of megawatts of average power. The pulse power technology that will be required to build such drivers is in a primitive state of development. Recent developments in repetitive pulse power are discussed. A high-voltage transformer has been developed and operated at 3 MV in a single pulse experiment and is being tested at 1.5 MV, 5 kJ and 10 pps. A low-loss, 1 MV, 10 kJ, 10 pps Marx generator is being tested. Test results from gas-dynamic spark gaps that operate both in the 100 kV and 700 kV range are reported. A 250 kV, 1.5 kA/cm², 30 ns electron beam diode has operated stably for 1.6 × 10⁵ pulses.

  20. Averaged description of 3D MHD equilibrium

    International Nuclear Information System (INIS)

    Medvedev, S.Yu.; Drozdov, V.V.; Ivanov, A.A.; Martynov, A.A.; Pashekhonov, Yu.Yu.; Mikhailov, M.I.

    2001-01-01

    A general approach to the 2D description of MHD equilibrium and stability in 3D systems, proposed by S.A. Galkin et al. in 1991, is applied. The method requires a background 3D equilibrium with nested flux surfaces to generate the metric of a Riemannian space in which the background equilibrium is described by a 2D equation of Grad-Shafranov type. The equation can then be solved, varying plasma profiles and shape, to obtain approximate 3D equilibria. In the framework of the method, both planar-axis conventional stellarators and configurations with a spatial magnetic axis can be studied. In the present report the formulation and numerical realization of the equilibrium problem for stellarators with a planar axis is reviewed. The input background equilibria with nested flux surfaces are taken from a vacuum magnetic field approximately described by an analytic scalar potential.

  1. Dose calculation with respiration-averaged CT processed from cine CT without a respiratory surrogate

    International Nuclear Information System (INIS)

    Riegel, Adam C.; Ahmad, Moiz; Sun Xiaojun; Pan Tinsu

    2008-01-01

    Dose calculation for thoracic radiotherapy is commonly performed on a free-breathing helical CT despite artifacts caused by respiratory motion. Four-dimensional computed tomography (4D-CT) is one method to incorporate motion information into the treatment planning process. Some centers now use the respiration-averaged CT (RACT), the pixel-by-pixel average of the ten phases of 4D-CT, for dose calculation. This method, while sparing the tedious task of 4D dose calculation, still requires 4D-CT technology. The authors have recently developed a means to reconstruct RACT directly from unsorted cine CT data from which 4D-CT is formed, bypassing the need for a respiratory surrogate. Using RACT from cine CT for dose calculation may be a means to incorporate motion information into dose calculation without performing 4D-CT. The purpose of this study was to determine if RACT from cine CT can be substituted for RACT from 4D-CT for the purposes of dose calculation, and if increasing the cine duration can decrease differences between the dose distributions. Cine CT data and corresponding 4D-CT simulations for 23 patients with at least two breathing cycles per cine duration were retrieved. RACT was generated four ways: First from ten phases of 4D-CT, second, from 1 breathing cycle of images, third, from 1.5 breathing cycles of images, and fourth, from 2 breathing cycles of images. The clinical treatment plan was transferred to each RACT and dose was recalculated. Dose planes were exported at orthogonal planes through the isocenter (coronal, sagittal, and transverse orientations). The resulting dose distributions were compared using the gamma (γ) index within the planning target volume (PTV). Failure criteria were set to 2%/1 mm. A follow-up study with 50 additional lung cancer patients was performed to increase sample size. The same dose recalculation and analysis was performed. In the primary patient group, 22 of 23 patients had 100% of points within the PTV pass γ criteria
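
    A brute-force version of the gamma comparison used here (2%/1 mm) is sketched below in one dimension for brevity; the study evaluates 2D dose planes, and whether the dose tolerance is normalized locally (as assumed here) or globally is an implementation choice.

        import numpy as np

        def gamma_pass_rate(ref_dose, test_dose, coords, dd=0.02, dta=1.0):
            """For each reference point, gamma is the minimum over test
            points of sqrt((dist/dta)^2 + (dose_diff/(dd*D_ref))^2);
            a point passes if gamma <= 1. coords in mm, doses in Gy."""
            ref_dose = np.asarray(ref_dose, dtype=float)
            test_dose = np.asarray(test_dose, dtype=float)
            coords = np.asarray(coords, dtype=float)
            passed = 0
            for x, d in zip(coords, ref_dose):
                dist_term = (coords - x) / dta
                dose_term = (test_dose - d) / (dd * d)
                gamma = np.sqrt(dist_term ** 2 + dose_term ** 2).min()
                passed += gamma <= 1.0
            return passed / len(ref_dose)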

  2. Average spectral efficiency analysis of FSO links over turbulence channel with adaptive transmissions and aperture averaging

    Science.gov (United States)

    Aarthi, G.; Ramachandra Reddy, G.

    2018-03-01

    In our paper, the impact of adaptive transmission schemes: (i) optimal rate adaptation (ORA) and (ii) channel inversion with fixed rate (CIFR) on the average spectral efficiency (ASE) are explored for free-space optical (FSO) communications with On-Off Keying (OOK), Polarization shift keying (POLSK), and Coherent optical wireless communication (Coherent OWC) systems under different turbulence regimes. Further to enhance the ASE we have incorporated aperture averaging effects along with the above adaptive schemes. The results indicate that ORA adaptation scheme has the advantage of improving the ASE performance compared with CIFR under moderate and strong turbulence regime. The coherent OWC system with ORA excels the other modulation schemes and could achieve ASE performance of 49.8 bits/s/Hz at the average transmitted optical power of 6 dBm under strong turbulence. By adding aperture averaging effect we could achieve an ASE of 50.5 bits/s/Hz under the same conditions. This makes ORA with Coherent OWC modulation as a favorable candidate for improving the ASE of the FSO communication system.

  3. Potential for efficient frequency conversion at high average power using solid state nonlinear optical materials

    International Nuclear Information System (INIS)

    Eimerl, D.

    1985-01-01

    High-average-power frequency conversion using solid state nonlinear materials is discussed. Recent laboratory experience and new developments in design concepts show that current technology, a few tens of watts, may be extended by several orders of magnitude. For example, using KD*P, efficient doubling (>70%) of Nd:YAG at average powers approaching 100 kW is possible; and for doubling to the blue or ultraviolet regions, the average power may approach 1 MW. Configurations using segmented apertures permit essentially unlimited scaling of average power. High average power is achieved by configuring the nonlinear material as a set of thin plates with a large ratio of surface area to volume and by cooling the exposed surfaces with a flowing gas. The design and material fabrication of such a harmonic generator are well within current technology.

  4. Waste Management System Requirement document

    International Nuclear Information System (INIS)

    1990-04-01

    This volume defines the top level technical requirements for the Monitored Retrievable Storage (MRS) facility. It is designed to be used in conjunction with Volume 1, General System Requirements. Volume 3 provides a functional description expanding the requirements allocated to the MRS facility in Volume 1 and, when appropriate, elaborates on requirements by providing associated performance criteria. Volumes 1 and 3 together convey a minimum set of requirements that must be satisfied by the final MRS facility design without unduly constraining individual design efforts. The requirements are derived from the Nuclear Waste Policy Act of 1982 (NWPA), the Nuclear Waste Policy Amendments Act of 1987 (NWPAA), the Environmental Protection Agency's (EPA) Environmental Standards for the Management and Disposal of Spent Nuclear Fuel (40 CFR 191), NRC Licensing Requirements for the Independent Storage of Spent Nuclear and High-Level Radioactive Waste (10 CFR 72), and other federal statutory and regulatory requirements, and major program policy decisions. This document sets forth specific requirements that will be fulfilled. Each subsequent level of the technical document hierarchy will be significantly more detailed and provide further guidance and definition as to how each of these requirements will be implemented in the design. Requirements appearing in Volume 3 are traceable into the MRS Design Requirements Document. Section 2 of this volume provides a functional breakdown for the MRS facility. 1 tab

  5. Design of respiration averaged CT for attenuation correction of the PET data from PET/CT

    International Nuclear Information System (INIS)

    Chi, Pai-Chun Melinda; Mawlawi, Osama; Nehmeh, Sadek A.; Erdi, Yusuf E.; Balter, Peter A.; Luo, Dershan; Mohan, Radhe; Pan Tinsu

    2007-01-01

    Our previous patient studies have shown that the use of respiration-averaged computed tomography (ACT) for attenuation correction of the positron emission tomography (PET) data from PET/CT reduces the potential misalignment in the thorax region by matching the temporal resolution of the CT to that of the PET. In the present work, we investigated other approaches to acquiring ACT in order to reduce the CT dose and to improve the ease of clinical implementation. Four-dimensional CT (4DCT) data sets for ten patients (17 lung/esophageal tumors) were acquired in the thoracic region immediately after the routine PET/CT scan. For each patient, multiple sets of ACTs were generated based on both phase image averaging (phase approach) and fixed cine duration image averaging (cine approach). In the phase approach, the ACTs were calculated from CT images corresponding to the significant phases of the respiratory cycle: ACT_050phs from end-inspiration (0%) and end-expiration (50%), ACT_2070phs from mid-inspiration (20%) and mid-expiration (70%), ACT_4phs from 0%, 20%, 50% and 70%, and ACT_10phs from all ten phases, which was the original approach. In the cine approach, which does not require 4DCT, the ACTs were calculated based on the cine images from cine durations of 1 to 6 s at 1 s increments. PET emission data for each patient were attenuation corrected with each of the above mentioned ACTs, and the tumor maximum standard uptake value (SUV_max), average SUV (SUV_avg), and tumor volume measurements were compared. Percent differences were calculated between PET data corrected with the various ACTs and that corrected with ACT_10phs. In the phase approach, ACT_10phs can be approximated by ACT_4phs to within a mean percent difference of 2% in SUV and tumor volume measurements. In the cine approach, ACT_10phs can be approximated to within a mean percent difference of 3% by ACTs computed from cine durations ≥3 s. Acquiring CT images only at the four significant phases for the
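
    Whichever image subset is chosen, the ACT itself is a pixel-by-pixel mean, as in the sketch below: the ten 4D-CT phases give the original ACT_10phs, the 0%, 20%, 50% and 70% phases give ACT_4phs, and cine images spanning a chosen duration give the cine-approach ACT.

        import numpy as np

        def respiration_averaged_ct(images):
            """Pixel-by-pixel average of a stack of co-registered CT
            phase or cine images, shape (n_images, z, y, x)."""
            return np.asarray(images, dtype=float).mean(axis=0)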

  6. Evaluation of a new software tool for the automatic volume calculation of hepatic tumors. First results

    International Nuclear Information System (INIS)

    Meier, S.; Mildenberger, P.; Pitton, M.; Thelen, M.; Schenk, A.; Bourquain, H.

    2004-01-01

    Purpose: computed tomography has become the preferred method for detecting liver carcinomas. The introduction of spiral CT added volumetric assessment of intrahepatic tumors, which was unattainable in the clinical routine with incremental CT due to complex planimetric revisions and excessive computing time. In an ongoing clinical study, a new software tool was tested for automatic calculation of tumor volume and the time needed for this procedure. Materials and methods: we analyzed patients suffering from hepatocellular carcinoma (HCC). All patients underwent treatment with repeated transcatheter chemoembolization of the hepatic artery. The volumes of the HCC lesions detected in CT were measured with the new software tool in HepaVision (MeVis, Germany). The results were compared with manual planimetric calculation of the volume performed by three independent radiologists. Results: our first results in 16 patients show a correlation between the automatically and the manually calculated volumes (up to a difference of 2 ml) of 96.8%. While the manual method of analyzing the volume of a lesion requires 2.5 minutes on average, the automatic method merely requires about 30 seconds of user interaction time. Conclusion: These preliminary results show a good correlation between automatic and manual calculations of the tumor volume. The new software tool requires less time for accurate determination of the tumor volume and can be applied in the daily clinical routine. (orig.) [de]

  7. To quantum averages through asymptotic expansion of classical averages on infinite-dimensional space

    International Nuclear Information System (INIS)

    Khrennikov, Andrei

    2007-01-01

    We study asymptotic expansions of Gaussian integrals of analytic functionals on infinite-dimensional spaces (Hilbert and nuclear Frechet). We obtain an asymptotic equality coupling the Gaussian integral and the trace of the composition of scaling of the covariation operator of a Gaussian measure and the second (Frechet) derivative of a functional. In this way we couple classical average (given by an infinite-dimensional Gaussian integral) and quantum average (given by the von Neumann trace formula). We can interpret this mathematical construction as a procedure of 'dequantization' of quantum mechanics. We represent quantum mechanics as an asymptotic projection of classical statistical mechanics with infinite-dimensional phase space. This space can be represented as the space of classical fields, so quantum mechanics is represented as a projection of 'prequantum classical statistical field theory'

  8. Determining average path length and average trapping time on generalized dual dendrimer

    Science.gov (United States)

    Li, Ling; Guan, Jihong

    2015-03-01

    Dendrimers have a wide range of important applications in various fields. In some cases, during a transport or diffusion process, a dendrimer transforms into its dual structure, named the Husimi cactus. In this paper, we study the structural properties and the trapping problem on a family of generalized dual dendrimers with arbitrary coordination numbers. We first calculate exactly the average path length (APL) of the networks. The APL increases logarithmically with the network size, indicating that the networks exhibit a small-world effect. Then we determine the average trapping time (ATT) of the trapping process in two cases, i.e., when the trap is placed on a central node and when the trap is uniformly distributed over all the nodes of the network. In both cases, we obtain explicit solutions for the ATT and show how they vary with the network size. Besides, we also discuss the influence of the coordination number on the trapping efficiency.
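
    The APL itself can be checked by brute force on small instances; the sketch below computes it by breadth-first search for a generic unweighted graph, here a two-triangle toy standing in for a small Husimi cactus, rather than by the paper's closed-form derivation.

        from collections import deque

        def average_path_length(adj):
            """Average shortest-path distance over all node pairs of an
            unweighted graph given as {node: list of neighbours}."""
            def bfs(src):
                dist = {src: 0}
                queue = deque([src])
                while queue:
                    u = queue.popleft()
                    for v in adj[u]:
                        if v not in dist:
                            dist[v] = dist[u] + 1
                            queue.append(v)
                return dist

            nodes = list(adj)
            total = pairs = 0
            for i, u in enumerate(nodes):
                dist = bfs(u)
                for v in nodes[i + 1:]:
                    total += dist[v]
                    pairs += 1
            return total / pairs

        # Two triangles sharing node 2, a minimal cactus-like example.
        adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3, 4], 3: [2, 4], 4: [2, 3]}
        print(average_path_length(adj))  # 1.4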

  9. Low Average Sidelobe Slot Array Antennas for Radiometer Applications

    Science.gov (United States)

    Rengarajan, Sembiam; Zawardzki, Mark S.; Hodges, Richard E.

    2012-01-01

    In radiometer applications, it is required to design antennas that meet low average sidelobe levels and low average return loss over a specified frequency bandwidth. It is a challenge to meet such specifications over a frequency range when one uses resonant elements such as waveguide feed slots. In addition to their inherently narrow frequency band performance, the problem is exacerbated by modeling errors and manufacturing tolerances. There was a need to develop a design methodology to solve the problem. An iterative design procedure was developed by starting with an array architecture, lattice spacing, aperture distribution, waveguide dimensions, etc. The array was designed using Elliott's technique with appropriate values of the total slot conductance in each radiating waveguide, and the total resistance in each feed waveguide. Subsequently, the array performance was analyzed by the full wave method of moments solution to the pertinent integral equations. Monte Carlo simulations were also carried out to account for amplitude and phase errors introduced into the aperture distribution due to modeling errors as well as manufacturing tolerances. If the design margins for the average sidelobe level and the average return loss were not adequate, array architecture, lattice spacing, aperture distribution, and waveguide dimensions were varied in subsequent iterations. Once the design margins were found to be adequate, the iteration was stopped and a good design was achieved. A symmetric array architecture was found to meet the design specification with adequate margin. The specifications were near 40 dB for angular regions beyond 30 degrees from broadside. Separable Taylor distribution with nbar=4 and 35 dB sidelobe specification was chosen for each principal plane. A non-separable distribution obtained by the genetic algorithm was found to have similar characteristics. The element spacing was obtained to provide the required beamwidth and close to a null in the E

  10. Potential of high-average-power solid state lasers

    International Nuclear Information System (INIS)

    Emmett, J.L.; Krupke, W.F.; Sooy, W.R.

    1984-01-01

    We discuss the possibility of extending solid state laser technology to high average power and of improving the efficiency of such lasers sufficiently to make them reasonable candidates for a number of demanding applications. A variety of new design concepts, materials, and techniques have emerged over the past decade that, collectively, suggest that the traditional technical limitations on power (a few hundred watts or less) and efficiency (less than 1%) can be removed. The core idea is configuring the laser medium in relatively thin, large-area plates, rather than using the traditional low-aspect-ratio rods or blocks. This presents a large surface area for cooling, and assures that deposited heat is relatively close to a cooled surface. It also minimizes the laser volume distorted by edge effects. The feasibility of such configurations is supported by recent developments in materials, fabrication processes, and optical pumps. Two types of lasers can, in principle, utilize this sheet-like gain configuration in such a way that phase and gain profiles are uniformly sampled and, to first order, yield high-quality (undistorted) beams. The zig-zag laser does this with a single plate, and should be capable of power levels up to several kilowatts. The disk laser is designed around a large number of plates, and should be capable of scaling to arbitrarily high power levels

  11. The dependence of prostate postimplant dosimetric quality on CT volume determination

    International Nuclear Information System (INIS)

    Merrick, Gregory S.; Butler, Wayne M.; Dorsey, Anthony T.; Lief, Jonathan H.

    1999-01-01

    Purpose: The postoperative evaluation of permanent prostate brachytherapy requires a subjective determination of the implant volume. This work investigates the magnitude of the effect that various methods of treatment volume delineation have on dosimetric quality parameters for a treatment planning philosophy that defines a target volume as the prostate with a periprostatic margin. Methods and Materials: Eight consecutive prostate brachytherapy patients with a prescribed dose of 145 Gy from ¹²⁵I as monotherapy comprised the study population. The prostate ultrasound volume was enlarged to a planning volume by an average factor of 1.8 to encompass probable extracapsular extension in the periprostatic region. For this cohort, the mean pretreatment parameters were 30.3 cm³ ultrasound volume, 51.8 cm³ planning volume, 131 seeds per patient, and 42.9 mCi total activity. On CT study sets obtained less than 2 hours postoperatively, target volumes were drawn using three methods: prostate plus a periprostatic margin; prostate only, which excluded the puborectalis muscles, the periprostatic fat and the periprostatic venous plexus; and the preplanning ultrasound magnified to conform to the magnification factor of the postimplant CT scan. Three sets of 5 dosimetric quality parameters corresponding to the different volumetric approaches were calculated: V100, V150, and V200, which are the fractions of the target volume covered by 100, 150, and 200% of the prescribed dose, and D90 and D100, which are the minimal doses covering 90 and 100% of the target volume. Results: The postoperative CT volume utilizing the prostate-plus-margin technique was comparable to the initial planning volume (mean 55.5 cm³ vs. 51.8 cm³, respectively), whereas those determined via superimposing the preplan ultrasound resulted in volumes nearly identical to the initial ultrasound evaluation (mean 32.4 cm³ vs. 30.3 cm³). The prostate only approach resulted in volumes approximately 25% larger than
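
    Given a dose grid and a target mask, the five quality parameters reduce to simple array operations; a minimal sketch assuming uniform voxel volumes is shown below.

        import numpy as np

        def dvh_metrics(dose, target_mask, prescribed=145.0):
            """V100/V150/V200: fractions of the target receiving 100/150/200%
            of the prescribed dose; D90/D100: minimal doses covering 90/100%
            of the target. dose in Gy, target_mask boolean, same shape."""
            d = dose[target_mask]
            out = {f"V{p}": float((d >= prescribed * p / 100.0).mean())
                   for p in (100, 150, 200)}
            out["D90"] = float(np.percentile(d, 10))  # 90% of voxels exceed this
            out["D100"] = float(d.min())
            return out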

  12. Technical support for the Ohio Clean Coal Technology Program. Volume 2, Baseline of knowledge concerning process modification opportunities, research needs, by-product market potential, and regulatory requirements: Final report

    Energy Technology Data Exchange (ETDEWEB)

    Olfenbuttel, R.; Clark, S.; Helper, E.; Hinchee, R.; Kuntz, C.; Means, J.; Oxley, J.; Paisley, M.; Rogers, C.; Sheppard, W.; Smolak, L. [Battelle, Columbus, OH (United States)

    1989-08-28

    This report was prepared for the Ohio Coal Development Office (OCDO) under Grant Agreement No. CDO/R-88-LR1 and comprises two volumes. Volume 1 presents data on the chemical, physical, and leaching characteristics of by-products from a wide variety of clean coal combustion processes. Volume 2 consists of a discussion of (a) process modification waste minimization opportunities and stabilization considerations; (b) research and development needs and issues relating to clean coal combustion technologies and by-products; (c) the market potential for reusing or recycling by-product materials; and (d) regulatory considerations relating to by-product disposal or reuse.

  13. The average crossing number of equilateral random polygons

    International Nuclear Information System (INIS)

    Diao, Y; Dobay, A; Kusner, R B; Millett, K; Stasiak, A

    2003-01-01

    In this paper, we study the average crossing number of equilateral random walks and polygons. We show that the mean average crossing number ⟨ACN⟩ of all equilateral random walks of length n is of the form (3/16)n ln n + O(n). A similar result holds for equilateral random polygons. These results are confirmed by our numerical studies. Furthermore, our numerical studies indicate that when random polygons of length n are divided into individual knot types, the ⟨ACN⟩ for each knot type K can be described by a function of the form ⟨ACN⟩ = a(n − n₀)ln(n − n₀) + b(n − n₀) + c, where a, b and c are constants depending on K and n₀ is the minimal number of segments required to form K. The ⟨ACN⟩ profiles diverge from each other, with more complex knots showing higher ⟨ACN⟩ than less complex knots. Moreover, the profiles intersect with the ⟨ACN⟩ profile of all closed walks. These points of intersection define the equilibrium length of K, i.e., the chain length n_e(K) at which a statistical ensemble of configurations with given knot type K (upon cutting, equilibration and reclosure to a new knot type K') does not show a tendency to increase or decrease ⟨ACN⟩. This concept of equilibrium length seems to be universal, and applies also to other length-dependent observables for random knots, such as the mean radius of gyration ⟨Rg⟩.
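
    Fitting the quoted ⟨ACN⟩ profile to simulated data is a standard least-squares problem; the sketch below uses synthetic data and holds n₀ fixed at the knot's minimal segment number (here an invented value of 5) rather than fitting it.

        import numpy as np
        from scipy.optimize import curve_fit

        N0 = 5.0  # minimal segment number of the (hypothetical) knot type

        def acn_profile(n, a, b, c):
            """<ACN>(n) = a(n - n0) ln(n - n0) + b(n - n0) + c."""
            x = n - N0
            return a * x * np.log(x) + b * x + c

        n = np.arange(10.0, 200.0, 10.0)
        rng = np.random.default_rng(1)
        y = acn_profile(n, 3 / 16, 0.5, 2.0) + 0.05 * rng.normal(size=n.size)

        params, _ = curve_fit(acn_profile, n, y, p0=(0.2, 0.5, 2.0))
        print(params)  # recovers roughly (0.1875, 0.5, 2.0)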

  14. High-average-power laser medium based on silica glass

    Science.gov (United States)

    Fujimoto, Yasushi; Nakatsuka, Masahiro

    2000-01-01

    Silica glass is one of the most attractive materials for a high-average-power laser. We have developed a new laser material based on silica glass using the zeolite method, which is effective for uniform dispersion of rare earth ions in silica glass. A high-quality medium, bubble-free and with very low refractive index distortion, is required to realize laser action. As the main cause of bubbling is hydroxy species remaining in the gelation sample, we carefully chose the colloidal silica particles, the pH value of the hydrochloric acid used for hydrolysis of tetraethylorthosilicate in the sol-gel process, and the temperature and atmosphere during the sintering process, and thereby obtained a bubble-free transparent rare-earth-doped silica glass. The refractive index distortion of the sample is also discussed.

  15. Zonally averaged chemical-dynamical model of the lower thermosphere

    International Nuclear Information System (INIS)

    Kasting, J.F.; Roble, R.G.

    1981-01-01

    A zonally averaged numerical model of the thermosphere is used to examine the coupling between neutral composition (including N₂, O₂ and O), temperature, and winds at solstice for solar minimum conditions. The meridional circulation forced by solar heating results in a summer-to-winter flow, with a winter enhancement in atomic oxygen density that is a factor of about 1.8 greater than in the summer hemisphere at 160 km. The O₂ and N₂ variations are associated with a latitudinal gradient in total number density, which is required to achieve pressure balance in the presence of large zonal jets. Latitudinal profiles of OI (5577 Å) green-line emission intensity are calculated by using both the Chapman and Barth mechanisms. Composition of the lower thermosphere is shown to be strongly influenced by circulation patterns initiated in the stratosphere and lower mesosphere, below the lower boundary used in the model

  16. Comparison of Statistically Modeled Contaminated Soil Volume Estimates and Actual Excavation Volumes at the Maywood FUSRAP Site - 13555

    Energy Technology Data Exchange (ETDEWEB)

    Moore, James [U.S. Army Corps of Engineers - New York District 26 Federal Plaza, New York, New York 10278 (United States); Hays, David [U.S. Army Corps of Engineers - Kansas City District 601 E. 12th Street, Kansas City, Missouri 64106 (United States); Quinn, John; Johnson, Robert; Durham, Lisa [Argonne National Laboratory, Environmental Science Division 9700 S. Cass Ave., Argonne, Illinois 60439 (United States)

    2013-07-01

    As part of the ongoing remediation process at the Maywood Formerly Utilized Sites Remedial Action Program (FUSRAP) properties, Argonne National Laboratory (Argonne) assisted the U.S. Army Corps of Engineers (USACE) New York District by providing contaminated soil volume estimates for the main site area, much of which is fully or partially remediated. As part of the volume estimation process, an initial conceptual site model (ICSM) was prepared for the entire site that captured existing information (with the exception of soil sampling results) pertinent to the possible location of surface and subsurface contamination above cleanup requirements. This ICSM was based on historical anecdotal information, aerial photographs, and the logs from several hundred soil cores that identified the depth of fill material and the depth to bedrock under the site. Specialized geostatistical software developed by Argonne was used to update the ICSM with historical sampling results and down-hole gamma survey information for hundreds of soil core locations. The updating process yielded both a best guess estimate of contamination volumes and a conservative upper bound on the volume estimate that reflected the estimate's uncertainty. Comparison of model results to actual removed soil volumes was conducted on a parcel-by-parcel basis. Where sampling data density was adequate, the actual volume matched the model's average or best guess results. Where contamination was un-characterized and unknown to the model, the actual volume exceeded the model's conservative estimate. Factors affecting volume estimation were identified to assist in planning further excavations. (authors)

  17. Expressing intrinsic volumes as rotational integrals

    DEFF Research Database (Denmark)

    Auneau, Jeremy Michel; Jensen, Eva Bjørn Vedel

    2010-01-01

    A new rotational formula of Crofton type is derived for intrinsic volumes of a compact subset X of positive reach. The formula provides a functional, defined on the sections of X with a j-dimensional linear subspace, whose rotational average equals the intrinsic volumes of X. Simplified forms of the ...

  18. 20 CFR 404.221 - Computing your average monthly wage.

    Science.gov (United States)

    2010-04-01

    20 CFR 404.221 (Employees' Benefits, 2010-04-01 ed.), Federal Old-Age, Survivors and Disability Insurance (1950- ), Computing Primary Insurance Amounts, Average-Monthly-Wage Method of Computing Primary Insurance Amounts: § 404.221 Computing your average monthly wage. (a) General. Under the average...

  19. Average and local structure of α-CuI by configurational averaging

    International Nuclear Information System (INIS)

    Mohn, Chris E; Stoelen, Svein

    2007-01-01

    Configurational Boltzmann averaging together with density functional theory are used to study in detail the average and local structure of the superionic α-CuI. We find that the coppers are spread out with peaks in the atom-density at the tetrahedral sites of the fcc sublattice of iodines. We calculate Cu-Cu, Cu-I and I-I pair radial distribution functions, the distribution of coordination numbers and the distribution of Cu-I-Cu, I-Cu-I and Cu-Cu-Cu bond-angles. The partial pair distribution functions are in good agreement with experimental neutron diffraction-reverse Monte Carlo, extended x-ray absorption fine structure and ab initio molecular dynamics results. In particular, our results confirm the presence of a prominent peak at around 2.7 Å in the Cu-Cu pair distribution function as well as a broader, less intense peak at roughly 4.3 Å. We find highly flexible bonds and a range of coordination numbers for both iodines and coppers. This structural flexibility is of key importance in order to understand the exceptional conductivity of coppers in α-CuI; the iodines can easily respond to changes in the local environment as the coppers diffuse, and a myriad of different diffusion-pathways is expected due to the large variation in the local motifs
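
    The pair radial distribution functions mentioned above can be estimated from any set of atomic configurations. Below is a generic minimum-image estimator for a single species in a cubic periodic box, offered as a stand-in sketch rather than the configurational Boltzmann-averaging machinery used in the paper.

    ```python
    import numpy as np

    def pair_rdf(positions, box, r_max, n_bins=100):
        """g(r) for one species in a cubic periodic box (minimum image)."""
        n = len(positions)
        rho = n / box**3
        diff = positions[:, None, :] - positions[None, :, :]
        diff -= box * np.round(diff / box)                     # minimum-image convention
        r = np.linalg.norm(diff, axis=-1)[np.triu_indices(n, k=1)]
        hist, edges = np.histogram(r[r < r_max], bins=n_bins, range=(0.0, r_max))
        shells = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
        g = 2.0 * hist / (n * rho * shells)    # factor 2: upper triangle counts each pair once
        return 0.5 * (edges[1:] + edges[:-1]), g

    # sanity check: an ideal gas should give g(r) close to 1
    rng = np.random.default_rng(0)
    r, g = pair_rdf(rng.uniform(0.0, 20.0, size=(500, 3)), box=20.0, r_max=8.0)
    ```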

  20. Design of a micro-irrigation system based on the control volume method

    Directory of Open Access Journals (Sweden)

    Chasseriaux G.

    2006-01-01

    A micro-irrigation system design based on the control volume method using the back-step procedure is presented in this study. The proposed numerical method is simple and consists of delimiting an elementary volume of the lateral equipped with an emitter, called a «control volume», to which the conservation equations of fluid hydrodynamics are applied. The control volume method is an iterative method that calculates velocity and pressure step by step throughout the micro-irrigation network, based on an assumed pressure at the end of the line. A simple microcomputer program was used for the calculation and the convergence was very fast. Once the average water requirement of plants is estimated, it is easy to choose the sum of the average emitter discharges as the total average flow rate of the network. The design consists of exploring an economical and efficient network that delivers the input flow rate uniformly to all emitters. This program permitted the design of large complex networks of thousands of emitters very quickly. Three subroutine programs calculate velocity and pressure in a lateral pipe and a submain pipe. The control volume method has already been tested for lateral design, and its results were validated by other methods such as the finite element method, so it permits determination of the optimal design for such a micro-irrigation network
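
    The back-step idea lends itself to a compact sketch: assume a pressure head at the end of the lateral, then march upstream one control volume at a time, adding each emitter's discharge to the line flow and each segment's friction loss to the head. The emitter law q = k·h^x and the constant Darcy-Weisbach friction factor below are illustrative assumptions, not the paper's exact formulation.

    ```python
    import numpy as np

    def lateral_backstep(n_emitters, spacing, d, h_end, k=1.2e-6, x=0.5, f=0.02):
        """Back-step (control-volume) walk along a drip lateral.

        Start from an assumed end pressure head h_end (m) and step upstream:
        add the emitter outflow q = k*h**x (m^3/s), then the Darcy-Weisbach
        friction loss of the upstream pipe segment (diameter d, length spacing).
        """
        g = 9.81
        area = np.pi * d**2 / 4
        h, q_line, heads = h_end, 0.0, []
        for _ in range(n_emitters):
            q_line += k * h**x                       # emitter discharge joins the line
            v = q_line / area                        # mean velocity in the segment
            h += f * (spacing / d) * v**2 / (2 * g)  # friction head loss upstream
            heads.append(h)
        return q_line, np.array(heads[::-1])         # inlet flow, head profile

    q_in, heads = lateral_backstep(n_emitters=100, spacing=1.0, d=0.016, h_end=10.0)
    print(f"required inlet head: {heads[0]:.2f} m, inlet flow: {q_in * 1000:.3f} L/s")
    ```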

  1. System for evaluation of the true average input-pulse rate

    International Nuclear Information System (INIS)

    Eichenlaub, D.P.; Garrett, P.

    1977-01-01

    A digital radiation monitoring system is described that uses current digital circuits and a microprocessor to rapidly process the pulse data coming from remote radiation controllers. The system analyses the pulse rates to determine whether a new datum is statistically the same as those previously received, and hence determines the best possible averaging time for itself. As long as the true average pulse rate stays constant, the time over which the average is established can increase until the statistical error is below the desired level, i.e. 1%. When the digital processing of the pulse data indicates a change in the true average pulse rate, the averaging time is reduced so as to improve the response time of the system at the cost of statistical error [fr]
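
    A rough sketch of such an adaptive averaging scheme, assuming Poisson counting statistics: the averaging interval keeps growing while new counts are consistent with the running rate, and is restarted when a statistically significant change is detected. The test and thresholds below are illustrative, not the original design values.

    ```python
    import math
    import numpy as np

    class AdaptiveRateMeter:
        """Adaptive average of a pulse rate: the averaging interval grows while
        new data agree with the running average and restarts on a detected change."""

        def __init__(self, z_crit=3.0):
            self.counts, self.time, self.z_crit = 0, 0.0, z_crit

        def update(self, new_counts, dt):
            if self.time > 0.0:
                expected = self.rate * dt
                # Poisson z-test of the new interval against the running rate
                z = (new_counts - expected) / math.sqrt(max(expected, 1.0))
                if abs(z) > self.z_crit:
                    self.counts, self.time = 0, 0.0   # true rate changed: restart
            self.counts += new_counts
            self.time += dt
            return self.rate

        @property
        def rate(self):
            return self.counts / self.time if self.time else 0.0

        @property
        def rel_err(self):
            # relative statistical error of a Poisson count is 1/sqrt(N)
            return 1.0 / math.sqrt(self.counts) if self.counts else float("inf")

    rng = np.random.default_rng(0)
    meter = AdaptiveRateMeter()
    for true_rate in [100.0] * 60 + [150.0] * 60:     # step change at t = 60 s
        r = meter.update(rng.poisson(true_rate), dt=1.0)
    print(f"tracked rate {r:.1f} cps, relative error {meter.rel_err:.3f}")
    ```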

  2. Quantified moving average strategy of crude oil futures market based on fuzzy logic rules and genetic algorithms

    Science.gov (United States)

    Liu, Xiaojia; An, Haizhong; Wang, Lijun; Guan, Qing

    2017-09-01

    The moving average strategy is a technical indicator that can generate trading signals to assist investment. While the trading signals tell traders the timing to buy or sell, the moving average cannot tell the trading volume, which is a crucial factor for investment. This paper proposes a fuzzy moving average strategy, in which fuzzy logic rules are used to determine the strength of trading signals, i.e., the trading volume. To compose one fuzzy logic rule, we use four types of moving averages, the length of the moving average period, the fuzzy extent, and the recommended value. Ten fuzzy logic rules form a fuzzy set, which generates a rating level that decides the trading volume. In this process, we apply genetic algorithms to identify an optimal fuzzy logic rule set and utilize crude oil futures prices from the New York Mercantile Exchange (NYMEX) as the experiment data. Each experiment is repeated 20 times. The results show that, firstly, the fuzzy moving average strategy can obtain a more stable rate of return than the plain moving average strategies. Secondly, the holding-amount series is highly sensitive to the price series. Thirdly, simple moving average methods are more efficient. Lastly, the fuzzy extents of extremely low, high, and very high are more popular. These results are helpful in investment decisions.
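
    A toy sketch of the core idea, with a single crossover rule instead of the paper's ten GA-tuned rules: the sign of the fast/slow moving-average spread gives the direction, and a simple membership function on the spread grades the trading volume. All breakpoints are invented for illustration.

    ```python
    import numpy as np

    def sma(prices, n):
        """Simple moving average over windows of n prices ('valid' series)."""
        return np.convolve(prices, np.ones(n) / n, mode="valid")

    def fuzzy_ma_signal(prices, fast=5, slow=20, full_size_spread=0.02):
        """Direction from the fast/slow crossover; trading volume in [0, 1]
        from a triangular membership on the normalized spread."""
        f, s = sma(prices, fast)[-1], sma(prices, slow)[-1]
        spread = (f - s) / s
        strength = min(abs(spread) / full_size_spread, 1.0)   # fuzzy signal strength
        return np.sign(spread), strength                      # (+1 buy / -1 sell, volume)

    rng = np.random.default_rng(1)
    prices = 60.0 + np.cumsum(rng.normal(0.0, 0.5, 250))      # synthetic futures path
    print(fuzzy_ma_signal(prices))
    ```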

  3. Average of delta: a new quality control tool for clinical laboratories.

    Science.gov (United States)

    Jones, Graham R D

    2016-01-01

    Average of normals is a tool used to control assay performance using the average of a series of results from patients' samples. Delta checking is a process of identifying errors in individual patient results by reviewing the difference from previous results of the same patient. This paper introduces a novel alternate approach, average of delta, which combines these concepts to use the average of a number of sequential delta values to identify changes in assay performance. Models for average of delta and average of normals were developed in a spreadsheet application. The model assessed the expected scatter of the average of delta and average of normals functions and the effect of assay bias for different values of analytical imprecision, within- and between-subject biological variation, and the number of samples included in the calculations. The final assessment was the number of patients' samples required to identify an added bias with 90% certainty. The model demonstrated that with larger numbers of delta values, the average of delta function was tighter (lower coefficient of variation). The optimal number of samples for bias detection with average of delta was likely to be between 5 and 20 for most settings, and average of delta outperformed average of normals when the within-subject biological variation was small relative to the between-subject variation. Average of delta provides a possible additional assay quality control tool which theoretical modelling predicts may be more valuable than average of normals for analytes where the group biological variation is wide compared with within-subject variation and where there is a high rate of repeat testing in the laboratory patient population.
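
    A minimal sketch of the average-of-delta calculation described above; the control limit is treated as a given input, whereas the paper derives it from analytical imprecision and biological variation.

    ```python
    import numpy as np

    def average_of_delta(results, n_deltas=10, limit=1.0):
        """Flag assay bias from the rolling average of sequential delta values.

        results: iterable of (patient_id, value) in order of analysis.
        limit: control limit on the average delta (a plain number here).
        """
        last, deltas, flags = {}, [], []
        for i, (pid, value) in enumerate(results):
            if pid in last:
                deltas.append(value - last[pid])            # delta-check difference
                window = deltas[-n_deltas:]
                if len(window) == n_deltas and abs(np.mean(window)) > limit:
                    flags.append((i, float(np.mean(window))))
            last[pid] = value
        return flags
    ```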

  4. Effects of stratospheric aerosol surface processes on the LLNL two-dimensional zonally averaged model

    International Nuclear Information System (INIS)

    Connell, P.S.; Kinnison, D.E.; Wuebbles, D.J.; Burley, J.D.; Johnston, H.S.

    1992-01-01

    We have investigated the effects of incorporating representations of heterogeneous chemical processes associated with stratospheric sulfuric acid aerosol into the LLNL two-dimensional, zonally averaged model of the troposphere and stratosphere. Using distributions of aerosol surface area and volume density derived from SAGE II satellite observations, we were primarily interested in changes in partitioning within the Cl- and N-families in the lower stratosphere, compared to a model including only gas-phase photochemical reactions.

  5. Analytical expressions for conditional averages: A numerical test

    DEFF Research Database (Denmark)

    Pécseli, H.L.; Trulsen, J.

    1991-01-01

    Conditionally averaged random potential fluctuations are an important quantity for analyzing turbulent electrostatic plasma fluctuations. Experimentally, this averaging can be readily performed by sampling the fluctuations only when a certain condition is fulfilled at a reference position...

  6. Experimental demonstration of squeezed-state quantum averaging

    DEFF Research Database (Denmark)

    Lassen, Mikael Østergaard; Madsen, Lars Skovgaard; Sabuncu, Metin

    2010-01-01

    We propose and experimentally demonstrate a universal quantum averaging process implementing the harmonic mean of quadrature variances. The averaged variances are prepared probabilistically by means of linear optical interference and measurement-induced conditioning. We verify that the implemented...

  7. 20 CFR 404.220 - Average-monthly-wage method.

    Science.gov (United States)

    2010-04-01

    20 CFR 404.220 (Employees' Benefits, 2010-04-01 ed.), Federal Old-Age, Survivors and Disability Insurance (1950- ), Computing Primary Insurance Amounts, Average-Monthly-Wage Method of Computing Primary Insurance Amounts: § 404.220 Average-monthly-wage method. (a) Who is eligible for this method. You must...

  8. A time-averaged cosmic ray propagation theory

    International Nuclear Information System (INIS)

    Klimas, A.J.

    1975-01-01

    An argument is presented, which casts doubt on our ability to choose an appropriate magnetic field ensemble for computing the average behavior of cosmic ray particles. An alternate procedure, using time-averages rather than ensemble-averages, is presented. (orig.) [de

  9. Averaging in SU(2) open quantum random walk

    International Nuclear Information System (INIS)

    Ampadu Clement

    2014-01-01

    We study the average position and the symmetry of the distribution in the SU(2) open quantum random walk (OQRW). We show that the average position in the central limit theorem (CLT) is non-uniform compared with the average position in the non-CLT. The symmetry of distribution is shown to be even in the CLT

  11. State policies and requirements for management of uranium mining and milling in New Mexico. Volume IV. The supply of electric power and natural gas fuel as possible constraints on uranium production

    International Nuclear Information System (INIS)

    Page, G.B.

    1980-04-01

    The report contained in this volume considers the availability of electric power to supply uranium mines and mills. The report, submitted to Sandia Laboratories by the New Mexico Department of Energy and Minerals (EMD), is reproduced without modification. The state concludes that the supply of power, including natural gas-fueled production, will not constrain uranium production

  12. Long-term outcomes of anterior spinal fusion for treating thoracic adolescent idiopathic scoliosis curves: average 15-year follow-up analysis.

    Science.gov (United States)

    Sudo, Hideki; Ito, Manabu; Kaneda, Kiyoshi; Shono, Yasuhiro; Takahata, Masahiko; Abumi, Kuniyoshi

    2013-05-01

    Retrospective review. To assess the long-term outcomes of anterior spinal fusion (ASF) for treating thoracic adolescent idiopathic scoliosis (AIS). Although ASF is reported to provide good coronal and sagittal correction of main thoracic (MT) AIS curves, the long-term outcome of ASF is unknown. A consecutive series of 25 patients with Lenke 1 MT AIS were included. Outcome measures comprised radiographical measurements, pulmonary function, and Scoliosis Research Society outcome instrument (SRS-30) scores (preoperative SRS-30 scores were not documented). Postoperative surgical revisions and complications were recorded. Twenty-five patients were followed up for 12 to 18 years (average, 15.2 yr). The average MT Cobb angle correction rate and the correction loss at the final follow-up were 56.7% and 9.2°, respectively. The average preoperative instrumented-level kyphosis was 8.3°, which significantly improved to 18.6° (P = 0.0003) at the final follow-up. The average percent-predicted forced vital capacity and forced expiratory volume in 1 second were significantly decreased at the long-term follow-up measurements (73% and 69%; P = 0.0004 and 0.0016, respectively). However, no patient had complaints related to pulmonary function. The average total SRS-30 score was 4.0. Implant breakage was not observed. All patients, except 1 who required revision surgery, demonstrated solid fusion. Late instrumentation-related bronchial problems were observed in 1 patient, who required implant removal and bronchial tube repair 13 years after the initial surgery. Overall radiographical findings and patient outcome measures of ASF for Lenke 1 MT AIS were satisfactory at an average follow-up of 15 years. ASF provides significant sagittal correction of the main thoracic curve with long-term maintenance of sagittal profiles. Percent-predicted values of forced vital capacity and forced expiratory volume in 1 second were decreased in this cohort; however, no patient had complaints related to pulmonary function.

  13. Optimal bounds and extremal trajectories for time averages in nonlinear dynamical systems

    Science.gov (United States)

    Tobasco, Ian; Goluskin, David; Doering, Charles R.

    2018-02-01

    For any quantity of interest in a system governed by ordinary differential equations, it is natural to seek the largest (or smallest) long-time average among solution trajectories, as well as the extremal trajectories themselves. Upper bounds on time averages can be proved a priori using auxiliary functions, the optimal choice of which is a convex optimization problem. We prove that the problems of finding maximal trajectories and minimal auxiliary functions are strongly dual. Thus, auxiliary functions provide arbitrarily sharp upper bounds on time averages. Moreover, any nearly minimal auxiliary function provides phase space volumes in which all nearly maximal trajectories are guaranteed to lie. For polynomial equations, auxiliary functions can be constructed by semidefinite programming, which we illustrate using the Lorenz system.
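
    The quantity being bounded is a long-time average along a trajectory. The sketch below only estimates such averages empirically for the Lorenz system by direct integration over a few initial conditions; the paper's auxiliary-function bounds via semidefinite programming are not reproduced, and the horizons and tolerances are arbitrary choices.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    def lorenz(t, u, sigma=10.0, beta=8/3, rho=28.0):
        x, y, z = u
        return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

    def long_time_average(phi, u0, T=500.0, burn=100.0):
        """Empirical long-time average of phi(u) along one trajectory."""
        sol = solve_ivp(lorenz, (0.0, T), u0, dense_output=True, rtol=1e-8, atol=1e-8)
        t = np.linspace(burn, T, 50_000)
        return np.mean(phi(sol.sol(t)))

    # crude search over initial conditions for the largest average of z
    rng = np.random.default_rng(2)
    avgs = [long_time_average(lambda u: u[2], rng.normal(0.0, 10.0, 3)) for _ in range(3)]
    print(max(avgs))
    ```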

  14. Digital mammography screening: average glandular dose and first performance parameters

    International Nuclear Information System (INIS)

    Weigel, S.; Girnus, R.; Czwoydzinski, J.; Heindel, W.; Decker, T.; Spital, S.

    2007-01-01

    Purpose: The Radiation Protection Commission demanded structured implementation of digital mammography screening in Germany. The main requirements were the installation of digital reference centers and separate evaluation of the fully digitized screening units. Digital mammography screening must meet the quality standards of the European guidelines and must be compared to analog screening results. We analyzed early surrogate indicators of effective screening and dosage levels for the first German digital screening unit in a routine setting after the first half of the initial screening round. Materials and Methods: We used three digital mammography screening units (one full-field digital scanner [DR] and two computed radiography systems [CR]). Each system has been proven to fulfill the requirements of the National and European guidelines. The radiation exposure levels, the medical workflow and the histological results were documented in a central electronic screening record. Results: In the first year 11,413 women were screened (participation rate 57.5 %). The parenchymal dosages for the three mammographic X-ray systems, averaged for the different breast sizes, were 0.7 (DR), 1.3 (CR), 1.5 (CR) mGy. 7 % of the screened women needed to undergo further examinations. The total number of screen-detected cancers was 129 (detection rate 1.1 %). 21 % of the carcinomas were classified as ductal carcinomas in situ, 40 % of the invasive carcinomas had a histological size ≤ 10 mm and 61 % < 15 mm. The frequency distribution of pT-categories of screen-detected cancer was as follows: pTis 20.9 %, pT1 61.2 %, pT2 14.7 %, pT3 2.3 %, pT4 0.8 %. 73 % of the invasive carcinomas were node-negative. (orig.)

  15. Average chewing pattern improvements following Disclusion Time reduction.

    Science.gov (United States)

    Kerstein, Robert B; Radke, John

    2017-05-01

    Studies involving electrognathographic (EGN) recordings of chewing improvements obtained following occlusal adjustment therapy are rare, as most studies lack 'chewing' within the research. The objectives of this study were to determine if reducing long Disclusion Time to short Disclusion Time with the immediate complete anterior guidance development (ICAGD) coronoplasty in symptomatic subjects altered their average chewing pattern (ACP) and their muscle function. Twenty-nine muscularly symptomatic subjects underwent simultaneous EMG and EGN recordings of right and left gum chewing, before and after the ICAGD coronoplasty. Statistical differences in the mean Disclusion Time, the mean muscle contraction cycle, and the mean ACP resultant from ICAGD underwent the Student's paired t-test (α = 0.05). Disclusion Time reductions from ICAGD were significant (2.11-0.45 s. p = 0.0000). Post-ICAGD muscle changes were significant in the mean area (p = 0.000001), the peak amplitude (p = 0.00005), the time to peak contraction (p chewing position became closer to centric occlusion (p chewing velocities increased (p chewing pattern (ACP) shape, speed, consistency, muscular coordination, and vertical opening improvements can be significantly improved in muscularly dysfunctional TMD patients within one week's time of undergoing the ICAGD enameloplasty. Computer-measured and guided occlusal adjustments quickly and physiologically improved chewing, without requiring the patients to wear pre- or post-treatment appliances.

  16. Ensemble bayesian model averaging using markov chain Monte Carlo sampling

    Energy Technology Data Exchange (ETDEWEB)

    Vrugt, Jasper A [Los Alamos National Laboratory; Diks, Cees G H [NON LANL; Clark, Martyn P [NON LANL

    2008-01-01

    Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA, however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In their seminal paper (Raftery et al., Mon Weather Rev 133:1155-1174, 2005), the authors recommended the Expectation-Maximization (EM) algorithm for BMA model training, even though global convergence of this algorithm cannot be guaranteed. In this paper, we compare the performance of the EM algorithm and the recently developed Differential Evolution Adaptive Metropolis (DREAM) Markov Chain Monte Carlo (MCMC) algorithm for estimating the BMA weights and variances. Simulation experiments using 48-hour ensemble data of surface temperature and multi-model stream-flow forecasts show that both methods produce similar results, and that their performance is unaffected by the length of the training data set. However, MCMC simulation with DREAM is capable of efficiently handling a wide variety of BMA predictive distributions, and provides useful information about the uncertainty associated with the estimated BMA weights and variances.
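
    A compact sketch of the EM iteration for BMA weights and a common Gaussian spread in the Raftery et al. setup (member-specific bias correction omitted); the DREAM MCMC alternative studied in the paper is not shown.

    ```python
    import numpy as np
    from scipy.stats import norm

    def bma_em(forecasts, obs, n_iter=500, tol=1e-9):
        """EM for BMA weights w_k and a common Gaussian spread sigma:
        p(y) = sum_k w_k N(y; f_k, sigma^2)."""
        T, K = forecasts.shape
        w = np.full(K, 1.0 / K)
        sigma = np.std(obs[:, None] - forecasts)
        ll_old = -np.inf
        for _ in range(n_iter):
            dens = w * norm.pdf(obs[:, None], loc=forecasts, scale=sigma)  # (T, K)
            ll = np.log(dens.sum(axis=1)).sum()
            z = dens / dens.sum(axis=1, keepdims=True)   # E-step: responsibilities
            w = z.mean(axis=0)                           # M-step: weights
            sigma = np.sqrt((z * (obs[:, None] - forecasts) ** 2).sum() / T)
            if ll - ll_old < tol:
                break
            ll_old = ll
        return w, sigma

    # synthetic check: member 0 is the sharpest and should get the largest weight
    rng = np.random.default_rng(0)
    truth = rng.normal(0.0, 1.0, 500)
    members = truth[:, None] + rng.normal(0.0, [0.5, 1.0, 2.0], (500, 3))
    print(bma_em(members, truth))
    ```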

  17. Beamforming using subspace estimation from a diagonally averaged sample covariance.

    Science.gov (United States)

    Quijano, Jorge E; Zurk, Lisa M

    2017-08-01

    The potential benefit of a large-aperture sonar array for high resolution target localization is often challenged by the lack of sufficient data required for adaptive beamforming. This paper introduces a Toeplitz-constrained estimator of the clairvoyant signal covariance matrix corresponding to multiple far-field targets embedded in background isotropic noise. The estimator is obtained by averaging along subdiagonals of the sample covariance matrix, followed by covariance extrapolation using the method of maximum entropy. The sample covariance is computed from limited data snapshots, a situation commonly encountered with large-aperture arrays in environments characterized by short periods of local stationarity. Eigenvectors computed from the Toeplitz-constrained covariance are used to construct signal-subspace projector matrices, which are shown to reduce background noise and improve detection of closely spaced targets when applied to subspace beamforming. Monte Carlo simulations corresponding to increasing array aperture suggest convergence of the proposed projector to the clairvoyant signal projector, thereby outperforming the classic projector obtained from the sample eigenvectors. Beamforming performance of the proposed method is analyzed using simulated data, as well as experimental data from the Shallow Water Array Performance experiment.
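
    The diagonal-averaging step and the resulting signal-subspace projector can be sketched in a few lines on noise-only synthetic snapshots; the maximum-entropy covariance extrapolation used in the paper is omitted here.

    ```python
    import numpy as np
    from scipy.linalg import toeplitz

    def toeplitz_average(R):
        """Impose Toeplitz structure by averaging the sample covariance along
        each subdiagonal (spatial stationarity on a uniform line array)."""
        n = R.shape[0]
        col = np.array([np.diagonal(R, -k).mean() for k in range(n)])
        return toeplitz(col, col.conj())

    def signal_projector(R, n_src):
        """Projector onto the span of the n_src dominant eigenvectors."""
        _, V = np.linalg.eigh(R)            # eigenvalues in ascending order
        Es = V[:, -n_src:]
        return Es @ Es.conj().T

    # few-snapshot regime: many sensors, few snapshots, rank-deficient covariance
    rng = np.random.default_rng(3)
    n, snaps = 64, 16
    X = (rng.normal(size=(n, snaps)) + 1j * rng.normal(size=(n, snaps))) / np.sqrt(2)
    R_hat = X @ X.conj().T / snaps
    P = signal_projector(toeplitz_average(R_hat), n_src=2)
    ```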

  18. Assessment of the requirements for DOE's annual report to congress on low-level radioactive waste

    International Nuclear Information System (INIS)

    1987-10-01

    The Low-level Radioactive Waste Policy Amendments Act of 1985 (PL99-240; LLRWPAA) requires the Department of Energy (DOE) to ''submit to Congress on an annual basis a report which: (1) summarizes the progress of low-level waste disposal siting and licensing activities within each compact region, (2) reviews the available volume reduction technologies, their applications, effectiveness, and costs on a per unit volume basis, (3) reviews interim storage facility requirements, costs, and usage, (4) summarizes transportation requirements for such wastes on an inter- and intra-regional basis, (5) summarizes the data on the total amount of low-level waste shipped for disposal on a yearly basis, the proportion of such wastes subjected to volume reduction, the average volume reduction attained, and the proportion of wastes stored on an interim basis, and (6) projects the interim storage and final disposal volume requirements anticipated for the following year, on a regional basis (Sec. 7(b)).'' This report reviews and assesses what is required for development of the annual report specified in the LLRWPAA. This report addresses each of the subject areas set out in the LLRWPAA

  19. Averaging and sampling for magnetic-observatory hourly data

    Directory of Open Access Journals (Sweden)

    J. J. Love

    2010-11-01

    A time and frequency-domain analysis is made of the effects of averaging and sampling methods used for constructing magnetic-observatory hourly data values. Using 1-min data as a proxy for continuous geomagnetic variation, we construct synthetic hourly values of two standard types: instantaneous "spot" measurements and simple 1-h "boxcar" averages. We compare these average-sample types with others: 2-h average, Gaussian, and "brick-wall" low-frequency-pass. Hourly spot measurements provide a statistically unbiased representation of the amplitude range of geomagnetic-field variation, but as a representation of continuous field variation over time, they are significantly affected by aliasing, especially at high latitudes. The 1-h, 2-h, and Gaussian average-samples are affected by a combination of amplitude distortion and aliasing. Brick-wall values are not affected by either amplitude distortion or aliasing, but constructing them is, in an operational setting, relatively more difficult than it is for other average-sample types. It is noteworthy that 1-h average-samples, the present standard for observatory hourly data, have properties similar to Gaussian average-samples that have been optimized for a minimum residual sum of amplitude distortion and aliasing. For 1-h average-samples from medium and low-latitude observatories, the average of the combination of amplitude distortion and aliasing is less than the 5.0 nT accuracy standard established by Intermagnet for modern 1-min data. For medium and low-latitude observatories, average differences between monthly means constructed from 1-min data and monthly means constructed from any of the hourly average-sample types considered here are less than the 1.0 nT resolution of standard databases. We recommend that observatories and World Data Centers continue the standard practice of reporting simple 1-h-average hourly values.

  20. Sparse PDF Volumes for Consistent Multi-Resolution Volume Rendering

    KAUST Repository

    Sicat, Ronell Barrera

    2014-12-31

    This paper presents a new multi-resolution volume representation called sparse pdf volumes, which enables consistent multi-resolution volume rendering based on probability density functions (pdfs) of voxel neighborhoods. These pdfs are defined in the 4D domain jointly comprising the 3D volume and its 1D intensity range. Crucially, the computation of sparse pdf volumes exploits data coherence in 4D, resulting in a sparse representation with surprisingly low storage requirements. At run time, we dynamically apply transfer functions to the pdfs using simple and fast convolutions. Whereas standard low-pass filtering and down-sampling incur visible differences between resolution levels, the use of pdfs facilitates consistent results independent of the resolution level used. We describe the efficient out-of-core computation of large-scale sparse pdf volumes, using a novel iterative simplification procedure of a mixture of 4D Gaussians. Finally, our data structure is optimized to facilitate interactive multi-resolution volume rendering on GPUs.
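
    A dense toy analogue of the idea, assuming per-block histograms in place of the paper's sparse 4D Gaussian mixtures: each coarse voxel stores a pdf of its neighborhood, and the transfer function is averaged over that pdf instead of being applied to an averaged intensity.

    ```python
    import numpy as np

    def block_pdfs(volume, block=4, n_bins=64):
        """Histogram (pdf) of the intensities in each block**3 cell."""
        nz, ny, nx = (s // block for s in volume.shape)
        v = volume[:nz*block, :ny*block, :nx*block]
        cells = (v.reshape(nz, block, ny, block, nx, block)
                  .transpose(0, 2, 4, 1, 3, 5)
                  .reshape(-1, block**3))
        edges = np.linspace(v.min(), v.max(), n_bins + 1)
        pdfs = np.stack([np.histogram(c, bins=edges)[0] for c in cells])
        return pdfs.reshape(nz, ny, nx, n_bins) / block**3, 0.5 * (edges[:-1] + edges[1:])

    def apply_transfer(pdfs, centers, tf):
        """Per-cell expected response E[tf(I)] = sum_i p_i * tf(i); applying the
        transfer function to the pdf keeps coarse levels consistent with fine ones."""
        return pdfs @ tf(centers)

    vol = np.random.default_rng(4).normal(size=(32, 32, 32))
    pdfs, centers = block_pdfs(vol)
    opacity = apply_transfer(pdfs, centers, lambda x: (x > 1.0).astype(float))
    ```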

  1. Dictionary Based Segmentation in Volumes

    DEFF Research Database (Denmark)

    Emerson, Monica Jane; Jespersen, Kristine Munk; Jørgensen, Peter Stanley

    2015-01-01

    We present a method for supervised volumetric segmentation based on a dictionary of small cubes composed of pairs of intensity and label cubes. Intensity cubes are small image volumes where each voxel contains an image intensity. Label cubes are volumes with voxel-wise probabilities for a given label. The segmentation process is done by matching a cube from the volume, of the same size as the dictionary intensity cubes, to the most similar intensity dictionary cube, and from the associated label cube we get voxel-wise label probabilities. Probabilities from overlapping cubes are averaged, and hereby we obtain a robust label probability encoding. The dictionary is computed from labeled volumetric image data based on weighted clustering. We experimentally demonstrate our method using two data sets from material science: a phantom data set of a solid oxide fuel cell simulation for detecting...
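
    A nearest-neighbour sketch of the matching-and-averaging step, with the dictionary taken as a given pair of flattened intensity and label cubes; the paper builds the dictionary by weighted clustering, which is not reproduced here.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def segment_with_dictionary(volume, dict_intensity, dict_label, cube=5):
        """Match every overlapping cube of the volume to its nearest dictionary
        intensity cube and average the associated label-probability cubes.

        dict_intensity: (M, cube**3) flattened intensity cubes
        dict_label:     (M, cube**3) voxel-wise probabilities for one label
        """
        tree = cKDTree(dict_intensity)
        prob = np.zeros(volume.shape, dtype=float)
        hits = np.zeros(volume.shape, dtype=float)
        nz, ny, nx = volume.shape
        for z in range(nz - cube + 1):
            for y in range(ny - cube + 1):
                for x in range(nx - cube + 1):
                    patch = volume[z:z+cube, y:y+cube, x:x+cube].ravel()
                    _, m = tree.query(patch)
                    prob[z:z+cube, y:y+cube, x:x+cube] += dict_label[m].reshape(cube, cube, cube)
                    hits[z:z+cube, y:y+cube, x:x+cube] += 1.0
        return prob / np.maximum(hits, 1.0)   # averaged voxel-wise label probability
    ```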

  2. Waste management system requirements document

    International Nuclear Information System (INIS)

    1991-02-01

    This volume defines the top level requirements for the Mined Geologic Disposal System (MGDS). It is designed to be used in conjunction with Volume 1 of the WMSR, General System Requirements. It provides a functional description expanding the requirements allocated to the MGDS in Volume 1 and elaborates on each requirement by providing associated performance criteria as appropriate. Volumes 1 and 4 of the WMSR provide a minimum set of requirements that must be satisfied by the final MGDS design. This document sets forth specific requirements that must be fulfilled. It is not the intent or purpose of this top level document to describe how each requirement is to be satisfied in the final MGDS design. Each subsequent level of the technical document hierarchy must provide further guidance and definition as to how each of these requirements is to be implemented in the design. It is expected that each subsequent level of requirements will be significantly more detailed. Section 2 of this volume provides a functional description of the MGDS. Each function is addressed in terms of requirements and performance criteria. Section 3 provides a list of controlling documents. Each document cited in a requirement of Section 2 is included in this list and is incorporated into this document as a requirement on the final system. The WMSR addresses only federal requirements (i.e., laws, regulations and DOE orders). State and local requirements are not addressed. However, it will be specifically noted at the potentially affected WMSR requirements that additional or more stringent regulations could be imposed by state or local requirements or an administering agency beyond the cited federal requirements

  3. Development of Automatic Visceral Fat Volume Calculation Software for CT Volume Data

    Directory of Open Access Journals (Sweden)

    Mitsutaka Nemoto

    2014-01-01

    Objective. To develop automatic visceral fat volume calculation software for computed tomography (CT) volume data and to evaluate its feasibility. Methods. A total of 24 sets of whole-body CT volume data and anthropometric measurements were obtained, with three sets for each of four BMI categories (under 20, 20 to 25, 25 to 30, and over 30) in both sexes. True visceral fat volumes were defined on the basis of manual segmentation of the whole-body CT volume data by an experienced radiologist. Software to automatically calculate visceral fat volumes was developed using a region segmentation technique based on morphological analysis with a CT value threshold. Automatically calculated visceral fat volumes were evaluated in terms of the correlation coefficient with the true volumes and the error relative to the true volume. Results. Automatic visceral fat volume calculation results were obtained successfully for all 24 data sets, and the average calculation time was 252.7 seconds/case. The correlation coefficients between the true visceral fat volume and the automatically calculated visceral fat volume were over 0.999. Conclusions. The newly developed software is feasible for calculating visceral fat volumes in a reasonable time and was proved to have high accuracy.
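
    The thresholding core of such software can be sketched as follows, assuming a precomputed visceral region mask and the commonly used fat window of about -190 to -30 HU; the paper's exact thresholds and morphological steps may differ.

    ```python
    import numpy as np
    from scipy import ndimage

    def fat_volume_cm3(ct_hu, voxel_mm, region_mask, lo=-190, hi=-30):
        """Fat volume from a CT volume by HU thresholding.

        ct_hu: 3D array of Hounsfield units; voxel_mm: (dz, dy, dx) spacing in mm;
        region_mask: boolean mask of the visceral region of interest, assumed to
        come from a prior morphological segmentation as in the paper.
        """
        fat = (ct_hu >= lo) & (ct_hu <= hi) & region_mask
        fat = ndimage.binary_opening(fat)            # drop isolated noise voxels
        voxel_cm3 = np.prod(voxel_mm) / 1000.0       # mm^3 -> cm^3
        return fat.sum() * voxel_cm3
    ```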

  4. Safety Impact of Average Speed Control in the UK

    DEFF Research Database (Denmark)

    Lahrmann, Harry Spaabæk; Brassøe, Bo; Johansen, Jonas Wibert

    2016-01-01

    of automatic speed control was point-based, but in recent years a potentially more effective alternative automatic speed control method has been introduced. This method is based upon records of drivers’ average travel speed over selected sections of the road and is normally called average speed control … in the UK. The study demonstrates that the introduction of average speed control results in statistically significant and substantial reductions both in speed and in number of accidents. The evaluation indicates that average speed control has a higher safety effect than point-based automatic speed control.

  5. On the performance of Autoregressive Moving Average Polynomial

    African Journals Online (AJOL)

    Timothy Ademakinwa

    Only fragments of the abstract are indexed for this record: it concerns the Polynomial Distributed Lag (PDL) model, the Autoregressive Polynomial Distributed Lag model, and the Autoregressive Moving Average Polynomial Distributed Lag (ARMAPDL) model. Source: Global Journal of Mathematics and Statistics, Vol. 1; Business and Economic Research Center.

  6. Decision trees with minimum average depth for sorting eight elements

    KAUST Repository

    AbouEisha, Hassan M.

    2015-11-19

    We prove that the minimum average depth of a decision tree for sorting 8 pairwise different elements is equal to 620160/8!. We show also that each decision tree for sorting 8 elements, which has minimum average depth (the number of such trees is approximately equal to 8.548×10^326365), has also minimum depth. Both problems were considered by Knuth (1998). To obtain these results, we use tools based on extensions of dynamic programming which allow us to make sequential optimization of decision trees relative to depth and average depth, and to count the number of decision trees with minimum average depth.

  7. Comparison of Interpolation Methods as Applied to Time Synchronous Averaging

    National Research Council Canada - National Science Library

    Decker, Harry

    1999-01-01

    Several interpolation techniques were investigated to determine their effect on time synchronous averaging of gear vibration signals and also the effects on standard health monitoring diagnostic parameters...

  8. Light-cone averaging in cosmology: formalism and applications

    International Nuclear Information System (INIS)

    Gasperini, M.; Marozzi, G.; Veneziano, G.; Nugier, F.

    2011-01-01

    We present a general gauge invariant formalism for defining cosmological averages that are relevant for observations based on light-like signals. Such averages involve either null hypersurfaces corresponding to a family of past light-cones or compact surfaces given by their intersection with timelike hypersurfaces. Generalized Buchert-Ehlers commutation rules for derivatives of these light-cone averages are given. After introducing some adapted ''geodesic light-cone'' coordinates, we give explicit expressions for averaging the redshift to luminosity-distance relation and the so-called ''redshift drift'' in a generic inhomogeneous Universe

  9. The consequences of time averaging for measuring temporal species turnover in the fossil record

    Science.gov (United States)

    Tomašových, Adam; Kidwell, Susan

    2010-05-01

    Modeling time-averaging effects with simple simulations allows us to evaluate the magnitude of change in temporal species turnover that is expected to occur in long (paleoecological) time series with fossil assemblages. Distinguishing different modes of metacommunity dynamics (such as neutral, density-dependent, or trade-off dynamics) with time-averaged fossil assemblages requires scaling up time-averaging effects, because the decrease in temporal resolution and the decrease in temporal inter-sample separation (i.e., the two main effects of time averaging) substantially increase community stability relative to assemblages without or with weak time averaging. Large changes in temporal scale that cover centuries to millennia can lead to unprecedented effects on the temporal rate of change in species composition. Temporal variation in species composition monotonically decreases with increasing duration of time averaging in simulated fossil assemblages. Time averaging is also associated with a reduction in species dominance owing to temporal switching in the identity of dominant species. High degrees of time averaging can cause community parameters of local fossil assemblages to converge to parameters of the metacommunity rather than to parameters of individual local non-averaged communities. We find that the low variation in species composition observed among mollusk and ostracod subfossil assemblages can be explained by time averaging alone; low temporal resolution and reduced temporal separation among assemblages in time series can thus explain a substantial part of the reduced variation in species composition relative to unscaled predictions of the neutral model (i.e., that species do not differ in birth, death, and immigration rates on a per capita basis). The structure of time-averaged assemblages can thus provide important insights into processes that act over larger temporal scales, such as evolution of niches and dispersal, range-limit dynamics, taxon cycles, and

  10. Human gallbladder pressure and volume

    DEFF Research Database (Denmark)

    Borly, L; Højgaard, L; Grønvall, S

    1996-01-01

    volume with only slight changes in intraluminal pressure (n = 4). Except for the zero drift, this piece of equipment seemed to fulfil the requirements of being able to measure pressure in the GB. In vivo measurements showed a good clinical reproducibility of the method, and also that respiration … influenced by respiration (n = 8) and the pressure seems to be higher in the sitting position than in the supine position (n = 5). Cystic duct opening pressure was 10.4, 11.2 and 16.8 mmHg (n = 3). Pressure-volume responses showed that the GB up to a certain volume could accommodate increases in intraluminal … and patient posture influenced the pressure measurements. Further, a GB pressure-volume relationship was demonstrated, and the possibility of a cystic duct opening pressure was described…

  11. Large volume syringe pump extruder for desktop 3D printers

    Directory of Open Access Journals (Sweden)

    Kira Pusch

    2018-04-01

    Syringe pump extruders are required for a wide range of 3D printing applications, including bioprinting, embedded printing, and food printing. However, the mass of the syringe becomes a major challenge for most printing platforms, requiring compromises in speed, resolution and/or volume. To address these issues, we have designed a syringe pump large volume extruder (LVE) that is compatible with low-cost, open source 3D printers, and herein demonstrate its performance on a PrintrBot Simple Metal. Key aspects of the LVE include: (1) it is open source and compatible with open source hardware and software, making it inexpensive and widely accessible to the 3D printing community; (2) it utilizes a standard 60 mL syringe as its ink reservoir, effectively increasing the print volume of the average bioprinter; (3) it is capable of retraction and high speed movements; and (4) it can print fluids using nozzle diameters as small as 100 μm, enabling the printing of complex shapes/objects when used in conjunction with the freeform reversible embedding of suspended hydrogels (FRESH) 3D printing method. Printing performance of the LVE is demonstrated by utilizing alginate as a model biomaterial ink to fabricate parametric CAD models and standard calibration objects. Keywords: Additive manufacturing, 3D bioprinting, Embedded printing, FRESH, Soft materials extrusion

  12. Delineation of facial archetypes by 3D averaging.

    Science.gov (United States)

    Shaweesh, Ashraf I; Thomas, C David L; Bankier, Agnes; Clement, John G

    2004-10-01

    The objective of this study was to investigate the feasibility of creating archetypal 3D faces through computerized 3D facial averaging. A Fiore 3D surface scanner and its software were used to acquire the 3D scans of the faces, while 3D Rugle3 and locally developed software generated the holistic facial averages. 3D facial averages were created from two ethnic groups, European and Japanese, and from children with three genetic disorders (Williams syndrome, achondroplasia and Sotos syndrome) as well as a normal control group. The method included averaging the corresponding depth (z) coordinates of the 3D facial scans. Compared with other face-averaging techniques, there was no warping or filling-in of spaces by interpolation; however, this facial average lacked colour information. The results showed that as few as 14 faces were sufficient to create an archetypal facial average. In turn this would make it practical to use face averaging as an identification tool in cases where it would be difficult to recruit a larger number of participants. In generating the average, correcting for size differences among faces was shown to adjust the average outlines of the facial features. It is assumed that 3D facial averaging would help in the identification of the ethnic status of persons whose identity may not be known with certainty. In clinical medicine, it would have great potential for the diagnosis of syndromes with distinctive facial features. The system would also assist in the education of clinicians in the recognition and identification of such syndromes.

  13. SPATIAL DISTRIBUTION OF THE AVERAGE RUNOFF IN THE IZA AND VIȘEU WATERSHEDS

    Directory of Open Access Journals (Sweden)

    HORVÁTH CS.

    2015-03-01

    The average runoff represents the main parameter with which one can best evaluate an area’s water resources, and it is also an important characteristic in all river runoff research. In this paper we chose a GIS methodology for assessing the spatial evolution of the average runoff; using validity curves we identified three validity areas in which the runoff changes differently with altitude. The three curves were charted using the average runoff values of 16 hydrometric stations from the area, eight in the Vișeu and eight in the Iza river catchment. Identifying the appropriate areas of the obtained correlation curves (between specific average runoff and catchment mean altitude) allowed the assessment of potential runoff at catchment level and on altitudinal intervals. By integrating the curves’ functions into GIS we created an average runoff map for the area, from which one can easily extract runoff data using GIS spatial analyst functions. The study shows that of the three areas the highest runoff corresponds to the third zone, but because of its small area the water volume is also minor. It is also shown that with the created runoff map we can quickly compute correct runoff values for areas without hydrologic control.

  14. Interpreting Bivariate Regression Coefficients: Going beyond the Average

    Science.gov (United States)

    Halcoussis, Dennis; Phillips, G. Michael

    2010-01-01

    Statistics, econometrics, investment analysis, and data analysis classes often review the calculation of several types of averages, including the arithmetic mean, geometric mean, harmonic mean, and various weighted averages. This note shows how each of these can be computed using a basic regression framework. By recognizing when a regression model…

  15. Average stress in a Stokes suspension of disks

    NARCIS (Netherlands)

    Prosperetti, Andrea

    2004-01-01

    The ensemble-average velocity and pressure in an unbounded quasi-random suspension of disks (or aligned cylinders) are calculated in terms of average multipoles allowing for the possibility of spatial nonuniformities in the system. An expression for the stress due to the suspended particles is

  16. 47 CFR 80.759 - Average terrain elevation.

    Science.gov (United States)

    2010-10-01

    47 CFR 80.759 (Telecommunication, 2010-10-01 ed.), Federal Communications Commission, Safety and Special Radio Services, Stations in the Maritime Services, Standards for Computing Public Coast Station VHF Coverage: § 80.759 Average terrain elevation. (a)(1) Draw radials...

  17. The average covering tree value for directed graph games

    NARCIS (Netherlands)

    Khmelnitskaya, Anna Borisovna; Selcuk, Özer; Talman, Dolf

    We introduce a single-valued solution concept, the so-called average covering tree value, for the class of transferable utility games with limited communication structure represented by a directed graph. The solution is the average of the marginal contribution vectors corresponding to all covering

  19. 18 CFR 301.7 - Average System Cost methodology functionalization.

    Science.gov (United States)

    2010-04-01

    18 CFR 301.7 (Conservation of Power and Water Resources, 2010-04-01 ed.), Federal Energy Regulatory Commission, Department of Energy, Regulations for Federal Power Marketing Administrations: Average System Cost Methodology for Sales from Utilities to Bonneville Power Administration under Northwest Power...

  20. Analytic computation of average energy of neutrons inducing fission

    International Nuclear Information System (INIS)

    Clark, Alexander Rich

    2016-01-01

    The objective of this report is to describe how I analytically computed the average energy of neutrons that induce fission in the bare BeRP ball. The motivation of this report is to resolve a discrepancy between the average energy computed via the FMULT and F4/FM cards in MCNP6 by comparison to the analytic results.

  1. An alternative scheme of the Bogolyubov's average method

    International Nuclear Information System (INIS)

    Ortiz Peralta, T.; Ondarza R, R.; Camps C, E.

    1990-01-01

    In this paper the average energy and the magnetic moment conservation laws in the Drift Theory of charged particle motion are obtained in a simple way. The approach starts from the energy and magnetic moment conservation laws and afterwards the average is performed. This scheme is more economic from the standpoint of time and algebraic calculations than the usual procedure of Bogolyubov's method. (Author)

  4. A Statistical Mechanics Approach to Approximate Analytical Bootstrap Averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, Manfred

    2003-01-01

    We apply the replica method of Statistical Physics combined with a variational method to the approximate analytical computation of bootstrap averages for estimating the generalization error. We demonstrate our approach on regression with Gaussian processes and compare our results with averages...

  5. Anomalous behavior of q-averages in nonextensive statistical mechanics

    International Nuclear Information System (INIS)

    Abe, Sumiyoshi

    2009-01-01

    A generalized definition of average, termed the q-average, is widely employed in the field of nonextensive statistical mechanics. Recently, it has however been pointed out that such an average value may behave unphysically under specific deformations of probability distributions. Here, the following three issues are discussed and clarified. Firstly, the deformations considered are physical and may be realized experimentally. Secondly, in view of the thermostatistics, the q-average is unstable in both finite and infinite discrete systems. Thirdly, a naive generalization of the discussion to continuous systems misses a point, and a norm better than the L¹-norm should be employed for measuring the distance between two probability distributions. Consequently, stability of the q-average is shown not to be established in all of the cases.

  6. Bootstrapping pre-averaged realized volatility under market microstructure noise

    DEFF Research Database (Denmark)

    Hounyo, Ulrich; Goncalves, Sílvia; Meddahi, Nour

    The main contribution of this paper is to propose a bootstrap method for inference on integrated volatility based on the pre-averaging approach of Jacod et al. (2009), where the pre-averaging is done over all possible overlapping blocks of consecutive observations. The overlapping nature of the pre-averaged returns implies that these are kₙ-dependent with kₙ growing slowly with the sample size n. This motivates the application of a blockwise bootstrap method. We show that the "blocks of blocks" bootstrap method suggested by Politis and Romano (1992) (and further studied by Bühlmann and Künsch (1995)) is valid only when volatility is constant. The failure of the blocks of blocks bootstrap is due to the heterogeneity of the squared pre-averaged returns when volatility is stochastic. To preserve both the dependence and the heterogeneity of squared pre-averaged returns, we propose a novel procedure...

  7. Changes in the air cell volume of artificially incubated ostrich eggs ...

    African Journals Online (AJOL)

    A total of 2160 images of candled, incubated ostrich eggs were digitized to determine the percentage of egg volume occupied by the air cell at different stages of incubation. The air cell on average occupied 2.5% of the volume of fresh eggs. For eggs that hatched successfully, this volume increased to an average of 24.4% ...

  8. METHODOLOGY OF THE DRUGS MARKET VOLUME MODELING ON THE EXAMPLE OF HEMOPHILIA A

    Directory of Open Access Journals (Sweden)

    N. B. Molchanova

    2015-01-01

    Hemophilia A is a serious genetic disease which, without the required therapy, may lead to disability even at an early age. The only therapeutic approach is replacement therapy with drugs based on blood coagulation factor VIII (FVIII). Modeling the market volume of coagulation-factor drugs allows evaluation of the level of patients’ provision with the necessary therapy. The purpose of the study was to model a “perfect” drug market and compare it with the real one. In modeling the market volume we used data on the number of hemophilia A patients from the federal registry, Russian and international morbidity indices, real-practice data on the average consumption of blood coagulation factor drugs, and data on drug prescription according to the standards and protocols of care. According to the standards of care delivery, the average annual volume of FVIII drug consumption amounted to 406 325 244 IU for children and 964 578 678 IU for adults, i.e. the average volume of a “perfect” market is 1 370 903 922 IU for all patients. This market volume is 1.8 times the real volume of FVIII drugs, which, according to the IMS marketing agency, amounted to 765 000 000 IU in 2013. The modeling conducted has shown that despite relatively high patient coverage there is potential for almost twofold market growth.
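
    The arithmetic of the “perfect” market estimate can be reproduced directly from the figures above:

    ```python
    # Reproducing the market-volume arithmetic in the abstract: the "perfect"
    # market is the sum of the per-group consumption implied by the care
    # standards, and its ratio to the observed 2013 market gives the headroom.
    children_iu = 406_325_244
    adults_iu = 964_578_678
    perfect_market_iu = children_iu + adults_iu     # 1_370_903_922 IU
    real_market_iu = 765_000_000                    # IMS estimate, 2013
    print(perfect_market_iu, round(perfect_market_iu / real_market_iu, 2))  # -> 1.79, i.e. about 1.8
    ```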

  9. Status and Habitat Requirements of the White Sturgeon Populations in the Columbia River Downstream from McNary Dam Volume II; Supplemental Papers and Data Documentation, 1986-1992 Final Report.

    Energy Technology Data Exchange (ETDEWEB)

    Beamesderfer, Raymond C.; Nigro, Anthony A. [Oregon Dept. of Fish and Wildlife, Clackamas, OR (US)]

    1995-01-01

    This is the final report for research on white sturgeon Acipenser transmontanus from 1986-92 and conducted by the National Marine Fisheries Service (NMFS), Oregon Department of Fish and Wildlife (ODFW), US Fish and Wildlife Service (USFWS), and Washington Department of Fisheries (WDF). Findings are presented as a series of papers, each detailing objectives, methods, results, and conclusions for a portion of this research. This volume includes supplemental papers which provide background information needed to support results of the primary investigations addressed in Volume 1. This study addresses measure 903(e)(1) of the Northwest Power Planning Council's 1987 Fish and Wildlife Program that calls for ''research to determine the impact of development and operation of the hydropower system on sturgeon in the Columbia River Basin.'' Study objectives correspond to those of the ''White Sturgeon Research Program Implementation Plan'' developed by BPA and approved by the Northwest Power Planning Council in 1985. Work was conducted on the Columbia River from McNary Dam to the estuary.

  10. The growth of the mean average crossing number of equilateral polygons in confinement

    International Nuclear Information System (INIS)

    Arsuaga, J; Borgo, B; Scharein, R; Diao, Y

    2009-01-01

    The physical and biological properties of collapsed long polymer chains as well as of highly condensed biopolymers (such as DNA in all organisms) are known to be determined, at least in part, by their topological and geometrical properties. For the purpose of characterizing the topological properties of such condensed systems, equilateral random polygons restricted to confined volumes are often used. However, very few analytical results are known. In this paper, we investigate the effect of volume confinement on the mean average crossing number (ACN) of equilateral random polygons. The mean ACN of knots and links under confinement provides a simple alternative measurement for the topological complexity of knots and links in the statistical sense. For an equilateral random polygon of n segments without any volume confinement constraint, it is known that its mean ACN is of the order (3/16) n ln n + O(n). Here we model the confining volume as a simple sphere of radius R. We provide an analytical argument which shows that the mean ACN of an equilateral random polygon of n segments under extreme confinement grows as O(n^2). We propose to model the growth of the mean ACN as a(R) n^2 + b(R) n ln(n) under a less-extreme confinement condition, where a(R) and b(R) are functions of R, with R being the radius of the confining sphere. Computer simulations performed show a fairly good fit using this model.

  11. Bounds on Average Time Complexity of Decision Trees

    KAUST Repository

    Chikalov, Igor

    2011-01-01

    In this chapter, bounds on the average depth and the average weighted depth of decision trees are considered. Similar problems are studied in search theory [1], coding theory [77], and design and analysis of algorithms (e.g., sorting) [38]. For any diagnostic problem, the minimum average depth of a decision tree is bounded from below by the entropy of the probability distribution (with a multiplier 1/log2 k for a problem over a k-valued information system). Among diagnostic problems, the problems with a complete set of attributes have the lowest minimum average depth of decision trees (e.g., the problem of building an optimal prefix code [1] and a blood test study under the assumption that exactly one patient is ill [23]). For such problems, the minimum average depth of a decision tree exceeds the lower bound by at most one. The minimum average depth reaches its maximum on the problems in which each attribute is "indispensable" [44] (e.g., a diagnostic problem with n attributes and k^n pairwise different rows in the decision table, and the problem of implementing the modulo 2 summation function). These problems have a minimum average depth of decision tree equal to the number of attributes in the problem description. © Springer-Verlag Berlin Heidelberg 2011.
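
    A minimal sketch of the entropy lower bound mentioned above, assuming the standard Shannon entropy with the 1/log2 k multiplier for a k-valued information system:

```python
import math

def entropy_lower_bound(probs, k=2):
    """Entropy-based lower bound on the minimum average depth of a
    decision tree over a k-valued information system.
    For k = 2 this is the Shannon entropy in bits."""
    h = -sum(p * math.log2(p) for p in probs if p > 0)
    return h / math.log2(k)

# Example: uniform distribution over 8 outcomes
print(entropy_lower_bound([1/8] * 8))        # 3.0 with binary attributes
print(entropy_lower_bound([1/8] * 8, k=4))   # 1.5 with 4-valued attributes
```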

  12. Lateral dispersion coefficients as functions of averaging time

    International Nuclear Information System (INIS)

    Sheih, C.M.

    1980-01-01

    Plume dispersion coefficients are discussed in terms of single-particle and relative diffusion, and are investigated as functions of averaging time. To demonstrate the effects of averaging time on the relative importance of various dispersion processes, an observed lateral wind velocity spectrum is used to compute the lateral dispersion coefficients of total, single-particle and relative diffusion for various averaging times and plume travel times. The results indicate that for a 1 h averaging time the dispersion coefficient of a plume can be approximated by single-particle diffusion alone for travel times <250 s and by relative diffusion for longer travel times. Furthermore, it is shown that the power-law formula suggested by Turner for relating pollutant concentrations for other averaging times to the corresponding 15 min average is applicable to the present example only when the averaging time is less than 200 s and the travel time smaller than about 300 s. Since the turbulence spectrum used in the analysis is an observed one, it is hoped that the results could represent many conditions encountered in the atmosphere. However, as the results depend on the form of the turbulence spectrum, the calculations are not for deriving a set of specific criteria but for demonstrating the need to discriminate various processes in studies of plume dispersion.

  13. Application of Depth-Averaged Velocity Profile for Estimation of Longitudinal Dispersion in Rivers

    Directory of Open Access Journals (Sweden)

    Mohammad Givehchi

    2010-01-01

    Full Text Available River bed profiles and depth-averaged velocities are used as basic data in empirical and analytical equations for estimating the longitudinal dispersion coefficient, which has always been a topic of great interest for researchers. The simple model proposed by Maghrebi is capable of predicting the normalized isovel contours in the cross section of rivers and channels as well as the depth-averaged velocity profiles. The required data in Maghrebi's model are bed profile, shear stress and roughness distributions. Comparison of depth-averaged velocities and longitudinal dispersion coefficients observed in field data with those predicted by Maghrebi's model revealed that the model has acceptable accuracy in predicting depth-averaged velocity.

  14. 40 CFR 600.510-12 - Calculation of average fuel economy and average carbon-related exhaust emissions.

    Science.gov (United States)

    2010-07-01

    ... and average carbon-related exhaust emissions. 600.510-12 Section 600.510-12 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF... Transportation. (iv) [Reserved] (2) Average carbon-related exhaust emissions will be calculated to the nearest...

  15. Average inactivity time model, associated orderings and reliability properties

    Science.gov (United States)

    Kayid, M.; Izadkhah, S.; Abouammoh, A. M.

    2018-02-01

    In this paper, we introduce and study a new model called the 'average inactivity time model'. This new model is specifically applicable to handling the heterogeneity of failure times in systems containing some inactive items. We provide some bounds for the mean average inactivity time of a lifespan unit. In addition, we discuss some dependence structures between the average variable and the mixing variable in the model when the original random variable possesses some aging behaviors. Based on the conception of the new model, we introduce and study a new stochastic order. Finally, to illustrate the concept of the model, some interesting reliability problems are presented.

  16. Average L-shell fluorescence, Auger, and electron yields

    International Nuclear Information System (INIS)

    Krause, M.O.

    1980-01-01

    The dependence of the average L-shell fluorescence and Auger yields on the initial vacancy distribution is shown to be small. By contrast, the average electron yield pertaining to both Auger and Coster-Kronig transitions is shown to display a strong dependence. Numerical examples are given on the basis of Krause's evaluation of subshell radiative and radiationless yields. Average yields are calculated for widely differing vacancy distributions and are intercompared graphically; in most cases of inner-shell ionization the average yields can be used in place of the individual subshell yields.

  17. Simultaneous inference for model averaging of derived parameters

    DEFF Research Database (Denmark)

    Jensen, Signe Marie; Ritz, Christian

    2015-01-01

    Model averaging is a useful approach for capturing uncertainty due to model selection. Currently, this uncertainty is often quantified by means of approximations that do not easily extend to simultaneous inference. Moreover, in practice there is a need for both model averaging and simultaneous inference for derived parameters calculated in an after-fitting step. We propose a method for obtaining asymptotically correct standard errors for one or several model-averaged estimates of derived parameters and for obtaining simultaneous confidence intervals that asymptotically control the family...

  18. Salecker-Wigner-Peres clock and average tunneling times

    International Nuclear Information System (INIS)

    Lunardi, Jose T.; Manzoni, Luiz A.; Nystrom, Andrew T.

    2011-01-01

    The quantum clock of Salecker-Wigner-Peres is used, by performing a post-selection of the final state, to obtain average transmission and reflection times associated with the scattering of localized wave packets by static potentials in one dimension. The behavior of these average times is studied for a Gaussian wave packet, centered around a tunneling wave number, incident on a rectangular barrier and, in particular, on a double delta barrier potential. The regime of opaque barriers is investigated and the results show that the average transmission time does not saturate, showing no evidence of the Hartman effect (or its generalized version).

  19. Average multiplications in deep inelastic processes and their interpretation

    International Nuclear Information System (INIS)

    Kiselev, A.V.; Petrov, V.A.

    1983-01-01

    Inclusive production of hadrons in deep inelastic processes is considered. It is shown that at high energies the jet evolution in deep inelastic processes is mainly of nonperturbative character. As the energy of the final hadron state increases, the leading contribution to the average multiplicity comes from the parton subprocess, due to production of massive quark and gluon jets and their further fragmentation, while the diquark contribution becomes less and less essential. The ratio of the total average multiplicity in deep inelastic processes to the average multiplicity in e+e− annihilation tends to unity at high energies.

  20. Fitting a function to time-dependent ensemble averaged data

    DEFF Research Database (Denmark)

    Fogelmark, Karl; Lomholt, Michael A.; Irbäck, Anders

    2018-01-01

    Time-dependent ensemble averages, i.e., trajectory-based averages of some observable, are of importance in many fields of science. A crucial objective when interpreting such data is to fit these averages (for instance, squared displacements) with a function and extract parameters (such as diffusion constants). We apply our new method, weighted least squares including correlation in error estimation (WLS-ICE), to particle tracking data. The WLS-ICE method is applicable to arbitrary fit functions, and we provide publicly available WLS-ICE software.
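
    The published WLS-ICE software should be consulted for the actual method; as a rough illustration of least-squares fitting with a known error covariance, here is a generalized least-squares sketch in Python (the toy data and all names are illustrative):

```python
import numpy as np

def gls_fit(X, y, C):
    """Generalized least squares: beta = (X^T C^-1 X)^-1 X^T C^-1 y,
    where C is the covariance matrix of the (possibly correlated) errors.
    A sketch of the idea behind fitting with correlation in error
    estimation; the published WLS-ICE method differs in detail."""
    Ci = np.linalg.inv(C)
    beta = np.linalg.solve(X.T @ Ci @ X, X.T @ Ci @ y)
    cov_beta = np.linalg.inv(X.T @ Ci @ X)  # parameter covariance
    return beta, cov_beta

# Toy example: fit <x^2(t)> = 2*D*t to a noisy averaged MSD (true D = 0.5)
t = np.arange(1, 11, dtype=float)
rng = np.random.default_rng(0)
msd = 2 * 0.5 * t + rng.normal(0, 0.05, t.size)
X = t[:, None]                         # single-column design matrix
C = np.diag(np.full(t.size, 0.05**2))  # uncorrelated here, for brevity
beta, cov = gls_fit(X, msd, C)
print(f"2D estimate: {beta[0]:.3f} +/- {np.sqrt(cov[0, 0]):.3f}")
```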

  1. Average wind statistics for SRP area meteorological towers

    International Nuclear Information System (INIS)

    Laurinat, J.E.

    1987-01-01

    A quality assured set of average wind statistics for the seven SRP area meteorological towers has been calculated for the five-year period 1982-1986 at the request of DOE/SR. A similar set of statistics was previously compiled for the years 1975-1979. The updated wind statistics will replace the old statistics as the meteorological input for calculating atmospheric radionuclide doses from stack releases, and will be used in the annual environmental report. This report details the methods used to average the wind statistics and to screen out bad measurements, and presents wind roses generated by the averaged statistics.

  2. Calm water resistance prediction of a bulk carrier using Reynolds averaged Navier-Stokes based solver

    Science.gov (United States)

    Rahaman, Md. Mashiur; Islam, Hafizul; Islam, Md. Tariqul; Khondoker, Md. Reaz Hasan

    2017-12-01

    Maneuverability and resistance prediction with suitable accuracy is essential for optimum ship design and propulsion power prediction. This paper aims at providing some of the maneuverability characteristics of a Japanese bulk carrier model, JBC, in calm water using computational fluid dynamics solvers named SHIP Motion and OpenFOAM. The solvers are based on the Reynolds-averaged Navier-Stokes (RANS) method and solve structured grids using the finite volume method (FVM). This paper compares the numerical results of the calm water test for the JBC model with available experimental results. The calm water test results include the total drag coefficient, average sinkage, and trim data. Visualization data for pressure distribution on the hull surface and free water surface have also been included. The paper concludes that the presented solvers predict the resistance and maneuverability characteristics of the bulk carrier with reasonable accuracy utilizing minimum computational resources.

  3. Average monthly and annual climate maps for Bolivia

    KAUST Repository

    Vicente-Serrano, Sergio M.

    2015-02-24

    This study presents monthly and annual climate maps for relevant hydroclimatic variables in Bolivia. We used the most complete network of precipitation and temperature stations available in Bolivia, which passed a careful quality control and temporal homogenization procedure. Monthly average maps at the spatial resolution of 1 km were modeled by means of a regression-based approach using topographic and geographic variables as predictors. The monthly average maximum and minimum temperatures, precipitation and potential exoatmospheric solar radiation under clear sky conditions are used to estimate the monthly average atmospheric evaporative demand by means of the Hargreaves model. Finally, the average water balance is estimated on a monthly and annual scale for each 1 km cell by means of the difference between precipitation and atmospheric evaporative demand. The digital layers used to create the maps are available in the digital repository of the Spanish National Research Council.
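
    A minimal sketch of the evaporative-demand step described above, assuming the common form of the Hargreaves model (the study's exact calibration may differ); the example cell values are hypothetical:

```python
import math

def hargreaves_et0(tmax, tmin, ra):
    """Average daily reference evapotranspiration (mm/day) from the
    Hargreaves model: ET0 = 0.0023 * Ra * (Tmean + 17.8) * sqrt(Tmax - Tmin),
    with Ra the exoatmospheric radiation in mm/day of evaporation
    equivalent. Coefficients follow the common formulation."""
    tmean = (tmax + tmin) / 2.0
    return 0.0023 * ra * (tmean + 17.8) * math.sqrt(max(tmax - tmin, 0.0))

# Hypothetical 1-km cell: water balance = precipitation - evaporative demand
tmax, tmin, ra, precip = 24.0, 10.0, 12.0, 3.1   # degC, degC, mm/day, mm/day
et0 = hargreaves_et0(tmax, tmin, ra)
print(f"ET0 = {et0:.2f} mm/day, water balance = {precip - et0:+.2f} mm/day")
```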

  4. Medicare Part B Drug Average Sales Pricing Files

    Data.gov (United States)

    U.S. Department of Health & Human Services — Manufacturer reporting of Average Sales Price (ASP) data - A manufacturer's ASP must be calculated by the manufacturer every calendar quarter and submitted to CMS...

  5. High Average Power Fiber Laser for Satellite Communications, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Very high average power lasers with high electrical-to-optical (E-O) efficiency, which also support pulse position modulation (PPM) formats in the MHz-data rate...

  6. A time averaged background compensator for Geiger-Mueller counters

    International Nuclear Information System (INIS)

    Bhattacharya, R.C.; Ghosh, P.K.

    1983-01-01

    The GM tube compensator described stores background counts to cancel an equal number of pulses from the measuring channel, providing time-averaged compensation. The method suits portable instruments. (orig.)

  7. Time averaging, ageing and delay analysis of financial time series

    Science.gov (United States)

    Cherstvy, Andrey G.; Vinod, Deepak; Aghion, Erez; Chechkin, Aleksei V.; Metzler, Ralf

    2017-06-01

    We introduce three strategies for the analysis of financial time series based on time averaged observables. These comprise the time averaged mean squared displacement (MSD) as well as the ageing and delay time methods for varying fractions of the financial time series. We explore these concepts via statistical analysis of historic time series for several Dow Jones Industrial indices for the period from the 1960s to 2015. Remarkably, we discover a simple universal law for the delay time averaged MSD. The observed features of the financial time series dynamics agree well with our analytical results for the time averaged measurables for geometric Brownian motion, underlying the famed Black-Scholes-Merton model. The concepts we promote here are shown to be useful for financial data analysis and enable one to unveil new universal features of stock market dynamics.
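
    A minimal sketch of the central observable, the time averaged MSD of a single trajectory; applying it to a log-price series is an illustrative assumption, not necessarily the paper's exact pipeline:

```python
import numpy as np

def time_averaged_msd(x, lags):
    """Time-averaged mean squared displacement of a single trajectory x(t):
    TAMSD(L) = mean over t of (x[t+L] - x[t])^2."""
    x = np.asarray(x, dtype=float)
    T = x.size
    return np.array([np.mean((x[lag:] - x[:T - lag]) ** 2) for lag in lags])

# Toy example: geometric Brownian motion analysed via its log-price
rng = np.random.default_rng(1)
log_price = np.cumsum(rng.normal(0.0, 0.01, 5000))
print(time_averaged_msd(log_price, [1, 10, 100]))  # grows roughly linearly in the lag
```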

  8. Historical Data for Average Processing Time Until Hearing Held

    Data.gov (United States)

    Social Security Administration — This dataset provides historical data for average wait time (in days) from the hearing request date until a hearing was held. This dataset includes data from fiscal...

  9. GIS Tools to Estimate Average Annual Daily Traffic

    Science.gov (United States)

    2012-06-01

    This project presents five tools that were created for a geographical information system to estimate Annual Average Daily Traffic using linear regression. Three of the tools can be used to prepare spatial data for linear regression. One tool can be...

  10. A high speed digital signal averager for pulsed NMR

    International Nuclear Information System (INIS)

    Srinivasan, R.; Ramakrishna, J.; Rajagopalan, S.R.

    1978-01-01

    A 256-channel digital signal averager suitable for pulsed nuclear magnetic resonance spectroscopy is described. It implements a 'stable averaging' algorithm and hence provides a calibrated display of the average signal at all times during the averaging process on a CRT. It has a maximum sampling rate of 2.5 μs and a memory capacity of 256 x 12 bit words. The number of sweeps is selectable through a front panel control in binary steps from 2^3 to 2^12. The enhanced signal can be displayed either on a CRT or by a 3.5-digit LED display. The maximum S/N improvement that can be achieved with this instrument is 36 dB. (auth.)
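
    The instrument's fixed-point hardware is not described here; the following Python sketch illustrates one common reading of a 'stable averaging' algorithm, in which the stored value is the calibrated mean after every sweep:

```python
def stable_average(sweeps):
    """Running ('stable') average: after each sweep the stored value is the
    calibrated mean of all sweeps so far, which is what allows the averaged
    signal to be displayed at any time during acquisition. A sketch of the
    idea; the instrument's fixed-point implementation will differ."""
    avg = None
    for k, sweep in enumerate(sweeps, start=1):
        if avg is None:
            avg = list(sweep)
        else:
            # per-channel update: avg += (x - avg) / k
            avg = [a + (x - a) / k for a, x in zip(avg, sweep)]
        yield avg  # calibrated average after k sweeps

# Toy example: two 4-channel sweeps
for snapshot in stable_average([[1, 2, 3, 4], [3, 2, 1, 0]]):
    print(snapshot)   # [1, 2, 3, 4] then [2.0, 2.0, 2.0, 2.0]
```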

  11. The average-shadowing property and topological ergodicity for flows

    International Nuclear Information System (INIS)

    Gu Rongbao; Guo Wenjing

    2005-01-01

    In this paper, the transitive property for a flow without sensitive dependence on initial conditions is studied and it is shown that a Lyapunov stable flow with the average-shadowing property on a compact metric space is topologically ergodic

  12. Fluctuations of trading volume in a stock market

    Science.gov (United States)

    Hong, Byoung Hee; Lee, Kyoung Eun; Hwang, Jun Kyung; Lee, Jae Woo

    2009-03-01

    We consider the probability distribution function of the trading volume and the volume changes in the Korean stock market. The probability distribution function of the trading volume shows double peaks and follows a power law, P(V/⟨V⟩) ~ (V/⟨V⟩)^(-α), at the tail part of the distribution, with α = 4.15(4) for the KOSPI (Korea composite Stock Price Index) and α = 4.22(2) for the KOSDAQ (Korea Securities Dealers Automated Quotations), where V is the trading volume and ⟨V⟩ is the monthly average value of the trading volume. The second peaks originate from the increasing trend of the average volume. The probability distribution function of the volume changes also follows a power law, P(V_r) ~ V_r^(-β), where V_r = V(t) - V(t-T) and T is a time lag. The exponents β depend on the time lag T. We observe that the exponents β for the KOSDAQ are larger than those for the KOSPI.
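
    As an illustration of estimating such a tail exponent, here is a standard Hill (maximum-likelihood) estimator in Python; this is a generic estimator, not necessarily the fitting procedure used in the paper:

```python
import numpy as np

def tail_exponent(samples, xmin):
    """Hill estimate of alpha in P(x) ~ x^(-alpha) for the tail x >= xmin."""
    tail = np.asarray([s for s in samples if s >= xmin], dtype=float)
    return 1.0 + tail.size / np.sum(np.log(tail / xmin))

# Toy example: Pareto-distributed "volumes" with pdf exponent alpha = 4
rng = np.random.default_rng(2)
v = (1.0 - rng.random(100_000)) ** (-1.0 / 3.0)   # CDF inversion
print(f"alpha estimate: {tail_exponent(v, xmin=2.0):.2f}")   # close to 4
```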

  13. Application of Bayesian approach to estimate average level spacing

    International Nuclear Information System (INIS)

    Huang Zhongfu; Zhao Zhixiang

    1991-01-01

    A method to estimate the average level spacing from a set of resolved resonance parameters using a Bayesian approach is given. Using the information contained in the distributions of both level spacings and neutron widths, levels missing from the measured sample can be corrected for more precisely, so that a better estimate of the average level spacing can be obtained by this method. The calculation has been carried out for s-wave resonances and compared with other work.

  14. Annual average equivalent dose of workers form health area

    International Nuclear Information System (INIS)

    Daltro, T.F.L.; Campos, L.L.

    1992-01-01

    Personnel monitoring data from 1985 through 1991 for personnel working in the health area were studied, giving a general overview of changes in the annual average equivalent dose. Two different aspects are presented: the analysis of the annual average equivalent dose in the different sectors of a hospital, and the comparison of these doses for the same sectors in different hospitals. (C.G.C.)

  15. A precise measurement of the average b hadron lifetime

    CERN Document Server

    Buskulic, Damir; De Bonis, I; Décamp, D; Ghez, P; Goy, C; Lees, J P; Lucotte, A; Minard, M N; Odier, P; Pietrzyk, B; Ariztizabal, F; Chmeissani, M; Crespo, J M; Efthymiopoulos, I; Fernández, E; Fernández-Bosman, M; Gaitan, V; Garrido, L; Martínez, M; Orteu, S; Pacheco, A; Padilla, C; Palla, Fabrizio; Pascual, A; Perlas, J A; Sánchez, F; Teubert, F; Colaleo, A; Creanza, D; De Palma, M; Farilla, A; Gelao, G; Girone, M; Iaselli, Giuseppe; Maggi, G; Maggi, M; Marinelli, N; Natali, S; Nuzzo, S; Ranieri, A; Raso, G; Romano, F; Ruggieri, F; Selvaggi, G; Silvestris, L; Tempesta, P; Zito, G; Huang, X; Lin, J; Ouyang, Q; Wang, T; Xie, Y; Xu, R; Xue, S; Zhang, J; Zhang, L; Zhao, W; Bonvicini, G; Cattaneo, M; Comas, P; Coyle, P; Drevermann, H; Engelhardt, A; Forty, Roger W; Frank, M; Hagelberg, R; Harvey, J; Jacobsen, R; Janot, P; Jost, B; Knobloch, J; Lehraus, Ivan; Markou, C; Martin, E B; Mato, P; Meinhard, H; Minten, Adolf G; Miquel, R; Oest, T; Palazzi, P; Pater, J R; Pusztaszeri, J F; Ranjard, F; Rensing, P E; Rolandi, Luigi; Schlatter, W D; Schmelling, M; Schneider, O; Tejessy, W; Tomalin, I R; Venturi, A; Wachsmuth, H W; Wiedenmann, W; Wildish, T; Witzeling, W; Wotschack, J; Ajaltouni, Ziad J; Bardadin-Otwinowska, Maria; Barrès, A; Boyer, C; Falvard, A; Gay, P; Guicheney, C; Henrard, P; Jousset, J; Michel, B; Monteil, S; Montret, J C; Pallin, D; Perret, P; Podlyski, F; Proriol, J; Rossignol, J M; Saadi, F; Fearnley, Tom; Hansen, J B; Hansen, J D; Hansen, J R; Hansen, P H; Nilsson, B S; Kyriakis, A; Simopoulou, Errietta; Siotis, I; Vayaki, Anna; Zachariadou, K; Blondel, A; Bonneaud, G R; Brient, J C; Bourdon, P; Passalacqua, L; Rougé, A; Rumpf, M; Tanaka, R; Valassi, Andrea; Verderi, M; Videau, H L; Candlin, D J; Parsons, M I; Focardi, E; Parrini, G; Corden, M; Delfino, M C; Georgiopoulos, C H; Jaffe, D E; Antonelli, A; Bencivenni, G; Bologna, G; Bossi, F; Campana, P; Capon, G; Chiarella, V; Felici, G; Laurelli, P; Mannocchi, G; Murtas, F; Murtas, G P; Pepé-Altarelli, M; Dorris, S J; Halley, A W; ten Have, I; Knowles, I G; Lynch, J G; Morton, W T; O'Shea, V; Raine, C; Reeves, P; Scarr, J M; Smith, K; Smith, M G; Thompson, A S; Thomson, F; Thorn, S; Turnbull, R M; Becker, U; Braun, O; Geweniger, C; Graefe, G; Hanke, P; Hepp, V; Kluge, E E; Putzer, A; Rensch, B; Schmidt, M; Sommer, J; Stenzel, H; Tittel, K; Werner, S; Wunsch, M; Beuselinck, R; Binnie, David M; Cameron, W; Colling, D J; Dornan, Peter J; Konstantinidis, N P; Moneta, L; Moutoussi, A; Nash, J; San Martin, G; Sedgbeer, J K; Stacey, A M; Dissertori, G; Girtler, P; Kneringer, E; Kuhn, D; Rudolph, G; Bowdery, C K; Brodbeck, T J; Colrain, P; Crawford, G; Finch, A J; Foster, F; Hughes, G; Sloan, Terence; Whelan, E P; Williams, M I; Galla, A; Greene, A M; Kleinknecht, K; Quast, G; Raab, J; Renk, B; Sander, H G; Wanke, R; Van Gemmeren, P; Zeitnitz, C; Aubert, Jean-Jacques; Bencheikh, A M; Benchouk, C; Bonissent, A; Bujosa, G; Calvet, D; Carr, J; Diaconu, C A; Etienne, F; Thulasidas, M; Nicod, D; Payre, P; Rousseau, D; Talby, M; Abt, I; Assmann, R W; Bauer, C; Blum, Walter; Brown, D; Dietl, H; Dydak, Friedrich; Ganis, G; Gotzhein, C; Jakobs, K; Kroha, H; Lütjens, G; Lutz, Gerhard; Männer, W; Moser, H G; Richter, R H; Rosado-Schlosser, A; Schael, S; Settles, Ronald; Seywerd, H C J; Stierlin, U; Saint-Denis, R; Wolf, G; Alemany, R; Boucrot, J; Callot, O; Cordier, A; Courault, F; Davier, M; Duflot, L; Grivaz, J F; Heusse, P; Jacquet, M; Kim, D W; Le Diberder, F R; Lefrançois, J; Lutz, A M; Musolino, G; Nikolic, I A; Park, H J; Park, I C; 
Schune, M H; Simion, S; Veillet, J J; Videau, I; Abbaneo, D; Azzurri, P; Bagliesi, G; Batignani, G; Bettarini, S; Bozzi, C; Calderini, G; Carpinelli, M; Ciocci, M A; Ciulli, V; Dell'Orso, R; Fantechi, R; Ferrante, I; Foà, L; Forti, F; Giassi, A; Giorgi, M A; Gregorio, A; Ligabue, F; Lusiani, A; Marrocchesi, P S; Messineo, A; Rizzo, G; Sanguinetti, G; Sciabà, A; Spagnolo, P; Steinberger, Jack; Tenchini, Roberto; Tonelli, G; Triggiani, G; Vannini, C; Verdini, P G; Walsh, J; Betteridge, A P; Blair, G A; Bryant, L M; Cerutti, F; Gao, Y; Green, M G; Johnson, D L; Medcalf, T; Mir, L M; Perrodo, P; Strong, J A; Bertin, V; Botterill, David R; Clifft, R W; Edgecock, T R; Haywood, S; Edwards, M; Maley, P; Norton, P R; Thompson, J C; Bloch-Devaux, B; Colas, P; Duarte, H; Emery, S; Kozanecki, Witold; Lançon, E; Lemaire, M C; Locci, E; Marx, B; Pérez, P; Rander, J; Renardy, J F; Rosowsky, A; Roussarie, A; Schuller, J P; Schwindling, J; Si Mohand, D; Trabelsi, A; Vallage, B; Johnson, R P; Kim, H Y; Litke, A M; McNeil, M A; Taylor, G; Beddall, A; Booth, C N; Boswell, R; Cartwright, S L; Combley, F; Dawson, I; Köksal, A; Letho, M; Newton, W M; Rankin, C; Thompson, L F; Böhrer, A; Brandt, S; Cowan, G D; Feigl, E; Grupen, Claus; Lutters, G; Minguet-Rodríguez, J A; Rivera, F; Saraiva, P; Smolik, L; Stephan, F; Apollonio, M; Bosisio, L; Della Marina, R; Giannini, G; Gobbo, B; Ragusa, F; Rothberg, J E; Wasserbaech, S R; Armstrong, S R; Bellantoni, L; Elmer, P; Feng, P; Ferguson, D P S; Gao, Y S; González, S; Grahl, J; Harton, J L; Hayes, O J; Hu, H; McNamara, P A; Nachtman, J M; Orejudos, W; Pan, Y B; Saadi, Y; Schmitt, M; Scott, I J; Sharma, V; Turk, J; Walsh, A M; Wu Sau Lan; Wu, X; Yamartino, J M; Zheng, M; Zobernig, G

    1996-01-01

    An improved measurement of the average b hadron lifetime is performed using a sample of 1.5 million hadronic Z decays, collected during the 1991-1993 runs of ALEPH, with the silicon vertex detector fully operational. This uses the three-dimensional impact parameter distribution of lepton tracks coming from semileptonic b decays and yields an average b hadron lifetime of 1.533 ± 0.013 ± 0.022 ps.

  16. Bivariate copulas on the exponentially weighted moving average control chart

    Directory of Open Access Journals (Sweden)

    Sasigarn Kuvattana

    2016-10-01

    Full Text Available This paper proposes four types of copulas on the Exponentially Weighted Moving Average (EWMA) control chart when observations are from an exponential distribution, using a Monte Carlo simulation approach. The performance of the control chart is based on the Average Run Length (ARL), which is compared for each copula. Copula functions are used to specify the dependence between random variables, measured by Kendall's tau. The results show that the Normal copula can be used for almost all shifts.
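
    A generic sketch of the EWMA statistic and its control limits in Python; the paper additionally generates dependent exponential observations through copulas, which is omitted here:

```python
import numpy as np

def ewma_chart(x, lam=0.1, target=1.0, sigma=1.0, L=3.0):
    """EWMA statistic z_t = lam*x_t + (1-lam)*z_{t-1} with the usual
    time-varying control limits around the in-control target."""
    z = np.empty(len(x))
    prev = target
    for t, xt in enumerate(x):
        prev = lam * xt + (1 - lam) * prev
        z[t] = prev
    i = np.arange(1, len(x) + 1)
    width = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * i)))
    return z, target - width, target + width

rng = np.random.default_rng(3)
x = rng.exponential(scale=1.0, size=50)   # in-control exponential data
z, lcl, ucl = ewma_chart(x)
print("first out-of-control sample:",
      next((t for t in range(50) if z[t] < lcl[t] or z[t] > ucl[t]), None))
```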

  17. The average action for scalar fields near phase transitions

    International Nuclear Information System (INIS)

    Wetterich, C.

    1991-08-01

    We compute the average action for fields in two, three and four dimensions, including the effects of wave function renormalization. A study of the one loop evolution equations for the scale dependence of the average action gives a unified picture of the qualitatively different behaviour in various dimensions for discrete as well as abelian and nonabelian continuous symmetry. The different phases and the phase transitions can be inferred from the evolution equation. (orig.)

  18. Wave function collapse implies divergence of average displacement

    OpenAIRE

    Marchewka, A.; Schuss, Z.

    2005-01-01

    We show that propagating a truncated discontinuous wave function by Schrödinger's equation, as asserted by the collapse axiom, gives rise to non-existence of the average displacement of the particle on the line. It also implies that there is no Zeno effect. On the other hand, if the truncation is done so that the reduced wave function is continuous, the average coordinate is finite and there is a Zeno effect. Therefore the collapse axiom of measurement needs to be revised.

  19. Average geodesic distance of skeleton networks of Sierpinski tetrahedron

    Science.gov (United States)

    Yang, Jinjin; Wang, Songjing; Xi, Lifeng; Ye, Yongchao

    2018-04-01

    The average distance is of interest in research on complex networks and is related to the Wiener sum, a topological invariant in chemical graph theory. In this paper, we study the skeleton networks of the Sierpinski tetrahedron, an important self-similar fractal, and obtain an asymptotic formula for their average distances. To provide the formula, we develop a technique of finite patterns for the integral of geodesic distance on self-similar measures for the Sierpinski tetrahedron.

  20. Dosimetric consequences of planning lung treatments on 4DCT average reconstruction to represent a moving tumour

    International Nuclear Information System (INIS)

    Dunn, L.F.; Taylor, M.L.; Kron, T.; Franich, R.

    2010-01-01

    Full text: Anatomic motion during a radiotherapy treatment is one of the more significant challenges in contemporary radiation therapy. For tumours of the lung, motion due to patient respiration makes both accurate planning and dose delivery difficult. One approach is to use the maximum intensity projection (MIP) obtained from a 4D computed tomography (CT) scan and then use this to determine the treatment volume. The treatment is then planned on a 4DCT average reconstruction, rather than assuming the entire ITV has a uniform tumour density. This raises the question: how well does planning on a 'blurred' distribution of density with CT values greater than lung density but less than tumour density match the true case of a tumour moving within lung tissue? The aim of this study was to answer this question, determining the dosimetric impact of using a 4DCT average reconstruction as the basis for a radiotherapy treatment plan. To achieve this, Monte Carlo simulations were undertaken using GEANT4. The geometry consisted of a tumour (diameter 30 mm) moving with a sinusoidal pattern of amplitude = 20 mm. The tumour's excursion occurs within a lung-equivalent volume beyond a chest wall interface. Motion was defined parallel to a 6 MV beam. This was then compared to a single oblate tumour of a magnitude determined by the extremes of the tumour motion. The variable density of the 4DCT average tumour is simulated by a time-weighted average, to achieve the observed density gradient. The generic moving tumour geometry is illustrated in the Figure.

  1. Theory and analysis of accuracy for the method of characteristics direction probabilities with boundary averaging

    International Nuclear Information System (INIS)

    Liu, Zhouyu; Collins, Benjamin; Kochunas, Brendan; Downar, Thomas; Xu, Yunlin; Wu, Hongchun

    2015-01-01

    Highlights: • The CDP combines the benefits of the CPM's efficiency and the MOC's flexibility. • Boundary averaging reduces the computational effort while losing only minor accuracy. • An analysis model is used to justify the choice of an optimized averaging strategy. • Numerical results show the performance and accuracy. - Abstract: The method of characteristic direction probabilities (CDP) combines the benefits of the collision probability method (CPM) and the method of characteristics (MOC) for the solution of the integral form of the Boltzmann Transport Equation. By coupling only the fine regions traversed by the characteristic rays in a particular direction, the computational effort required to calculate the probability matrices and to solve the matrix system is considerably reduced compared to the CPM. Furthermore, boundary averaging is performed to reduce the storage and computation, but the capability of dealing with complicated geometries is preserved since the same ray tracing information is used as in MOC. An analysis model for the outgoing angular flux is used to analyze a variety of outgoing angular flux averaging methods for the boundary and to justify the choice of an optimized averaging strategy. The boundary-averaged CDP method was then implemented in the Michigan PArallel Characteristic based Transport (MPACT) code to perform 2-D and 3-D transport calculations. The numerical results are given for different cases to show the effect of averaging on the outgoing angular flux, region scalar flux and the eigenvalue. Comparison of the results with the case with no averaging demonstrates that an angular-dependent averaging strategy is possible for the CDP to improve its computational performance without compromising the achievable accuracy.

  2. Oppugning the assumptions of spatial averaging of segment and joint orientations.

    Science.gov (United States)

    Pierrynowski, Michael Raymond; Ball, Kevin Arthur

    2009-02-09

    Movement scientists frequently calculate "arithmetic averages" when examining body segment or joint orientations. Such calculations appear routinely, yet are fundamentally flawed. Three-dimensional orientation data are computed as matrices, yet three-ordered Euler/Cardan/Bryant angle parameters are frequently used for interpretation. These parameters are not geometrically independent; thus, the conventional process of averaging each parameter is incorrect. The process of arithmetic averaging also assumes that the distances between data are linear (Euclidean); however, for orientation data these distances are geodesically curved (Riemannian). Therefore we question (oppugn) whether the conventional averaging approach is an appropriate statistic. Fortunately, exact methods of averaging orientation data have been developed which both circumvent the parameterization issue and explicitly acknowledge the Euclidean or Riemannian distance measures. The details of these matrix-based averaging methods are presented and their theoretical advantages discussed. The Euclidean and Riemannian approaches offer appealing advantages over the conventional technique. With respect to practical biomechanical relevancy, examinations of simulated data suggest that for sets of orientation data possessing characteristics of low dispersion, an isotropic distribution, and second and third angle parameters of less than 30 degrees, discrepancies with the conventional approach are less than 1.1 degrees. However, beyond these limits, arithmetic averaging can have substantive non-linear inaccuracies in all three parameterized angles. The biomechanics community is encouraged to recognize that limitations exist with the use of the conventional method of averaging orientations. Investigations requiring more robust spatial averaging over a broader range of orientations may benefit from the use of matrix-based Euclidean or Riemannian calculations.
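
    A minimal sketch of one matrix-based (Euclidean, or chordal) average discussed above: the arithmetic mean of rotation matrices projected back onto SO(3) via SVD; details of the paper's exact formulation may differ:

```python
import numpy as np

def euclidean_mean_rotation(rotations):
    """Chordal mean of rotation matrices: the arithmetic mean projected
    back onto SO(3) via SVD, avoiding Euler-angle averaging entirely."""
    M = np.mean(np.stack(rotations), axis=0)
    U, _, Vt = np.linalg.svd(M)
    R = U @ Vt
    if np.linalg.det(R) < 0:    # keep a proper rotation (det = +1)
        U[:, -1] *= -1
        R = U @ Vt
    return R

def rot_z(deg):
    a = np.radians(deg)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Averaging 10-degree and 30-degree rotations about z recovers ~20 degrees
R = euclidean_mean_rotation([rot_z(10), rot_z(30)])
print(np.degrees(np.arctan2(R[1, 0], R[0, 0])))   # ~20.0
```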

  3. Numerical artifacts in the Generalized Porous Medium Equation: Why harmonic averaging itself is not to blame

    Science.gov (United States)

    Maddix, Danielle C.; Sampaio, Luiz; Gerritsen, Margot

    2018-05-01

    The degenerate parabolic Generalized Porous Medium Equation (GPME) poses numerical challenges due to self-sharpening and its sharp corner solutions. For these problems, we show results for two subclasses of the GPME with differentiable k(p) with respect to p, namely the Porous Medium Equation (PME) and the superslow diffusion equation. Spurious temporal oscillations, and nonphysical locking and lagging have been reported in the literature. These issues have been attributed to harmonic averaging of the coefficient k(p) for small p, and arithmetic averaging has been suggested as an alternative. We show that harmonic averaging is not solely responsible and that an improved discretization can mitigate these issues. Here, we investigate the causes of these numerical artifacts using modified equation analysis. The modified equation framework can be used for any type of discretization. We show results for the second order finite volume method. The observed problems with harmonic averaging can be traced to two leading error terms in its modified equation. This is also illustrated numerically through a Modified Harmonic Method (MHM) that can locally modify the critical terms to remove the aforementioned numerical artifacts.
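
    For concreteness, a sketch of the two face-coefficient averaging choices in a 1-D finite-volume setting (a generic illustration, not the paper's full modified scheme):

```python
import numpy as np

def interface_coefficients(k, mode="harmonic"):
    """Face coefficients for a 1-D finite-volume discretization from
    cell-centred values k[i]: the harmonic vs arithmetic choice
    discussed above."""
    kl, kr = k[:-1], k[1:]
    if mode == "harmonic":
        with np.errstate(divide="ignore", invalid="ignore"):
            kf = np.where(kl + kr > 0, 2.0 * kl * kr / (kl + kr), 0.0)
    else:  # arithmetic
        kf = 0.5 * (kl + kr)
    return kf

k = np.array([0.0, 0.0, 1.0, 1.0])   # degenerate region k = 0 (sharp front)
print(interface_coefficients(k, "harmonic"))    # [0.  0.  1. ] face k vanishes at the front
print(interface_coefficients(k, "arithmetic"))  # [0.  0.5 1. ] front can propagate
```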

  4. Application of average adult Japanese voxel phantoms to evaluation of photon specific absorbed fractions

    International Nuclear Information System (INIS)

    Sato, Kaoru; Manabe, Kentaro; Endo, Akira

    2012-01-01

    Average adult Japanese male (JM-103) and female (JF-103) voxel (volume pixel) phantoms newly constructed at the Japan Atomic Energy Agency (JAEA) have the average characteristics of body size and organ mass of adult Japanese. In JM-103 and JF-103, several organs and tissues were newly modeled for dose assessments based on tissue weighting factors of the 2007 Recommendations of the International Commission on Radiological Protection (ICRP). In this study, SAFs for the thyroid, stomach, lungs and lymphatic nodes of the JM-103 and JF-103 phantoms were calculated and compared with those of other adult Japanese phantoms based on individual medical images. In most cases, differences in SAFs between JM-103, JF-103 and the other phantoms were about several tens of percent, mainly attributed to mass differences of organs, tissues and contents. Therefore, it was concluded that the SAFs of JM-103 and JF-103 represent those of average adult Japanese, and that the two phantoms can be applied to dose assessment for average adult Japanese on the basis of the 2007 Recommendations. (author)

  5. Review and evaluation of the Nuclear Materials Management and Safeguards System (NMMSS). Volume 1. Comparison of DOE nuclear materials information system requirements with NMMSS capabilities and recommendations for NMMSS improvements

    International Nuclear Information System (INIS)

    1984-03-01

    This report documents the result of a review of the Nuclear Materials Management and Safeguards System (NMMSS), a Department of Energy nuclear materials control and accountability data base and information processing system. This review was performed to determine what data are required from the NMMSS, how they are collected, and how they are used. Based upon the review, NMMSS deficiencies and excess capabilities were identified, and a draft set of requirements for the nuclear materials information system that the Office of Safeguards and Security (OSS) should be supporting was produced, along with recommendations for attaining that capability.

  6. Average Soil Water Retention Curves Measured by Neutron Radiography

    Energy Technology Data Exchange (ETDEWEB)

    Cheng, Chu-Lin [ORNL; Perfect, Edmund [University of Tennessee, Knoxville (UTK); Kang, Misun [ORNL; Voisin, Sophie [ORNL; Bilheux, Hassina Z [ORNL; Horita, Juske [Texas Tech University (TTU); Hussey, Dan [NIST Center for Neutron Research (NCRN), Gaithersburg, MD

    2011-01-01

    Water retention curves are essential for understanding the hydrologic behavior of partially-saturated porous media and modeling flow transport processes within the vadose zone. In this paper we report direct measurements of the main drying and wetting branches of the average water retention function obtained using 2-dimensional neutron radiography. Flint sand columns were saturated with water and then drained under quasi-equilibrium conditions using a hanging water column setup. Digital images (2048 x 2048 pixels) of the transmitted flux of neutrons were acquired at each imposed matric potential (~10-15 matric potential values per experiment) at the NCNR BT-2 neutron imaging beam line. Volumetric water contents were calculated on a pixel by pixel basis using Beer-Lambert's law after taking into account beam hardening and geometric corrections. To remove scattering effects at high water contents, the volumetric water contents were normalized (to give relative saturations) by dividing the drying and wetting sequences of images by the images obtained at saturation and satiation, respectively. The resulting pixel values were then averaged and combined with information on the imposed basal matric potentials to give average water retention curves. The average relative saturations obtained by neutron radiography showed an approximate one-to-one relationship with the average values measured volumetrically using the hanging water column setup. There were no significant differences (at p < 0.05) between the parameters of the van Genuchten equation fitted to the average neutron radiography data and those estimated from replicated hanging water column data. Our results indicate that neutron imaging is a very effective tool for quantifying the average water retention curve.

  7. Rf system modeling for the high average power FEL at CEBAF

    International Nuclear Information System (INIS)

    Merminga, L.; Fugitt, J.; Neil, G.; Simrock, S.

    1995-01-01

    High beam loading and energy recovery compounded by use of superconducting cavities, which requires tight control of microphonic noise, place stringent constraints on the linac rf system design of the proposed high average power FEL at CEBAF. Longitudinal dynamics imposes off-crest operation, which in turn implies a large tuning angle to minimize power requirements. Amplitude and phase stability requirements are consistent with demonstrated performance at CEBAF. A numerical model of the CEBAF rf control system is presented and the response of the system is examined under large parameter variations, microphonic noise, and beam current fluctuations. Studies of the transient behavior lead to a plausible startup and recovery scenario

  8. Confinement, average forces, and the Ehrenfest theorem for a one ...

    Indian Academy of Sciences (India)

    A free particle moving on the entire real line, which is then permanently confined to a line segment or 'a box' (this situation is achieved by taking the limit V0 → ∞ in a finite well potential). This case is ...

  9. Average weighted receiving time in recursive weighted Koch networks

    Indian Academy of Sciences (India)

    Nonlinear Scientific Research Center, Faculty of Science, Jiangsu University, Zhenjiang, Jiangsu, 212013, People's Republic of China; School of Computer Science and Telecommunication Engineering, Jiangsu University, Zhenjiang, 212013, People's ...

  10. Estimating average glandular dose by measuring glandular rate in mammograms

    International Nuclear Information System (INIS)

    Goto, Sachiko; Azuma, Yoshiharu; Sumimoto, Tetsuhiro; Eiho, Shigeru

    2003-01-01

    The glandular rate of the breast was objectively measured in order to calculate individual patient exposure dose (average glandular dose) in mammography. By employing image processing techniques and breast-equivalent phantoms with various glandular rate values, a conversion curve for pixel value to glandular rate can be determined by a neural network. Accordingly, the pixel values in clinical mammograms can be converted to the glandular rate value for each pixel. The individual average glandular dose can therefore be calculated using the individual glandular rates on the basis of the dosimetry method employed for quality control in mammography. In the present study, a data set of 100 craniocaudal mammograms from 50 patients was used to evaluate our method. The average glandular rate and average glandular dose of the data set were 41.2% and 1.79 mGy, respectively. The error in calculating the individual glandular rate can be estimated to be less than ±3%. When the calculation error of the glandular rate is taken into consideration, the error in the individual average glandular dose can be estimated to be 13% or less. We feel that our method for determining the glandular rate from mammograms is useful for minimizing subjectivity in the evaluation of patient breast composition. (author)

  11. Accurate phenotyping: Reconciling approaches through Bayesian model averaging.

    Directory of Open Access Journals (Sweden)

    Carla Chia-Ming Chen

    Full Text Available Genetic research into complex diseases is frequently hindered by a lack of clear biomarkers for phenotype ascertainment. Phenotypes for such diseases are often identified on the basis of clinically defined criteria; however, such criteria may not be suitable for understanding the genetic composition of the diseases. Various statistical approaches have been proposed for phenotype definition; however, our previous studies have shown that differences in phenotypes estimated using different approaches have substantial impact on subsequent analyses. Instead of obtaining results based upon a single model, we propose a new method, using Bayesian model averaging to overcome problems associated with phenotype definition. Although Bayesian model averaging has been used in other fields of research, this is the first study that uses Bayesian model averaging to reconcile phenotypes obtained using multiple models. We illustrate the new method by applying it to simulated genetic and phenotypic data for Kofendred personality disorder, an imaginary disease with several sub-types. Two separate statistical methods were used to identify clusters of individuals with distinct phenotypes: latent class analysis and grade of membership. Bayesian model averaging was then used to combine the two clusterings for the purpose of subsequent linkage analyses. We found that causative genetic loci for the disease produced higher LOD scores using model averaging than under either individual model separately. We attribute this improvement to consolidation of the cores of phenotype clusters identified using each individual method.
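
    A generic sketch of the model-averaging step, combining per-model estimates with evidence-based weights; the study averages cluster assignments rather than scalars, so this conveys only the flavor of the computation, and all numbers are hypothetical:

```python
import numpy as np

def model_average(estimates, log_evidence):
    """Combine per-model estimates with weights proportional to each
    model's evidence (a generic Bayesian-model-averaging step)."""
    w = np.exp(log_evidence - np.max(log_evidence))   # numerically stable weights
    w /= w.sum()
    return float(np.dot(w, estimates)), w

# Toy example: two phenotype models giving different effect estimates
est = np.array([0.8, 1.4])            # e.g. latent class vs grade of membership
logev = np.array([-102.3, -101.1])    # hypothetical log marginal likelihoods
avg, w = model_average(est, logev)
print(f"weights = {np.round(w, 3)}, model-averaged estimate = {avg:.3f}")
```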

  12. Yearly, seasonal and monthly daily average diffuse sky radiation models

    International Nuclear Information System (INIS)

    Kassem, A.S.; Mujahid, A.M.; Turner, D.W.

    1993-01-01

    A daily average diffuse sky radiation regression model based on daily global radiation was developed utilizing two years of data taken near Blytheville, Arkansas (Lat. = 35.9°N, Long. = 89.9°W), U.S.A. The model has a determination coefficient of 0.91 and 0.092 standard error of estimate. The data were also analyzed for a seasonal dependence and four seasonal average daily models were developed for the spring, summer, fall and winter seasons. The coefficient of determination is 0.93, 0.81, 0.94 and 0.93, whereas the standard error of estimate is 0.08, 0.102, 0.042 and 0.075 for spring, summer, fall and winter, respectively. A monthly average daily diffuse sky radiation model was also developed. The coefficient of determination is 0.92 and the standard error of estimate is 0.083. A seasonal monthly average model was also developed, which has 0.91 coefficient of determination and 0.085 standard error of estimate. The developed monthly daily average and daily models compare well with a selected number of previously developed models. (author). 11 ref., figs., tabs

  13. The average number of partons per clan in rapidity intervals in parton showers

    Energy Technology Data Exchange (ETDEWEB)

    Giovannini, A. [Turin Univ. (Italy). Ist. di Fisica Teorica; Lupia, S. [Max-Planck-Institut fuer Physik, Muenchen (Germany). Werner-Heisenberg-Institut; Ugoccioni, R. [Lund Univ. (Sweden). Dept. of Theoretical Physics

    1996-04-01

    The dependence of the average number of partons per clan on virtuality and rapidity variables is analytically predicted in the framework of the Generalized Simplified Parton Shower model, based on the idea that clans are genuine elementary subprocesses. The obtained results are found to be qualitatively consistent with experimental trends. This study extends previous results on the behavior of the average number of clans in virtuality and rapidity and shows how important physical quantities can be calculated analytically in a model based on essentials of QCD allowing local violations of the energy-momentum conservation law, still requiring its global validity. (orig.)

  15. Optimal and fast rotational alignment of volumes with missing data in Fourier space.

    Science.gov (United States)

    Shatsky, Maxim; Arbelaez, Pablo; Glaeser, Robert M; Brenner, Steven E

    2013-11-01

    Electron tomography of intact cells has the potential to reveal the entire cellular content at a resolution corresponding to individual macromolecular complexes. Characterization of macromolecular complexes in tomograms is nevertheless an extremely challenging task due to the high level of noise, and due to the limited tilt angle that results in missing data in Fourier space. By identifying particles of the same type and averaging their 3D volumes, it is possible to obtain a structure at a more useful resolution for biological interpretation. Currently, classification and averaging of sub-tomograms is limited by the speed of computational methods that optimize alignment between two sub-tomographic volumes. The alignment optimization is hampered by the fact that the missing data in Fourier space has to be taken into account during the rotational search. A similar problem appears in single particle electron microscopy where the random conical tilt procedure may require averaging of volumes with a missing cone in Fourier space. We present a fast implementation of a method guaranteed to find an optimal rotational alignment that maximizes the constrained cross-correlation function (cCCF) computed over the actual overlap of data in Fourier space. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.
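
    A rough sketch of a cross-correlation constrained to the overlap of two Fourier-space sampling masks, evaluated for one fixed relative orientation; the published method maximizes this over a search of rotations, and the mask construction here is a crude stand-in for a missing wedge or cone:

```python
import numpy as np

def constrained_cc(v1, v2, m1, m2):
    """Normalized cross-correlation of two volumes computed only over the
    overlap of their Fourier-space sampling masks (boolean arrays marking
    measured coefficients)."""
    F1, F2 = np.fft.fftn(v1), np.fft.fftn(v2)
    overlap = m1 & m2
    a = F1[overlap] - F1[overlap].mean()
    b = F2[overlap] - F2[overlap].mean()
    num = np.sum(a * np.conj(b)).real
    den = np.sqrt(np.sum(np.abs(a) ** 2) * np.sum(np.abs(b) ** 2))
    return num / den

rng = np.random.default_rng(4)
vol = rng.normal(size=(16, 16, 16))
mask = np.ones((16, 16, 16), dtype=bool)
mask[:, :, 12:] = False               # crude stand-in for missing data
print(f"self-correlation: {constrained_cc(vol, vol, mask, mask):.3f}")  # 1.000
```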

  16. Percutaneous Ethanol Sclerotherapy of Symptomatic Nodules Is Effective and Safe in Pregnant Women: A Study of 13 Patients with an Average Follow-Up of 6.8 Years

    Directory of Open Access Journals (Sweden)

    Tamas Solymosi

    2015-01-01

    Full Text Available Background. Because of the increased risk of surgery, thyroid nodules causing compression signs and/or hyperthyroidism are concerning during pregnancy. Patients and Methods. Six patients with nontoxic cystic, four with nontoxic solid, and three with overt hyperthyroidism caused by toxic nodules were treated with percutaneous ethanol injection therapy (PEI). An average of 0.68 mL ethanol per 1 mL nodule volume was administered. The mean number of PEI treatments per patient was 2.9. Success was defined as the shrinkage of the nodule by more than 50% of the pretreatment volume (V0) and the normalization of TSH and FT4 levels. The average V0 was 15.3 mL. Short-term success was measured prior to labor, whereas long-term success was determined during the final follow-up (an average of 6.8 years). Results. The pressure symptoms decreased in all but one patient after PEI and did not worsen until delivery. The PEI was successful in 11 (85%) and 7 (54%) patients at short-term and long-term follow-up, respectively. Three patients underwent repeat PEI, which was successful in 2 patients. Conclusions. PEI is a safe tool and seems to have good short-term results in treating selected symptomatic pregnant patients. Long-term success may require repeat PEI.

  17. Equivalent uniform dose concept evaluated by theoretical dose volume histograms for thoracic irradiation.

    Science.gov (United States)

    Dumas, J L; Lorchel, F; Perrot, Y; Aletti, P; Noel, A; Wolf, D; Courvoisier, P; Bosset, J F

    2007-03-01

    The goal of our study was to quantify the limits of the EUD models for use in score functions in inverse planning software, and for clinical application. We focused on oesophagus cancer irradiation. Our evaluation was based on theoretical dose volume histograms (DVHs), which we analyzed using volumetric and linear quadratic EUD models, average and maximum dose concepts, the linear quadratic model and the differential area between each DVH. We evaluated our models using theoretical and more complex DVHs for the regions of interest. We studied three types of DVH for the target volume: the first followed the ICRU dose homogeneity recommendations; the second was built from the first's requirements with the same average dose built in for all cases; the third was truncated by a small dose hole. We also built theoretical DVHs for the organs at risk, in order to evaluate the limits of, and the ways to use, both the EUD(1) and EUD/LQ models, comparing them to the traditional ways of scoring a treatment plan. For each volume of interest we built theoretical treatment plans with differences in the fractionation. We concluded that both volumetric and linear quadratic EUDs should be used. Volumetric EUD(1) takes into account neither hot-cold spot compensation nor the differences in fractionation, but it is more sensitive to an increase of the irradiated volume. With linear quadratic EUD/LQ, a volumetric analysis of the effect of fractionation variation can be performed.
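
    A minimal sketch of the volumetric (generalized) EUD computed from a differential DVH; the EUD/LQ variant with linear-quadratic fractionation corrections is not reproduced here:

```python
import numpy as np

def gEUD(doses, volumes, a):
    """Generalized equivalent uniform dose from a differential DVH:
    EUD = (sum_i v_i * D_i^a)^(1/a), with v_i the fractional volume
    receiving dose D_i. a = 1 reduces to the mean dose; large |a|
    emphasizes extremes (negative a penalizes cold spots in targets)."""
    v = np.asarray(volumes, dtype=float)
    v = v / v.sum()
    d = np.asarray(doses, dtype=float)
    return float(np.power(np.sum(v * np.power(d, a)), 1.0 / a))

# Toy DVH: 90% of the volume at 60 Gy, a 10% cold spot at 50 Gy
doses, volumes = [60.0, 50.0], [0.9, 0.1]
print(gEUD(doses, volumes, a=1))     # 59.0 (mean dose)
print(gEUD(doses, volumes, a=-10))   # ~57.6, pulled toward the cold spot
```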

  18. Average cross sections for the 252Cf neutron spectrum

    International Nuclear Information System (INIS)

    Dezso, Z.; Csikai, J.

    1977-01-01

    A number of average cross sections have been measured for 252Cf neutrons in (n,γ), (n,p), (n,2n) and (n,α) reactions by the activation method, and for fission by fission chamber. Cross sections have been determined for 19 elements and 45 reactions. The (n,γ) cross section values lie in the interval from 0.3 to 200 mb. The data as a function of target neutron number increase up to about N=60, with minima near closed shells. The values lie between 0.3 mb and 113 mb. These cross sections decrease significantly with increasing threshold energy. The values are below 20 mb. The data do not exceed 10 mb. Average (n,p) cross sections as a function of the threshold energy and average fission cross sections as a function of Z^(4/3)/A are shown. The results obtained are summarized in tables.

  19. Testing averaged cosmology with type Ia supernovae and BAO data

    Energy Technology Data Exchange (ETDEWEB)

    Santos, B.; Alcaniz, J.S. [Departamento de Astronomia, Observatório Nacional, 20921-400, Rio de Janeiro – RJ (Brazil); Coley, A.A. [Department of Mathematics and Statistics, Dalhousie University, Halifax, B3H 3J5 Canada (Canada); Devi, N. Chandrachani, E-mail: thoven@on.br, E-mail: aac@mathstat.dal.ca, E-mail: chandrachaniningombam@astro.unam.mx, E-mail: alcaniz@on.br [Instituto de Astronomía, Universidad Nacional Autónoma de México, Box 70-264, México City, México (Mexico)

    2017-02-01

    An important problem in precision cosmology is the determination of the effects of averaging and backreaction on observational predictions, particularly in view of the wealth of new observational data and improved statistical techniques. In this paper, we discuss the observational viability of a class of averaged cosmologies which consist of a simple parametrized phenomenological two-scale backreaction model with decoupled spatial curvature parameters. We perform a Bayesian model selection analysis and find that this class of averaged phenomenological cosmological models is favored with respect to the standard ΛCDM cosmological scenario when a joint analysis of current SNe Ia and BAO data is performed. In particular, the analysis provides observational evidence for non-trivial spatial curvature.

  20. Average contraction and synchronization of complex switched networks

    International Nuclear Information System (INIS)

    Wang Lei; Wang Qingguo

    2012-01-01

    This paper introduces an average contraction analysis for nonlinear switched systems and applies it to investigating the synchronization of complex networks of coupled systems with switching topology. For a general nonlinear system with a time-dependent switching law, a basic convergence result is presented according to average contraction analysis, and a special case where trajectories of a distributed switched system converge to a linear subspace is then investigated. Synchronization is viewed as the special case with all trajectories approaching the synchronization manifold, and is thus studied for complex networks of coupled oscillators with switching topology. It is shown that the synchronization of a complex switched network can be evaluated by the dynamics of an isolated node, the coupling strength and the time average of the smallest eigenvalue associated with the Laplacians of switching topology and the coupling fashion. Finally, numerical simulations illustrate the effectiveness of the proposed methods. (paper)

  1. The Health Effects of Income Inequality: Averages and Disparities.

    Science.gov (United States)

    Truesdale, Beth C; Jencks, Christopher

    2016-01-01

    Much research has investigated the association of income inequality with average life expectancy, usually finding negative correlations that are not very robust. A smaller body of work has investigated socioeconomic disparities in life expectancy, which have widened in many countries since 1980. These two lines of work should be seen as complementary because changes in average life expectancy are unlikely to affect all socioeconomic groups equally. Although most theories imply long and variable lags between changes in income inequality and changes in health, empirical evidence is confined largely to short-term effects. Rising income inequality can affect individuals in two ways. Direct effects change individuals' own income. Indirect effects change other people's income, which can then change a society's politics, customs, and ideals, altering the behavior even of those whose own income remains unchanged. Indirect effects can thus change both average health and the slope of the relationship between individual income and health.

  2. Testing averaged cosmology with type Ia supernovae and BAO data

    International Nuclear Information System (INIS)

    Santos, B.; Alcaniz, J.S.; Coley, A.A.; Devi, N. Chandrachani

    2017-01-01

    An important problem in precision cosmology is the determination of the effects of averaging and backreaction on observational predictions, particularly in view of the wealth of new observational data and improved statistical techniques. In this paper, we discuss the observational viability of a class of averaged cosmologies which consist of a simple parametrized phenomenological two-scale backreaction model with decoupled spatial curvature parameters. We perform a Bayesian model selection analysis and find that this class of averaged phenomenological cosmological models is favored with respect to the standard ΛCDM cosmological scenario when a joint analysis of current SNe Ia and BAO data is performed. In particular, the analysis provides observational evidence for non-trivial spatial curvature.

  3. Perceived Average Orientation Reflects Effective Gist of the Surface.

    Science.gov (United States)

    Cha, Oakyoon; Chong, Sang Chul

    2018-03-01

    The human ability to represent ensemble visual information, such as average orientation and size, has been suggested as the foundation of gist perception. To effectively summarize different groups of objects into the gist of a scene, observers should form ensembles separately for different groups, even when objects have similar visual features across groups. We hypothesized that the visual system utilizes perceptual groups characterized by spatial configuration and represents separate ensembles for different groups. Therefore, participants could not integrate ensembles of different perceptual groups on a task basis. We asked participants to determine the average orientation of visual elements comprising a surface with a contour situated inside. Although participants were asked to estimate the average orientation of all the elements, they ignored orientation signals embedded in the contour. This constraint may help the visual system to keep the visual features of occluding objects separate from those of the occluded objects.

  4. Object detection by correlation coefficients using azimuthally averaged reference projections.

    Science.gov (United States)

    Nicholson, William V

    2004-11-01

    A method of computing correlation coefficients for object detection that takes advantage of using azimuthally averaged reference projections is described and compared with two alternative methods: computing a cross-correlation function or a local correlation coefficient versus the azimuthally averaged reference projections. Two examples of an application from structural biology involving the detection of projection views of biological macromolecules in electron micrographs are discussed. It is found that a novel approach to computing a local correlation coefficient versus azimuthally averaged reference projections, using a rotational correlation coefficient, outperforms using a cross-correlation function and a local correlation coefficient in object detection from simulated images with a range of levels of simulated additive noise. The three approaches perform similarly in detecting macromolecular views in electron microscope images of a globular macromolecular complex (the ribosome). The rotational correlation coefficient outperforms the other methods in detection of keyhole limpet hemocyanin macromolecular views in electron micrographs.
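
    A short sketch of the azimuthal-averaging step, assuming a toy Gaussian blob in place of a real reference projection; the function names are hypothetical, and the final step is only a simple global correlation coefficient, not the paper's full local/rotational variants.

```python
import numpy as np

def azimuthal_average(image):
    """Average a square image over azimuth, returning a 1-D radial profile."""
    n = image.shape[0]
    y, x = np.indices(image.shape)
    r = np.hypot(x - n // 2, y - n // 2).astype(int)
    sums = np.bincount(r.ravel(), weights=image.ravel())
    counts = np.bincount(r.ravel())
    return sums / np.maximum(counts, 1)

def radial_profile_image(profile, shape):
    """Re-expand a radial profile into a rotationally symmetric 2-D reference."""
    n = shape[0]
    y, x = np.indices(shape)
    r = np.hypot(x - n // 2, y - n // 2).astype(int)
    return profile[np.clip(r, 0, len(profile) - 1)]

# Toy reference projection: a Gaussian blob standing in for a macromolecule view.
n = 64
y, x = np.indices((n, n))
reference = np.exp(-((x - n // 2) ** 2 + (y - n // 2) ** 2) / 50.0)

profile = azimuthal_average(reference)
rotavg_reference = radial_profile_image(profile, reference.shape)

# Correlate a noisy image against the azimuthally averaged reference.
noisy = reference + 0.3 * np.random.default_rng(1).normal(size=reference.shape)
cc = np.corrcoef(noisy.ravel(), rotavg_reference.ravel())[0, 1]
print(f"correlation with azimuthally averaged reference: {cc:.3f}")
```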

  5. Measurement of average radon gas concentration at workplaces

    International Nuclear Information System (INIS)

    Kavasi, N.; Somlai, J.; Kovacs, T.; Gorjanacz, Z.; Nemeth, Cs.; Szabo, T.; Varhegyi, A.; Hakl, J.

    2003-01-01

    In this paper, results of measurements of the average radon gas concentration at workplaces (schools, kindergartens, and ventilated workplaces) are presented. It can be stated that one-month-long measurements show very high variation (as is obvious in the cases of the hospital cave and the uranium tailings pond). Consequently, workplaces where considerable seasonal changes of the radon concentration are expected should be measured for 12 months. If that is not possible, the chosen six-month period should contain summer and winter months as well. The average radon concentration during working hours can differ considerably from the average over the whole time in cases of frequent opening of doors and windows or the use of artificial ventilation. (authors)

  6. A Martian PFS average spectrum: Comparison with ISO SWS

    Science.gov (United States)

    Formisano, V.; Encrenaz, T.; Fonti, S.; Giuranna, M.; Grassi, D.; Hirsh, H.; Khatuntsev, I.; Ignatiev, N.; Lellouch, E.; Maturilli, A.; Moroz, V.; Orleanski, P.; Piccioni, G.; Rataj, M.; Saggin, B.; Zasova, L.

    2005-08-01

    The evaluation of the planetary Fourier spectrometer performance at Mars is presented by comparing an average spectrum with the ISO spectrum published by Lellouch et al. [2000. Planet. Space Sci. 48, 1393.]. First, the average conditions of the Martian atmosphere are compared, then the mixing ratios of the major gases are evaluated. Major and minor bands of CO2 are compared, from the point of view of feature characteristics and band depths. The spectral resolution is also compared using several solar lines. The result indicates that PFS radiance is valid to better than 1% in the wavenumber range 1800-4200 cm-1 for the average spectrum considered (1680 measurements). The PFS monochromatic transfer function generates an overshooting on the left-hand side of strong narrow lines (solar or atmospheric). The spectral resolution of PFS is of the order of 1.3 cm-1 or better. A large number of narrow features, yet to be identified, are discovered.

  7. Size and emotion averaging: costs of dividing attention after all.

    Science.gov (United States)

    Brand, John; Oriet, Chris; Tottenham, Laurie Sykes

    2012-03-01

    Perceptual averaging is a process by which sets of similar items are represented by summary statistics such as their average size, luminance, or orientation. Researchers have argued that this process is automatic, able to be carried out without interference from concurrent processing. Here, we challenge this conclusion and demonstrate a reliable cost of computing the mean size of circles distinguished by colour (Experiments 1 and 2) and the mean emotionality of faces distinguished by sex (Experiment 3). We also test the viability of two strategies that could have allowed observers to guess the correct response without computing the average size or emotionality of both sets concurrently. We conclude that although two means can be computed concurrently, doing so incurs a cost of dividing attention.

  8. Lung volumes: measurement, clinical use, and coding.

    Science.gov (United States)

    Flesch, Judd D; Dine, C Jessica

    2012-08-01

    Measurement of lung volumes is an integral part of complete pulmonary function testing. Some lung volumes can be measured during spirometry; however, measurement of the residual volume (RV), functional residual capacity (FRC), and total lung capacity (TLC) requires special techniques. FRC is typically measured by one of three methods. Body plethysmography uses Boyle's Law to determine lung volumes, whereas inert gas dilution and nitrogen washout use dilution properties of gases. After determination of FRC, expiratory reserve volume and inspiratory vital capacity are measured, which allows the calculation of the RV and TLC. Lung volumes are commonly used for the diagnosis of restriction. In obstructive lung disease, they are used to assess for hyperinflation. Changes in lung volumes can also be seen in a number of other clinical conditions. Reimbursement for measurement of lung volumes requires knowledge of current procedural terminology (CPT) codes, relevant indications, and an appropriate level of physician supervision. Because of recent efforts to eliminate payment inefficiencies, the 10 previous CPT codes for lung volumes, airway resistance, and diffusing capacity have been bundled into four new CPT codes.
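
    The derived quantities follow from simple volume arithmetic; a minimal sketch, with illustrative (not normative) numbers.

```python
def lung_volumes(frc_l, erv_l, ivc_l):
    """Derive RV and TLC from FRC plus spirometric volumes (litres).

    RV  = FRC - ERV   (residual volume)
    TLC = RV + IVC    (total lung capacity via inspiratory vital capacity)
    """
    rv = frc_l - erv_l
    tlc = rv + ivc_l
    return rv, tlc

# Illustrative adult values (litres); the numbers are hypothetical.
rv, tlc = lung_volumes(frc_l=3.0, erv_l=1.1, ivc_l=4.6)
print(f"RV = {rv:.1f} L, TLC = {tlc:.1f} L")   # RV = 1.9 L, TLC = 6.5 L
```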

  9. A virtual pebble game to ensemble average graph rigidity.

    Science.gov (United States)

    González, Luis C; Wang, Hui; Livesay, Dennis R; Jacobs, Donald J

    2015-01-01

    The body-bar Pebble Game (PG) algorithm is commonly used to calculate network rigidity properties in proteins and polymeric materials. To account for fluctuating interactions such as hydrogen bonds, an ensemble of constraint topologies are sampled, and average network properties are obtained by averaging PG characterizations. At a simpler level of sophistication, Maxwell constraint counting (MCC) provides a rigorous lower bound for the number of internal degrees of freedom (DOF) within a body-bar network, and it is commonly employed to test if a molecular structure is globally under-constrained or over-constrained. MCC is a mean field approximation (MFA) that ignores spatial fluctuations of distance constraints by replacing the actual molecular structure by an effective medium that has distance constraints globally distributed with perfect uniform density. The Virtual Pebble Game (VPG) algorithm is a MFA that retains spatial inhomogeneity in the density of constraints on all length scales. Network fluctuations due to distance constraints that may be present or absent based on binary random dynamic variables are suppressed by replacing all possible constraint topology realizations with the probabilities that distance constraints are present. The VPG algorithm is isomorphic to the PG algorithm, where integers for counting "pebbles" placed on vertices or edges in the PG map to real numbers representing the probability to find a pebble. In the VPG, edges are assigned pebble capacities, and pebble movements become a continuous flow of probability within the network. Comparisons between the VPG and average PG results over a test set of proteins and disordered lattices demonstrate the VPG quantitatively estimates the ensemble average PG results well. The VPG performs about 20% faster than one PG, and it provides a pragmatic alternative to averaging PG rigidity characteristics over an ensemble of constraint topologies. The utility of the VPG falls in between the most
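
    For contrast with the PG/VPG machinery, Maxwell constraint counting itself is a one-line computation. A minimal sketch of the body-bar count, assuming 6 DOF per rigid body and at most one DOF removed per bar; the function name and the toy numbers are illustrative.

```python
def maxwell_count(n_bodies, n_bars):
    """Maxwell constraint counting (MCC) for a body-bar network.

    Each rigid body carries 6 degrees of freedom; each bar (distance
    constraint) removes at most one. MCC ignores where bars are placed,
    so it yields a rigorous lower bound on the internal DOF:
        F >= 6 * n_bodies - n_bars - 6
    (the trailing 6 removes the global rigid-body motions).
    """
    return 6 * n_bodies - n_bars - 6

# Toy network: 100 bodies connected by 550 bars.
dof_lower_bound = maxwell_count(100, 550)
status = ("over-constrained (Maxwell)" if dof_lower_bound < 0
          else "under-constrained or isostatic")
print("MCC lower bound on internal DOF:", dof_lower_bound, "->", status)
```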

  10. Exactly averaged equations for flow and transport in random media

    International Nuclear Information System (INIS)

    Shvidler, Mark; Karasaki, Kenzi

    2001-01-01

    It is well known that exact averaging of the equations of flow and transport in random porous media can be realized only for a small number of special, occasionally exotic, fields. On the other hand, the properties of approximate averaging methods are not yet fully understood. For example, the convergence behavior and the accuracy of truncated perturbation series. Furthermore, the calculation of the high-order perturbations is very complicated. These problems for a long time have stimulated attempts to find the answer for the question: Are there in existence some exact general and sufficiently universal forms of averaged equations? If the answer is positive, there arises the problem of the construction of these equations and analyzing them. There exist many publications related to these problems and oriented on different applications: hydrodynamics, flow and transport in porous media, theory of elasticity, acoustic and electromagnetic waves in random fields, etc. We present a method of finding the general form of exactly averaged equations for flow and transport in random fields by using (1) an assumption of the existence of Green's functions for appropriate stochastic problems, (2) some general properties of the Green's functions, and (3) the some basic information about the random fields of the conductivity, porosity and flow velocity. We present a general form of the exactly averaged non-local equations for the following cases. 1. Steady-state flow with sources in porous media with random conductivity. 2. Transient flow with sources in compressible media with random conductivity and porosity. 3. Non-reactive solute transport in random porous media. We discuss the problem of uniqueness and the properties of the non-local averaged equations, for the cases with some types of symmetry (isotropic, transversal isotropic, orthotropic) and we analyze the hypothesis of the structure non-local equations in general case of stochastically homogeneous fields. (author)

  11. Increase in average foveal thickness after internal limiting membrane peeling

    Directory of Open Access Journals (Sweden)

    Kumagai K

    2017-04-01

    Full Text Available Kazuyuki Kumagai,1 Mariko Furukawa,1 Tetsuyuki Suetsugu,1 Nobuchika Ogino2 1Department of Ophthalmology, Kami-iida Daiichi General Hospital, 2Department of Ophthalmology, Nishigaki Eye Clinic, Aichi, Japan Purpose: To report the findings in three cases in which the average foveal thickness was increased after a thin epiretinal membrane (ERM) was removed by vitrectomy with internal limiting membrane (ILM) peeling. Methods: The foveal contour was normal preoperatively in all eyes. All cases underwent successful phacovitrectomy with ILM peeling for a thin ERM. The optical coherence tomography (OCT) images were examined before and after the surgery. The changes in the average foveal (1 mm) thickness and the foveal areas within 500 µm of the foveal center were measured. The postoperative changes in the inner and outer retinal areas determined from the cross-sectional OCT images were analyzed. Results: The average foveal thickness and the inner and outer foveal areas increased significantly after the surgery in each of the three cases. The percentage increase in the average foveal thickness relative to the baseline thickness was 26% in Case 1, 29% in Case 2, and 31% in Case 3. The percentage increase in the foveal inner retinal area was 71% in Case 1, 113% in Case 2, and 110% in Case 3, and the percentage increase in the foveal outer retinal area was 8% in Case 1, 13% in Case 2, and 18% in Case 3. Conclusion: The increase in the average foveal thickness and the inner and outer foveal areas suggests that a centripetal movement of the inner and outer retinal layers toward the foveal center probably occurred due to the ILM peeling. Keywords: internal limiting membrane, optical coherence tomography, average foveal thickness, epiretinal membrane, vitrectomy

  12. Identification of Large-Scale Structure Fluctuations in IC Engines using POD-Based Conditional Averaging

    Directory of Open Access Journals (Sweden)

    Buhl Stefan

    2016-01-01

    Full Text Available Cycle-to-Cycle Variations (CCV) in IC engines are a well-known phenomenon, and their definition and quantification are well-established for global quantities such as the mean pressure. On the other hand, the definition of CCV for local quantities, e.g. the velocity or the mixture distribution, is less straightforward. This paper proposes a new method to identify and calculate cyclic variations of the flow field in IC engines, emphasizing the different contributions from large-scale energetic (coherent) structures, identified by a combination of Proper Orthogonal Decomposition (POD) and conditional averaging, and small-scale fluctuations. Suitable subsets required for the conditional averaging are derived from combinations of the POD coefficients of the second and third modes. Within each subset, the velocity is averaged and these averages are compared to the ensemble-averaged velocity field, which is based on all cycles. The resulting difference between the subset average and the global average is identified as a cyclic fluctuation of the coherent structures. Then, within each subset, the remaining fluctuations are obtained from the difference between the instantaneous fields and the corresponding subset average. The proposed methodology is tested on two data sets obtained from scale-resolving engine simulations. For the first test case, the numerical database consists of 208 independent samples of a simplified engine geometry. For the second case, 120 cycles of the well-established Transparent Combustion Chamber (TCC) benchmark engine are considered. For both applications, the suitability of the method to identify the two contributions to CCV is discussed and the results are directly linked to the observed flow field structures.
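
    A minimal sketch of POD-based conditional averaging on synthetic data: POD via SVD, subsets defined by the signs of the second- and third-mode coefficients (one simple way to realize the "combinations" mentioned above), then subset averages split into coherent and small-scale parts. All data here are random stand-ins, not engine fields.

```python
import numpy as np

rng = np.random.default_rng(2)
n_cycles, n_points = 120, 400            # synthetic "cycles" of a 1-D velocity field
snapshots = rng.normal(size=(n_cycles, n_points))

# POD via SVD of the mean-subtracted snapshot matrix.
ensemble_mean = snapshots.mean(axis=0)
U, S, Vt = np.linalg.svd(snapshots - ensemble_mean, full_matrices=False)
coeffs = U * S                           # POD coefficients per cycle

# Conditional subsets from the signs of the 2nd- and 3rd-mode coefficients.
labels = 2 * (coeffs[:, 1] > 0) + (coeffs[:, 2] > 0)

for s in range(4):
    members = snapshots[labels == s]
    if len(members) == 0:
        continue
    subset_avg = members.mean(axis=0)
    coherent = subset_avg - ensemble_mean      # large-scale (coherent) CCV
    small_scale = members - subset_avg         # remaining fluctuations
    print(f"subset {s}: {len(members):3d} cycles, "
          f"|coherent| = {np.linalg.norm(coherent):6.2f}, "
          f"rms small-scale = {small_scale.std():.3f}")
```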

  13. Positivity of the spherically averaged atomic one-electron density

    DEFF Research Database (Denmark)

    Fournais, Søren; Hoffmann-Ostenhof, Maria; Hoffmann-Ostenhof, Thomas

    2008-01-01

    We investigate the positivity of the spherically averaged atomic one-electron density ρ̃(r). For a ρ̃ which stems from a physical ground state we prove that ρ̃(r) > 0 for r ≥ 0. This article may be reproduced in its entirety for non-commercial purposes.

  14. Research & development and growth: A Bayesian model averaging analysis

    Czech Academy of Sciences Publication Activity Database

    Horváth, Roman

    2011-01-01

    Roč. 28, č. 6 (2011), s. 2669-2673 ISSN 0264-9993. [Society for Non-linear Dynamics and Econometrics Annual Conference. Washington DC, 16.03.2011-18.03.2011] R&D Projects: GA ČR GA402/09/0965 Institutional research plan: CEZ:AV0Z10750506 Keywords: Research and development * Growth * Bayesian model averaging Subject RIV: AH - Economics Impact factor: 0.701, year: 2011 http://library.utia.cas.cz/separaty/2011/E/horvath-research & development and growth a bayesian model averaging analysis.pdf

  15. MAIN STAGES SCIENTIFIC AND PRODUCTION MASTERING THE TERRITORY AVERAGE URAL

    Directory of Open Access Journals (Sweden)

    V.S. Bochko

    2006-09-01

    Full Text Available Questions of the formation of the Average Ural as an industrial territory, based on its scientific study and production development, are considered in the article. It is shown that the resources of the Urals and the particularities of the vital activity of its population were studied by Russian and foreign scientists in the XVIII-XIX centuries. It is noted that in the XX century there was a transition to a systematic organizational-economic study of the productive forces, society and nature of the Average Ural. More attention is now addressed to the new problems of the region and to the need for their scientific solution.

  16. Non-self-averaging nucleation rate due to quenched disorder

    International Nuclear Information System (INIS)

    Sear, Richard P

    2012-01-01

    We study the nucleation of a new thermodynamic phase in the presence of quenched disorder. The quenched disorder is a generic model of both impurities and disordered porous media; both are known to have large effects on nucleation. We find that the nucleation rate is non-self-averaging. This is in a simple Ising model with clusters of quenched spins. We also show that non-self-averaging behaviour is straightforward to detect in experiments, and may be rather common. (fast track communication)

  17. A note on moving average models for Gaussian random fields

    DEFF Research Database (Denmark)

    Hansen, Linda Vadgård; Thorarinsdottir, Thordis L.

    The class of moving average models offers a flexible modeling framework for Gaussian random fields with many well known models such as the Matérn covariance family and the Gaussian covariance falling under this framework. Moving average models may also be viewed as a kernel smoothing of a Lévy basis, a general modeling framework which includes several types of non-Gaussian models. We propose a new one-parameter spatial correlation model which arises from a power kernel and show that the associated Hausdorff dimension of the sample paths can take any value between 2 and 3. As a result...
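
    A minimal sketch of the moving-average construction of a Gaussian random field, assuming a Gaussian kernel and periodic boundaries (FFT-based circular convolution); this illustrates the generic kernel-smoothing idea, not the paper's power-kernel model, and all names are illustrative.

```python
import numpy as np

def moving_average_field(n, kernel_width, seed=0):
    """Gaussian random field on an n x n grid as a kernel smoothing of noise.

    Convolving white Gaussian noise with a kernel k yields a stationary
    Gaussian field with covariance (k * k)(h); a Gaussian kernel reproduces
    the Gaussian covariance model mentioned in the abstract.
    """
    rng = np.random.default_rng(seed)
    noise = rng.normal(size=(n, n))
    ax = np.fft.fftfreq(n) * n                    # lag coordinates 0,1,...,-1
    kx, ky = np.meshgrid(ax, ax)
    kernel = np.exp(-(kx**2 + ky**2) / (2 * kernel_width**2))
    kernel /= np.sqrt((kernel**2).sum())          # unit-variance output
    # Circular convolution via the FFT convolution theorem.
    return np.real(np.fft.ifft2(np.fft.fft2(noise) * np.fft.fft2(kernel)))

field = moving_average_field(128, kernel_width=6.0)
print("field std ~", round(field.std(), 3))       # close to 1 by construction
```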

  18. Disc volume reduction with percutaneous nucleoplasty in an animal model.

    Directory of Open Access Journals (Sweden)

    Richard Kasch

    Full Text Available STUDY DESIGN: We assessed volume following nucleoplasty disc decompression in lower lumbar spines from cadaveric pigs using 7.1-Tesla magnetic resonance imaging (MRI). PURPOSE: To investigate coblation-induced volume reductions as a possible mechanism underlying nucleoplasty. METHODS: We assessed volume following nucleoplastic disc decompression in pig spines using 7.1-Tesla MRI. Volumetry was performed in lumbar discs of 21 postmortem pigs. A preoperative image data set was obtained, volume was determined, and either disc decompression or placebo therapy was performed in a randomized manner. Group 1 (nucleoplasty group) was treated according to the usual nucleoplasty protocol, with coblation current applied to 6 channels for 10 seconds each in an application field of 360°; in group 2 (placebo group) the same procedure was performed but without coblation current. After the procedure, a second data set was generated and volumes were calculated and matched with the preoperative measurements in a blinded manner. To analyze the effectiveness of nucleoplasty, volumes between the treatment and placebo groups were compared. RESULTS: The average preoperative nucleus volume was 0.994 ml (SD: 0.298 ml). In the nucleoplasty group (n = 21) volume was reduced by an average of 0.087 ml (SD: 0.110 ml) or 7.14%. In the placebo group (n = 21) volume increased by an average of 0.075 ml (SD: 0.075 ml) or 8.94%. The average nucleoplasty-induced volume reduction was 0.162 ml (SD: 0.124 ml) or 16.08%. Volume reduction in lumbar discs was significant in favor of the nucleoplasty group (p < 0.0001). CONCLUSIONS: Our study demonstrates that nucleoplasty has a volume-reducing effect on the lumbar nucleus pulposus in an animal model. Furthermore, we show the volume reduction to be a coblation effect of nucleoplasty in porcine discs.

  19. Experimental analysis of fuzzy controlled energy efficient demand controlled ventilation economizer cycle variable air volume air conditioning system

    Directory of Open Access Journals (Sweden)

    Rajagopalan Parameshwaran

    2008-01-01

    Full Text Available In the quest for energy-conservative building design, there is now a great opportunity for a flexible and sophisticated air-conditioning system capable of addressing the strongly desired goals of better thermal comfort, indoor air quality, and energy efficiency. The variable refrigerant volume air-conditioning system provides considerable energy savings, cost effectiveness and reduced space requirements. Applications of intelligent control, such as fuzzy logic controllers, especially adapted to variable air volume air-conditioning systems, have drawn more interest in recent years than classical control systems. An experimental analysis was performed to investigate the inherent operational characteristics of the combined variable refrigerant volume and variable air volume air-conditioning systems under fixed ventilation, demand-controlled ventilation, and combined demand-controlled ventilation and economizer cycle techniques for two seasonal conditions. The test results of the variable refrigerant volume and variable air volume air-conditioning system for each technique are presented. The test results indicate that the system controlled by the fuzzy logic methodology and operated under the CO2-based mechanical ventilation scheme effectively yields average energy savings of 37% and 56% per day in summer and winter conditions, respectively. Based on the experimental results, the fuzzy-based combined system can be considered an alternative energy-efficient air-conditioning scheme, with significant energy-saving potential compared to the conventional constant air volume air-conditioning system.

  20. Revised analyses of decommissioning for the reference pressurized Water Reactor Power Station. Volume 2, Effects of current regulatory and other considerations on the financial assurance requirements of the decommissioning rule and on estimates of occupational radiation exposure: Appendices, Final report

    Energy Technology Data Exchange (ETDEWEB)

    Konzek, G.J.; Smith, R.I.; Bierschbach, M.C.; McDuffie, P.N.

    1995-11-01

    With the issuance of the final Decommissioning Rule (July 27, 1988), owners and operators of licensed nuclear power plants are required to prepare, and submit to the US Nuclear Regulatory Commission (NRC) for review, decommissioning plans and cost estimates. The NRC staff is in need of bases documentation that will assist them in assessing the adequacy of the licensee submittals, from the viewpoint of both the planned actions, including occupational radiation exposure, and the probable costs. The purpose of this reevaluation study is to provide some of the needed bases documentation. This report contains the results of a review and reevaluation of the 1978 PNL decommissioning study of the Trojan nuclear power plant (NUREG/CR-0130), including all identifiable factors and cost assumptions which contribute significantly to the total cost of decommissioning the nuclear power plant for the DECON, SAFSTOR, and ENTOMB decommissioning alternatives. These alternatives now include an initial 5-7 year period during which time the spent fuel is stored in the spent fuel pool, prior to beginning major disassembly or extended safe storage of the plant. Included for information (but not presently part of the license termination cost) is an estimate of the cost to demolish the decontaminated and clean structures on the site and to restore the site to a "green field" condition. This report also includes consideration of the NRC requirement that decontamination and decommissioning activities leading to termination of the nuclear license be completed within 60 years of final reactor shutdown, consideration of packaging and disposal requirements for materials whose radionuclide concentrations exceed the limits for Class C low-level waste (i.e., Greater-Than-Class C), and reflects 1993 costs for labor, materials, transport, and disposal activities.

  1. Revised analyses of decommissioning for the reference pressurized Water Reactor Power Station. Effects of current regulatory and other considerations on the financial assurance requirements of the decommissioning rule and on estimates of occupational radiation exposure, Volume 1, Final report

    Energy Technology Data Exchange (ETDEWEB)

    Konzek, G.J.; Smith, R.I.; Bierschbach, M.C.; McDuffie, P.N. [Pacific Northwest Lab., Richland, WA (United States)

    1995-11-01

    With the issuance of the final Decommissioning Rule (July 27, 1988), owners and operators of licensed nuclear power plants are required to prepare, and submit to the US Nuclear Regulatory Commission (NRC) for review, decommissioning plans and cost estimates. The NRC staff is in need of bases documentation that will assist them in assessing the adequacy of the licensee submittals, from the viewpoint of both the planned actions, including occupational radiation exposure, and the probable costs. The purpose of this reevaluation study is to provide some of the needed bases documentation. This report contains the results of a review and reevaluation of the 1978 PNL decommissioning study of the Trojan nuclear power plant (NUREG/CR-0130), including all identifiable factors and cost assumptions which contribute significantly to the total cost of decommissioning the nuclear power plant for the DECON, SAFSTOR, and ENTOMB decommissioning alternatives. These alternatives now include an initial 5-7 year period during which time the spent fuel is stored in the spent fuel pool, prior to beginning major disassembly or extended safe storage of the plant. Included for information (but not presently part of the license termination cost) is an estimate of the cost to demolish the decontaminated and clean structures on the site and to restore the site to a "green field" condition. This report also includes consideration of the NRC requirement that decontamination and decommissioning activities leading to termination of the nuclear license be completed within 60 years of final reactor shutdown, consideration of packaging and disposal requirements for materials whose radionuclide concentrations exceed the limits for Class C low-level waste (i.e., Greater-Than-Class C), and reflects 1993 costs for labor, materials, transport, and disposal activities.

  2. Revised analyses of decommissioning for the reference pressurized Water Reactor Power Station. Volume 2, Effects of current regulatory and other considerations on the financial assurance requirements of the decommissioning rule and on estimates of occupational radiation exposure: Appendices, Final report

    International Nuclear Information System (INIS)

    Konzek, G.J.; Smith, R.I.; Bierschbach, M.C.; McDuffie, P.N.

    1995-11-01

    With the issuance of the final Decommissioning Rule (July 27, 1988), owners and operators of licensed nuclear power plants are required to prepare, and submit to the US Nuclear Regulatory Commission (NRC) for review, decommissioning plans and cost estimates. The NRC staff is in need of bases documentation that will assist them in assessing the adequacy of the licensee submittals, from the viewpoint of both the planned actions, including occupational radiation exposure, and the probable costs. The purpose of this reevaluation study is to provide some of the needed bases documentation. This report contains the results of a review and reevaluation of the 1978 PNL decommissioning study of the Trojan nuclear power plant (NUREG/CR-0130), including all identifiable factors and cost assumptions which contribute significantly to the total cost of decommissioning the nuclear power plant for the DECON, SAFSTOR, and ENTOMB decommissioning alternatives. These alternatives now include an initial 5-7 year period during which time the spent fuel is stored in the spent fuel pool, prior to beginning major disassembly or extended safe storage of the plant. Included for information (but not presently part of the license termination cost) is an estimate of the cost to demolish the decontaminated and clean structures on the site and to restore the site to a "green field" condition. This report also includes consideration of the NRC requirement that decontamination and decommissioning activities leading to termination of the nuclear license be completed within 60 years of final reactor shutdown, consideration of packaging and disposal requirements for materials whose radionuclide concentrations exceed the limits for Class C low-level waste (i.e., Greater-Than-Class C), and reflects 1993 costs for labor, materials, transport, and disposal activities

  3. Revised analyses of decommissioning for the reference pressurized Water Reactor Power Station. Effects of current regulatory and other considerations on the financial assurance requirements of the decommissioning rule and on estimates of occupational radiation exposure, Volume 1, Final report

    International Nuclear Information System (INIS)

    Konzek, G.J.; Smith, R.I.; Bierschbach, M.C.; McDuffie, P.N.

    1995-11-01

    With the issuance of the final Decommissioning Rule (July 27, 1988), owners and operators of licensed nuclear power plants are required to prepare, and submit to the US Nuclear Regulatory Commission (NRC) for review, decommissioning plans and cost estimates. The NRC staff is in need of bases documentation that will assist them in assessing the adequacy of the licensee submittals, from the viewpoint of both the planned actions, including occupational radiation exposure, and the probable costs. The purpose of this reevaluation study is to provide some of the needed bases documentation. This report contains the results of a review and reevaluation of the 1978 PNL decommissioning study of the Trojan nuclear power plant (NUREG/CR-0130), including all identifiable factors and cost assumptions which contribute significantly to the total cost of decommissioning the nuclear power plant for the DECON, SAFSTOR, and ENTOMB decommissioning alternatives. These alternatives now include an initial 5-7 year period during which time the spent fuel is stored in the spent fuel pool, prior to beginning major disassembly or extended safe storage of the plant. Included for information (but not presently part of the license termination cost) is an estimate of the cost to demolish the decontaminated and clean structures on the site and to restore the site to a "green field" condition. This report also includes consideration of the NRC requirement that decontamination and decommissioning activities leading to termination of the nuclear license be completed within 60 years of final reactor shutdown, consideration of packaging and disposal requirements for materials whose radionuclide concentrations exceed the limits for Class C low-level waste (i.e., Greater-Than-Class C), and reflects 1993 costs for labor, materials, transport, and disposal activities

  4. The volume of the human knee joint.

    Science.gov (United States)

    Matziolis, Georg; Roehner, Eric; Windisch, Christoph; Wagner, Andreas

    2015-10-01

    Despite its clinical relevance, particularly in septic knee surgery, the volume of the human knee joint has not been established to date. Therefore, the objective of this study was to determine knee joint volume and whether or not it is dependent on sex or body height. Sixty-one consecutive patients (joints) who were due to undergo endoprosthetic joint replacement were enrolled in this prospective study. During the operation, the joint volume was determined by injecting saline solution until a pressure of 200 mmHg was achieved in the joint. The average volume of all knee joints was 131 ± 53 (40-290) ml. The volume was not found to be dependent on sex, but it was dependent on the patients' height (R = 0.312, p = 0.014). This enabled an estimation of the joint volume according to V = 1.6 height - 135. The considerable inter-individual variance of the knee joint volume would suggest that it should be determined or at least estimated according to body height if the joint volume has consequences for the diagnostics or therapy of knee disorders.
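
    The reported regression can be applied directly; a minimal sketch, assuming V is in millilitres and height is in centimetres (the abstract does not state the units explicitly).

```python
def knee_joint_volume_ml(height_cm):
    """Estimated knee joint volume from the study's regression V = 1.6 h - 135.

    Units are assumed to be millilitres for V and centimetres for height.
    """
    return 1.6 * height_cm - 135.0

for h in (160, 175, 190):
    print(f"height {h} cm -> estimated volume {knee_joint_volume_ml(h):.0f} ml")
# The estimates fall within the reported 40-290 ml range (mean 131 +/- 53 ml).
```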

  5. High Average Power UV Free Electron Laser Experiments At JLAB

    International Nuclear Information System (INIS)

    Douglas, David; Benson, Stephen; Evtushenko, Pavel; Gubeli, Joseph; Hernandez-Garcia, Carlos; Legg, Robert; Neil, George; Powers, Thomas; Shinn, Michelle; Tennant, Christopher; Williams, Gwyn

    2012-01-01

    Having produced 14 kW of average power at ∼2 microns, JLAB has shifted its focus to the ultraviolet portion of the spectrum. This presentation will describe the JLab UV Demo FEL, present specifics of its driver ERL, and discuss the latest experimental results from FEL experiments and machine operations.

  6. Average subentropy, coherence and entanglement of random mixed quantum states

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Lin, E-mail: godyalin@163.com [Institute of Mathematics, Hangzhou Dianzi University, Hangzhou 310018 (China); Singh, Uttam, E-mail: uttamsingh@hri.res.in [Harish-Chandra Research Institute, Allahabad, 211019 (India); Pati, Arun K., E-mail: akpati@hri.res.in [Harish-Chandra Research Institute, Allahabad, 211019 (India)

    2017-02-15

    Compact expressions for the average subentropy and coherence are obtained for random mixed states that are generated via various probability measures. Surprisingly, our results show that the average subentropy of random mixed states approaches the maximum value of the subentropy, which is attained for the maximally mixed state, as we increase the dimension. In the special case of the random mixed states sampled from the induced measure via partial tracing of random bipartite pure states, we establish the typicality of the relative entropy of coherence for random mixed states invoking the concentration of measure phenomenon. Our results also indicate that mixed quantum states are less useful than pure quantum states in higher dimensions when we extract quantum coherence as a resource. This is because the average coherence of random mixed states is uniformly bounded, whereas the average coherence of random pure states increases with increasing dimension. As an important application, we establish the typicality of relative entropy of entanglement and distillable entanglement for a specific class of random bipartite mixed states. In particular, most of the random states in this specific class have relative entropy of entanglement and distillable entanglement equal to some fixed number (to within an arbitrary small error), thereby hugely reducing the complexity of computation of these entanglement measures for this specific class of mixed states.

  7. Establishment of Average Body Measurement and the Development ...

    African Journals Online (AJOL)

    body measurement for height and backneck to waist for ages 2,3,4 and 5 years. The ... average measurements of the different parts of the body must be established. ..... and OAU Charter on Rights of the child: Lagos: Nigeria Country office.

  8. Adaptive Spontaneous Transitions between Two Mechanisms of Numerical Averaging.

    Science.gov (United States)

    Brezis, Noam; Bronfman, Zohar Z; Usher, Marius

    2015-06-04

    We investigated the mechanism with which humans estimate numerical averages. Participants were presented with 4, 8 or 16 (two-digit) numbers, serially and rapidly (2 numerals/second) and were instructed to convey the sequence average. As predicted by a dual, but not a single-component account, we found a non-monotonic influence of set-size on accuracy. Moreover, we observed a marked decrease in RT as set-size increases and RT-accuracy tradeoff in the 4-, but not in the 16-number condition. These results indicate that in accordance with the normative directive, participants spontaneously employ analytic/sequential thinking in the 4-number condition and intuitive/holistic thinking in the 16-number condition. When the presentation rate is extreme (10 items/sec) we find that, while performance still remains high, the estimations are now based on intuitive processing. The results are accounted for by a computational model postulating population-coding underlying intuitive-averaging and working-memory-mediated symbolic procedures underlying analytical-averaging, with flexible allocation between the two.

  9. Determination of the average lifetime of bottom hadrons

    Energy Technology Data Exchange (ETDEWEB)

    Althoff, M; Braunschweig, W; Kirschfink, F J; Martyn, H U; Rosskamp, P; Schmitz, D; Siebke, H; Wallraff, W [Technische Hochschule Aachen (Germany, F.R.). Lehrstuhl fuer Experimentalphysik 1A und 1. Physikalisches Inst.; Eisenmann, J; Fischer, H M

    1984-12-27

    We have determined the average lifetime of hadrons containing b quarks produced in e/sup +/e/sup -/ annihilation to be tausub(B)=1.83 x 10/sup -12/ s. Our method uses charged decay products from both non-leptonic and semileptonic decay modes.

  10. Determination of the average lifetime of bottom hadrons

    Energy Technology Data Exchange (ETDEWEB)

    Althoff, M; Braunschweig, W; Kirschfink, F J; Martyn, H U; Rosskamp, P; Schmitz, D; Siebke, H; Wallraff, W; Eisenmann, J; Fischer, H M

    1984-12-27

    We have determined the average lifetime of hadrons containing b quarks produced in e/sup +/e/sup -/ annihilation to be tausub(B)=1.83 x 10/sup -12/ s. Our method uses charged decay products from both non-leptonic and semileptonic decay modes. (orig./HSI).

  11. Time Series ARIMA Models of Undergraduate Grade Point Average.

    Science.gov (United States)

    Rogers, Bruce G.

    The Auto-Regressive Integrated Moving Average (ARIMA) Models, often referred to as Box-Jenkins models, are regression methods for analyzing sequential dependent observations with large amounts of data. The Box-Jenkins approach, a three-stage procedure consisting of identification, estimation and diagnosis, was used to select the most appropriate…
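
    A minimal sketch of the Box-Jenkins workflow on synthetic data, using the `statsmodels` ARIMA implementation (assumed available); the GPA series here is a simulated AR(1) stand-in, not the study's data.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Synthetic stand-in for a sequence of term GPAs: an AR(1) process around 3.0.
rng = np.random.default_rng(3)
gpa = [3.0]
for _ in range(59):
    gpa.append(3.0 + 0.6 * (gpa[-1] - 3.0) + rng.normal(scale=0.1))
gpa = np.asarray(gpa)

# Box-Jenkins stages: identify a candidate order, estimate, then diagnose.
fit = ARIMA(gpa, order=(1, 0, 0)).fit()    # ARIMA(p=1, d=0, q=0)
print(fit.summary().tables[1])             # estimated AR coefficient
print("1-step-ahead forecast:", fit.forecast(steps=1))
```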

  12. Crystallographic extraction and averaging of data from small image areas

    NARCIS (Netherlands)

    Perkins, GA; Downing, KH; Glaeser, RM

    The accuracy of structure factor phases determined from electron microscope images is determined mainly by the level of statistical significance, which is limited by the low level of allowed electron exposure and by the number of identical unit cells that can be averaged. It is shown here that

  13. Reducing Noise by Repetition: Introduction to Signal Averaging

    Science.gov (United States)

    Hassan, Umer; Anwar, Muhammad Sabieh

    2010-01-01

    This paper describes theory and experiments, taken from biophysics and physiological measurements, to illustrate the technique of signal averaging. In the process, students are introduced to the basic concepts of signal processing, such as digital filtering, Fourier transformation, baseline correction, pink and Gaussian noise, and the cross- and…
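
    A minimal sketch of signal averaging on a synthetic evoked response, illustrating the roughly sqrt(N) improvement in signal-to-noise ratio with the number of averaged repetitions; all names and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 500)
signal = np.sin(2 * np.pi * 5 * t)        # "evoked response" template

def snr(x, template):
    """Ratio of template power to residual (noise) power, as std's."""
    return template.std() / (x - template).std()

# Averaging N repetitions of signal + unit Gaussian noise improves SNR ~ sqrt(N).
for n_trials in (1, 4, 16, 64):
    trials = signal + rng.normal(scale=1.0, size=(n_trials, t.size))
    avg = trials.mean(axis=0)
    print(f"N = {n_trials:3d}: SNR ~ {snr(avg, signal):5.2f} "
          f"(expected ~ sqrt(N) = {np.sqrt(n_trials):.1f} x the N=1 value)")
```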

  14. Environmental stresses can alleviate the average deleterious effect of mutations

    Directory of Open Access Journals (Sweden)

    Leibler Stanislas

    2003-05-01

    Full Text Available Abstract Background Fundamental questions in evolutionary genetics, including the possible advantage of sexual reproduction, depend critically on the effects of deleterious mutations on fitness. Limited existing experimental evidence suggests that, on average, such effects tend to be aggravated under environmental stresses, consistent with the perception that stress diminishes the organism's ability to tolerate deleterious mutations. Here, we ask whether there are also stresses with the opposite influence, under which the organism becomes more tolerant to mutations. Results We developed a technique, based on bioluminescence, which allows accurate automated measurements of bacterial growth rates at very low cell densities. Using this system, we measured growth rates of Escherichia coli mutants under a diverse set of environmental stresses. In contrast to the perception that stress always reduces the organism's ability to tolerate mutations, our measurements identified stresses that do the opposite – that is, despite decreasing wild-type growth, they alleviate, on average, the effect of deleterious mutations. Conclusions Our results show a qualitative difference between various environmental stresses ranging from alleviation to aggravation of the average effect of mutations. We further show how the existence of stresses that are biased towards alleviation of the effects of mutations may imply the existence of average epistatic interactions between mutations. The results thus offer a connection between the two main factors controlling the effects of deleterious mutations: environmental conditions and epistatic interactions.

  15. The background effective average action approach to quantum gravity

    DEFF Research Database (Denmark)

    D’Odorico, G.; Codello, A.; Pagani, C.

    2016-01-01

    of a UV attractive non-Gaussian fixed-point, which we find characterized by real critical exponents. Our closure method is general and can be applied systematically to more general truncations of the gravitational effective average action. © Springer International Publishing Switzerland 2016.

  16. Error estimates in horocycle averages asymptotics: challenges from string theory

    NARCIS (Netherlands)

    Cardella, M.A.

    2010-01-01

    For modular functions of rapid decay, a classical result connects the error estimate in their long horocycle average asymptotics to the Riemann hypothesis. We study similar asymptotics for modular functions with less mild growth conditions, such as polynomial growth and exponential growth

  17. Moving average rules as a source of market instability

    NARCIS (Netherlands)

    Chiarella, C.; He, X.Z.; Hommes, C.H.

    2006-01-01

    Despite the pervasiveness of the efficient markets paradigm in the academic finance literature, the use of various moving average (MA) trading rules remains popular with financial market practitioners. This paper proposes a stochastic dynamic financial market model in which demand for traded assets

  18. arXiv Averaged Energy Conditions and Bouncing Universes

    CERN Document Server

    Giovannini, Massimo

    2017-11-16

    The dynamics of bouncing universes is characterized by violating certain coordinate-invariant restrictions on the total energy-momentum tensor, customarily referred to as energy conditions. Although there could be epochs in which the null energy condition is locally violated, it may perhaps be enforced in an averaged sense. Explicit examples of this possibility are investigated in different frameworks.

  19. 26 CFR 1.1301-1 - Averaging of farm income.

    Science.gov (United States)

    2010-04-01

    ... January 1, 2003, rental income based on a share of a tenant's production determined under an unwritten... the Collection of Income Tax at Source on Wages (Federal income tax withholding), or the amount of net... 26 Internal Revenue 11 2010-04-01 2010-04-01 true Averaging of farm income. 1.1301-1 Section 1...

  20. Implications of Methodist clergies' average lifespan and missional ...

    African Journals Online (AJOL)

    2015-06-09

    Jun 9, 2015 ... The author of Genesis 5 paid meticulous attention to the lifespan of several people ... of Southern Africa (MCSA), and to argue that memories of the ... average ages at death were added up and the sum was divided by 12 (which represents the 12 ..... not explicit in how the departed Methodist ministers were.

  1. Pareto Principle in Datamining: an Above-Average Fencing Algorithm

    Directory of Open Access Journals (Sweden)

    K. Macek

    2008-01-01

    Full Text Available This paper formulates a new datamining problem: which subset of the input space has the relatively highest output, where the minimal size of this subset is given. This can be useful where usual datamining methods fail because of error distribution asymmetry. The paper provides a novel algorithm for this datamining problem and compares it with clustering of above-average individuals.
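
    A minimal sketch of the problem statement, not the paper's fencing algorithm: among subsets of at least a given size, the best achievable mean output is bounded by simply taking the top outputs. All names and data are illustrative.

```python
import numpy as np

def above_average_fence(y, min_size):
    """Toy baseline: pick the min_size points with the highest outputs.

    The paper's algorithm fences a region of the *input* space; this
    brute-force selection only bounds what any such region can achieve.
    """
    idx = np.argsort(y)[::-1][:min_size]
    return idx, y[idx].mean()

rng = np.random.default_rng(5)
x = rng.uniform(-1, 1, size=(200, 2))              # inputs
y = x[:, 0] ** 2 + rng.normal(scale=0.1, size=200) # outputs, asymmetric in x

idx, best_mean = above_average_fence(y, min_size=20)
print(f"best achievable mean over 20 points: {best_mean:.3f} "
      f"vs overall mean {y.mean():.3f}")
```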

  2. Average Distance Travelled To School by Primary and Secondary ...

    African Journals Online (AJOL)

    This study investigated the average distance travelled to school by students in primary and secondary schools in Anambra, Enugu, and Ebonyi States and its effect on attendance. These are among the top ten densely populated and educationally advantaged States in Nigeria. Research evidence reports high dropout rates in ...

  3. Trend of Average Wages as Indicator of Hypothetical Money Illusion

    Directory of Open Access Journals (Sweden)

    Julian Daszkowski

    2010-06-01

    Full Text Available Before 1998, the definition of wages in Poland did not include the value of social security contributions. The changed definition creates a higher level of reported wages but was expected not to influence take-home pay. Nevertheless, after a short period, the trend of average wages returned to its previous line. This effect is explained in terms of money illusion.

  4. 75 FR 78157 - Farmer and Fisherman Income Averaging

    Science.gov (United States)

    2010-12-15

    ... to the averaging of farm and fishing income in computing income tax liability. The regulations...: PART 1--INCOME TAXES 0 Paragraph 1. The authority citation for part 1 continues to read in part as... section 1 tax would be increased if one-third of elected farm income were allocated to each year. The...

  5. Domain-averaged Fermi-hole Analysis for Solids

    Czech Academy of Sciences Publication Activity Database

    Baranov, A.; Ponec, Robert; Kohout, M.

    2012-01-01

    Roč. 137, č. 21 (2012), s. 214109 ISSN 0021-9606 R&D Projects: GA ČR GA203/09/0118 Institutional support: RVO:67985858 Keywords : bonding in solids * domain averaged fermi hole * natural orbitals Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 3.164, year: 2012

  6. Characteristics of phase-averaged equations for modulated wave groups

    NARCIS (Netherlands)

    Klopman, G.; Petit, H.A.H.; Battjes, J.A.

    2000-01-01

    The project concerns the influence of long waves on coastal morphology. The modelling of the combined motion of the long waves and short waves in the horizontal plane is done by phase-averaging over the short wave motion and using intra-wave modelling for the long waves, see e.g. Roelvink (1993).

  7. A depth semi-averaged model for coastal dynamics

    Science.gov (United States)

    Antuono, M.; Colicchio, G.; Lugni, C.; Greco, M.; Brocchini, M.

    2017-05-01

    The present work extends the semi-integrated method proposed by Antuono and Brocchini ["Beyond Boussinesq-type equations: Semi-integrated models for coastal dynamics," Phys. Fluids 25(1), 016603 (2013)], which comprises a subset of depth-averaged equations (similar to Boussinesq-like models) and a Poisson equation that accounts for vertical dynamics. Here, the subset of depth-averaged equations has been reshaped in a conservative-like form and both the Poisson equation formulations proposed by Antuono and Brocchini ["Beyond Boussinesq-type equations: Semi-integrated models for coastal dynamics," Phys. Fluids 25(1), 016603 (2013)] are investigated: the former uses the vertical velocity component (formulation A) and the latter a specific depth semi-averaged variable, ϒ (formulation B). Our analyses reveal that formulation A is prone to instabilities as wave nonlinearity increases. On the contrary, formulation B allows an accurate, robust numerical implementation. Test cases derived from the scientific literature on Boussinesq-type models—i.e., solitary and Stokes wave analytical solutions for linear dispersion and nonlinear evolution and experimental data for shoaling properties—are used to assess the proposed solution strategy. It is found that the present method gives reliable predictions of wave propagation in shallow to intermediate waters, in terms of both semi-averaged variables and conservation properties.

  8. An averaged polarizable potential for multiscale modeling in phospholipid membranes

    DEFF Research Database (Denmark)

    Witzke, Sarah; List, Nanna Holmgaard; Olsen, Jógvan Magnus Haugaard

    2017-01-01

    A set of average atom-centered charges and polarizabilities has been developed for three types of phospholipids for use in polarizable embedding calculations. The lipids investigated are 1,2-dimyristoyl-sn-glycero-3-phosphocholine, 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine, and 1-palmitoyl...

  9. Understanding coastal morphodynamic patterns from depth-averaged sediment concentration

    NARCIS (Netherlands)

    Ribas, F.; Falques, A.; de Swart, H. E.; Dodd, N.; Garnier, R.; Calvete, D.

    This review highlights the important role of the depth-averaged sediment concentration (DASC) to understand the formation of a number of coastal morphodynamic features that have an alongshore rhythmic pattern: beach cusps, surf zone transverse and crescentic bars, and shoreface-connected sand

  10. Post-model selection inference and model averaging

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2011-07-01

    Full Text Available Although model selection is routinely used in practice nowadays, little is known about its precise effects on any subsequent inference that is carried out. The same goes for the effects induced by the closely related technique of model averaging. This paper is concerned with the use of the same data first to select a model and then to carry out inference, in particular point estimation and point prediction. The properties of the resulting estimator, called a post-model-selection estimator (PMSE), are hard to derive. Using selection criteria such as hypothesis testing, AIC, BIC, HQ and Cp, we illustrate that, in terms of risk function, no single PMSE dominates the others. The same conclusion holds more generally for any penalised likelihood information criterion. We also compare various model averaging schemes and show that no single one dominates the others in terms of risk function. Since PMSEs can be regarded as a special case of model averaging, with 0-1 random weights, we propose a connection between the two theories, in the frequentist approach, by taking account of the selection procedure when performing model averaging. We illustrate the point by simulating a simple linear regression model.
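
    A minimal sketch contrasting a 0-1 post-model-selection choice with smooth model averaging, using Akaike weights as one common weighting scheme; the AIC values and predictions below are hypothetical.

```python
import numpy as np

def aic_weights(aics):
    """Akaike weights: w_i proportional to exp(-delta_AIC_i / 2), a common
    smooth alternative to the 0-1 weights implicit in select-then-predict."""
    d = np.asarray(aics) - np.min(aics)
    w = np.exp(-d / 2.0)
    return w / w.sum()

# Hypothetical AICs and point predictions from three candidate models.
aics = [102.3, 100.1, 104.8]
predictions = [4.9, 5.3, 4.4]

w = aic_weights(aics)
pmse_prediction = predictions[int(np.argmin(aics))]   # select, then predict
averaged_prediction = float(np.dot(w, predictions))   # model averaging
print("weights:", np.round(w, 3))
print("PMSE prediction:", pmse_prediction,
      "| averaged prediction:", round(averaged_prediction, 3))
```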

  11. Determination of average activating thermal neutron flux in bulk samples

    International Nuclear Information System (INIS)

    Doczi, R.; Csikai, J.; Hassan, F. M.; Ali, M.A.

    2004-01-01

    A previous method used for the determination of the average neutron flux within bulky samples has been applied to the measurement of the hydrogen content of different samples. An analytical function is given for the description of the correlation between the activity of Dy foils and the hydrogen concentrations. Results obtained by the activation and the thermal neutron reflection methods are compared

  12. Grade Point Average: What's Wrong and What's the Alternative?

    Science.gov (United States)

    Soh, Kay Cheng

    2011-01-01

    Grade point average (GPA) has been around for more than two centuries. However, it has created a lot of confusion, frustration, and anxiety to GPA-producers and users alike, especially when used across-nation for different purposes. This paper looks into the reasons for such a state of affairs from the perspective of educational measurement. It…

  13. 40 CFR 63.652 - Emissions averaging provisions.

    Science.gov (United States)

    2010-07-01

    ... emissions more than the reference control technology, but the combination of the pollution prevention... emissions average. This must include any Group 1 emission points to which the reference control technology... agrees has a higher nominal efficiency than the reference control technology. Information on the nominal...

  14. Long-term, low-level radwaste volume-reduction strategies. Volume 4. Waste disposal costs. Final report

    International Nuclear Information System (INIS)

    Sutherland, A.A.; Adam, J.A.; Rogers, V.C.; Merrell, G.B.

    1984-11-01

    Volume 4 establishes pricing levels at new shallow land burial grounds. The following conclusions can be drawn from the analyses described in the preceding chapters. Application of volume reduction techniques by utilities can have a significant impact on the volumes of wastes going to low-level radioactive waste disposal sites. Using the relative waste stream volumes in NRC81 and the maximum volume reduction ratios provided by Burns and Roe, Inc., it was calculated that if all utilities use maximum volume reduction the rate of waste receipt at disposal sites will be reduced by 40 percent. When a disposal site receives a lower volume of waste, its total cost of operation does not decrease by the same proportion; therefore the average cost for a unit volume of waste received goes up. Whether the disposal site operator knows in advance that he will receive a smaller amount of waste has little influence on the average unit cost ($/ft3) of the waste disposed. For the pricing algorithm postulated, the average disposal cost to utilities that volume reduce is relatively independent of whether all utilities practice volume reduction or only a few volume reduce. The general effect of volume reduction by utilities is to reduce their average disposal site costs by a factor of between 1.5 and 2.5. This factor is generally independent of the size of the disposal site. The largest absolute savings in disposal site costs when utilities volume reduce occur when small disposal sites are involved; this results from the fact that unit costs are higher at small sites. Including in the pricing algorithm a factor that penalizes waste generators who contribute larger amounts of the mobile nuclides 3H, 14C, 99Tc, and 129I, which may be the subject of site inventory limits, lowers unit disposal costs for utility wastes that contain only small amounts of these nuclides and raises unit costs for other utility wastes

  15. An average salary: approaches to the index determination

    Directory of Open Access Journals (Sweden)

    T. M. Pozdnyakova

    2017-01-01

    Full Text Available The article “An average salary: approaches to the index determination” is devoted to studying various methods of calculating this index, both those used by the official state statistics of the Russian Federation and those offered by modern researchers. The purpose of this research is to analyze the existing approaches to calculating the average salary of employees of enterprises and organizations, and to propose certain additions that would help to clarify this index. The information base of the research comprises laws and regulations of the Russian Federation Government, statistical and analytical materials of the Federal State Statistics Service of Russia for the section «Socio-economic indexes: living standards of the population», as well as scientific papers describing different approaches to the average salary calculation. Data on the average salary of employees of educational institutions of the Khabarovsk region served as the experimental base of the research. The following methods were used: analytical, statistical, computational-mathematical and graphical. The main result of the research is a proposed supplement to the method of calculating the average salary index within enterprises or organizations used by Goskomstat of Russia, namely the introduction of a correction factor. Its essence consists in forming separate pay indexes for different categories of employees, mainly those engaged in internal secondary jobs. The need for this correction factor stems from the current working conditions in a wide range of organizations, where an employee is often forced to fulfill additional job duties on top of the main position. As a result, it is common for the average salary at an enterprise to be difficult to assess objectively, because it is calculated from multiple rates per staff member. In other words, the average salary of
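    The distortion the correction factor targets can be illustrated with a toy calculation: dividing the wage fund by headcount overstates pay per position when staff hold internal secondary jobs. A hypothetical sketch (field names, numbers, and the rate-based correction are invented for illustration, not the paper's exact formula):

```python
# Hypothetical illustration: naive headcount average vs. an average
# corrected by the number of occupied rates (FTE-like units).

employees = [
    {"pay": 60_000, "rates": 1.0},   # one full rate
    {"pay": 90_000, "rates": 1.5},   # main position + internal secondary job
    {"pay": 45_000, "rates": 0.75},  # part-time rate
]

wage_fund = sum(e["pay"] for e in employees)
headcount = len(employees)
total_rates = sum(e["rates"] for e in employees)

naive_average = wage_fund / headcount        # headcount-based average
corrected_average = wage_fund / total_rates  # average per occupied rate

print(f"naive: {naive_average:.0f}, corrected: {corrected_average:.0f}")
# The naive figure is inflated by multi-rate employees; the corrected one
# reflects pay per occupied position.
```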

  16. Volume of the domain visited by N spherical Brownian particles

    International Nuclear Information System (INIS)

    Berezhkovskii, A.M.

    1994-01-01

    The average value and variance of the volume of the domain visited in time t by N spherical Brownian particles starting initially at the same point are presented as quadratures of the solutions of simple diffusion problems of the survival of a point Brownian particle in the presence of one and two spherical traps. As an illustration, explicit time dependences are obtained for the average volume in one and three dimensions.
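    For intuition, the one-dimensional case is easy to check numerically: N spherical particles of radius a started at the origin jointly visit the single interval spanned by all their paths, padded by a on each side. A minimal Monte Carlo sketch (step counts, diffusion constant, and radius are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_visited_length(n_particles, t=1.0, a=0.1, d_const=1.0,
                        n_steps=1000, n_trials=2000):
    """Monte Carlo estimate of the average 1D 'volume' (length) visited by
    n_particles spherical Brownian particles of radius a started at 0."""
    dt = t / n_steps
    lengths = np.empty(n_trials)
    for i in range(n_trials):
        # Brownian increments for all particles at once.
        steps = rng.normal(0.0, np.sqrt(2.0 * d_const * dt),
                           size=(n_particles, n_steps))
        paths = np.cumsum(steps, axis=1)
        # Each particle sweeps [min_j - a, max_j + a]; all intervals contain
        # the common starting point, so their union is a single interval.
        hi = max(paths.max(), 0.0)
        lo = min(paths.min(), 0.0)
        lengths[i] = hi - lo + 2.0 * a
    return lengths.mean()

for n in (1, 2, 5):
    print(n, mean_visited_length(n))  # grows with N, sublinearly
```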

  17. Restricted cell elongation in Arabidopsis hypocotyls is associated with a reduced average pectin esterification level.

    Science.gov (United States)

    Derbyshire, Paul; McCann, Maureen C; Roberts, Keith

    2007-06-17

    Cell elongation is mainly limited by the extensibility of the cell wall. Dicotyledonous primary (growing) cell walls contain cellulose, xyloglucan, pectin and proteins, but little is known about how each polymer class contributes to the cell wall mechanical properties that control extensibility. We present evidence that the degree of pectin methyl-esterification (DE%) limits cell growth, and that a minimum level of about 60% DE is required for normal cell elongation in Arabidopsis hypocotyls. When the average DE% falls below this level, as in the two gibberellic acid (GA) mutants ga1-3 and gai, and in plants expressing pectin methyl-esterase (PME1) from Aspergillus aculeatus, hypocotyl elongation is reduced. Low average levels of pectin DE% are associated with reduced cell elongation, implicating PMEs, the enzymes that regulate DE%, in the cell elongation process and in responses to GA. At high average DE%, other components of the cell wall limit GA-induced growth.

  18. Radon and radon daughters indoors, problems in the determination of the annual average

    International Nuclear Information System (INIS)

    Swedjemark, G.A.

    1984-01-01

    The annual average of the concentration of radon and radon daughters in indoor air is required both in studies such as determining the collective dose to a population and when comparing with limits. For practical reasons, measurements are often carried out over a period shorter than a year. Methods are presented for estimating the uncertainties, due to temporal variations, in an annual average calculated from measurements carried out over sampling periods of various lengths. These methods have been applied to the results of long-term measurements of radon-222 in a few houses. The possibility of using correction factors to obtain a more adequate annual average has also been studied and some examples are given. (orig.)
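    The correction-factor idea can be made concrete: a measurement covering part of the year is divided by the typical ratio of that season's concentration to the annual mean. A minimal sketch with invented seasonal factors (real factors would be derived from long-term series such as those in the paper):

```python
# Invented seasonal factors: ratio of typical seasonal radon concentration
# to the annual mean. Real values must come from long-term measurements.
SEASONAL_FACTOR = {"winter": 1.3, "spring": 0.9, "summer": 0.7, "autumn": 1.1}

def estimate_annual_average(measured_bq_m3: float, season: str) -> float:
    """Correct a short-term radon measurement (Bq/m^3) to an annual average."""
    return measured_bq_m3 / SEASONAL_FACTOR[season]

print(estimate_annual_average(130.0, "winter"))  # -> 100.0 Bq/m^3
```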

  19. High-average-power diode-pumped Yb: YAG lasers

    International Nuclear Information System (INIS)

    Avizonis, P V; Beach, R; Bibeau, C M; Emanuel, M A; Harris, D G; Honea, E C; Monroe, R S; Payne, S A; Skidmore, J A; Sutton, S B

    1999-01-01

    A scaleable diode end-pumping technology for high-average-power slab and rod lasers has been under development for the past several years at Lawrence Livermore National Laboratory (LLNL). This technology has particular application to high-average-power Yb:YAG lasers that utilize a rod-configured gain element. Previously, this rod-configured approach achieved average output powers of 430 W cw and 280 W q-switched from a single 5-cm-long by 2-mm-diameter Yb:YAG rod. High-beam-quality (M² = 2.4) q-switched operation has also been demonstrated at over 180 W of average output power. More recently, using a dual-rod configuration consisting of two 5-cm-long by 2-mm-diameter laser rods with birefringence compensation, we have achieved 1080 W of cw output with an M² value of 13.5 at an optical-to-optical conversion efficiency of 27.5%. With the same dual-rod laser operated in a q-switched mode, we have also demonstrated 532 W of average power with M² < 2.5 at 17% optical-to-optical conversion efficiency. These q-switched results were obtained at a 10 kHz repetition rate and yielded 77 ns pulse durations. These improved levels of operational performance have been achieved as a result of technology advancements made in several areas that are covered in this manuscript. These enhancements to our architecture include: (1) hollow lens ducts that enable the use of advanced cavity architectures permitting birefringence compensation and the ability to run in large aperture-filling near-diffraction-limited modes; (2) compound laser rods with flanged nonabsorbing endcaps fabricated by diffusion bonding; (3) techniques for suppressing amplified spontaneous emission (ASE) and parasitics in the polished barrel rods.

  20. Construction of average adult Japanese voxel phantoms for dose assessment

    International Nuclear Information System (INIS)

    Sato, Kaoru; Takahashi, Fumiaki; Satoh, Daiki; Endo, Akira

    2011-12-01

    The International Commission on Radiological Protection (ICRP) adopted adult reference voxel phantoms based on the physiological and anatomical reference data of Caucasians in October 2007. The organs and tissues of these phantoms were segmented on the basis of ICRP Publication 103. In the future, the dose coefficients for internal dose and the dose conversion coefficients for external dose calculated using the adult reference voxel phantoms will be widely used in the field of radiation protection. On the other hand, the body sizes and organ masses of adult Japanese are generally smaller than those of adult Caucasians. In addition, there are cases in which anatomical characteristics such as body size, organ mass and posture influence the organ doses in dose assessments for medical treatments and radiation accidents. Therefore, human phantoms with the average anatomical characteristics of Japanese were needed. The authors constructed averaged adult Japanese male and female voxel phantoms by modifying the previously developed high-resolution adult male (JM) and female (JF) voxel phantoms in the following three respects: (1) the heights and weights were made to agree with the Japanese averages; (2) the masses of organs and tissues were adjusted to the Japanese averages within 10%; (3) the organs and tissues newly added for evaluation of the effective dose in ICRP Publication 103 were modeled. In this study, the organ masses, distances between organs, specific absorbed fractions (SAFs) and dose conversion coefficients of these phantoms were compared with those evaluated using the ICRP adult reference voxel phantoms. This report provides valuable information on the anatomical and dosimetric characteristics of the averaged adult Japanese male and female voxel phantoms developed as reference phantoms of adult Japanese. (author)

  1. Reproducing multi-model ensemble average with Ensemble-averaged Reconstructed Forcings (ERF) in regional climate modeling

    Science.gov (United States)

    Erfanian, A.; Fomenko, L.; Wang, G.

    2016-12-01

    Multi-model ensemble (MME) averaging is considered the most reliable approach for simulating both present-day and future climates, and has been a primary reference for conclusions in major coordinated studies, e.g. the IPCC Assessment Reports and CORDEX. The biases of individual models cancel each other out in the MME average, enabling the ensemble mean to outperform individual members in simulating the mean climate. This enhancement, however, comes at tremendous computational cost, which is especially inhibiting for regional climate modeling, as model uncertainties can originate from both the RCMs and the driving GCMs. Here we propose the Ensemble-averaged Reconstructed Forcings (ERF) approach to regional climate modeling, which achieves a similar level of bias reduction at a fraction of the cost of the conventional MME approach. The new method constructs a single set of initial and boundary conditions (IBCs) by averaging the IBCs of multiple GCMs, and drives the RCM with this ensemble average of IBCs in a single run. Using a regional climate model (RegCM4.3.4-CLM4.5), we tested the method over West Africa for multiple combinations of (up to six) GCMs. Our results indicate that the performance of the ERF method is comparable to that of the MME average in simulating the mean climate. The bias reduction seen in ERF simulations is achieved by using more realistic IBCs in solving the system of equations underlying the RCM physics and dynamics. This gives the new method a theoretical advantage in addition to the reduced computational cost: the ERF output is an unaltered solution of the RCM, as opposed to a climate state that might not be physically plausible due to the averaging of multiple solutions in the conventional MME approach. The ERF approach should be considered for use in major international efforts such as CORDEX. Key words: multi-model ensemble, ensemble analysis, ERF, regional climate modeling
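    The core of the ERF construction is a field-wise mean over the driving models, computed before the single RCM run. A minimal sketch of that step (array shapes, variable names, and the use of plain NumPy arrays are illustrative assumptions; real IBCs would be full RCM forcing files on a common grid):

```python
import numpy as np

def ensemble_averaged_ibc(gcm_fields):
    """Average initial/boundary-condition fields over GCMs.

    gcm_fields: list of arrays, one per GCM, each shaped
    (time, level, lat, lon) on a common grid. Returns one field of the
    same shape, used to drive a single RCM run.
    """
    stacked = np.stack(gcm_fields, axis=0)  # (n_gcms, time, level, lat, lon)
    return stacked.mean(axis=0)

# Example with six synthetic "GCMs" on a tiny grid:
rng = np.random.default_rng(1)
fields = [rng.normal(300.0, 2.0, size=(4, 3, 10, 12)) for _ in range(6)]
erf_ibc = ensemble_averaged_ibc(fields)
print(erf_ibc.shape)  # (4, 3, 10, 12): one IBC set instead of six RCM runs
```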

  2. Mean nuclear volume

    DEFF Research Database (Denmark)

    Mogensen, O.; Sørensen, Flemming Brandt; Bichel, P.

    1999-01-01

    We evaluated the following nine parameters with respect to their prognostic value in females with endometrial cancer: four stereologic parameters [mean nuclear volume (MNV), nuclear volume fraction, nuclear index and mitotic index], the immunohistochemical expression of cancer antigen (CA125...

  3. On the construction of a time base and the elimination of averaging errors in proxy records

    Science.gov (United States)

    Beelaerts, V.; De Ridder, F.; Bauwens, M.; Schmitz, N.; Pintelon, R.

    2009-04-01

    Proxies are sources of climate information stored in natural archives (e.g. ice cores, sediment layers on ocean floors and animals with calcareous marine skeletons). Measuring these proxies produces very short records and mostly involves sampling solid substrates, which is subject to the following two problems. Problem 1: natural archives are sampled equidistantly on a distance grid along their accretion axis. Starting from these distance series, a time series needs to be constructed, as comparison of different data records is only meaningful on a time grid. The time series will be non-equidistant, as the accretion rate is non-constant. Problem 2: a typical example of sampling solid substrates is drilling. Because of the dimensions of the drill, the holes drilled will not be infinitesimally small; consequently, samples are taken not at a point in distance but over a volume in distance. This holds for most sampling methods in solid substrates. As a consequence, when the continuous proxy signal is sampled, it is averaged over the volume of the sample, resulting in an underestimation of the amplitude. Whether this averaging effect is significant depends on the volume of the sample and the variations of interest in the proxy signal. Starting from the measured signal, the continuous signal needs to be reconstructed in order to eliminate these averaging errors. The aim is to provide an efficient identification algorithm to identify the non-linearities in the distance-time relationship, called time base distortions, and to correct for the averaging effects. Because this is a parametric method, an assumption about the proxy signal needs to be made: the proxy record on a time base is assumed to be harmonic, a natural assumption because natural archives often exhibit a seasonal cycle. In a first approach the averaging effects are assumed to be in one direction only, i.e. the direction of the axis on which the measurements were performed. The
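    As a sketch of the parametric idea (not the authors' algorithm): assume a harmonic proxy signal, let a small time-base-distortion term perturb the nominal sample times, and fit everything by nonlinear least squares. The single-parameter quadratic distortion and all names below are invented for illustration:

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic "archive": an annual cycle recorded with a non-constant
# accretion rate, i.e. a quadratic time base distortion (TBD).
n = 80
nominal_t = np.linspace(0.0, 4.0, n)            # naive time base (years)
true_t = nominal_t + 0.15 * nominal_t**2 / 4.0  # true, distorted times
signal = 2.0 * np.cos(2 * np.pi * true_t + 0.3) + 10.0
obs = signal + np.random.default_rng(2).normal(0.0, 0.1, n)

def residuals(p):
    amp, phase, mean, beta = p
    t = nominal_t + beta * nominal_t**2 / 4.0   # one-parameter TBD model
    return amp * np.cos(2 * np.pi * t + phase) + mean - obs

fit = least_squares(residuals, x0=[1.0, 0.0, float(np.mean(obs)), 0.0])
print(fit.x)  # amplitude, phase, mean, TBD coefficient (~2, 0.3, 10, 0.15)
```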

  4. SU-F-R-44: Modeling Lung SBRT Tumor Response Using Bayesian Network Averaging

    International Nuclear Information System (INIS)

    Diamant, A; Ybarra, N; Seuntjens, J; El Naqa, I

    2016-01-01

    Purpose: The prediction of tumor control after a patient receives lung SBRT (stereotactic body radiation therapy) has proven to be challenging, due to the complex interactions between an individual’s biology and dose-volume metrics. Many of these variables have predictive power when combined, a feature that we exploit using a graph modeling approach based on Bayesian networks. This provides a probabilistic framework that allows for accurate and visually intuitive predictive modeling. The aim of this study is to uncover possible interactions between an individual patient’s characteristics and generate a robust model capable of predicting said patient’s treatment outcome. Methods: We investigated a cohort of 32 prospective patients from multiple institutions who had received curative SBRT to the lung. The number of patients exhibiting tumor failure was observed to be 7 (event rate of 22%). The serum concentration of 5 biomarkers previously associated with NSCLC (non-small cell lung cancer) was measured pre-treatment. A total of 21 variables were analyzed, including dose-volume metrics with BED (biologically effective dose) correction and clinical variables. A Markov Chain Monte Carlo technique estimated the posterior probability distribution of the potential graphical structures. The probability of tumor failure was then estimated by averaging the top 100 graphs and applying Bayes’ rule. Results: The optimal Bayesian model generated throughout this study incorporated the PTV volume, the serum concentration of the biomarker EGFR (epidermal growth factor receptor) and the prescription BED. This predictive model recorded an area under the receiver operating characteristic curve of 0.94(1), providing better performance compared to competing methods in the literature. Conclusion: The use of biomarkers in conjunction with dose-volume metrics allows for the generation of a robust predictive model. The preliminary results of this report demonstrate that it is possible

  5. SU-F-R-44: Modeling Lung SBRT Tumor Response Using Bayesian Network Averaging

    Energy Technology Data Exchange (ETDEWEB)

    Diamant, A; Ybarra, N; Seuntjens, J [McGill University, Montreal, Quebec (Canada); El Naqa, I [University of Michigan, Ann Arbor, MI (United States)

    2016-06-15

    Purpose: The prediction of tumor control after a patient receives lung SBRT (stereotactic body radiation therapy) has proven to be challenging, due to the complex interactions between an individual’s biology and dose-volume metrics. Many of these variables have predictive power when combined, a feature that we exploit using a graph modeling approach based on Bayesian networks. This provides a probabilistic framework that allows for accurate and visually intuitive predictive modeling. The aim of this study is to uncover possible interactions between an individual patient’s characteristics and generate a robust model capable of predicting said patient’s treatment outcome. Methods: We investigated a cohort of 32 prospective patients from multiple institutions who had received curative SBRT to the lung. The number of patients exhibiting tumor failure was observed to be 7 (event rate of 22%). The serum concentration of 5 biomarkers previously associated with NSCLC (non-small cell lung cancer) was measured pre-treatment. A total of 21 variables were analyzed, including dose-volume metrics with BED (biologically effective dose) correction and clinical variables. A Markov Chain Monte Carlo technique estimated the posterior probability distribution of the potential graphical structures. The probability of tumor failure was then estimated by averaging the top 100 graphs and applying Bayes’ rule. Results: The optimal Bayesian model generated throughout this study incorporated the PTV volume, the serum concentration of the biomarker EGFR (epidermal growth factor receptor) and the prescription BED. This predictive model recorded an area under the receiver operating characteristic curve of 0.94(1), providing better performance compared to competing methods in the literature. Conclusion: The use of biomarkers in conjunction with dose-volume metrics allows for the generation of a robust predictive model. The preliminary results of this report demonstrate that it is possible
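    The averaging step described in the abstract, predicting by weighting the top-scoring graph structures, can be sketched independently of any Bayesian-network library. Below, each "graph" is abstracted as a predictive function with an unnormalized posterior score; the three toy models, their scores, and the patient fields are invented for illustration:

```python
import numpy as np

# Posterior-weighted model averaging over top graph structures.
# Each entry: (unnormalized posterior score, P(failure | patient) function).
# The toy "graphs", scores, and thresholds below are invented.
top_graphs = [
    (0.40, lambda x: 0.9 if x["ptv_volume"] > 50 and x["egfr"] > 2.0 else 0.2),
    (0.35, lambda x: 0.8 if x["bed"] < 100 else 0.1),
    (0.25, lambda x: 0.5),
]

def p_failure(patient):
    """Average P(tumor failure) over graphs, weighted by posterior score."""
    scores = np.array([s for s, _ in top_graphs])
    weights = scores / scores.sum()
    preds = np.array([g(patient) for _, g in top_graphs])
    return float(weights @ preds)

print(p_failure({"ptv_volume": 60.0, "egfr": 2.5, "bed": 95.0}))
```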

  6. Canadian energy standards : residential energy code requirements

    Energy Technology Data Exchange (ETDEWEB)

    Cooper, K. [SAR Engineering Ltd., Burnaby, BC (Canada)

    2006-09-15

    A survey of residential energy code requirements was discussed. New housing is approximately 13 per cent more efficient than housing built 15 years ago, and more stringent energy efficiency requirements in building codes have contributed to decreased energy use and greenhouse gas (GHG) emissions. However, a survey of residential energy codes across Canada has determined that explicit demands for energy efficiency are currently present only in British Columbia (BC), Manitoba, Ontario and Quebec. The survey evaluated more than 4300 single-detached homes built between 2000 and 2005 using data from the EnerGuide for Houses (EGH) database. House area, volume, airtightness and construction characteristics were reviewed to create archetypes for 8 geographic areas. The survey indicated that in Quebec and the Maritimes, 90 per cent of houses comply with the ventilation system requirements of the National Building Code, while compliance in the rest of Canada is much lower. Heat recovery ventilation use is predominant in the Atlantic provinces. Direct-vent or condensing furnaces constitute the majority of installed systems in provinces where natural gas is the primary space heating fuel. Details of insulation levels for walls, double-glazed windows, and building code insulation standards were also reviewed. It was concluded that if R-2000 levels of energy efficiency were applied, total average energy consumption in Canada would be reduced by 36 per cent. 2 tabs.

  7. Software requirements

    CERN Document Server

    Wiegers, Karl E

    2003-01-01

    Without formal, verifiable software requirements-and an effective system for managing them-the programs that developers think they've agreed to build often will not be the same products their customers are expecting. In SOFTWARE REQUIREMENTS, Second Edition, requirements engineering authority Karl Wiegers amplifies the best practices presented in his original award-winning text?now a mainstay for anyone participating in the software development process. In this book, you'll discover effective techniques for managing the requirements engineering process all the way through the development cy

  8. Blood volume studies

    International Nuclear Information System (INIS)

    Lewis, S.M.; Yin, J.A.L.

    1986-01-01

    The use of dilution analysis with such radioisotopes as ⁵¹Cr, ³²P, ⁹⁹ᵐTc and ¹¹³ᵐIn for measuring red cell volume is reviewed briefly. The use of ¹²⁵I and ¹³¹I for plasma volume studies is also considered and the subsequent determination of total blood volume discussed, together with the role of the splenic red cell volume. Substantial bibliography. (UK)

  9. THE PREDICTION OF VOID VOLUME IN SUBCOOLED NUCLEATE POOL BOILING

    Energy Technology Data Exchange (ETDEWEB)

    Duke, E. E. [General Dynamics, San Diego, CA (United States)

    1963-11-15

    A three-step equation was developed that adequately describes the average volume of vapor occurring on a horizontal surface due to nucleate pool boiling of subcooled water. Since extensive bubble frequency data are lacking, the data of others were combined with experimental observations to make predictions of void volume at ambient pressure with various degrees of subcooling. (auth)

  10. Using Mobile Device Samples to Estimate Traffic Volumes

    Science.gov (United States)

    2017-12-01

    In this project, TTI worked with StreetLight Data to evaluate a beta version of its traffic volume estimates derived from global positioning system (GPS)-based mobile devices. TTI evaluated the accuracy of average annual daily traffic (AADT) volume :...

  11. Beef steers with average dry matter intake and divergent average daily gain have altered gene expression in the jejunum

    Science.gov (United States)

    The objective of this study was to determine the association of differentially expressed genes (DEG) in the jejunum of steers with average DMI and high or low ADG. Feed intake and growth were measured in a cohort of 144 commercial Angus steers consuming a finishing diet containing (on a DM basis) 67...

  12. Is average daily travel time expenditure constant? In search of explanations for an increase in average travel time.

    NARCIS (Netherlands)

    van Wee, B.; Rietveld, P.; Meurs, H.

    2006-01-01

    Recent research suggests that the average time spent travelling by the Dutch population has increased over the past decades. However, different data sources show different levels of increase. This paper explores possible causes for this increase. They include a rise in incomes, which has probably

  13. METHODS OF CONTROLLING THE AVERAGE DIAMETER OF THE THREAD WITH ASYMMETRICAL PROFILE

    Directory of Open Access Journals (Sweden)

    L. M. Aliomarov

    2015-01-01

    Full Text Available To machine threaded holes in hard materials used in marine machinery operating at high temperatures, under heavy loads and in aggressive environments, the authors have developed a combined core drill-tap tool with a special cutting scheme and an asymmetric thread profile on the tap part. To control the average (pitch) diameter of the thread on the tap part of the combined tool, the three-wire method was used, which allows continuous measurement of the average diameter along the entire profile. Deviation of the average diameter from the sample value is registered by an inductive sensor and recorded by a recorder. Control schemes for the average diameter of threads with symmetric and asymmetric profiles are developed and presented. On the basis of these schemes, formulas are derived for calculating the theoretical positions of the wires in the thread profile when measuring the average diameter. Comprehensive research and the introduction of the combined core drill-tap tool into the production of marine engineering, shipbuilding and ship-repair power plant products made of hard materials showed the high efficiency of the proposed technology for machining high-quality small-diameter threaded holes that meet modern requirements.
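    For reference, the standard three-wire relation for a symmetric thread illustrates what such control schemes compute; the asymmetric-profile formulas derived in the paper generalize this. A sketch under the symmetric 60° (metric) assumption:

```python
import math

def pitch_diameter_from_wires(m_over_wires, wire_d, pitch, flank_angle_deg=60.0):
    """Pitch (average) diameter from a measurement over three wires.

    Standard relation for a symmetric thread with included flank angle
    alpha: d2 = M - w*(1 + 1/sin(alpha/2)) + (pitch/2)/tan(alpha/2).
    For alpha = 60 deg this reduces to d2 = M - 3w + 0.866025*pitch.
    Asymmetric profiles, as in the paper, need modified wire positions.
    """
    half = math.radians(flank_angle_deg) / 2.0
    return (m_over_wires - wire_d * (1.0 + 1.0 / math.sin(half))
            + (pitch / 2.0) / math.tan(half))

# M10x1.5 with best wire size 0.866 mm: a measurement over wires of
# 10.325 mm corresponds to the nominal pitch diameter of about 9.026 mm.
print(pitch_diameter_from_wires(10.325, 0.866, 1.5))  # -> ~9.026
```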

  14. Quasi-analytical treatment of spatially averaged radiation transfer in complex terrain

    Science.gov (United States)

    LöWe, H.; Helbig, N.

    2012-10-01

    We provide a new quasi-analytical method to compute the subgrid topographic influences on the shortwave radiation fluxes and the effective albedo in complex terrain as required for large-scale meteorological, land surface, or climate models. We investigate radiative transfer in complex terrain via the radiosity equation on isotropic Gaussian random fields. Under controlled approximations we derive expressions for domain-averaged fluxes of direct, diffuse, and terrain radiation and the sky view factor. Domain-averaged quantities can be related to a type of level-crossing probability of the random field, which is approximated by long-standing results developed for acoustic scattering at ocean boundaries. This allows us to express all nonlocal horizon effects in terms of a local terrain parameter, namely, the mean-square slope. Emerging integrals are computed numerically, and fit formulas are given for practical purposes. As an implication of our approach, we provide an expression for the effective albedo of complex terrain in terms of the Sun elevation angle, mean-square slope, the area-averaged surface albedo, and the ratio of atmospheric direct beam to diffuse radiation. For demonstration we compute the decrease of the effective albedo relative to the area-averaged albedo in Switzerland for idealized snow-covered and clear-sky conditions at noon in winter. We find an average decrease of 5.8% and spatial patterns which originate from characteristics of the underlying relief. Limitations and possible generalizations of the method are discussed.

  15. Study of the binary mixtures of {monoglyme + (hexane, cyclohexane, octane, dodecane)} by ECM-average and PFP models

    International Nuclear Information System (INIS)

    Rivas, M.A.; Buep, A.H.; Iglesias, T.P.

    2015-01-01

    Highlights: • Polarization of the real mixture is less than that of the ideal mixture. • Molar excess volume does not exert the dominant effect on the polarization of the mixture. • Similar influence of molecular interactions on the behaviour of excess permittivity. • Excess molar volume is more influenced by the interactions than excess permittivity. - Abstract: Excess molar volumes and excess permittivities of binary mixtures involving monoglyme and alkanes, such as n-hexane, cyclohexane, n-octane and n-dodecane, were calculated from density and relative permittivity measurements for the entire composition range at several temperatures (288.15, 298.15 and 308.15) K and atmospheric pressure. The excess permittivity was calculated on the basis of a recent definition considering the ideal volume fraction. Empirical equations for describing the experimental data in terms of temperature and concentration are given. The experimental values of permittivity have been compared with those estimated by well-known models from the literature. The results have indicated that better predictions are obtained when the volume change on mixing is incorporated in these calculations. The contribution of interactions to the excess permittivity was analysed by means of the ECM-average model. The Prigogine–Flory–Patterson (PFP) theory of the thermodynamics of solutions was used to shed light on the contribution of interactions to the excess molar volume. The work concludes with an interpretation of the information given by the theoretical models and the behaviour of both excess magnitudes.
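    Assuming the "recent definition" of excess permittivity follows the usual ideal-volume-fraction form (an assumption here; the precise definition is given in the paper the abstract cites), the two excess magnitudes take the shapes:

```latex
% Assumed forms: excess molar volume, and excess permittivity written
% with ideal volume fractions \phi_i^{id}.
V^{E} = V_{m} - \sum_{i} x_{i} V_{i}^{*}, \qquad
\varepsilon^{E} = \varepsilon - \sum_{i} \phi_{i}^{id}\,\varepsilon_{i},
\qquad \phi_{i}^{id} = \frac{x_{i} V_{i}^{*}}{\sum_{j} x_{j} V_{j}^{*}}
```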

  16. SEASONAL AVERAGE FLOW IN RÂUL NEGRU HYDROGRAPHIC BASIN

    Directory of Open Access Journals (Sweden)

    VIGH MELINDA

    2015-03-01

    Full Text Available The Râul Negru hydrographic basin is a well-individualised physical-geographical unit inside the Braşov Depression. The flow is controlled by six hydrometric stations placed on the main river and on two important tributaries. The database for the seasonal flow analysis contains the discharges from 1950-2012. The results of the data analysis show that there are significant space-time differences between the multiannual seasonal averages. Some interesting conclusions can be obtained by comparing abundant and scarce periods. The flow analysis was made using seasonal charts Q = f(T). The similarities come from the basin’s relative homogeneity, and the differences from the flow’s evolution and trend. Flow variation is analysed using the coefficient of variation (Cv). In some cases, significant differences in Cv values appear. Cv value trends are also analysed according to the basins’ average altitude.
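    The variation coefficient referred to here is the standard dimensionless ratio of the spread of the discharge series to its mean; in the usual notation:

```latex
C_v = \frac{\sigma_Q}{\overline{Q}}, \qquad
\sigma_Q = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\left(Q_i - \overline{Q}\right)^{2}}
```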

  17. Kumaraswamy autoregressive moving average models for double bounded environmental data

    Science.gov (United States)

    Bayer, Fábio Mariano; Bayer, Débora Missio; Pumi, Guilherme

    2017-12-01

    In this paper we introduce the Kumaraswamy autoregressive moving average (KARMA) models, a dynamic class of models for time series taking values in the double bounded interval (a,b) and following the Kumaraswamy distribution. The Kumaraswamy family of distributions is widely applied in many areas, especially hydrology and related fields; classical examples are time series representing rates and proportions observed over time. In the proposed KARMA model, the median is modeled by a dynamic structure containing autoregressive and moving average terms, time-varying regressors, unknown parameters and a link function. We introduce the new class of models and discuss conditional maximum likelihood estimation, hypothesis testing inference, diagnostic analysis and forecasting. In particular, we provide closed-form expressions for the conditional score vector and the conditional Fisher information matrix. An application to real environmental data is presented and discussed.
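    To make the median parametrization concrete: the Kumaraswamy(a, b) distribution on (0,1) has closed-form quantiles, so the median can be targeted directly by a link function, as in the KARMA construction. A minimal sketch of the density and a median-based reparametrization (fixing b and solving for a is an illustrative simplification, not the paper's estimator):

```python
import math

def kumaraswamy_pdf(x, a, b):
    """Kumaraswamy density on (0, 1): f(x) = a*b*x^(a-1)*(1 - x^a)^(b-1)."""
    return a * b * x ** (a - 1) * (1 - x ** a) ** (b - 1)

def kumaraswamy_median(a, b):
    """Closed-form median: (1 - 2^(-1/b))^(1/a)."""
    return (1.0 - 2.0 ** (-1.0 / b)) ** (1.0 / a)

def a_from_median(median, b):
    """Invert the median formula for a, holding b fixed (illustrative)."""
    return math.log(1.0 - 2.0 ** (-1.0 / b)) / math.log(median)

mu, b = 0.7, 2.0
a = a_from_median(mu, b)
print(a, kumaraswamy_median(a, b))  # the target median 0.7 is recovered
```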

  18. Database of average-power damage thresholds at 1064 nm

    International Nuclear Information System (INIS)

    Rainer, F.; Hildum, E.A.; Milam, D.

    1987-01-01

    We have completed a database of average-power, laser-induced damage thresholds at 1064 nm on a variety of materials. Measurements were made with a newly constructed laser to provide design input for moderate and high average-power laser projects. The measurements were conducted with 16-ns pulses at pulse-repetition frequencies ranging from 6 to 120 Hz. Samples were typically irradiated for times ranging from a fraction of a second up to 5 minutes (36,000 shots). We tested seven categories of samples, which included antireflective coatings, high reflectors, polarizers, single and multiple layers of the same material, bare and overcoated metal surfaces, bare polished surfaces, and bulk materials. The measured damage thresholds ranged from 2 J/cm² for some metals to > 46 J/cm² for a bare polished glass substrate. 4 refs., 7 figs., 1 tab

  19. Partial Averaged Navier-Stokes approach for cavitating flow

    International Nuclear Information System (INIS)

    Zhang, L; Zhang, Y N

    2015-01-01

    Partial Averaged Navier-Stokes (PANS) is a numerical approach developed for studying practical engineering problems (e.g. cavitating flow inside hydroturbines) with a reasonable cost and accuracy. One of the advantages of PANS is that it is suitable for any filter width, providing a bridging method from traditional Reynolds-Averaged Navier-Stokes (RANS) to direct numerical simulation through the choice of appropriate parameters. Compared with RANS, the PANS model inherits much of the physics of its parent RANS model but resolves more scales of motion in greater detail, making PANS superior to RANS. An important step in the PANS approach is to identify appropriate physical filter-width control parameters, e.g. the ratios of unresolved-to-total kinetic energy and dissipation. In the present paper, recent studies of cavitating flow based on the PANS approach are introduced, with a focus on the influence of the filter-width control parameters on the simulation results.
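    The filter-width control parameters mentioned above are conventionally defined as the unresolved-to-total ratios of turbulent kinetic energy and dissipation (standard PANS notation, not spelled out in the abstract):

```latex
% PANS filter-width control parameters (standard definitions):
f_k = \frac{k_u}{k}, \qquad f_\varepsilon = \frac{\varepsilon_u}{\varepsilon}
% f_k = 1 recovers RANS; f_k \to 0 approaches DNS.
```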

  20. The B-dot Earth Average Magnetic Field

    Science.gov (United States)

    Capo-Lugo, Pedro A.; Rakoczy, John; Sanders, Devon

    2013-01-01

    The average Earth magnetic field is usually obtained from complex mathematical models based on a mean-square integral, and depending on the Earth magnetic field model selected, the average field can take different values. This paper presents a simple technique that takes advantage of the damping effects of the b-dot controller and does not depend on the Earth magnetic field model; it does, however, depend on the satellite's magnetic torquers, which the known mathematical models do not take into consideration. The solution obtained with this new technique can be implemented so easily that the flight software can be updated during flight, giving the control system current gains for the magnetic torquers. Finally, the technique is verified and validated using flight data from a satellite that has been in orbit for three years.
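    For context, the b-dot controller referenced here commands a magnetic dipole moment opposing the measured rate of change of the body-frame field, m = -k·dB/dt; the damping this produces is what the paper's technique exploits. A minimal sketch of the control law with a finite-difference derivative (the gain, timestep, and field values are illustrative):

```python
import numpy as np

K_BDOT = 5.0e4  # controller gain; illustrative value only

def bdot_dipole(b_now, b_prev, dt):
    """B-dot law: commanded dipole moment m = -k * dB/dt (body frame)."""
    b_dot = (np.asarray(b_now) - np.asarray(b_prev)) / dt
    return -K_BDOT * b_dot  # A*m^2, sent to the magnetic torquers

# One control step with illustrative magnetometer readings (Tesla):
m = bdot_dipole([2.1e-5, -1.0e-5, 3.8e-5], [2.0e-5, -1.2e-5, 3.9e-5], dt=0.1)
print(m)  # the torque on the spacecraft is then tau = m x B
```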